--- /dev/null
+
+-------------------------------------------------------------------------
+Release Notes for Linux on Intel's IXP4xx Network Processor
+
+Maintained by Deepak Saxena <dsaxena@plexity.net>
+-------------------------------------------------------------------------
+
+1. Overview
+
+Intel's IXP4xx network processor is a highly integrated SOC that
+is targeted for network applications, though it has become popular
+in industrial control and other areas due to low cost and power
+consumption. The IXP4xx family currently consists of several processors
+that support different network offload functions such as encryption,
+routing, firewalling, etc. For more information on the various
+versions of the CPU, see:
+
+ http://developer.intel.com/design/network/products/npfamily/ixp4xx.htm
+
+Intel also made the IXCP1100 CPU for some time, which is an IXP4xx
+stripped of much of the network intelligence.
+
+2. Linux Support
+
+Linux currently supports the following features on the IXP4xx chips:
+
+- Dual serial ports
+- PCI interface
+- Flash access (MTD/JFFS)
+- I2C through GPIO
+- GPIO for input/output/interrupts
+ See include/asm-arm/arch-ixp4xx/platform.h for access functions.
+- Timers (watchdog, OS)
+
+The following components of the chips are not supported by Linux and
+require the use of Intel's proprietary CSR software:
+
+- USB device interface
+- Network interfaces (HSS, Utopia, NPEs, etc)
+- Network offload functionality
+
+If you need to use any of the above, you need to download Intel's
+software from:
+
+ http://developer.intel.com/design/network/products/npfamily/ixp425swr1.htm
+
+DO NOT POST QUESTIONS TO THE LINUX MAILING LISTS REGARDING THE PROPRIETARY
+SOFTWARE.
+
+There are several websites that provide directions/pointers on using
+Intel's software:
+
+http://ixp4xx-osdg.sourceforge.net/
+ Open Source Developer's Guide for using uClinux and the Intel libraries
+
+http://gatewaymaker.sourceforge.net/
+ Simple one page summary of building a gateway using an IXP425 and Linux
+
+http://ixp425.sourceforge.net/
+ ATM device driver for IXP425 that relies on Intel's libraries
+
+3. Known Issues/Limitations
+
+3a. Limited inbound PCI window
+
+The IXP4xx family allows for up to 256MB of memory but the PCI interface
+can only expose 64MB of that memory to the PCI bus. This means that if
+you are running with > 64MB, all PCI buffers outside of the accessible
+range will be bounced using the routines in arch/arm/common/dmabounce.c.
+
+3b. Limited outbound PCI window
+
+IXP4xx provides two methods of accessing PCI memory space:
+
+1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
+ To access PCI via this space, we simply ioremap() the BAR
+ into the kernel and we can use the standard read[bwl]/write[bwl]
+   macros. This is the preferred method due to speed but it
+   limits the system to just 64MB of PCI memory. This can be
+   problematic if using video cards and other memory-heavy devices.
+
+2) If > 64MB of memory space is required, the IXP4xx can be
+   configured to use indirect registers to access PCI. This allows
+   for up to 128MB (0x48000000 to 0x4fffffff) of memory on the bus.
+   The disadvantage of this is that every PCI access requires
+ three local register accesses plus a spinlock, but in some
+ cases the performance hit is acceptable. In addition, you cannot
+ mmap() PCI devices in this case due to the indirect nature
+ of the PCI window.
+
+By default, the direct method is used for performance reasons. If
+you need more PCI memory, enable the IXP4XX_INDIRECT_PCI config option.
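+
+For illustration, the direct method amounts to an ioremap() of the device's
+BAR followed by the standard accessors. The following is only a minimal
+sketch; the device, BAR number and register offset are hypothetical:
+
+	#include <linux/pci.h>
+	#include <asm/io.h>
+
+	/* Read a register through the direct-mapped PCI window. */
+	static u32 example_read_reg(struct pci_dev *dev)
+	{
+		void __iomem *base;
+		u32 val;
+
+		/* The BAR lies inside the 0x48000000-0x4bffffff window. */
+		base = ioremap(pci_resource_start(dev, 0),
+			       pci_resource_len(dev, 0));
+		if (!base)
+			return 0;
+		val = readl(base + 0x10);	/* hypothetical register offset */
+		iounmap(base);
+		return val;
+	}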
+
+3c. GPIO as Interrupts
+
+Currently the code only handles level-sensitive GPIO interrupts.
+
+4. Supported platforms
+
+ADI Engineering Coyote Gateway Reference Platform
+http://www.adiengineering.com/productsCoyote.html
+
+	The ADI Coyote platform is a reference design for those building
+	small residential/office gateways. One NPE is connected to a 10/100
+	interface, one to a 4-port 10/100 switch, and the third to an ADSL
+	interface. In addition, it supports two POTS interfaces connected
+	via SLICs. Note that those are not supported by Linux ATM. The
+	platform has two mini-PCI slots used for 802.11[bga] cards.
+	Finally, there is an IDE port hanging off the expansion bus.
+
+Gateworks Avila Network Platform
+http://www.gateworks.com/avila_sbc.htm
+
+	The Avila platform is basically an IXDP425 with the 4 PCI slots
+ replaced with mini-PCI slots and a CF IDE interface hanging off
+ the expansion bus.
+
+Intel IXDP425 Development Platform
+http://developer.intel.com/design/network/products/npfamily/ixdp425.htm
+
+	This is Intel's standard reference platform for the IXP425 and is
+ also known as the Richfield board. It contains 4 PCI slots, 16MB
+ of flash, two 10/100 ports and one ADSL port.
+
+Motorola PrPMC1100 Processor Mezzanine Card
+http://www.fountainsys.com/datasheet/PrPMC1100.pdf
+
+ The PrPMC1100 is based on the IXCP1100 and is meant to plug into
+	an IXP2400/2800 system to act as the system controller. It simply
+ contains a CPU and 16MB of flash on the board and needs to be
+ plugged into a carrier board to function. Currently Linux only
+ supports the Motorola PrPMC carrier board for this platform.
+ See https://mcg.motorola.com/us/ds/pdf/ds0144.pdf for info
+ on the carrier board.
+
+5. TODO LIST
+
+- Add support for Coyote IDE
+- Add support for edge-based GPIO interrupts
+- Add support for CF IDE on expansion bus
+
+6. Thanks
+
+The IXP4xx work has been funded by Intel Corp. and MontaVista Software, Inc.
+
+The following people have contributed patches/comments/etc:
+
+Lutz Jaenicke
+Justin Mayfield
+Robert E. Ranslam
+[I know I've forgotten others, please email me to be added]
+
+-------------------------------------------------------------------------
+
+Last Update: 5/13/2004
--- /dev/null
+README on the SDRAM Controller for the LH7a40X
+==============================================
+
+The standard configuration for the SDRAM controller generates a sparse
+memory array. The precise layout is determined by the SDRAM chips. A
+default kernel configuration assembles the discontiguous memory
+regions into separate memory nodes via the NUMA (Non-Uniform Memory
+Architecture) facilities. In this default configuration, the kernel
+is forgiving about the precise layout. As long as it is given an
+accurate picture of available memory by the bootloader the kernel will
+execute correctly.
+
+The SDRC supports a mode where some of the chip select lines are
+swapped in order to make SDRAM look like a synchronous ROM. Setting
+this bit means that the RAM will present as a contiguous array. Some
+programmers prefer this to the discontiguous layout. Be aware that
+there may be a penalty for this feature: in some configurations the
+amount of usable memory is significantly reduced; e.g. 64MiB of RAM
+appears as only 32MiB.
+
+There are a couple of configuration options to override the default
+behavior. When the SROMLL bit is set and memory appears as a
+contiguous array, there is no reason to support NUMA.
+CONFIG_LH7A40X_CONTIGMEM disables NUMA support. When physical memory
+is discontiguous, the memory tables are organized such that there are
+two banks per node with a small gap between them. This layout wastes
+some kernel memory for page tables representing non-existent memory.
+CONFIG_LH7A40X_ONE_BANK_PER_NODE optimizes the node tables such that
+there are no gaps. These options control the low level organization
+of the memory management tables in ways that may prevent the kernel
+from booting or may cause the kernel to allocate excessively large
+page tables. Be warned. Only change these options if you know what
+you are doing. The default behavior is a reasonable compromise that
+will suit all users.
+
+--
+
+A typical 32MiB system with the default configuration options will
+find physical memory managed as follows.
+
+ node 0: 0xc0000000 4MiB
+ 0xc1000000 4MiB
+ node 1: 0xc4000000 4MiB
+ 0xc5000000 4MiB
+ node 2: 0xc8000000 4MiB
+ 0xc9000000 4MiB
+ node 3: 0xcc000000 4MiB
+ 0xcd000000 4MiB
+
+Setting CONFIG_LH7A40X_ONE_BANK_PER_NODE will put each bank into a
+separate node.
--- /dev/null
+Release notes for Linux Kernel VFP support code
+-----------------------------------------------
+
+Date: 20 May 2004
+Author: Russell King
+
+This is the first release of the Linux Kernel VFP support code. It
+provides support for the exceptions bounced from VFP hardware found
+on ARM926EJ-S.
+
+This release has been validated against the SoftFloat-2b library by
+John R. Hauser using the TestFloat-2a test suite. Details of this
+library and test suite can be found at:
+
+ http://www.cs.berkeley.edu/~jhauser/arithmetic/SoftFloat.html
+
+The operations which have been tested with this package are:
+
+ - fdiv
+ - fsub
+ - fadd
+ - fmul
+ - fcmp
+ - fcmpe
+ - fcvtd
+ - fcvts
+ - fsito
+ - ftosi
+ - fsqrt
+
+All the above pass softfloat tests with the following exceptions:
+
+- fadd/fsub shows some differences in the handling of +0 / -0 results
+ when input operands differ in signs.
+- the handling of underflow exceptions is slightly different. If a
+ result underflows before rounding, but becomes a normalised number
+ after rounding, we do not signal an underflow exception.
+
+Other operations which have been tested by basic assembly-only tests
+are:
+
+ - fcpy
+ - fabs
+ - fneg
+ - ftoui
+ - ftosiz
+ - ftouiz
+
+The combination operations have not been tested:
+
+ - fmac
+ - fnmac
+ - fmsc
+ - fnmsc
+ - fnmul
--- /dev/null
+Anticipatory IO scheduler
+-------------------------
+Nick Piggin <piggin@cyberone.com.au> 13 Sep 2003
+
+Attention! Database servers, especially those using "TCQ" disks, should
+investigate performance with the 'deadline' IO scheduler. Any system with high
+disk performance requirements should do so, in fact.
+
+If you see unusual performance characteristics of your disk systems, or you
+see big performance regressions versus the deadline scheduler, please email
+me. Database users, don't bother unless you're willing to test a lot of patches
+from me ;) it's a known issue.
+
+Also, users with hardware RAID controllers, doing striping, may find
+highly variable performance results with using the as-iosched. The
+as-iosched anticipatory implementation is based on the notion that a disk
+device has only one physical seeking head. A striped RAID controller
+actually has a head for each physical device in the logical RAID device.
+
+However, setting antic_expire to zero (see tunable parameters below) produces
+very similar behavior to the deadline IO scheduler.
+
+
+Selecting IO schedulers
+-----------------------
+To choose IO schedulers at boot time, use the argument 'elevator=deadline'.
+'noop' and 'as' (the default) are also available. At present, IO schedulers
+can only be assigned globally at boot time.
+
+
+Anticipatory IO scheduler Policies
+----------------------------------
+The as-iosched scheduler implements several layers of policies
+to determine when an IO request is dispatched to the disk controller.
+Here are the policies outlined, in order of application.
+
+1. One-way elevator algorithm.
+
+The elevator algorithm is similar to that used in deadline scheduler, with
+the addition that it allows limited backward movement of the elevator
+(i.e. seeks backwards). A seek backwards can occur when choosing between
+two IO requests where one is behind the elevator's current position, and
+the other is in front of the elevator's position. If the seek distance to
+the request in back of the elevator is less than half the seek distance to
+the request in front of the elevator, then the request in back can be chosen.
+Backward seeks are also limited to a maximum of MAXBACK (1024*1024) sectors.
+This favors forward movement of the elevator, while allowing opportunistic
+"short" backward seeks.
+
+2. FIFO expiration times for reads and for writes.
+
+This is again very similar to the deadline IO scheduler. The expiration
+times for requests on these lists is tunable using the parameters read_expire
+and write_expire discussed below. When a read or a write expires in this way,
+the IO scheduler will interrupt its current elevator sweep or read anticipation
+to service the expired request.
+
+3. Read and write request batching
+
+A batch is a collection of read requests or a collection of write
+requests. The as scheduler alternates dispatching read and write batches
+to the driver. In the case of a read batch, the scheduler submits read
+requests to the driver as long as there are read requests to submit, and
+the read batch time limit has not been exceeded (read_batch_expire).
+The read batch time limit begins counting down only when there are
+competing write requests pending.
+
+In the case of a write batch, the scheduler submits write requests to
+the driver as long as there are write requests available, and the
+write batch time limit has not been exceeded (write_batch_expire).
+However, the length of write batches will be gradually shortened
+when read batches frequently exceed their time limit.
+
+When changing between batch types, the scheduler waits for all requests
+from the previous batch to complete before scheduling requests for the
+next batch.
+
+The read and write fifo expiration times described in policy 2 above
+are checked only when scheduling IO of a batch for the corresponding
+(read/write) type. So for example, the read FIFO timeout values are
+tested only during read batches. Likewise, the write FIFO timeout
+values are tested only during write batches. For this reason,
+it is generally not recommended for the read batch time
+to be longer than the write expiration time, nor for the write batch
+time to exceed the read expiration time (see tunable parameters below).
+
+When the IO scheduler changes from a read to a write batch,
+it begins the elevator from the request that is on the head of the
+write expiration FIFO. Likewise, when changing from a write batch to
+a read batch, the scheduler begins the elevator from the first entry
+on the read expiration FIFO.
+
+4. Read anticipation.
+
+Read anticipation occurs only when scheduling a read batch.
+This implementation of read anticipation allows only one read request
+to be dispatched to the disk controller at a time. In
+contrast, many write requests may be dispatched to the disk controller
+at a time during a write batch. It is this characteristic that can make
+the anticipatory scheduler perform anomalously with controllers supporting
+TCQ, or with hardware striped RAID devices. Setting the antic_expire
+queue parameter (see below) to zero disables this behavior, and the anticipatory
+scheduler behaves essentially like the deadline scheduler.
+
+When read anticipation is enabled (antic_expire is not zero), reads
+are dispatched to the disk controller one at a time.
+At the end of each read request, the IO scheduler examines its next
+candidate read request from its sorted read list. If that next request
+is from the same process as the request that just completed,
+or if the next request in the queue is "very close" to the
+just completed request, it is dispatched immediately. Otherwise,
+statistics (average think time, average seek distance) on the process
+that submitted the just completed request are examined. If it seems
+likely that that process will submit another request soon, and that
+request is likely to be near the just completed request, then the IO
+scheduler will stop dispatching more read requests for up to (antic_expire)
+milliseconds, hoping that process will submit a new request near the one
+that just completed. If such a request is made, then it is dispatched
+immediately. If the antic_expire wait time expires, then the IO scheduler
+will dispatch the next read request from the sorted read queue.
+
+To decide whether an anticipatory wait is worthwhile, the scheduler
+maintains statistics for each process that can be used to compute
+mean "think time" (the time between read requests), and mean seek
+distance for that process. One observation is that these statistics
+are associated with each process, but those statistics are not associated
+with a specific IO device. So for example, if a process is doing IO
+on several file systems on separate devices, the statistics will be
+a combination of IO behavior from all those devices.
+
+
+Tuning the anticipatory IO scheduler
+------------------------------------
+When using 'as', the anticipatory IO scheduler, there are 5 parameters under
+/sys/block/*/iosched/. All are in units of milliseconds.
+
+The parameters are:
+* read_expire
+  Controls how long until a read request becomes "expired". It also controls the
+  interval between which expired requests are served, so if set to 50, a request
+  might take anywhere up to 100 ms to be serviced _if_ it is the next on the
+ expired list. Obviously request expiration strategies won't make the disk
+ go faster. The result basically equates to the timeslice a single reader
+ gets in the presence of other IO. 100*((seek time / read_expire) + 1) is
+ very roughly the % streaming read efficiency your disk should get with
+ multiple readers.
+
+* read_batch_expire
+ Controls how much time a batch of reads is given before pending writes are
+ served. A higher value is more efficient. This might be set below read_expire
+ if writes are to be given higher priority than reads, but reads are to be
+ as efficient as possible when there are no writes. Generally though, it
+ should be some multiple of read_expire.
+
+* write_expire, and
+* write_batch_expire are equivalent to the above, for writes.
+
+* antic_expire
+ Controls the maximum amount of time we can anticipate a good read (one
+ with a short seek distance from the most recently completed request) before
+ giving up. Many other factors may cause anticipation to be stopped early,
+ or some processes will not be "anticipated" at all. Should be a bit higher
+ for big seek time devices though not a linear correspondence - most
+ processes have only a few ms thinktime.
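+
+As an example of tuning these parameters, the following minimal C sketch
+sets antic_expire to zero, which disables anticipation and makes 'as'
+behave much like 'deadline'. The device name (hda) is hypothetical and
+sysfs is assumed to be mounted on /sys:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		FILE *f = fopen("/sys/block/hda/iosched/antic_expire", "w");
+
+		if (!f) {
+			perror("antic_expire");
+			return 1;
+		}
+		fprintf(f, "0\n");	/* 0 disables read anticipation */
+		fclose(f);
+		return 0;
+	}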
+
--- /dev/null
+Deadline IO scheduler tunables
+==============================
+
+This little file attempts to document how the deadline io scheduler works.
+In particular, it will clarify the meaning of the exposed tunables that may be
+of interest to power users.
+
+Each io queue has a set of io scheduler tunables associated with it. These
+tunables control how the io scheduler works. You can find these entries
+in:
+
+/sys/block/<device>/iosched
+
+assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
+you can do so by typing:
+
+# mount none /sys -t sysfs
+
+
+********************************************************************************
+
+
+read_expire (in ms)
+-----------
+
+The goal of the deadline io scheduler is to attempt to guarantee a start
+service time for a request. As we focus mainly on read latencies, this is
+tunable. When a read request first enters the io scheduler, it is assigned
+a deadline that is the current time + the read_expire value in units of
+milliseconds.
+
+
+write_expire (in ms)
+-----------
+
+Similar to read_expire mentioned above, but for writes.
+
+
+fifo_batch
+----------
+
+When a read request expires its deadline, we must move some requests from
+the sorted io scheduler list to the block device dispatch queue. fifo_batch
+controls how many requests we move, based on the cost of each request. A
+request is either qualified as a seek or a stream. The io scheduler knows
+the last request that was serviced by the drive (or will be serviced right
+before this one). See seek_cost and stream_unit.
+
+
+writes_starved (number of dispatches)
+-------------
+
+When we have to move requests from the io scheduler queue to the block
+device dispatch queue, we always give a preference to reads. However, we
+don't want to starve writes indefinitely either. So writes_starved controls
+how many times we give preference to reads over writes. When that has been
+done writes_starved number of times, we dispatch some writes based on the
+same criteria as reads.
+
+
+front_merges (bool)
+------------
+
+Sometimes it happens that a request enters the io scheduler that is contiguous
+with a request that is already on the queue. Either it fits in the back of that
+request, or it fits at the front. That is called either a back merge candidate
+or a front merge candidate. Due to the way files are typically laid out,
+back merges are much more common than front merges. For some work loads, you
+may even know that it is a waste of time to spend any time attempting to
+front merge requests. Setting front_merges to 0 disables this functionality.
+Front merges may still occur due to the cached last_merge hint, but since
+that comes at basically 0 cost we leave that on. We simply disable the
+rbtree front sector lookup when the io scheduler merge function is called.
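+
+For example, the following minimal C sketch turns front merging off; the
+device name (hda) is hypothetical and sysfs is assumed mounted on /sys:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		FILE *f = fopen("/sys/block/hda/iosched/front_merges", "w");
+
+		if (!f) {
+			perror("front_merges");
+			return 1;
+		}
+		fprintf(f, "0\n");	/* 0 disables front merge lookups */
+		fclose(f);
+		return 0;
+	}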
+
+
+Nov 11 2002, Jens Axboe <axboe@suse.de>
+
+
--- /dev/null
+
+PowerNow! and Cool'n'Quiet are AMD names for frequency
+management capabilities in AMD processors. As the hardware
+implementation changes in new generations of the processors,
+there is a different cpu-freq driver for each generation.
+
+Note that the drivers will not load on the "wrong" hardware,
+so it is safe to try each driver in turn when in doubt as to
+which is the correct driver.
+
+Note that the functionality to change frequency (and voltage)
+is not available in all processors. The drivers will refuse
+to load on processors without this capability. The capability
+is detected with the cpuid instruction.
+
+The drivers use BIOS supplied tables to obtain frequency and
+voltage information appropriate for a particular platform.
+Frequency transitions will be unavailable if the BIOS does
+not supply these tables.
+
+6th Generation: powernow-k6
+
+7th Generation: powernow-k7: Athlon, Duron, Geode.
+
+8th Generation: powernow-k8: Athlon, Athlon 64, Opteron, Sempron.
+Documentation on this functionality in 8th generation processors
+is available in the "BIOS and Kernel Developer's Guide", publication
+26094, in chapter 9, available for download from www.amd.com.
+
+BIOS supplied data, for powernow-k7 and for powernow-k8, may be
+from either the PSB table or from ACPI objects. The ACPI support
+is only available if the kernel config sets CONFIG_ACPI_PROCESSOR.
+The powernow-k8 driver will attempt to use ACPI if so configured,
+and fall back to the PSB table if that fails.
+The powernow-k7 driver will try to use the PSB support first, and
+fall back to ACPI if the PSB support fails. A module parameter,
+acpi_force, is provided to force ACPI support to be used instead
+of PSB support.
--- /dev/null
+dm-io
+=====
+
+Dm-io provides synchronous and asynchronous I/O services. There are three
+types of I/O services available, and each type has a sync and an async
+version.
+
+The user must set up an io_region structure to describe the desired location
+of the I/O. Each io_region indicates a block-device along with the starting
+sector and size of the region.
+
+ struct io_region {
+ struct block_device *bdev;
+ sector_t sector;
+ sector_t count;
+ };
+
+Dm-io can read from one io_region or write to one or more io_regions. Writes
+to multiple regions are specified by an array of io_region structures.
+
+The first I/O service type takes a list of memory pages as the data buffer for
+the I/O, along with an offset into the first page.
+
+ struct page_list {
+ struct page_list *next;
+ struct page *page;
+ };
+
+ int dm_io_sync(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ unsigned long *error_bits);
+ int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ io_notify_fn fn, void *context);
+
+The second I/O service type takes an array of bio vectors as the data buffer
+for the I/O. This service can be handy if the caller has a pre-assembled bio,
+but wants to direct different portions of the bio to different devices.
+
+ int dm_io_sync_bvec(unsigned int num_regions, struct io_region *where,
+ int rw, struct bio_vec *bvec,
+ unsigned long *error_bits);
+ int dm_io_async_bvec(unsigned int num_regions, struct io_region *where,
+ int rw, struct bio_vec *bvec,
+ io_notify_fn fn, void *context);
+
+The third I/O service type takes a pointer to a vmalloc'd memory buffer as the
+data buffer for the I/O. This service can be handy if the caller needs to do
+I/O to a large region but doesn't want to allocate a large number of individual
+memory pages.
+
+ int dm_io_sync_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, unsigned long *error_bits);
+ int dm_io_async_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, io_notify_fn fn, void *context);
+
+Callers of the asynchronous I/O services must include the name of a completion
+callback routine and a pointer to some context data for the I/O.
+
+ typedef void (*io_notify_fn)(unsigned long error, void *context);
+
+The "error" parameter in this callback, as well as the "*error" parameter in
+all of the synchronous versions, is a bitset (instead of a simple error value).
+In the case of an write-I/O to multiple regions, this bitset allows dm-io to
+indicate success or failure on each individual region.
+
+Before using any of the dm-io services, the user should call dm_io_get()
+and specify the number of pages they expect to perform I/O on concurrently.
+Dm-io will attempt to resize its mempool to make sure enough pages are
+always available in order to avoid unnecessary waiting while performing I/O.
+
+When the user is finished using the dm-io services, they should call
+dm_io_put() and specify the same number of pages that were given on the
+dm_io_get() call.
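+
+Tying the pieces together, here is a minimal sketch of a synchronous read
+into a vmalloc'd buffer using the third service type. The device, sector
+range and page count are hypothetical, and the exact dm_io_get()/dm_io_put()
+prototypes and header location are assumed from the description above:
+
+	#include <linux/vmalloc.h>
+	#include <linux/fs.h>
+	#include <linux/errno.h>
+	#include "dm-io.h"		/* in-tree dm-io header (assumed path) */
+
+	static int example_read(struct block_device *bdev)
+	{
+		void *data = vmalloc(8 * 512);	/* buffer for 8 sectors */
+		unsigned long error_bits;
+		struct io_region where = {
+			.bdev	= bdev,
+			.sector	= 0,
+			.count	= 8,
+		};
+		int r;
+
+		if (!data)
+			return -ENOMEM;
+		dm_io_get(2);	/* reserve pages (return ignored in this sketch) */
+		r = dm_io_sync_vm(1, &where, READ, data, &error_bits);
+		dm_io_put(2);
+		/* on failure, error_bits holds one bit per region */
+		vfree(data);
+		return r;
+	}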
+
--- /dev/null
+kcopyd
+======
+
+Kcopyd provides the ability to copy a range of sectors from one block-device
+to one or more other block-devices, with an asynchronous completion
+notification. It is used by dm-snapshot and dm-mirror.
+
+Users of kcopyd must first create a client and indicate how many memory pages
+to set aside for their copy jobs. This is done with a call to
+kcopyd_client_create().
+
+ int kcopyd_client_create(unsigned int num_pages,
+ struct kcopyd_client **result);
+
+To start a copy job, the user must set up io_region structures to describe
+the source and destinations of the copy. Each io_region indicates a
+block-device along with the starting sector and size of the region. The source
+of the copy is given as one io_region structure, and the destinations of the
+copy are given as an array of io_region structures.
+
+ struct io_region {
+ struct block_device *bdev;
+ sector_t sector;
+ sector_t count;
+ };
+
+To start the copy, the user calls kcopyd_copy(), passing in the client
+pointer, pointers to the source and destination io_regions, the name of a
+completion callback routine, and a pointer to some context data for the copy.
+
+ int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from,
+ unsigned int num_dests, struct io_region *dests,
+ unsigned int flags, kcopyd_notify_fn fn, void *context);
+
+ typedef void (*kcopyd_notify_fn)(int read_err, unsigned int write_err,
+ void *context);
+
+When the copy completes, kcopyd will call the user's completion routine,
+passing back the user's context pointer. It will also indicate if a read or
+write error occurred during the copy.
+
+When a user is done with all their copy jobs, they should call
+kcopyd_client_destroy() to delete the kcopyd client, which will release the
+associated memory pages.
+
+ void kcopyd_client_destroy(struct kcopyd_client *kc);
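+
+For illustration only, a single copy job might be set up as in the sketch
+below; the devices, sector ranges, page count and header location are all
+hypothetical, and kcopyd_client_create() is assumed to return 0 on success:
+
+	#include "kcopyd.h"	/* in-tree kcopyd header (assumed path) */
+
+	static void example_copy_done(int read_err, unsigned int write_err,
+				      void *context)
+	{
+		/* kcopyd calls this when the copy job completes */
+	}
+
+	static int example_copy(struct block_device *src,
+				struct block_device *dst)
+	{
+		struct kcopyd_client *kc;
+		struct io_region from = { .bdev = src, .sector = 0, .count = 128 };
+		struct io_region to   = { .bdev = dst, .sector = 0, .count = 128 };
+		int r = kcopyd_client_create(32, &kc);	/* 32 pages set aside */
+
+		if (r)
+			return r;
+		/* copy 128 sectors to a single destination, no flags */
+		r = kcopyd_copy(kc, &from, 1, &to, 0, example_copy_done, NULL);
+		/* once all jobs have completed: kcopyd_client_destroy(kc) */
+		return r;
+	}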
+
--- /dev/null
+dm-linear
+=========
+
+Device-Mapper's "linear" target maps a linear range of the Device-Mapper
+device onto a linear range of another device. This is the basic building
+block of logical volume managers.
+
+Parameters: <dev path> <offset>
+ <dev path>: Full pathname to the underlying block-device, or a
+ "major:minor" device-number.
+ <offset>: Starting sector within the device.
+
+
+Example scripts
+===============
+[[
+#!/bin/sh
+# Create an identity mapping for a device
+echo "0 `blockdev --getsize $1` linear $1 0" | dmsetup create identity
+]]
+
+
+[[
+#!/bin/sh
+# Join 2 devices together
+size1=`blockdev --getsize $1`
+size2=`blockdev --getsize $2`
+echo "0 $size1 linear $1 0
+$size1 $size2 linear $2 0" | dmsetup create joined
+]]
+
+
+[[
+#!/usr/bin/perl -w
+# Split a device into 4M chunks and then join them together in reverse order.
+
+my $name = "reverse";
+my $extent_size = 4 * 1024 * 2;
+my $dev = $ARGV[0];
+my $table = "";
+my $count = 0;
+
+if (!defined($dev)) {
+ die("Please specify a device.\n");
+}
+
+my $dev_size = `blockdev --getsize $dev`;
+my $extents = int($dev_size / $extent_size);
+
+while ($extents > 0) {
+ my $this_start = $count * $extent_size;
+ $extents--;
+ $count++;
+ my $this_offset = $extents * $extent_size;
+
+ $table .= "$this_start $extent_size linear $dev $this_offset\n";
+}
+
+`echo \"$table\" | dmsetup create $name`;
+]]
--- /dev/null
+dm-stripe
+=========
+
+Device-Mapper's "striped" target is used to create a striped (i.e. RAID-0)
+device across one or more underlying devices. Data is written in "chunks",
+with consecutive chunks rotating among the underlying devices. This can
+potentially provide improved I/O throughput by utilizing several physical
+devices in parallel.
+
+Parameters: <num devs> <chunk size> [<dev path> <offset>]+
+ <num devs>: Number of underlying devices.
+ <chunk size>: Size of each chunk of data. Must be a power-of-2 and at
+ least as large as the system's PAGE_SIZE.
+ <dev path>: Full pathname to the underlying block-device, or a
+ "major:minor" device-number.
+ <offset>: Starting sector within the device.
+
+One or more underlying devices can be specified. The striped device size must
+be a multiple of the chunk size and a multiple of the number of underlying
+devices.
+
+
+Example scripts
+===============
+
+[[
+#!/usr/bin/perl -w
+# Create a striped device across any number of underlying devices. The device
+# will be called "stripe_dev" and have a chunk-size of 128k.
+
+my $chunk_size = 128 * 2;
+my $dev_name = "stripe_dev";
+my $num_devs = @ARGV;
+my @devs = @ARGV;
+my ($min_dev_size, $stripe_dev_size, $i);
+
+if (!$num_devs) {
+ die("Specify at least one device\n");
+}
+
+$min_dev_size = `blockdev --getsize $devs[0]`;
+for ($i = 1; $i < $num_devs; $i++) {
+ my $this_size = `blockdev --getsize $devs[$i]`;
+ $min_dev_size = ($min_dev_size < $this_size) ?
+ $min_dev_size : $this_size;
+}
+
+$stripe_dev_size = $min_dev_size * $num_devs;
+$stripe_dev_size -= $stripe_dev_size % ($chunk_size * $num_devs);
+
+$table = "0 $stripe_dev_size striped $num_devs $chunk_size";
+for ($i = 0; $i < $num_devs; $i++) {
+ $table .= " $devs[$i] 0";
+}
+
+`echo $table | dmsetup create $dev_name`;
+]]
+
--- /dev/null
+dm-zero
+=======
+
+Device-Mapper's "zero" target provides a block-device that always returns
+zero'd data on reads and silently drops writes. This is similar behavior to
+/dev/zero, but as a block-device instead of a character-device.
+
+Dm-zero has no target-specific parameters.
+
+One very interesting use of dm-zero is for creating "sparse" devices in
+conjunction with dm-snapshot. A sparse device reports a device-size larger
+than the amount of actual storage space available for that device. A user can
+write data anywhere within the sparse device and read it back like a normal
+device. Reads to previously unwritten areas will return a zero'd buffer. When
+enough data has been written to fill up the actual storage space, the sparse
+device is deactivated. This can be very useful for testing device and
+filesystem limitations.
+
+To create a sparse device, start by creating a dm-zero device that's the
+desired size of the sparse device. For this example, we'll assume a 10TB
+sparse device.
+
+TEN_TERABYTES=`expr 10 \* 1024 \* 1024 \* 1024 \* 2` # 10 TB in sectors
+echo "0 $TEN_TERABYTES zero" | dmsetup create zero1
+
+Then create a snapshot of the zero device, using any available block-device as
+the COW device. The size of the COW device will determine the amount of real
+space available to the sparse device. For this example, we'll assume /dev/sdb1
+is an available 10GB partition.
+
+echo "0 $TEN_TERABYTES snapshot /dev/mapper/zero1 /dev/sdb1 p 128" | \
+ dmsetup create sparse1
+
+This will create a 10TB sparse device called /dev/mapper/sparse1 that has
+10GB of actual storage space available. If more than 10GB of data is written
+to this device, it will start returning I/O errors.
+
--- /dev/null
+
+What is sisfb?
+==============
+
+sisfb is a framebuffer device driver for SiS (Silicon Integrated Systems)
+graphics chips. Supported are:
+
+- SiS 300 series: SiS 300/305, 540, 630(S), 730(S)
+- SiS 315 series: SiS 315/H/PRO, 55x, (M)65x, 740, (M)661(F/M)X, (M)741(GX)
+- SiS 330 series: SiS 330 ("Xabre"), (M)760
+
+
+Why do I need a framebuffer driver?
+===================================
+
+sisfb is e.g. useful if you want a high-resolution text console. Besides that,
+sisfb is required to run DirectFB (which comes with an additional, dedicated
+driver for the 315 series).
+
+On the 300 series, sisfb on kernels older than 2.6.3 furthermore plays an
+important role in connection with DRM/DRI: Sisfb manages the memory heap
+used by DRM/DRI for 3D texture and other data. This memory management is
+required for using DRI/DRM.
+
+Kernels >= around 2.6.3 do not need sisfb any longer for DRI/DRM memory
+management. The SiS DRM driver has been updated and features a memory manager
+of its own (which will be used if sisfb is not compiled). So unless you want
+a graphical console, you don't need sisfb on kernels >=2.6.3.
+
+Sidenote: Since this seems to be a commonly made mistake: sisfb and vesafb
+cannot be active at the same time! Select only one of them in your kernel
+configuration.
+
+
+How are parameters passed to sisfb?
+===================================
+
+Well, it depends: If compiled statically into the kernel, use lilo's append
+statement to add the parameters to the kernel command line. Please see lilo's
+(or GRUB's) documentation for more information. If sisfb is a kernel module,
+parameters are given with the modprobe (or insmod) command.
+
+Example for sisfb as part of the static kernel: Add the following line to your
+lilo.conf:
+
+ append="video=sisfb:mode:1024x768x16,mem:12288,rate:75"
+
+Example for sisfb as a module: Start sisfb by typing
+
+ modprobe sisfb mode=1024x768x16 rate=75 mem=12288
+
+A common mistake is that folks use a wrong parameter format when using the
+driver compiled into the kernel. Please note: If compiled into the kernel,
+the parameter format is video=sisfb:mode:none or video=sisfb:mode:1024x768x16
+(or whatever mode you want to use, alternatively using any other format
+described above or the vesa keyword instead of mode). If compiled as a module,
+the parameter format reads mode=none or mode=1024x768x16 (or whatever mode you
+want to use). Using a "=" for a ":" (and vice versa) is a huge difference!
+Additionally: If you give more than one argument to the in-kernel sisfb, the
+arguments are separated with ",". For example:
+
+ video=sisfb:mode:1024x768x16,rate:75,mem:12288
+
+
+How do I use it?
+================
+
+Preface statement: This file only covers very little of the driver's
+capabilities and features. Please refer to the author's and maintainer's
+website at http://www.winischhofer.net/linuxsisvga.shtml for more
+information. Additionally, "modinfo sisfb" gives an overview over all
+supported options including some explanation.
+
+The desired display mode can be specified using the keyword "mode" with
+a parameter in one of the following formats:
+ - XxYxDepth or
+ - XxY-Depth or
+ - XxY-Depth@Rate or
+ - XxY
+ - or simply use the VESA mode number in hexadecimal or decimal.
+
+For example: 1024x768x16, 1024x768-16@75, 1280x1024-16. If no depth is
+specified, it defaults to 8. If no rate is given, it defaults to 60Hz. Depth 32
+means 24bit color depth (but 32 bit framebuffer depth, which is not relevant
+to the user).
+
+Additionally, sisfb understands the keyword "vesa" followed by a VESA mode
+number in decimal or hexadecimal. For example: vesa=791 or vesa=0x117. Please
+use either "mode" or "vesa" but not both.
+
+Linux 2.4 only: If no mode is given, sisfb defaults to "no mode" (mode=none) if
+compiled as a module; if sisfb is statically compiled into the kernel, it
+defaults to 800x600x8 unless CRT2 type is LCD, in which case the LCD's native
+resolution is used. If you want to switch to a different mode, use the fbset
+shell command.
+
+Linux 2.6 only: If no mode is given, sisfb defaults to 800x600x8 unless CRT2
+type is LCD, in which case it defaults to the LCD's native resolution. If
+you want to switch to another mode, use the fbset shell command.
+
+You should compile in both vgacon (to boot if you remove your SiS card from
+your system) and sisfb (for graphics mode). Under Linux 2.6, also "Framebuffer
+console support" (fbcon) is needed for a graphical console.
+
+You should *not* compile-in vesafb. And please do not use the "vga=" keyword
+in lilo's or grub's configuration file; mode selection is done using the
+"mode" or "vesa" keywords as a parameter. See above and below.
+
+
+X11
+===
+
+If using XFree86 or X.org, it is recommended that you don't use the "fbdev"
+driver but the dedicated "sis" X driver. The "sis" X driver and sisfb are
+developed by the same person (Thomas Winischhofer) and cooperate well with
+each other.
+
+
+SVGALib
+=======
+
+SVGALib, if directly accessing the hardware, never restores the screen
+correctly, especially on laptops or if the output devices are LCD or TV.
+Therefore, use the chipset "FBDEV" in SVGALib configuration. This will make
+SVGALib use the framebuffer device for mode switches and restoration.
+
+
+Configuration
+=============
+
+(Some) accepted options:
+
+off - Disable sisfb. This option is only understood if sisfb is
+ in-kernel, not a module.
+mem:X - size of memory for the console, rest will be used for DRI/DRM. X
+ is in kilobytes. On 300 series, the default is 4096, 8192 or
+ 16384 (each in kilobyte) depending on how much video ram the card
+ has. On 315/330 series, the default is the maximum available ram
+ (since DRI/DRM is not supported for these chipsets).
+noaccel - do not use 2D acceleration engine. (Default: use acceleration)
+noypan - disable y-panning and scroll by redrawing the entire screen.
+ This is much slower than y-panning. (Default: use y-panning)
+vesa:X - selects startup videomode. X is number from 0 to 0x1FF and
+ represents the VESA mode number (can be given in decimal or
+ hexadecimal form, the latter prefixed with "0x").
+mode:X - selects startup videomode. Please see above for the format of
+ "X".
+
+Boolean options such as "noaccel" or "noypan" are to be given without a
+parameter if sisfb is in-kernel (for example "video=sisfb:noypan"). If
+sisfb is a module, these are to be set to 1 (for example "modprobe sisfb
+noypan=1").
+
+--
+Thomas Winischhofer <thomas@winischhofer.net>
+May 27, 2004
+
+
--- /dev/null
+Support is available for filesystems that wish to do automounting support (such
+as kAFS which can be found in fs/afs/). This facility includes allowing
+in-kernel mounts to be performed and mountpoint degradation to be
+requested. The latter can also be requested by userspace.
+
+
+======================
+IN-KERNEL AUTOMOUNTING
+======================
+
+A filesystem can now mount another filesystem on one of its directories by the
+following procedure:
+
+ (1) Give the directory a follow_link() operation.
+
+ When the directory is accessed, the follow_link op will be called, and
+ it will be provided with the location of the mountpoint in the nameidata
+ structure (vfsmount and dentry).
+
+ (2) Have the follow_link() op do the following steps:
+
+ (a) Call do_kern_mount() to call the appropriate filesystem to set up a
+ superblock and gain a vfsmount structure representing it.
+
+ (b) Copy the nameidata provided as an argument and substitute the dentry
+	argument into the copy.
+
+ (c) Call do_add_mount() to install the new vfsmount into the namespace's
+ mountpoint tree, thus making it accessible to userspace. Use the
+ nameidata set up in (b) as the destination.
+
+ If the mountpoint will be automatically expired, then do_add_mount()
+ should also be given the location of an expiration list (see further
+ down).
+
+ (d) Release the path in the nameidata argument and substitute in the new
+ vfsmount and its root dentry. The ref counts on these will need
+ incrementing.
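+
+A rough sketch of such a follow_link() op is given below. The filesystem
+type, device name and expiry list are hypothetical, and the helper
+signatures are assumptions based on the description above, so check them
+against the current kernel source:
+
+	#include <linux/fs.h>
+	#include <linux/namei.h>
+	#include <linux/mount.h>
+	#include <linux/dcache.h>
+	#include <linux/err.h>
+	#include <linux/list.h>
+
+	static LIST_HEAD(example_expiry_list);
+
+	static int example_follow_link(struct dentry *dentry,
+				       struct nameidata *nd)
+	{
+		struct vfsmount *mnt;
+		struct nameidata newnd;
+		int err;
+
+		/* (a) set up a superblock and get a vfsmount for it */
+		mnt = do_kern_mount("examplefs", 0, "example-device", NULL);
+		if (IS_ERR(mnt))
+			return PTR_ERR(mnt);
+
+		/* (b) copy the nameidata, substituting our dentry */
+		newnd = *nd;
+		newnd.dentry = dentry;
+
+		/* (c) graft the vfsmount into the namespace; the list makes
+		 * the new mount automatically expirable */
+		err = do_add_mount(mnt, &newnd, 0, &example_expiry_list);
+		if (err < 0) {
+			mntput(mnt);
+			return err;
+		}
+
+		/* (d) swap the new mount into the caller's nameidata,
+		 * taking our own references on it */
+		mntget(mnt);
+		path_release(nd);
+		nd->mnt = mnt;
+		nd->dentry = dget(mnt->mnt_root);
+		return 0;
+	}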
+
+Then from userspace, you can just do something like:
+
+ [root@andromeda root]# mount -t afs \#root.afs. /afs
+ [root@andromeda root]# ls /afs
+ asd cambridge cambridge.redhat.com grand.central.org
+ [root@andromeda root]# ls /afs/cambridge
+ afsdoc
+ [root@andromeda root]# ls /afs/cambridge/afsdoc/
+ ChangeLog html LICENSE pdf RELNOTES-1.2.2
+
+And then if you look in the mountpoint catalogue, you'll see something like:
+
+ [root@andromeda root]# cat /proc/mounts
+ ...
+ #root.afs. /afs afs rw 0 0
+ #root.cell. /afs/cambridge.redhat.com afs rw 0 0
+ #afsdoc. /afs/cambridge.redhat.com/afsdoc afs rw 0 0
+
+
+===========================
+AUTOMATIC MOUNTPOINT EXPIRY
+===========================
+
+Automatic expiration of mountpoints is easy, provided you've mounted the
+mountpoint to be expired in the automounting procedure outlined above.
+
+To do expiration, you need to follow these steps:
+
+ (3) Create at least one list off which the vfsmounts to be expired can be
+ hung. Access to this list will be governed by the vfsmount_lock.
+
+ (4) In step (2c) above, the call to do_add_mount() should be provided with a
+ pointer to this list. It will hang the vfsmount off of it if it succeeds.
+
+ (5) When you want mountpoints to be expired, call mark_mounts_for_expiry()
+ with a pointer to this list. This will process the list, marking every
+ vfsmount thereon for potential expiry on the next call.
+
+ If a vfsmount was already flagged for expiry, and if its usage count is 1
+ (it's only referenced by its parent vfsmount), then it will be deleted
+ from the namespace and thrown away (effectively unmounted).
+
+ It may prove simplest to simply call this at regular intervals, using
+ some sort of timed event to drive it.
+
+The expiration flag is cleared by calls to mntput. This means that expiration
+will only happen on the second expiration request after the last time the
+mountpoint was accessed.
+
+If a mountpoint is moved, it gets removed from the expiration list. If a bind
+mount is made on an expirable mount, the new vfsmount will not be on the
+expiration list and will not expire.
+
+If a namespace is copied, all mountpoints contained therein will be copied,
+and the copies of those that are on an expiration list will be added to the
+same expiration list.
+
+
+=======================
+USERSPACE DRIVEN EXPIRY
+=======================
+
+As an alternative, it is possible for userspace to request expiry of any
+mountpoint (though some will be rejected - the current process's idea of the
+rootfs for example). It does this by passing the MNT_EXPIRE flag to
+umount(). This flag is considered incompatible with MNT_FORCE and MNT_DETACH.
+
+If the mountpoint in question is referenced by something other than
+umount() or its parent mountpoint, an EBUSY error will be returned and the
+mountpoint will not be marked for expiration or unmounted.
+
+If the mountpoint was not already marked for expiry at that time, an EAGAIN
+error will be given and it won't be unmounted.
+
+Otherwise if it was already marked and it wasn't referenced, unmounting will
+take place as usual.
+
+Again, the expiration flag is cleared every time anything other than umount()
+looks at a mountpoint.
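+
+For illustration, here is a minimal userspace sketch of the two-step expiry
+described above. The path is hypothetical, and depending on your C library
+headers MNT_EXPIRE may need to be defined by hand from the kernel headers:
+
+	#include <sys/mount.h>
+	#include <stdio.h>
+	#include <errno.h>
+
+	int main(void)
+	{
+		/* the first call only marks the mount for expiry (EAGAIN) */
+		if (umount2("/mnt/example", MNT_EXPIRE) == -1 &&
+		    errno == EAGAIN)
+			printf("marked for expiry\n");
+
+		/* a second call, with no intervening use of the mountpoint,
+		 * actually unmounts it */
+		if (umount2("/mnt/example", MNT_EXPIRE) == 0)
+			printf("unmounted\n");
+		return 0;
+	}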
--- /dev/null
+ High Precision Event Timer Driver for Linux
+
+The High Precision Event Timer (HPET) hardware is the future replacement for the 8254 and Real
+Time Clock (RTC) periodic timer functionality. Each HPET can have up to 32 timers. It is possible
+to configure the first two timers as legacy replacements for the 8254 and RTC periodic timers. A
+specification done by Intel and Microsoft can be found at http://www.intel.com/labs/platcomp/hpet/hpetspec.htm.
+
+The driver supports detection of HPET driver allocation and initialization of the HPET before the
+driver module_init routine is called. This enables platform code which uses timer 0 or 1 as the
+main timer to intercept HPET initialization. An example of this initialization can be found in
+arch/i386/kernel/time_hpet.c.
+
+The driver provides two APIs which are very similar to the API found in the rtc.c driver.
+There is a user space API and a kernel space API. An example user space program is provided
+below.
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <string.h>
+#include <memory.h>
+#include <malloc.h>
+#include <time.h>
+#include <ctype.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <signal.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <linux/hpet.h>
+
+
+extern void hpet_open_close(int, const char **);
+extern void hpet_info(int, const char **);
+extern void hpet_poll(int, const char **);
+extern void hpet_fasync(int, const char **);
+extern void hpet_read(int, const char **);
+
+#include <sys/poll.h>
+#include <sys/ioctl.h>
+
+struct hpet_command {
+ char *command;
+ void (*func)(int argc, const char ** argv);
+} hpet_command[] = {
+ {
+ "open-close",
+ hpet_open_close
+ },
+ {
+ "info",
+ hpet_info
+ },
+ {
+ "poll",
+ hpet_poll
+ },
+ {
+ "fasync",
+ hpet_fasync
+ },
+};
+
+int
+main(int argc, const char ** argv)
+{
+ int i;
+
+ argc--;
+ argv++;
+
+ if (!argc) {
+ fprintf(stderr, "-hpet: requires command\n");
+ return -1;
+ }
+
+
+ for (i = 0; i < (sizeof (hpet_command) / sizeof (hpet_command[0])); i++)
+ if (!strcmp(argv[0], hpet_command[i].command)) {
+ argc--;
+ argv++;
+ fprintf(stderr, "-hpet: executing %s\n",
+ hpet_command[i].command);
+ hpet_command[i].func(argc, argv);
+ return 0;
+ }
+
+ fprintf(stderr, "do_hpet: command %s not implemented\n", argv[0]);
+
+ return -1;
+}
+
+void
+hpet_open_close(int argc, const char **argv)
+{
+ int fd;
+
+ if (argc != 1) {
+ fprintf(stderr, "hpet_open_close: device-name\n");
+ return;
+ }
+
+ fd = open(argv[0], O_RDWR);
+ if (fd < 0)
+ fprintf(stderr, "hpet_open_close: open failed\n");
+ else
+ close(fd);
+
+ return;
+}
+
+void
+hpet_info(int argc, const char **argv)
+{
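+	/* left unimplemented in this sample; see hpet_poll() for an
+	 * example of querying the driver with HPET_INFO */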
+}
+
+void
+hpet_poll(int argc, const char **argv)
+{
+ unsigned long freq;
+ int iterations, i, fd;
+ struct pollfd pfd;
+ struct hpet_info info;
+ struct timeval stv, etv;
+ struct timezone tz;
+ long usec;
+
+ if (argc != 3) {
+ fprintf(stderr, "hpet_poll: device-name freq iterations\n");
+ return;
+ }
+
+ freq = atoi(argv[1]);
+ iterations = atoi(argv[2]);
+
+ fd = open(argv[0], O_RDWR);
+
+ if (fd < 0) {
+ fprintf(stderr, "hpet_poll: open of %s failed\n", argv[0]);
+ return;
+ }
+
+ if (ioctl(fd, HPET_IRQFREQ, freq) < 0) {
+ fprintf(stderr, "hpet_poll: HPET_IRQFREQ failed\n");
+ goto out;
+ }
+
+ if (ioctl(fd, HPET_INFO, &info) < 0) {
+ fprintf(stderr, "hpet_poll: failed to get info\n");
+ goto out;
+ }
+
+ fprintf(stderr, "hpet_poll: info.hi_flags 0x%lx\n", info.hi_flags);
+
+ if (info.hi_flags && (ioctl(fd, HPET_EPI, 0) < 0)) {
+ fprintf(stderr, "hpet_poll: HPET_EPI failed\n");
+ goto out;
+ }
+
+ if (ioctl(fd, HPET_IE_ON, 0) < 0) {
+ fprintf(stderr, "hpet_poll, HPET_IE_ON failed\n");
+ goto out;
+ }
+
+ pfd.fd = fd;
+ pfd.events = POLLIN;
+
+ for (i = 0; i < iterations; i++) {
+ pfd.revents = 0;
+ gettimeofday(&stv, &tz);
+ if (poll(&pfd, 1, -1) < 0)
+ fprintf(stderr, "hpet_poll: poll failed\n");
+ else {
+ long data;
+
+ gettimeofday(&etv, &tz);
+ usec = stv.tv_sec * 1000000 + stv.tv_usec;
+ usec = (etv.tv_sec * 1000000 + etv.tv_usec) - usec;
+
+ fprintf(stderr,
+ "hpet_poll: expired time = 0x%lx\n", usec);
+
+ fprintf(stderr, "hpet_poll: revents = 0x%x\n",
+ pfd.revents);
+
+ if (read(fd, &data, sizeof(data)) != sizeof(data)) {
+ fprintf(stderr, "hpet_poll: read failed\n");
+ }
+ else
+ fprintf(stderr, "hpet_poll: data 0x%lx\n",
+ data);
+ }
+ }
+
+out:
+ close(fd);
+ return;
+}
+
+static int hpet_sigio_count;
+
+static void
+hpet_sigio(int val)
+{
+ fprintf(stderr, "hpet_sigio: called\n");
+ hpet_sigio_count++;
+}
+
+void
+hpet_fasync(int argc, const char **argv)
+{
+ unsigned long freq;
+ int iterations, i, fd, value;
+ sig_t oldsig;
+ struct hpet_info info;
+
+ hpet_sigio_count = 0;
+ fd = -1;
+
+ if ((oldsig = signal(SIGIO, hpet_sigio)) == SIG_ERR) {
+ fprintf(stderr, "hpet_fasync: failed to set signal handler\n");
+ return;
+ }
+
+ if (argc != 3) {
+ fprintf(stderr, "hpet_fasync: device-name freq iterations\n");
+ goto out;
+ }
+
+ fd = open(argv[0], O_RDWR);
+
+	if (fd < 0) {
+		fprintf(stderr, "hpet_fasync: failed to open %s\n", argv[0]);
+		goto out;	/* restore the SIGIO handler before returning */
+	}
+
+
+	if ((fcntl(fd, F_SETOWN, getpid()) == -1) ||
+	    ((value = fcntl(fd, F_GETFL)) == -1) ||
+	    (fcntl(fd, F_SETFL, value | O_ASYNC) == -1)) {
+ fprintf(stderr, "hpet_fasync: fcntl failed\n");
+ goto out;
+ }
+
+ freq = atoi(argv[1]);
+ iterations = atoi(argv[2]);
+
+ if (ioctl(fd, HPET_IRQFREQ, freq) < 0) {
+ fprintf(stderr, "hpet_fasync: HPET_IRQFREQ failed\n");
+ goto out;
+ }
+
+ if (ioctl(fd, HPET_INFO, &info) < 0) {
+ fprintf(stderr, "hpet_fasync: failed to get info\n");
+ goto out;
+ }
+
+ fprintf(stderr, "hpet_fasync: info.hi_flags 0x%lx\n", info.hi_flags);
+
+ if (info.hi_flags && (ioctl(fd, HPET_EPI, 0) < 0)) {
+ fprintf(stderr, "hpet_fasync: HPET_EPI failed\n");
+ goto out;
+ }
+
+ if (ioctl(fd, HPET_IE_ON, 0) < 0) {
+ fprintf(stderr, "hpet_fasync, HPET_IE_ON failed\n");
+ goto out;
+ }
+
+ for (i = 0; i < iterations; i++) {
+ (void) pause();
+ fprintf(stderr, "hpet_fasync: count = %d\n", hpet_sigio_count);
+ }
+
+out:
+ signal(SIGIO, oldsig);
+
+ if (fd >= 0)
+ close(fd);
+
+ return;
+}
+
+The kernel API has three interfaces exported from the driver:
+
+ hpet_register(struct hpet_task *tp, int periodic)
+ hpet_unregister(struct hpet_task *tp)
+ hpet_control(struct hpet_task *tp, unsigned int cmd, unsigned long arg)
+
+The kernel module using this interface fills in the ht_func and ht_data members of the
+hpet_task structure before calling hpet_register. hpet_control simply vectors to the hpet_ioctl
+routine and has the same commands and respective arguments as the user API. hpet_unregister
+is used to terminate usage of the HPET timer reserved by hpet_register.
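+
+As a rough sketch only (the callback signature and the meaning of the periodic argument are
+assumptions based on the description above), a kernel module might use the interface like this:
+
+	#include <linux/module.h>
+	#include <linux/hpet.h>
+
+	static void example_timer_func(void *data)
+	{
+		/* runs when the reserved HPET timer fires */
+	}
+
+	static struct hpet_task example_task = {
+		.ht_func = example_timer_func,
+		.ht_data = NULL,
+	};
+
+	static int __init example_init(void)
+	{
+		int r = hpet_register(&example_task, 1);	/* periodic */
+
+		if (r)
+			return r;
+		/* same commands as the user API, e.g. set the frequency */
+		return hpet_control(&example_task, HPET_IRQFREQ, 100);
+	}
+
+	static void __exit example_exit(void)
+	{
+		hpet_unregister(&example_task);
+	}
+
+	module_init(example_init);
+	module_exit(example_exit);
+	MODULE_LICENSE("GPL");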
+
+
--- /dev/null
+==================
+i2c-parport driver
+==================
+
+2004-07-06, Jean Delvare
+
+This is a unified driver for several i2c-over-parallel-port adapters,
+such as the ones made by Philips, Velleman or ELV. This driver is
+meant as a replacement for the older, individual drivers:
+ * i2c-philips-par
+ * i2c-elv
+ * i2c-velleman
+ * video/i2c-parport (NOT the same as this one, dedicated to home brew
+ teletext adapters)
+
+It currently supports the following devices:
+ * Philips adapter
+ * home brew teletext adapter
+ * Velleman K8000 adapter
+ * ELV adapter
+ * Analog Devices evaluation boards (ADM1025, ADM1030, ADM1031, ADM1032)
+
+These devices use different pinout configurations, so you have to tell
+the driver what you have, using the type module parameter. There is no
+way to autodetect the devices. Support for different pinout configurations
+can be easily added when needed.
+
+
+Building your own adapter
+-------------------------
+
+If you want to build your own i2c-over-parallel-port adapter, here is
+a sample electronic schematic (credits go to Sylvain Munaut):
+
+Device PC
+Side ___________________Vdd (+) Side
+ | | |
+ --- --- ---
+ | | | | | |
+ |R| |R| |R|
+ | | | | | |
+ --- --- ---
+ | | |
+ | | /| |
+SCL ----------x--------o |-----------x------------------- pin 2
+ | \| | |
+ | | |
+ | |\ | |
+SDA ----------x----x---| o---x--------------------------- pin 13
+ | |/ |
+ | |
+ | /| |
+ ---------o |----------------x-------------- pin 3
+ \| | |
+ | |
+ --- ---
+ | | | |
+ |R| |R|
+ | | | |
+ --- ---
+ | |
+ ### ###
+ GND GND
+
+Remarks:
+ - This is the exact pinout and electronics used on the Analog Devices
+ evaluation boards.
+ /|
+ - All inverters -o |- must be 74HC05, they must be open collector output.
+ \|
+ - All resistors are 10k.
+ - Pins 18-25 of the parallel port connected to GND.
+ - Pins 4-9 (D2-D7) could be used as VDD if the driver drives them high.
+   The ADM1032 evaluation board uses D4-D7. Beware that the amount of
+   current you can draw from the parallel port is limited. Also note that
+   all connected lines MUST BE driven at the same state, else you'll short
+   circuit the output buffers! So plugging the I2C adapter in after loading
+   the i2c-parport module might be a good safety measure, since the data
+   line state prior to init may be unknown.
+ - This is 5V!
+ - Obviously you cannot read SCL (so it's not really standard-compliant).
+ Pretty easy to add, just copy the SDA part and use another input pin.
+ That would give (ELV compatible pinout):
+
+
+Device PC
+Side ______________________________Vdd (+) Side
+ | | | |
+ --- --- --- ---
+ | | | | | | | |
+ |R| |R| |R| |R|
+ | | | | | | | |
+ --- --- --- ---
+ | | | |
+ | | |\ | |
+SCL ----------x--------x--| o---x------------------------ pin 15
+ | | |/ |
+ | | |
+ | | /| |
+ | ---o |-------------x-------------- pin 2
+ | \| | |
+ | | |
+ | | |
+ | |\ | |
+SDA ---------------x---x--| o--------x------------------- pin 10
+ | |/ |
+ | |
+ | /| |
+ ---o |------------------x--------- pin 3
+ \| | |
+ | |
+ --- ---
+ | | | |
+ |R| |R|
+ | | | |
+ --- ---
+ | |
+ ### ###
+ GND GND
+
+
+If possible, you should use the same pinout configuration as existing
+adapters do, so you won't even have to change the code.
+
+
+Similar (but different) drivers
+-------------------------------
+
+This driver is NOT the same as the i2c-pport driver found in the i2c package.
+The i2c-pport driver makes use of modern parallel port features so that
+you don't need additional electronics. It has other restrictions however, and
+was not ported to Linux 2.6 (yet).
+
+This driver is also NOT the same as the i2c-pcf-epp driver found in the
+lm_sensors package. The i2c-pcf-epp driver doesn't use the parallel port
+as an I2C bus directly. Instead, it uses it to control an external I2C bus
+master. That driver was not ported to Linux 2.6 (yet) either.
+
+
+Legacy documentation for Velleman adapter
+-----------------------------------------
+
+Useful links:
+Velleman http://www.velleman.be/
+Velleman K8000 Howto http://howto.htlw16.ac.at/k8000-howto.html
+
+The project has led to new libs for the Velleman K8000 and K8005:
+ LIBK8000 v1.99.1 and LIBK8005 v0.21
+With these libs, you can control the K8000 interface card and the K8005
+stepper motor card with the simple commands which are in the original
+Velleman software, like SetIOchannel, ReadADchannel, SendStepCCWFull and
+many more, using /dev/velleman.
+ http://home.wanadoo.nl/hihihi/libk8000.htm
+ http://home.wanadoo.nl/hihihi/libk8005.htm
+ http://struyve.mine.nu:8080/index.php?block=k8000
+ http://sourceforge.net/projects/libk8005/
--- /dev/null
+===========================================================================
+ HVCS
+ IBM "Hypervisor Virtual Console Server" Installation Guide
+ for Linux Kernel 2.6.4+
+ Copyright (C) 2004 IBM Corporation
+
+===========================================================================
+NOTE: Eight-space tabs are the optimum editor setting for reading this file.
+===========================================================================
+
+ Author(s) : Ryan S. Arnold <rsa@us.ibm.com>
+ Date Created: March, 02, 2004
+ Last Changed: July, 07, 2004
+
+---------------------------------------------------------------------------
+Table of contents:
+
+ 1. Driver Introduction:
+ 2. System Requirements
+ 3. Build Options:
+ 3.1 Built-in:
+ 3.2 Module:
+ 4. Installation:
+ 5. Connection:
+ 6. Disconnection:
+ 7. Configuration:
+ 8. Questions & Answers:
+ 9. Reporting Bugs:
+
+---------------------------------------------------------------------------
+1. Driver Introduction:
+
+This is the device driver for the IBM Hypervisor Virtual Console Server,
+"hvcs". The IBM hvcs provides a tty driver interface to allow Linux user
+space applications access to the system consoles of logically partitioned
+operating systems (Linux and AIX) running on the same partitioned Power5
+ppc64 system. Physical hardware consoles per partition are not practical
+on this hardware so system consoles are accessed by this driver using
+firmware interfaces to virtual terminal devices.
+
+---------------------------------------------------------------------------
+2. System Requirements:
+
+This device driver was written using 2.6.4 Linux kernel APIs and will only
+build and run on kernels of this version or later.
+
+This driver was written to operate solely on IBM Power5 ppc64 hardware
+though some care was taken to abstract the architecture dependent firmware
+calls from the driver code.
+
+Sysfs must be mounted on the system so that the user can determine which
+major and minor numbers are associated with each vty-server. Directions
+for sysfs mounting are outside the scope of this document.
+
+---------------------------------------------------------------------------
+3. Build Options:
+
+The hvcs driver registers itself as a tty driver. The tty layer
+dynamically allocates a block of major and minor numbers in a quantity
+requested by the registering driver. The hvcs driver asks the tty layer
+for 64 of these major/minor numbers by default to use for hvcs device node
+entries.
+
+If the default number of device entries is adequate then this driver can be
+built into the kernel. If not, the default can be overridden by inserting
+the driver as a module with insmod parameters.
+
+---------------------------------------------------------------------------
+3.1 Built-in:
+
+The following menuconfig example demonstrates selecting this driver to be
+built into the kernel.
+
+ Device Drivers --->
+ Character devices --->
+ <*> IBM Hypervisor Virtual Console Server Support
+
+Begin the kernel make process.
+
+---------------------------------------------------------------------------
+3.2 Module:
+
+The following menuconfig example demonstrates selecting this driver to be
+built as a kernel module.
+
+ Device Drivers --->
+ Character devices --->
+ <M> IBM Hypervisor Virtual Console Server Support
+
+The make process will build the following kernel modules:
+
+ hvcs.ko
+ hvcserver.ko
+
+To insert the module with the default allocation execute the following
+commands in the order they appear:
+
+ insmod hvcserver.ko
+ insmod hvcs.ko
+
+The hvcserver module contains architecture specific firmware calls and must
+be inserted first, otherwise the hvcs module will not find some of the
+symbols it expects.
+
+To override the default use an insmod parameter as follows (requesting 4
+tty devices as an example):
+
+ insmod hvcs.ko hvcs_parm_num_devs=4
+
+There is a maximum number of dev entries that can be specified on insmod.
+We think that 1024 is currently a decent maximum number of server adapters
+to allow. This can always be changed by modifying the constant in the
+source file before building.
+
+NOTE: The length of time it takes to insmod the driver seems to be related
+to the number of tty interfaces the registering driver requests.
+
+In order to remove the driver module execute the following command:
+
+ rmmod hvcs.ko
+
+The recommended method for installing hvcs as a module is to use depmod to
+build a current modules.dep file in /lib/modules/`uname -r` and then
+execute:
+
+    modprobe hvcs hvcs_parm_num_devs=4
+
+The modules.dep file indicates that hvcserver.ko needs to be inserted
+before hvcs.ko and modprobe uses this file to smartly insert the modules in
+the proper order.
+
+The following modprobe command is used to remove hvcs and hvcserver in the
+proper order:
+
+    modprobe -r hvcs
+
+---------------------------------------------------------------------------
+4. Installation:
+
+The tty layer creates sysfs entries which contain the major and minor
+numbers allocated for the hvcs driver. The following snippet of "tree"
+output of the sysfs directory shows where these numbers are presented:
+
+ sys/
+ |-- *other sysfs base dirs*
+ |
+ |-- class
+ | |-- *other classes of devices*
+ | |
+ | `-- tty
+ | |-- *other tty devices*
+ | |
+ | |-- hvcs0
+ | | `-- dev
+ | |-- hvcs1
+ | | `-- dev
+ | |-- hvcs2
+ | | `-- dev
+ | |-- hvcs3
+ | | `-- dev
+ | |
+ | |-- *other tty devices*
+ |
+ |-- *other sysfs base dirs*
+
+For the above example, the following output results from cat'ing the
+"dev" entry in each hvcs directory:
+
+ Pow5:/sys/class/tty/hvcs0/ # cat dev
+ 254:0
+
+ Pow5:/sys/class/tty/hvcs1/ # cat dev
+ 254:1
+
+ Pow5:/sys/class/tty/hvcs2/ # cat dev
+ 254:2
+
+ Pow5:/sys/class/tty/hvcs3/ # cat dev
+ 254:3
+
+The output from reading the "dev" attribute is the char device major and
+minor numbers that the tty layer has allocated for this driver's use. Most
+systems running hvcs will already have the device entries created or udev
+will do it automatically.
+
+Given the example output above, to manually create a /dev/hvcs* node entry
+mknod can be used as follows:
+
+ mknod /dev/hvcs0 c 254 0
+ mknod /dev/hvcs1 c 254 1
+ mknod /dev/hvcs2 c 254 2
+ mknod /dev/hvcs3 c 254 3
+
+Using mknod to manually create the device entries makes these device nodes
+persistent. Once created, they will exist even before the driver module is
+inserted.
+
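+Given that sysfs publishes each adapter's major:minor pair in its "dev"
+attribute, a small shell sketch like the following (assuming the sysfs
+layout shown above) could create any missing /dev/hvcs* nodes
+automatically:
+
+    for dir in /sys/class/tty/hvcs*; do
+        name=`basename $dir`              # e.g. hvcs0
+        major=`cut -d: -f1 $dir/dev`      # char device major number
+        minor=`cut -d: -f2 $dir/dev`      # char device minor number
+        [ -e /dev/$name ] || mknod /dev/$name c $major $minor
+    done
+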
+Attempting to connect an application to /dev/hvcs* prior to insertion of
+the hvcs module will result in an error message similar to the following:
+
+ "/dev/hvcs*: No such device".
+
+NOTE: Just because there is a device node present doesn't mean that there
+is a vty-server device configured for that node.
+
+---------------------------------------------------------------------------
+5. Connection:
+
+Since this driver controls devices that provide a tty interface a user can
+interact with the device node entries using any standard tty-interactive
+method (e.g. "cat", "dd", "echo"). The intent of this driver however, is
+to provide real time console interaction with a Linux partition's console,
+which requires the use of applications that provide bi-directional,
+interactive I/O with a tty device.
+
+Applications (e.g. "minicom" and "screen") that act as terminal emulators
+or perform terminal type control sequence conversion on the data being
+passed through them are NOT acceptable for providing interactive console
+I/O. These programs often emulate antiquated terminal types (vt100 and
+ANSI) and expect inbound data to take the form of one of these supported
+terminal types but they either do not convert, or do not _adequately_
+convert, outbound data into the terminal type of the terminal which invoked
+them (though screen makes an attempt and can apparently be configured with
+much termcap wrestling.)
+
+For this reason kermit and cu are two of the recommended applications for
+interacting with a Linux console via an hvcs device. These programs simply
+act as a conduit for data transfer to and from the tty device. They do not
+require inbound data to take the form of a particular terminal type, nor do
+they cook outbound data to a particular terminal type.
+
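+For example, assuming the device node created earlier, a connection
+using cu might look like this:
+
+    cu -l /dev/hvcs0
+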
+In order to ensure proper functioning of console applications, one must make
+sure that, once connected to a /dev/hvcs console, the console's $TERM env
+variable is set to the exact terminal type of the terminal emulator used to
+launch the interactive I/O application. For example, if one is using xterm
+and kermit to connect to /dev/hvcs0, then once the console prompt becomes
+available one should "export TERM=xterm" on the console. This tells ncurses
+applications invoked from the console to output control sequences that xterm
+can understand.
+
+As a precautionary measure an hvcs user should always "exit" from their
+session before disconnecting an application such as kermit from the device
+node. If this is not done, the next user to connect to the console will
+continue using the previous user's logged in session which includes
+using the $TERM variable that the previous user supplied.
+
+---------------------------------------------------------------------------
+6. Disconnection:
+
+As a security feature to prevent the delivery of stale data to an unintended
+target, the Power5 system firmware disables the fetching of data and discards
+it when a connection between a vty-server and a vty has been severed. For
+example, when a vty-server is disconnected from a vty immediately after data
+has been written to the vty, the vty adapter may not have enough time between
+receiving the data interrupt and the severing of the connection to fetch the
+data from firmware before the fetch is disabled.
+
+When hvcs is being used to serve consoles this behavior is not a huge issue
+because the adapter stays connected for large amounts of time following
+almost all data writes. When hvcs is being used as a tty conduit to tunnel
+data between two partitions [see Q & A below] this is a huge problem
+because the standard Linux behavior when cat'ing or dd'ing data to a device
+is to open the tty, send the data, and then close the tty. If this driver
+manually terminated vty-server connections on tty close this would close
+the vty-server and vty connection before the target vty has had a chance to
+fetch the data.
+
+Additionally, disconnecting a vty-server and vty only on module removal or
+adapter removal is impractical because other vty-servers in other
+partitions may require the usage of the target vty at any time.
+
+Due to this behavioral restriction, disconnection of a vty-server from its
+connected vty is a manual procedure, done via a write to a sysfs attribute as
+outlined below. The initial vty-server connection to a vty, on the other
+hand, is established automatically by this driver; manual vty-server
+connection is never required.
+
+In order to terminate the connection between a vty-server and vty the
+"vterm_state" sysfs attribute within each vty-server's sysfs entry is used.
+Reading this attribute reveals the current connection state of the
+vty-server adapter. A zero means that the vty-server is not connected to a
+vty. A one indicates that a connection is active.
+
+Writing a '0' (zero) to the vterm_state attribute will disconnect the VTERM
+connection between the vty-server and target vty ONLY if the vterm_state
+previously read '1'. The write directive is ignored if the vterm_state
+read '0' or if any value other than '0' was written to the vterm_state
+attribute. The following example will show the method used for verifying
+the vty-server connection status and disconnecting a vty-server connection.
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat vterm_state
+ 1
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # echo 0 > vterm_state
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat vterm_state
+ 0
+
+All vty-server connections are automatically terminated when the device is
+hotplug removed and when the module is removed.
+
+---------------------------------------------------------------------------
+7. Configuration:
+
+Each vty-server has a sysfs entry in the /sys/devices/vio directory, which
+is symlinked in several other sysfs tree directories, notably under the
+hvcs driver entry, which looks like the following example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs # ls
+ . .. 30000003 30000004 rescan
+
+By design, firmware notifies the hvcs driver of vty-server lifetimes and
+partner vty removals but not the addition of partner vtys. Since an HMC
+Super Admin can add partner info dynamically we have provided the hvcs
+driver sysfs directory with the "rescan" update attribute which will query
+firmware and update the partner info for all the vty-servers that this
+driver manages. Writing a '1' to the attribute triggers the update. An
+explicit example follows:
+
+ Pow5:/sys/bus/vio/drivers/hvcs # echo 1 > rescan
+
+Reading the attribute will indicate a state of '1' or '0'. A one indicates
+that an update is in process. A zero indicates that an update has
+completed or was never executed.
+
+Vty-server entries in this directory are named after a 32-bit,
+partition-unique unit address that is created by firmware. An example
+vty-server sysfs entry
+looks like the following:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # ls
+ . current_vty devspec partner_clcs vterm_state
+ .. detach_state name partner_vtys
+
+Each entry is provided, by default, with a "name" attribute. Reading the
+"name" attribute will reveal the device type as shown in the following
+example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000003 # cat name
+ vty-server
+
+Each entry is also provided, by default, with a "devspec" attribute which
+reveals the full device specification when read, as shown in the following
+example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat devspec
+ /vdevice/vty-server@30000004
+
+Each vty-server sysfs dir is provided with two read-only attributes that
+provide lists of easily parsed partner vty data: "partner_vtys" and
+"partner_clcs".
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat partner_vtys
+ 30000000
+ 30000001
+ 30000002
+ 30000000
+ 30000000
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat partner_clcs
+ U5112.428.103048A-V3-C0
+ U5112.428.103048A-V3-C2
+ U5112.428.103048A-V3-C3
+ U5112.428.103048A-V4-C0
+ U5112.428.103048A-V5-C0
+
+Reading partner_vtys returns a list of partner vtys. Vty unit address
+numbering is only per-partition-unique so entries will frequently repeat.
+
+Reading partner_clcs returns a list of "converged location codes" which are
+composed of a system serial number followed by "-V*", where the '*' is the
+target partition number, and "-C*", where the '*' is the slot of the
+adapter. The first vty partner corresponds to the first clc item, the
+second vty partner to the second clc item, etc.
+
+A vty-server can only be connected to a single vty at a time. The entry,
+"current_vty" prints the clc of the currently selected partner vty when
+read.
+
+The current_vty can be changed by writing a valid partner clc to the entry
+as in the following example:
+
+    Pow5:/sys/bus/vio/drivers/hvcs/30000004 # echo U5112.428.103048A-V4-C0 > current_vty
+
+Changing the current_vty when a vty-server is already connected to a vty
+does not affect the current connection. The change takes effect when the
+currently open connection is freed.
+
+Information on the "vterm_state" attribute was covered earlier in the
+chapter entitled "Disconnection".
+
+---------------------------------------------------------------------------
+8. Questions & Answers:
+===========================================================================
+Q: What are the security concerns involving hvcs?
+
+A: There are three main security concerns:
+
+ 1. The creator of the /dev/hvcs* nodes has the ability to restrict
+ the access of the device entries to certain users or groups. It
+ may be best to create a special hvcs group privilege for providing
+ access to system consoles.
+
+ 2. To provide network security when grabbing the console it is
+ suggested that the user connect to the console hosting partition
+ using a secure method, such as SSH or sit at a hardware console.
+
+ 3. Make sure to exit the user session when done with a console or
+ the next vty-server connection (which may be from another
+ partition) will experience the previously logged in session.
+
+---------------------------------------------------------------------------
+Q: How do I multiplex a console that I grab through hvcs so that other
+people can see it?
+
+A: You can use "screen" to connect directly to the /dev/hvcs* device and
+set up a session on your machine with the console group privileges. As
+pointed out earlier, by default screen doesn't provide the termcap settings
+most terminal emulators need for adequate character conversion from term
+type "screen" to others. This means that curses-based programs may not
+display properly in screen sessions.
+
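+A session might be started with something like the following (the device
+name is only an example):
+
+    screen /dev/hvcs0
+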
+---------------------------------------------------------------------------
+Q: Why are the colors all messed up?
+Q: Why are the control characters acting strange or not working?
+Q: Why is the console output all strange and unintelligible?
+
+A: Please see the preceding section on "Connection" for a discussion of how
+applications can affect the display of character control sequences.
+Additionally, just because you logged into the console using an xterm
+doesn't mean someone else didn't log into the console with the HMC console
+(vt320) before you and leave the session logged in. The best thing to do
+is to export TERM to the terminal type of your terminal emulator when you
+get the console. Additionally make sure to "exit" the console before you
+disconnect from the console. This will ensure that the next user gets
+their own TERM type set when they login.
+
+---------------------------------------------------------------------------
+Q: When I try to CONNECT kermit to an hvcs device I get:
+"Sorry, can't open connection: /dev/hvcs*"What is happening?
+
+A: Some other Power5 console mechanism has a connection to the vty and
+isn't giving it up. You can try to force disconnect the consoles from the
+HMC by right clicking on the partition and then selecting "close terminal".
+Otherwise you have to hunt down the people who have console authority. It
+is possible that you already have the console open using another kermit
+session and just forgot about it. Please review the console options for
+Power5 systems to determine the many ways a system console can be held.
+
+OR
+
+A: Another user may not have a connectivity method currently attached to a
+/dev/hvcs device but the vterm_state may reveal that they still have the
+vty-server connection established. They need to free this using the method
+outlined in the section on "Disconnection" in order for others to connect
+to the target vty.
+
+OR
+
+A: The user profile you are using to execute kermit probably doesn't have
+permissions to use the /dev/hvcs* device.
+
+OR
+
+A: You probably haven't inserted the hvcs.ko module yet but the /dev/hvcs*
+entry still exists (on systems without udev).
+
+OR
+
+A: There is not a corresponding vty-server device that maps to an existing
+/dev/hvcs* entry.
+
+---------------------------------------------------------------------------
+Q: When I try to CONNECT kermit to an hvcs device I get:
+"Sorry, write access to UUCP lockfile directory denied."
+
+A: The /dev/hvcs* entry you have specified doesn't exist where you said it
+does. Maybe you haven't inserted the module (on systems with udev, the device
+node only appears once the module has been inserted).
+
+---------------------------------------------------------------------------
+Q: If I already have one Linux partition installed can I use hvcs on said
+partition to provide the console for the install of a second Linux
+partition?
+
+A: Yes, provided that you are connected to the /dev/hvcs* device using
+kermit, cu, or some other program that doesn't provide terminal emulation.
+
+---------------------------------------------------------------------------
+Q: Can I connect to more than one partition's console at a time using this
+driver?
+
+A: Yes. Of course this means that there must be more than one vty-server
+configured for this partition and each must point to a disconnected vty.
+
+---------------------------------------------------------------------------
+Q: Does the hvcs driver support dynamic (hotplug) addition of devices?
+
+A: Yes. If you have dlpar and hotplug enabled for your system and built into
+the kernel, the hvcs driver is configured to dynamically handle additions of
+new devices and removals of unused devices.
+
+---------------------------------------------------------------------------
+Q: Can I use /dev/hvcs* as a conduit to another partition and use a tty
+device on that partition as the other end of the pipe?
+
+A: Yes, on Power5 platforms the hvc_console driver provides a tty interface
+for extra /dev/hvc* devices (where /dev/hvc0 is most likely the console).
+In order to get a tty conduit working between the two partitions the HMC
+Super Admin must create an additional "serial server" for the target
+partition with the HMC gui which will show up as /dev/hvc* when the target
+partition is rebooted.
+
+The HMC Super Admin then creates an additional "serial client" for the
+current partition and points this at the target partition's newly created
+"serial server" adapter (remember the slot). This shows up as an
+additional /dev/hvcs* device.
+
+Now a program on the target system can be configured to read or write to
+/dev/hvc* and another program on the current partition can be configured to
+read or write to /dev/hvcs*. Now you have a tty conduit between two
+partitions.
+
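+As a minimal sketch (the device names here are only examples), data could
+then be moved across the conduit like this:
+
+    on the target partition:    cat /dev/hvc1 > /tmp/received
+    on this partition:          echo "hello" > /dev/hvcs1
+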
+---------------------------------------------------------------------------
+9. Reporting Bugs:
+
+The proper channel for reporting bugs is either through the Linux OS
+distribution company that provided your OS or by posting issues to the
+ppc64 development mailing list at:
+
+linuxppc64-dev@lists.linuxppc.org
+
+This request is to provide a documented and searchable public exchange
+of the problems and solutions surrounding this driver for the benefit of
+all users.
--- /dev/null
+Linux 2.6.x on MPC52xx family
+-----------------------------
+
+For the latest info, go to http://www.246tNt.com/mpc52xx/state.txt
+
+To compile/use:
+
+ - U-Boot:
+   # <edit Makefile to set ARCH=ppc & CROSS_COMPILE=...
+     (also EXTRAVERSION if you wish)>
+ # make lite5200_defconfig
+ # make uImage
+
+   then, on U-Boot:
+ => tftpboot 200000 uImage
+ => tftpboot 400000 pRamdisk
+ => bootm 200000 400000
+
+ - DBug:
+   # <edit Makefile to set ARCH=ppc & CROSS_COMPILE=...
+     (also EXTRAVERSION if you wish)>
+ # make lite5200_defconfig
+ # cp your_initrd.gz arch/ppc/boot/images/ramdisk.image.gz
+ # make zImage.initrd
+ # make
+
+ then in DBug:
+ DBug> dn -i zImage.initrd.lite5200
+
+
+Some remarks:
+ - The port is named mpc52xx, and config options are PPC_MPC52xx. The MGT5100
+   is not supported, and I'm not sure anyone is interested in working on it.
+   I didn't take 5xxx because there are apparently a lot of 5xxx parts that
+   have nothing to do with the MPC5200. I also included the 'MPC' for the
+   same reason.
+ - Of course, I drew inspiration from the 2.4 port. If you think I forgot to
+   mention you/your company in the copyright of some code, I'll correct it
+   ASAP.
+ - The code wants the MBAR to be set at 0xf0000000 by the bootloader. It's
+   mapped 1:1 with the MMU. If, for whatever reason, you want to change this,
+   beware that some code depends on the 0xf0000000 address and other code
+   depends on the 1:1 mapping.
+ - Most of the code assumes that port multiplexing, frequency selection, ...
+   has already been done. IMHO this should be done as early as possible, in
+   the bootloader. If, for whatever reason, you can't do it there, do it in
+   the platform setup code (if U-Boot) or in arch/ppc/boot/simple/... (if
+   DBug).
--- /dev/null
+
+ Sound Blaster Audigy mixer / default DSP code
+ =============================================
+
+This is based on SB-Live-mixer.txt.
+
+The EMU10K2 chips have a DSP part which can be programmed to support
+various ways of sample processing, as described here.
+(This article does not deal with the overall functionality of the
+EMU10K2 chips. See the manuals section for further details.)
+
+The ALSA driver programs this portion of the chip with default code
+(which can be altered later) offering the following functionality:
+
+
+1) Digital mixer controls
+-------------------------
+
+These controls are built using the DSP instructions. They offer extended
+functionality. Only the default built-in code in the ALSA driver is described
+here. Note that the controls work as attenuators: the maximum value is the
+neutral position leaving the signal unchanged. Note that if the same
+destination is mentioned in multiple controls, the signals are accumulated
+and can wrap around (be set to the maximal or minimal value without overflow
+checking).
+
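+For instance, any of the mixer controls described below can be inspected
+and changed from user space with the standard ALSA amixer tool (a usage
+sketch; the values are examples only):
+
+    amixer cget name='PCM Front Playback Volume'
+    amixer cset name='PCM Front Playback Volume' 100,100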
+
+Explanation of used abbreviations:
+
+DAC - digital to analog converter
+ADC - analog to digital converter
+I2S - one-way three wire serial bus for digital sound by Philips Semiconductors
+ (this standard is used for connecting standalone DAC and ADC converters)
+LFE - low frequency effects (subwoofer signal)
+AC97 - a chip containing an analog mixer, DAC and ADC converters
+IEC958 - S/PDIF
+FX-bus - the EMU10K2 chip has an effect bus containing 64 accumulators.
+ Each of the synthesizer voices can feed its output to these accumulators
+ and the DSP microcontroller can operate with the resulting sum.
+
+name='PCM Front Playback Volume',index=0
+
+This control is used to attenuate samples for left and right front PCM FX-bus
+accumulators. ALSA uses accumulators 8 and 9 for left and right front PCM
+samples for 5.1 playback. The result samples are forwarded to the front DAC PCM
+slots of the Philips DAC.
+
+name='PCM Surround Playback Volume',index=0
+
+This control is used to attenuate samples for left and right surround PCM FX-bus
+accumulators. ALSA uses accumulators 2 and 3 for left and right surround PCM
+samples for 5.1 playback. The result samples are forwarded to the surround DAC PCM
+slots of the Philips DAC.
+
+name='PCM Center Playback Volume',index=0
+
+This control is used to attenuate samples for the center PCM FX-bus
+accumulator. ALSA uses accumulator 6 for the center PCM sample for 5.1
+playback. The result sample is forwarded to the center DAC PCM slot of the
+Philips DAC.
+
+name='PCM LFE Playback Volume',index=0
+
+This control is used to attenuate the sample for the LFE PCM FX-bus
+accumulator. ALSA uses accumulator 7 for the LFE PCM sample for 5.1 playback.
+The result sample is forwarded to the LFE DAC PCM slot of the Philips DAC.
+
+name='PCM Playback Volume',index=0
+
+This control is used to attenuate samples for left and right PCM FX-bus
+accumulators. ALSA uses accumulators 0 and 1 for left and right PCM samples for
+stereo playback. The result samples are forwarded to the front DAC PCM slots
+of the Philips DAC.
+
+name='PCM Capture Volume',index=0
+
+This control is used to attenuate samples for the left and right PCM FX-bus
+accumulators. ALSA uses accumulators 0 and 1 for left and right PCM.
+The result is forwarded to the ADC capture FIFO (thus to the standard capture
+PCM device).
+
+name='Music Playback Volume',index=0
+
+This control is used to attenuate samples for left and right MIDI FX-bus
+accumulators. ALSA uses accumulators 4 and 5 for left and right MIDI samples.
+The result samples are forwarded to the front DAC PCM slots of the AC97 codec.
+
+name='Music Capture Volume',index=0
+
+This control is used to attenuate samples for left and right MIDI FX-bus
+accumulators. ALSA uses accumulators 4 and 5 for left and right MIDI.
+The result is forwarded to the ADC capture FIFO (thus to the standard capture
+PCM device).
+
+name='Mic Playback Volume',index=0
+
+This control is used to attenuate samples for left and right Mic input.
+The AC97 codec is used for the Mic input. The result samples are forwarded to
+the front DAC PCM slots of the Philips DAC. Samples are also forwarded,
+without volume control, to the Mic capture FIFO (device 1 - 16bit/8KHz mono).
+
+name='Mic Capture Volume',index=0
+
+This control is used to attenuate samples for left and right Mic input.
+The result is forwarded to the ADC capture FIFO (thus to the standard capture
+PCM device).
+
+name='Audigy CD Playback Volume',index=0
+
+This control is used to attenuate samples from left and right IEC958 TTL
+digital inputs (usually used by a CDROM drive). The result samples are
+forwarded to the front DAC PCM slots of the Philips DAC.
+
+name='Audigy CD Capture Volume',index=0
+
+This control is used to attenuate samples from left and right IEC958 TTL
+digital inputs (usually used by a CDROM drive). The result samples are
+forwarded to the ADC capture FIFO (thus to the standard capture PCM device).
+
+name='IEC958 Optical Playback Volume',index=0
+
+This control is used to attenuate samples from left and right IEC958 optical
+digital input. The result samples are forwarded to the front DAC PCM slots
+of the Philips DAC.
+
+name='IEC958 Optical Capture Volume',index=0
+
+This control is used to attenuate samples from left and right IEC958 optical
+digital inputs. The result samples are forwarded to the ADC capture FIFO
+(thus to the standard capture PCM device).
+
+name='Line2 Playback Volume',index=0
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs (on the AudigyDrive). The result samples are forwarded to the front
+DAC PCM slots of the Philips DAC.
+
+name='Line2 Capture Volume',index=1
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs (on the AudigyDrive). The result samples are forwarded to the ADC
+capture FIFO (thus to the standard capture PCM device).
+
+name='Analog Mix Playback Volume',index=0
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs from Philips ADC. The result samples are forwarded to the front
+DAC PCM slots of the Philips DAC. This contains the mix from analog sources
+such as CD, Line In, Aux, etc.
+
+name='Analog Mix Capture Volume',index=1
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs from the Philips ADC. The result samples are forwarded to the ADC
+capture FIFO (thus to the standard capture PCM device).
+
+name='Aux2 Playback Volume',index=0
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs (on the AudigyDrive). The result samples are forwarded to the front
+DAC PCM slots of the Philips DAC.
+
+name='Aux2 Capture Volume',index=1
+
+This control is used to attenuate samples from left and right I2S ADC
+inputs (on the AudigyDrive). The result samples are forwarded to the ADC
+capture FIFO (thus to the standard capture PCM device).
+
+name='Front Playback Volume',index=0
+
+All stereo signals are mixed together and mirrored to surround, center and LFE.
+This control is used to attenuate samples for left and right front speakers of
+this mix.
+
+name='Surround Playback Volume',index=0
+
+All stereo signals are mixed together and mirrored to surround, center and LFE.
+This control is used to attenuate samples for left and right surround speakers of
+this mix.
+
+name='Center Playback Volume',index=0
+
+All stereo signals are mixed together and mirrored to surround, center and LFE.
+This control is used to attenuate the sample for the center speaker of this mix.
+
+name='LFE Playback Volume',index=0
+
+All stereo signals are mixed together and mirrored to surround, center and LFE.
+This control is used to attenuate the sample for the LFE speaker of this mix.
+
+name='Tone Control - Switch',index=0
+
+This control turns the tone control on or off. The samples for front, rear
+and center / LFE outputs are affected.
+
+name='Tone Control - Bass',index=0
+
+This control sets the bass intensity. There is no neutral value!!
+When the tone control code is activated, the samples are always modified.
+The closest value to pure signal is 20.
+
+name='Tone Control - Treble',index=0
+
+This control sets the treble intensity. There is no neutral value!!
+When the tone control code is activated, the samples are always modified.
+The closest value to pure signal is 20.
+
+name='Master Playback Volume',index=0
+
+This control is used to attenuate samples for front, surround, center and
+LFE outputs.
+
+name='IEC958 Optical Raw Playback Switch',index=0
+
+If this switch is on, then the samples for the IEC958 (S/PDIF) digital
+output are taken only from the raw FX8010 PCM, otherwise standard front
+PCM samples are taken.
+
+
+2) PCM stream related controls
+------------------------------
+
+name='EMU10K1 PCM Volume',index 0-31
+
+Channel volume attenuation in the range 0-0xffff. The maximum value (no
+attenuation) is the default. The channel mapping for the three values is
+as follows:
+
+ 0 - mono, default 0xffff (no attenuation)
+ 1 - left, default 0xffff (no attenuation)
+ 2 - right, default 0xffff (no attenuation)
+
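+As a usage sketch, the volume of the first PCM stream could be queried and
+set with amixer (the values are examples only):
+
+    amixer cget name='EMU10K1 PCM Volume',index=0
+    amixer cset name='EMU10K1 PCM Volume',index=0 65535,65535,65535
+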
+name='EMU10K1 PCM Send Routing',index 0-31
+
+This control specifies the destinations - the FX-bus accumulators. There are
+24 values with this mapping:
+
+ 0 - mono, A destination (FX-bus 0-63), default 0
+ 1 - mono, B destination (FX-bus 0-63), default 1
+ 2 - mono, C destination (FX-bus 0-63), default 2
+ 3 - mono, D destination (FX-bus 0-63), default 3
+ 4 - mono, E destination (FX-bus 0-63), default 0
+ 5 - mono, F destination (FX-bus 0-63), default 0
+ 6 - mono, G destination (FX-bus 0-63), default 0
+ 7 - mono, H destination (FX-bus 0-63), default 0
+ 8 - left, A destination (FX-bus 0-63), default 0
+ 9 - left, B destination (FX-bus 0-63), default 1
+ 10 - left, C destination (FX-bus 0-63), default 2
+ 11 - left, D destination (FX-bus 0-63), default 3
+ 12 - left, E destination (FX-bus 0-63), default 0
+ 13 - left, F destination (FX-bus 0-63), default 0
+ 14 - left, G destination (FX-bus 0-63), default 0
+ 15 - left, H destination (FX-bus 0-63), default 0
+ 16 - right, A destination (FX-bus 0-63), default 0
+ 17 - right, B destination (FX-bus 0-63), default 1
+ 18 - right, C destination (FX-bus 0-63), default 2
+ 19 - right, D destination (FX-bus 0-63), default 3
+ 20 - right, E destination (FX-bus 0-63), default 0
+ 21 - right, F destination (FX-bus 0-63), default 0
+ 22 - right, G destination (FX-bus 0-63), default 0
+ 23 - right, H destination (FX-bus 0-63), default 0
+
+Don't forget that it's illegal to assign a channel to the same FX-bus
+accumulator more than once (e.g. routing values 0 and 1 both to FX-bus 0 is
+an invalid combination).
+
+name='EMU10K1 PCM Send Volume',index 0-31
+
+It specifies the attenuation (amount) for a given destination in the range
+0-255. The channel mapping is as follows:
+
+ 0 - mono, A destination attn, default 255 (no attenuation)
+ 1 - mono, B destination attn, default 255 (no attenuation)
+ 2 - mono, C destination attn, default 0 (mute)
+ 3 - mono, D destination attn, default 0 (mute)
+ 4 - mono, E destination attn, default 0 (mute)
+ 5 - mono, F destination attn, default 0 (mute)
+ 6 - mono, G destination attn, default 0 (mute)
+ 7 - mono, H destination attn, default 0 (mute)
+ 8 - left, A destination attn, default 255 (no attenuation)
+ 9 - left, B destination attn, default 0 (mute)
+ 10 - left, C destination attn, default 0 (mute)
+ 11 - left, D destination attn, default 0 (mute)
+ 12 - left, E destination attn, default 0 (mute)
+ 13 - left, F destination attn, default 0 (mute)
+ 14 - left, G destination attn, default 0 (mute)
+ 15 - left, H destination attn, default 0 (mute)
+ 16 - right, A destination attn, default 0 (mute)
+ 17 - right, B destination attn, default 255 (no attenuation)
+ 18 - right, C destination attn, default 0 (mute)
+ 19 - right, D destination attn, default 0 (mute)
+ 20 - right, E destination attn, default 0 (mute)
+ 21 - right, F destination attn, default 0 (mute)
+ 22 - right, G destination attn, default 0 (mute)
+ 23 - right, H destination attn, default 0 (mute)
+
+
+
+3) MANUALS/PATENTS:
+-------------------
+
+ftp://opensource.creative.com/pub/doc
+-------------------------------------
+
+ Files:
+ LM4545.pdf AC97 Codec
+
+ m2049.pdf The EMU10K1 Digital Audio Processor
+
+ hog63.ps FX8010 - A DSP Chip Architecture for Audio Effects
+
+
+WIPO Patents
+------------
+ Patent numbers:
+ WO 9901813 (A1) Audio Effects Processor with multiple asynchronous (Jan. 14, 1999)
+ streams
+
+ WO 9901814 (A1) Processor with Instruction Set for Audio Effects (Jan. 14, 1999)
+
+ WO 9901953 (A1) Audio Effects Processor having Decoupled Instruction
+ Execution and Audio Data Sequencing (Jan. 14, 1999)
+
+
+US Patents (http://www.uspto.gov/)
+----------------------------------
+
+ US 5925841 Digital Sampling Instrument employing cache memory (Jul. 20, 1999)
+
+ US 5928342 Audio Effects Processor integrated on a single chip (Jul. 27, 1999)
+ with a multiport memory onto which multiple asynchronous
+ digital sound samples can be concurrently loaded
+
+ US 5930158 Processor with Instruction Set for Audio Effects (Jul. 27, 1999)
+
+ US 6032235 Memory initialization circuit (Tram) (Feb. 29, 2000)
+
+ US 6138207 Interpolation looping of audio samples in cache connected to (Oct. 24, 2000)
+ system bus with prioritization and modification of bus transfers
+ in accordance with loop ends and minimum block sizes
+
+ US 6151670 Method for conserving memory storage using a (Nov. 21, 2000)
+ pool of short term memory registers
+
+ US 6195715 Interrupt control for multiple programs communicating with (Feb. 27, 2001)
+ a common interrupt by associating programs to GP registers,
+ defining interrupt register, polling GP registers, and invoking
+ callback routine associated with defined interrupt register
--- /dev/null
+
+ SN9C10[12] PC Camera Controllers
+ Driver for Linux
+ ================================
+
+ - Documentation -
+
+
+Index
+=====
+1. Copyright
+2. License
+3. Overview
+4. Module dependencies
+5. Module loading
+6. Module parameters
+7. Device control through "sysfs"
+8. Supported devices
+9. How to add support for new image sensors
+10. Note for V4L2 developers
+11. Contact information
+12. Credits
+
+
+1. Copyright
+============
+Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it>
+
+SONiX is a trademark of SONiX Technology Company Limited, inc.
+This driver is not sponsored or developed by SONiX.
+
+
+2. License
+==========
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+
+3. Overview
+===========
+This driver attempts to support the video streaming capabilities of the devices
+mounting the SONiX SN9C101 or SONiX SN9C102 PC Camera Controllers.
+
+- It's worth noting that SONiX has never collaborated with me during the
+development of this project, despite several requests for sufficiently
+detailed specifications of the register tables, compression engine and video
+data format of the above chips -
+
+Up to 64 cameras can be handled at the same time. They can be connected and
+disconnected from the host many times without turning off the computer, if
+your system supports the hotplug facility.
+
+The driver relies on the Video4Linux2 and USB core modules. It has been
+designed to run properly on SMP systems as well.
+
+The latest version of the SN9C10[12] driver can be found at the following URL:
+http://go.lamarinapunto.com/
+
+
+4. Module dependencies
+======================
+For it to work properly, the driver needs kernel support for Video4Linux and
+USB.
+
+The following options of the kernel configuration file must be enabled and
+corresponding modules must be compiled:
+
+ # Multimedia devices
+ #
+ CONFIG_VIDEO_DEV=m
+
+ # USB support
+ #
+ CONFIG_USB=m
+
+In addition, depending on the hardware being used, the modules below are
+necessary:
+
+ # USB Host Controller Drivers
+ #
+ CONFIG_USB_EHCI_HCD=m
+ CONFIG_USB_UHCI_HCD=m
+ CONFIG_USB_OHCI_HCD=m
+
+And finally:
+
+ # USB Multimedia devices
+ #
+ CONFIG_USB_SN9C102=m
+
+
+5. Module loading
+=================
+To use the driver, it is necessary to load the "sn9c102" module into memory
+after the other required modules: "videodev", "usbcore" and, depending on
+the USB host controller you have, "ehci-hcd", "uhci-hcd" or "ohci-hcd".
+
+Loading can be done as shown below:
+
+ [root@localhost home]# modprobe usbcore
+ [root@localhost home]# modprobe sn9c102
+
+At this point the devices should be recognized. You can invoke "dmesg" to
+analyze kernel messages and verify that the loading process has gone well:
+
+ [user@localhost home]$ dmesg
+
+
+6. Module parameters
+====================
+Module parameters are listed below:
+-------------------------------------------------------------------------------
+Name: video_nr
+Type: int array (min = 0, max = 32)
+Syntax: <-1|n[,...]>
+Description: Specify V4L2 minor number:
+ -1 = use next available
+ n = use minor number n
+ You can specify up to 32 cameras this way.
+ For example:
+ video_nr=-1,2,-1 would assign minor number 2 to the second
+ recognized camera and use auto for the first one and for every
+ other camera.
+Default: -1
+-------------------------------------------------------------------------------
+Name: debug
+Type: int
+Syntax: <n>
+Description: Debugging information level, from 0 to 3:
+             0 = none (use carefully)
+             1 = critical errors
+             2 = significant information
+             3 = more verbose messages
+ Level 3 is useful for testing only, when just one device
+ is used.
+Default: 2
+-------------------------------------------------------------------------------
+
+
+7. Device control through "sysfs"
+=================================
+It is possible to read and write both the SN9C10[12] and the image sensor
+registers by using the "sysfs" filesystem interface.
+
+Every time a supported device is recognized, two files named "redblue" and
+"green" are created in the /sys/class/video4linux/videoX directory. You can
+set the red, blue and green channel's gain by writing the desired value to
+them. The value may range from 0 to 15 for each channel; this means that
+"redblue" accepts 8-bit values, where the low 4 bits are reserved for red and
+the others for blue.
+
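+For example, assuming the sysfs entries accept decimal values, a red gain
+of 2 and a blue gain of 3 (0x32 = 50) could be set as follows:
+
+    [root@localhost #] cd /sys/class/video4linux/video0
+    [root@localhost #] echo 50 > redblue
+    [root@localhost #] echo 10 > green
+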
+There are four other entries in the directory above for each registered
+camera: "reg", "val", "i2c_reg" and "i2c_val". The first two files control
+the SN9C10[12] bridge, while the other two control the sensor chip. "reg" and
+"i2c_reg" hold the current register index to which the subsequent read/write
+operations through "val" and "i2c_val" are addressed. Their use is not
+intended for end-users, unless you know what you are doing. Note that
+"i2c_reg" and "i2c_val" won't be created if the sensor does not actually
+support the standard I2C protocol. Also, remember that you must be logged in
+as root before writing to them.
+
+As an example, suppose we want to read the value contained in register
+number 1 of the sensor register table - usually the product identifier -
+of the camera registered as "/dev/video0":
+
+ [root@localhost #] cd /sys/class/video4linux/video0
+ [root@localhost #] echo 1 > i2c_reg
+ [root@localhost #] cat i2c_val
+
+Now let's set the green gain's register of the SN9C10[12] chip to 2:
+
+ [root@localhost #] echo 0x11 > reg
+ [root@localhost #] echo 2 > val
+
+Note that the SN9C10[12] always returns 0 when some of its registers are read.
+To avoid race conditions, all the I/O accesses to the files are serialized.
+
+
+8. Supported devices
+====================
+- I won't mention the names of the companies or of their products here.
+They have never collaborated with me, so no advertising -
+
+From the point of view of a driver, what unambiguously identifies a device is
+the pair of USB vendor and product identifiers. Below is a list of known
+identifiers of devices mounting the SN9C10[12] PC camera controllers:
+
+Vendor ID Product ID
+--------- ----------
+0xc45 0x6001
+0xc45 0x6005
+0xc45 0x6009
+0xc45 0x600d
+0xc45 0x6024
+0xc45 0x6025
+0xc45 0x6028
+0xc45 0x6029
+0xc45 0x602a
+0xc45 0x602c
+0xc45 0x8001
+
+The list above does NOT imply that all those devices work with this driver: up
+until now only the ones that mount the following image sensors are supported.
+Kernel messages will always tell you whether this is the case:
+
+Model Manufacturer
+----- ------------
+PAS106B PixArt Imaging Inc.
+TAS5110C1B Taiwan Advanced Sensor Corporation
+TAS5130D1B Taiwan Advanced Sensor Corporation
+
+If you think your camera is based on the above hardware and is not actually
+listed in the above table, you may try to add the specific USB VendorID and
+ProductID identifiers to the sn9c102_id_table[] in the file "sn9c102_sensor.h";
+then compile, load the module again and look at the kernel output.
+If this works, please send me an email reporting the kernel messages, so that
+I can add a new entry to the list of supported devices.
+
+Donations of new models for further testing and support would be much
+appreciated. I won't add official support for hardware that I don't actually
+have.
+
+
+9. How to add support for new image sensors
+===========================================
+It should be easy to write code for new sensors by using the small API that I
+have created for this purpose, which is present in "sn9c102_sensor.h"
+(documentation is included there). As an example, have a look at the code in
+"sn9c102_pas106b.c", which uses the mentioned interface.
+
+At the moment, the image sensors that are not yet supported are: PAS202B
+(VGA), HV7131[D|E1] (VGA), MI03 (VGA), OV7620 (VGA).
+
+
+10. Note for V4L2 developers
+============================
+This driver follows the V4L2 API specifications. In particular, it enforces two
+rules:
+
+1) Exactly one I/O method, either "mmap" or "read", is associated with each
+file descriptor. Once it is selected, the application must close and reopen the
+device to switch to the other I/O method.
+
+2) Previously mapped buffer memory must always be unmapped before calling any
+of the "VIDIOC_S_CROP", "VIDIOC_TRY_FMT" and "VIDIOC_S_FMT" ioctl's. In that
+case, the same number of buffers as before will be allocated again to match
+the size of the new video frames, so you have to map them again before any
+I/O attempts.
+
+
+11. Contact information
+=======================
+I may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
+
+I can accept GPG/PGP encrypted e-mail. My GPG key ID is 'FCE635A4'.
+My public 1024-bit key should be available at any keyserver; the fingerprint
+is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.
+
+
+12. Credits
+===========
+I would like to thank the following people:
+
+- Stefano Mozzi, who donated 45 EUR;
+- Luca Capello for the donation of a webcam;
+- Mizuno Takafumi for the donation of a webcam.
--- /dev/null
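+@ Embed the initrd image (the file named by the INITRD macro) into this
+@ object; initrd_start/initrd_end bracket it so the kernel can find it.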
+ .type initrd_start,#object
+ .globl initrd_start
+initrd_start:
+ .incbin INITRD
+ .globl initrd_end
+initrd_end:
--- /dev/null
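+@ Embed the kernel zImage; kernel_start/kernel_end bracket the image.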
+ .globl kernel_start
+kernel_start:
+ .incbin "arch/arm/boot/zImage"
+ .globl kernel_end
+kernel_end:
+ .align 2
--- /dev/null
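+@ Place the compressed kernel payload (piggy.gz) into the .piggydata
+@ section; input_data/input_data_end bracket it for the decompressor.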
+ .section .piggydata,#alloc
+ .globl input_data
+input_data:
+ .incbin "arch/arm/boot/compressed/piggy.gz"
+ .globl input_data_end
+input_data_end:
--- /dev/null
+/*
+ * linux/arch/arm/common/locomo.c
+ *
+ * Sharp LoCoMo support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This file contains all generic LoCoMo support.
+ *
+ * All initialization functions provided here are intended to be called
+ * from machine specific code with proper arguments when required.
+ *
+ * Based on sa1111.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+
+#include <asm/hardware/locomo.h>
+
+/* the following is the overall data for the locomo chip */
+struct locomo {
+ struct device *dev;
+ unsigned long phys;
+ unsigned int irq;
+ void *base;
+};
+
+struct locomo_dev_info {
+ unsigned long offset;
+ unsigned long length;
+ unsigned int devid;
+ unsigned int irq[1];
+ const char * name;
+};
+
+static struct locomo_dev_info locomo_devices[] = {
+};
+
+
+/*
+ * LoCoMo interrupt handling stuff.
+ * NOTE: LoCoMo has a 1-to-many mapping on all of its IRQs.
+ * That is, there is only one real hardware interrupt;
+ * we determine which interrupt it is by reading some IO memory.
+ * We have two levels of expansion: first, in the handler for the
+ * hardware interrupt we generate the IRQ_LOCOMO_*_BASE interrupts,
+ * and those handlers generate more interrupts.
+ *
+ * hardware irq reads LOCOMO_ICR & 0x0f00
+ * IRQ_LOCOMO_KEY_BASE
+ * IRQ_LOCOMO_GPIO_BASE
+ * IRQ_LOCOMO_LT_BASE
+ * IRQ_LOCOMO_SPI_BASE
+ * IRQ_LOCOMO_KEY_BASE reads LOCOMO_KIC & 0x0001
+ * IRQ_LOCOMO_KEY
+ * IRQ_LOCOMO_GPIO_BASE reads LOCOMO_GIR & LOCOMO_GPD & 0xffff
+ * IRQ_LOCOMO_GPIO[0-15]
+ * IRQ_LOCOMO_LT_BASE reads LOCOMO_LTINT & 0x0001
+ * IRQ_LOCOMO_LT
+ * IRQ_LOCOMO_SPI_BASE reads LOCOMO_SPIIR & 0x000F
+ * IRQ_LOCOMO_SPI_RFR
+ * IRQ_LOCOMO_SPI_RFW
+ * IRQ_LOCOMO_SPI_OVRN
+ * IRQ_LOCOMO_SPI_TEND
+ */
+
+#define LOCOMO_IRQ_START (IRQ_LOCOMO_KEY_BASE)
+#define LOCOMO_IRQ_KEY_START (IRQ_LOCOMO_KEY)
+#define LOCOMO_IRQ_GPIO_START (IRQ_LOCOMO_GPIO0)
+#define LOCOMO_IRQ_LT_START (IRQ_LOCOMO_LT)
+#define LOCOMO_IRQ_SPI_START (IRQ_LOCOMO_SPI_RFR)
+
+static void locomo_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ /* Acknowledge the parent IRQ */
+ desc->chip->ack(irq);
+
+ /* check why this interrupt was generated */
+ req = locomo_readl(mapbase + LOCOMO_ICR) & 0x0f00;
+
+ if (req) {
+ /* generate the next interrupt(s) */
+ irq = LOCOMO_IRQ_START;
+ d = irq_desc + irq;
+ for (i = 0; i <= 3; i++, d++, irq++) {
+ if (req & (0x0100 << i)) {
+ d->handle(irq, d, regs);
+ }
+
+ }
+ }
+}
+
+static void locomo_ack_irq(unsigned int irq)
+{
+}
+
+static void locomo_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_ICR);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_START));
+ locomo_writel(r, mapbase + LOCOMO_ICR);
+}
+
+static void locomo_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_ICR);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_START));
+ locomo_writel(r, mapbase + LOCOMO_ICR);
+}
+
+static struct irqchip locomo_chip = {
+ .ack = locomo_ack_irq,
+ .mask = locomo_mask_irq,
+ .unmask = locomo_unmask_irq,
+};
+
+static void locomo_key_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ if (locomo_readl(mapbase + LOCOMO_KIC) & 0x0001) {
+ d = irq_desc + LOCOMO_IRQ_KEY_START;
+ d->handle(LOCOMO_IRQ_KEY_START, d, regs);
+ }
+}
+
+static void locomo_key_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r &= ~(0x0100 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static void locomo_key_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static void locomo_key_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static struct irqchip locomo_key_chip = {
+ .ack = locomo_key_ack_irq,
+ .mask = locomo_key_mask_irq,
+ .unmask = locomo_key_unmask_irq,
+};
+
+static void locomo_gpio_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ req = locomo_readl(mapbase + LOCOMO_GIR) &
+ locomo_readl(mapbase + LOCOMO_GPD) &
+ 0xffff;
+
+ if (req) {
+ irq = LOCOMO_IRQ_GPIO_START;
+ d = irq_desc + LOCOMO_IRQ_GPIO_START;
+ for (i = 0; i <= 15; i++, irq++, d++) {
+ if (req & (0x0001 << i)) {
+ d->handle(irq, d, regs);
+ }
+ }
+ }
+}
+
+static void locomo_gpio_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_GWE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GWE);
+
+ r = locomo_readl(mapbase + LOCOMO_GIS);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIS);
+
+ r = locomo_readl(mapbase + LOCOMO_GWE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GWE);
+}
+
+static void locomo_gpio_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_GIE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIE);
+}
+
+static void locomo_gpio_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_GIE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIE);
+}
+
+static struct irqchip locomo_gpio_chip = {
+ .ack = locomo_gpio_ack_irq,
+ .mask = locomo_gpio_mask_irq,
+ .unmask = locomo_gpio_unmask_irq,
+};
+
+static void locomo_lt_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ if (locomo_readl(mapbase + LOCOMO_LTINT) & 0x0001) {
+ d = irq_desc + LOCOMO_IRQ_LT_START;
+ d->handle(LOCOMO_IRQ_LT_START, d, regs);
+ }
+}
+
+static void locomo_lt_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r &= ~(0x0100 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static void locomo_lt_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static void locomo_lt_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static struct irqchip locomo_lt_chip = {
+ .ack = locomo_lt_ack_irq,
+ .mask = locomo_lt_mask_irq,
+ .unmask = locomo_lt_unmask_irq,
+};
+
+static void locomo_spi_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ req = locomo_readl(mapbase + LOCOMO_SPIIR) & 0x000F;
+ if (req) {
+ irq = LOCOMO_IRQ_SPI_START;
+ d = irq_desc + irq;
+
+ for (i = 0; i <= 3; i++, irq++, d++) {
+ if (req & (0x0001 << i)) {
+ d->handle(irq, d, regs);
+ }
+ }
+ }
+}
+
+static void locomo_spi_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_SPIWE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIWE);
+
+ r = locomo_readl(mapbase + LOCOMO_SPIIS);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIS);
+
+ r = locomo_readl(mapbase + LOCOMO_SPIWE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIWE);
+}
+
+static void locomo_spi_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_SPIIE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIE);
+}
+
+static void locomo_spi_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_SPIIE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIE);
+}
+
+static struct irqchip locomo_spi_chip = {
+ .ack = locomo_spi_ack_irq,
+ .mask = locomo_spi_mask_irq,
+ .unmask = locomo_spi_unmask_irq,
+};
+
+static void locomo_setup_irq(struct locomo *lchip)
+{
+ int irq;
+ void *irqbase = lchip->base;
+
+ /*
+ * Install handler for IRQ_LOCOMO_HW.
+ */
+ set_irq_type(lchip->irq, IRQT_FALLING);
+ set_irq_chipdata(lchip->irq, irqbase);
+ set_irq_chained_handler(lchip->irq, locomo_handler);
+
+ /* Install handlers for IRQ_LOCOMO_*_BASE */
+ set_irq_chip(IRQ_LOCOMO_KEY_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_KEY_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_KEY_BASE, locomo_key_handler);
+ set_irq_flags(IRQ_LOCOMO_KEY_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_GPIO_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_GPIO_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_GPIO_BASE, locomo_gpio_handler);
+ set_irq_flags(IRQ_LOCOMO_GPIO_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_LT_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_LT_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_LT_BASE, locomo_lt_handler);
+ set_irq_flags(IRQ_LOCOMO_LT_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_SPI_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_SPI_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_SPI_BASE, locomo_spi_handler);
+ set_irq_flags(IRQ_LOCOMO_SPI_BASE, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_KEY_BASE generated interrupts */
+ set_irq_chip(LOCOMO_IRQ_KEY_START, &locomo_key_chip);
+ set_irq_chipdata(LOCOMO_IRQ_KEY_START, irqbase);
+ set_irq_handler(LOCOMO_IRQ_KEY_START, do_edge_IRQ);
+ set_irq_flags(LOCOMO_IRQ_KEY_START, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_GPIO_BASE generated interrupts */
+ for (irq = LOCOMO_IRQ_GPIO_START; irq < LOCOMO_IRQ_GPIO_START + 16; irq++) {
+ set_irq_chip(irq, &locomo_gpio_chip);
+ set_irq_chipdata(irq, irqbase);
+ set_irq_handler(irq, do_edge_IRQ);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ }
+
+ /* install handlers for IRQ_LOCOMO_LT_BASE generated interrupts */
+ set_irq_chip(LOCOMO_IRQ_LT_START, &locomo_lt_chip);
+ set_irq_chipdata(LOCOMO_IRQ_LT_START, irqbase);
+ set_irq_handler(LOCOMO_IRQ_LT_START, do_edge_IRQ);
+ set_irq_flags(LOCOMO_IRQ_LT_START, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_SPI_BASE generated interrupts */
+ /* locomo_spi_handler dispatches four sources, so register all four */
+ for (irq = LOCOMO_IRQ_SPI_START; irq < LOCOMO_IRQ_SPI_START + 4; irq++) {
+ set_irq_chip(irq, &locomo_spi_chip);
+ set_irq_chipdata(irq, irqbase);
+ set_irq_handler(irq, do_edge_IRQ);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ }
+}
+
+
+static void locomo_dev_release(struct device *_dev)
+{
+ struct locomo_dev *dev = LOCOMO_DEV(_dev);
+
+ release_resource(&dev->res);
+ kfree(dev);
+}
+
+static int
+locomo_init_one_child(struct locomo *lchip, struct resource *parent,
+ struct locomo_dev_info *info)
+{
+ struct locomo_dev *dev;
+ int ret;
+
+ dev = kmalloc(sizeof(struct locomo_dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(dev, 0, sizeof(struct locomo_dev));
+
+ strncpy(dev->dev.bus_id, info->name, sizeof(dev->dev.bus_id));
+ /*
+ * If the parent device has a DMA mask associated with it,
+ * propagate it down to the children.
+ */
+ if (lchip->dev->dma_mask) {
+ dev->dma_mask = *lchip->dev->dma_mask;
+ dev->dev.dma_mask = &dev->dma_mask;
+ }
+
+ dev->devid = info->devid;
+ dev->dev.parent = lchip->dev;
+ dev->dev.bus = &locomo_bus_type;
+ dev->dev.release = locomo_dev_release;
+ dev->dev.coherent_dma_mask = lchip->dev->coherent_dma_mask;
+ dev->res.start = lchip->phys + info->offset;
+ dev->res.end = dev->res.start + info->length;
+ dev->res.name = dev->dev.bus_id;
+ dev->res.flags = IORESOURCE_MEM;
+ dev->mapbase = lchip->base + info->offset;
+ memmove(dev->irq, info->irq, sizeof(dev->irq));
+
+ if (info->length) {
+ ret = request_resource(parent, &dev->res);
+ if (ret) {
+ printk("LoCoMo: failed to allocate resource for %s\n",
+ dev->res.name);
+ goto out;
+ }
+ }
+
+ ret = device_register(&dev->dev);
+ if (ret) {
+ release_resource(&dev->res);
+ out:
+ kfree(dev);
+ }
+ return ret;
+}
+
+/**
+ * __locomo_probe - probe for a single LoCoMo chip.
+ * @me: parent device.
+ * @mem: memory resource covering the chip's registers.
+ * @irq: interrupt line the chip is wired to, or NO_IRQ.
+ *
+ * Probe for a LoCoMo chip. This must be called
+ * before any other locomo-specific code.
+ *
+ * Returns:
+ * %-ENOMEM allocation or register mapping failed.
+ * %0 successful.
+ */
+static int
+__locomo_probe(struct device *me, struct resource *mem, int irq)
+{
+ struct locomo *lchip;
+ unsigned long r;
+ int i, ret = -ENODEV;
+
+ lchip = kmalloc(sizeof(struct locomo), GFP_KERNEL);
+ if (!lchip)
+ return -ENOMEM;
+
+ memset(lchip, 0, sizeof(struct locomo));
+
+ lchip->dev = me;
+ dev_set_drvdata(lchip->dev, lchip);
+
+ lchip->phys = mem->start;
+ lchip->irq = irq;
+
+ /*
+ * Map the whole region. This also maps the
+ * registers for our children.
+ */
+ lchip->base = ioremap(mem->start, PAGE_SIZE);
+ if (!lchip->base) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ /* locomo initialize */
+ locomo_writel(0, lchip->base + LOCOMO_ICR);
+ /* KEYBOARD */
+ locomo_writel(0, lchip->base + LOCOMO_KIC);
+
+ /* GPIO */
+ locomo_writel(0, lchip->base + LOCOMO_GPO);
+ locomo_writel((LOCOMO_GPIO(2) | LOCOMO_GPIO(3) | LOCOMO_GPIO(13) | LOCOMO_GPIO(14)),
+ lchip->base + LOCOMO_GPE);
+ locomo_writel((LOCOMO_GPIO(2) | LOCOMO_GPIO(3) | LOCOMO_GPIO(13) | LOCOMO_GPIO(14)),
+ lchip->base + LOCOMO_GPD);
+ locomo_writel(0, lchip->base + LOCOMO_GIE);
+
+ /* FrontLight */
+ locomo_writel(0, lchip->base + LOCOMO_ALS);
+ locomo_writel(0, lchip->base + LOCOMO_ALD);
+ /* Longtime timer */
+ locomo_writel(0, lchip->base + LOCOMO_LTINT);
+ /* SPI */
+ locomo_writel(0, lchip->base + LOCOMO_SPIIE);
+
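+ /* Display timing setup: write each count, then set bit 15 of the
+ register (presumably an enable/latch bit). */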
+ locomo_writel(6 + 8 + 320 + 30 - 10, lchip->base + LOCOMO_ASD);
+ r = locomo_readl(lchip->base + LOCOMO_ASD);
+ r |= 0x8000;
+ locomo_writel(r, lchip->base + LOCOMO_ASD);
+
+ locomo_writel(6 + 8 + 320 + 30 - 10 - 128 + 4, lchip->base + LOCOMO_HSD);
+ r = locomo_readl(lchip->base + LOCOMO_HSD);
+ r |= 0x8000;
+ locomo_writel(r, lchip->base + LOCOMO_HSD);
+
+ locomo_writel(128 / 8, lchip->base + LOCOMO_HSC);
+
+ /* XON */
+ locomo_writel(0x80, lchip->base + LOCOMO_TADC);
+ udelay(1000);
+ /* CLK9MEN */
+ r = locomo_readl(lchip->base + LOCOMO_TADC);
+ r |= 0x10;
+ locomo_writel(r, lchip->base + LOCOMO_TADC);
+ udelay(100);
+
+ /* init DAC */
+ r = locomo_readl(lchip->base + LOCOMO_DAC);
+ r |= LOCOMO_DAC_SCLOEB | LOCOMO_DAC_SDAOEB;
+ locomo_writel(r, lchip->base + LOCOMO_DAC);
+
+ r = locomo_readl(lchip->base + LOCOMO_VER);
+ printk(KERN_INFO "LoCoMo Chip: %lu%lu\n", (r >> 8), (r & 0xff));
+
+ /*
+ * The interrupt controller must be initialised before any
+ * other device to ensure that the interrupts are available.
+ */
+ if (lchip->irq != NO_IRQ)
+ locomo_setup_irq(lchip);
+
+ for (i = 0; i < ARRAY_SIZE(locomo_devices); i++)
+ locomo_init_one_child(lchip, mem, &locomo_devices[i]);
+
+ return 0;
+
+ out:
+ kfree(lchip);
+ return ret;
+}
+
+static void __locomo_remove(struct locomo *lchip)
+{
+ struct list_head *l, *n;
+
+ list_for_each_safe(l, n, &lchip->dev->children) {
+ struct device *d = list_to_dev(l);
+
+ device_unregister(d);
+ }
+
+ if (lchip->irq != NO_IRQ) {
+ set_irq_chained_handler(lchip->irq, NULL);
+ set_irq_data(lchip->irq, NULL);
+ }
+
+ iounmap(lchip->base);
+ kfree(lchip);
+}
+
+static int locomo_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *mem = NULL, *irq = NULL;
+ int i;
+
+ for (i = 0; i < pdev->num_resources; i++) {
+ if (pdev->resource[i].flags & IORESOURCE_MEM)
+ mem = &pdev->resource[i];
+ if (pdev->resource[i].flags & IORESOURCE_IRQ)
+ irq = &pdev->resource[i];
+ }
+
+ return __locomo_probe(dev, mem, irq ? irq->start : NO_IRQ);
+}
+
+static int locomo_remove(struct device *dev)
+{
+ struct locomo *lchip = dev_get_drvdata(dev);
+
+ if (lchip) {
+ __locomo_remove(lchip);
+ dev_set_drvdata(dev, NULL);
+
+ kfree(dev->saved_state);
+ dev->saved_state = NULL;
+ }
+
+ return 0;
+}
+
+/*
+ * Not sure if this should be on the system bus or not yet.
+ * We really want some way to register a system device at
+ * the per-machine level, and then have this driver pick
+ * up the registered devices.
+ */
+static struct device_driver locomo_device_driver = {
+ .name = "locomo",
+ .bus = &platform_bus_type,
+ .probe = locomo_probe,
+ .remove = locomo_remove,
+};
+
+/*
+ * Get the parent device driver (us) structure
+ * from a child function device
+ */
+static inline struct locomo *locomo_chip_driver(struct locomo_dev *ldev)
+{
+ return (struct locomo *)dev_get_drvdata(ldev->dev.parent);
+}
+
+/*
+ * LoCoMo "Register Access Bus."
+ *
+ * We model this as a regular bus type, and hang devices directly
+ * off this.
+ */
+static int locomo_match(struct device *_dev, struct device_driver *_drv)
+{
+ struct locomo_dev *dev = LOCOMO_DEV(_dev);
+ struct locomo_driver *drv = LOCOMO_DRV(_drv);
+
+ return dev->devid == drv->devid;
+}
+
+static int locomo_bus_suspend(struct device *dev, u32 state)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv && drv->suspend)
+ ret = drv->suspend(ldev, state);
+ return ret;
+}
+
+static int locomo_bus_resume(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv && drv->resume)
+ ret = drv->resume(ldev);
+ return ret;
+}
+
+static int locomo_bus_probe(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = -ENODEV;
+
+ if (drv->probe)
+ ret = drv->probe(ldev);
+ return ret;
+}
+
+static int locomo_bus_remove(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv->remove)
+ ret = drv->remove(ldev);
+ return ret;
+}
+
+struct bus_type locomo_bus_type = {
+ .name = "locomo-bus",
+ .match = locomo_match,
+ .suspend = locomo_bus_suspend,
+ .resume = locomo_bus_resume,
+};
+
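+/*
+ * Register a LoCoMo function driver: wire up the bus-level
+ * probe/remove callbacks and hand the driver to the driver core.
+ */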
+int locomo_driver_register(struct locomo_driver *driver)
+{
+ driver->drv.probe = locomo_bus_probe;
+ driver->drv.remove = locomo_bus_remove;
+ driver->drv.bus = &locomo_bus_type;
+ return driver_register(&driver->drv);
+}
+
+void locomo_driver_unregister(struct locomo_driver *driver)
+{
+ driver_unregister(&driver->drv);
+}
+
+static int __init locomo_init(void)
+{
+ int ret = bus_register(&locomo_bus_type);
+ if (ret == 0)
+ driver_register(&locomo_device_driver);
+ return ret;
+}
+
+static void __exit locomo_exit(void)
+{
+ driver_unregister(&locomo_device_driver);
+ bus_unregister(&locomo_bus_type);
+}
+
+module_init(locomo_init);
+module_exit(locomo_exit);
+
+MODULE_DESCRIPTION("Sharp LoCoMo core driver");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("John Lenz <jelenz@students.wisc.edu>");
+
+EXPORT_SYMBOL(locomo_driver_register);
+EXPORT_SYMBOL(locomo_driver_unregister);
--- /dev/null
+/*
+ * linux/arch/arm/common/time-acorn.c
+ *
+ * Copyright (c) 1996-2000 Russell King.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ * 13-Jun-2004 DS Moved to arch/arm/common b/c shared w/CLPS7500
+ */
+#include <linux/timex.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/hardware/ioc.h>
+
+#include <asm/mach/time.h>
+
+static unsigned long ioctime_gettimeoffset(void)
+{
+ unsigned int count1, count2, status;
+ long offset;
+
+ ioc_writeb(0, IOC_T0LATCH);
+ barrier();
+ count1 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+ barrier();
+ status = ioc_readb(IOC_IRQREQA);
+ barrier();
+ ioc_writeb(0, IOC_T0LATCH);
+ barrier();
+ count2 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+
+ offset = count2;
+ if (count2 < count1) {
+ /*
+ * We have not had an interrupt between reading count1
+ * and count2.
+ */
+ if (status & (1 << 5))
+ offset -= LATCH;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt between reading
+ * count1 and count2.
+ */
+ offset -= LATCH;
+ }
+
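+ /* Convert the remaining timer count to microseconds, rounded to nearest. */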
+ offset = (LATCH - offset) * (tick_nsec / 1000);
+ return (offset + LATCH/2) / LATCH;
+}
+
+void __init ioctime_init(void)
+{
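+ /* Program timer 0 with LATCH (one period per kernel tick) and start it. */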
+ ioc_writeb(LATCH & 255, IOC_T0LTCHL);
+ ioc_writeb(LATCH >> 8, IOC_T0LTCHH);
+ ioc_writeb(0, IOC_T0GO);
+
+ gettimeoffset = ioctime_gettimeoffset;
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+CONFIG_MODVERSIONS=y
+CONFIG_KMOD=y
+
+#
+# System Type
+#
+# CONFIG_ARCH_ADIFCC is not set
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+CONFIG_ARCH_IXP4XX=y
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_VERSATILE_PB is not set
+
+#
+# CLPS711X/EP721X Implementations
+#
+
+#
+# Epxa10db
+#
+
+#
+# Footbridge Implementations
+#
+
+#
+# IOP3xx Implementation Options
+#
+# CONFIG_ARCH_IOP310 is not set
+# CONFIG_ARCH_IOP321 is not set
+
+#
+# IOP3xx Chipset Features
+#
+CONFIG_ARCH_SUPPORTS_BIG_ENDIAN=y
+
+#
+# Intel IXP4xx Implementation Options
+#
+
+#
+# IXP4xx Platforms
+#
+CONFIG_ARCH_IXDP425=y
+CONFIG_ARCH_IXCDP1100=y
+CONFIG_ARCH_PRPMC1100=y
+CONFIG_ARCH_ADI_COYOTE=y
+# CONFIG_ARCH_AVILA is not set
+CONFIG_ARCH_IXDP4XX=y
+
+#
+# IXP4xx Options
+#
+# CONFIG_IXP4XX_INDIRECT_PCI is not set
+
+#
+# Intel PXA250/210 Implementations
+#
+
+#
+# SA11x0 Implementations
+#
+
+#
+# TI OMAP Implementations
+#
+
+#
+# OMAP Core Type
+#
+
+#
+# OMAP Board Type
+#
+
+#
+# OMAP Feature Selections
+#
+
+#
+# S3C2410 Implementations
+#
+
+#
+# LH7A40X Implementations
+#
+CONFIG_DMABOUNCE=y
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5T=y
+CONFIG_CPU_TLB_V4WBI=y
+CONFIG_CPU_MINICACHE=y
+
+#
+# Processor Features
+#
+# CONFIG_ARM_THUMB is not set
+CONFIG_CPU_BIG_ENDIAN=y
+CONFIG_XSCALE_PMU=y
+
+#
+# General setup
+#
+CONFIG_PCI=y
+# CONFIG_ZBOOT_ROM is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_AOUT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_DEBUG_DRIVER is not set
+CONFIG_PM=y
+# CONFIG_PREEMPT is not set
+CONFIG_APM=y
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="console=ttyS0,115200 ip=bootp root=/dev/nfs"
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_REDBOOT_PARTS=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+CONFIG_MTD_COMPLEX_MAPPINGS=y
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+CONFIG_MTD_IXP4XX=y
+# CONFIG_MTD_EDB7312 is not set
+# CONFIG_MTD_PCI is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+CONFIG_MTD_NAND=m
+# CONFIG_MTD_NAND_VERIFY_WRITE is not set
+CONFIG_MTD_NAND_IDS=m
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_BLK_DEV_INITRD=y
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=m
+CONFIG_PACKET_MMAP=y
+CONFIG_NETLINK_DEV=m
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_FWMARK=y
+CONFIG_IP_ROUTE_NAT=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_TOS=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+CONFIG_NET_IPIP=m
+CONFIG_NET_IPGRE=m
+CONFIG_NET_IPGRE_BROADCAST=y
+CONFIG_IP_MROUTE=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+
+#
+# IP: Virtual Server Configuration
+#
+CONFIG_IP_VS=m
+CONFIG_IP_VS_DEBUG=y
+CONFIG_IP_VS_TAB_BITS=12
+
+#
+# IPVS transport protocol load balancing support
+#
+# CONFIG_IP_VS_PROTO_TCP is not set
+# CONFIG_IP_VS_PROTO_UDP is not set
+# CONFIG_IP_VS_PROTO_ESP is not set
+# CONFIG_IP_VS_PROTO_AH is not set
+
+#
+# IPVS scheduler
+#
+CONFIG_IP_VS_RR=m
+CONFIG_IP_VS_WRR=m
+CONFIG_IP_VS_LC=m
+CONFIG_IP_VS_WLC=m
+CONFIG_IP_VS_LBLC=m
+CONFIG_IP_VS_LBLCR=m
+CONFIG_IP_VS_DH=m
+CONFIG_IP_VS_SH=m
+# CONFIG_IP_VS_SED is not set
+# CONFIG_IP_VS_NQ is not set
+
+#
+# IPVS application helper
+#
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+CONFIG_BRIDGE_NETFILTER=y
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_IP_NF_CONNTRACK=m
+CONFIG_IP_NF_FTP=m
+CONFIG_IP_NF_IRC=m
+# CONFIG_IP_NF_TFTP is not set
+# CONFIG_IP_NF_AMANDA is not set
+CONFIG_IP_NF_QUEUE=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_LIMIT=m
+# CONFIG_IP_NF_MATCH_IPRANGE is not set
+CONFIG_IP_NF_MATCH_MAC=m
+# CONFIG_IP_NF_MATCH_PKTTYPE is not set
+CONFIG_IP_NF_MATCH_MARK=m
+CONFIG_IP_NF_MATCH_MULTIPORT=m
+CONFIG_IP_NF_MATCH_TOS=m
+# CONFIG_IP_NF_MATCH_RECENT is not set
+# CONFIG_IP_NF_MATCH_ECN is not set
+# CONFIG_IP_NF_MATCH_DSCP is not set
+CONFIG_IP_NF_MATCH_AH_ESP=m
+CONFIG_IP_NF_MATCH_LENGTH=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_MATCH_TCPMSS=m
+# CONFIG_IP_NF_MATCH_HELPER is not set
+CONFIG_IP_NF_MATCH_STATE=m
+# CONFIG_IP_NF_MATCH_CONNTRACK is not set
+CONFIG_IP_NF_MATCH_OWNER=m
+# CONFIG_IP_NF_MATCH_PHYSDEV is not set
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_NAT_NEEDED=y
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+CONFIG_IP_NF_TARGET_REDIRECT=m
+# CONFIG_IP_NF_TARGET_NETMAP is not set
+# CONFIG_IP_NF_TARGET_SAME is not set
+CONFIG_IP_NF_NAT_LOCAL=y
+CONFIG_IP_NF_NAT_SNMP_BASIC=m
+CONFIG_IP_NF_NAT_IRC=m
+CONFIG_IP_NF_NAT_FTP=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_TOS=m
+# CONFIG_IP_NF_TARGET_ECN is not set
+# CONFIG_IP_NF_TARGET_DSCP is not set
+CONFIG_IP_NF_TARGET_MARK=m
+# CONFIG_IP_NF_TARGET_CLASSIFY is not set
+CONFIG_IP_NF_TARGET_LOG=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_IP_NF_TARGET_TCPMSS=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+# CONFIG_IP_NF_ARP_MANGLE is not set
+CONFIG_IP_NF_COMPAT_IPCHAINS=m
+CONFIG_IP_NF_COMPAT_IPFWADM=m
+# CONFIG_IP_NF_RAW is not set
+
+#
+# Bridge: Netfilter Configuration
+#
+# CONFIG_BRIDGE_NF_EBTABLES is not set
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+CONFIG_ATM=y
+CONFIG_ATM_CLIP=y
+# CONFIG_ATM_CLIP_NO_ICMP is not set
+CONFIG_ATM_LANE=m
+CONFIG_ATM_MPOA=m
+CONFIG_ATM_BR2684=m
+# CONFIG_ATM_BR2684_IPFILTER is not set
+CONFIG_BRIDGE=m
+CONFIG_VLAN_8021Q=m
+# CONFIG_DECNET is not set
+CONFIG_LLC=m
+# CONFIG_LLC2 is not set
+CONFIG_IPX=m
+# CONFIG_IPX_INTERN is not set
+CONFIG_ATALK=m
+CONFIG_DEV_APPLETALK=y
+CONFIG_IPDDP=m
+CONFIG_IPDDP_ENCAP=y
+CONFIG_IPDDP_DECAP=y
+CONFIG_X25=m
+CONFIG_LAPB=m
+# CONFIG_NET_DIVERT is not set
+CONFIG_ECONET=m
+CONFIG_ECONET_AUNUDP=y
+CONFIG_ECONET_NATIVE=y
+CONFIG_WAN_ROUTER=m
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+# CONFIG_NET_SCH_HFSC is not set
+CONFIG_NET_SCH_CSZ=m
+# CONFIG_NET_SCH_ATM is not set
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+# CONFIG_NET_SCH_DELAY is not set
+CONFIG_NET_SCH_INGRESS=m
+CONFIG_NET_QOS=y
+CONFIG_NET_ESTIMATOR=y
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_ROUTE=y
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_POLICE=y
+
+#
+# Network testing
+#
+CONFIG_NET_PKTGEN=m
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+CONFIG_EEPRO100=y
+# CONFIG_EEPRO100_PIO is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+# CONFIG_AIRO is not set
+CONFIG_HERMES=y
+# CONFIG_PLX_HERMES is not set
+# CONFIG_TMD_HERMES is not set
+CONFIG_PCI_HERMES=y
+# CONFIG_ATMEL is not set
+
+#
+# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
+#
+CONFIG_NET_WIRELESS=y
+
+#
+# Wan interfaces
+#
+CONFIG_WAN=y
+# CONFIG_DSCC4 is not set
+# CONFIG_LANMEDIA is not set
+# CONFIG_SYNCLINK_SYNCPPP is not set
+CONFIG_HDLC=m
+CONFIG_HDLC_RAW=y
+# CONFIG_HDLC_RAW_ETH is not set
+CONFIG_HDLC_CISCO=y
+CONFIG_HDLC_FR=y
+CONFIG_HDLC_PPP=y
+CONFIG_HDLC_X25=y
+# CONFIG_PCI200SYN is not set
+# CONFIG_WANXL is not set
+# CONFIG_PC300 is not set
+# CONFIG_FARSYNC is not set
+CONFIG_DLCI=m
+CONFIG_DLCI_COUNT=24
+CONFIG_DLCI_MAX=8
+CONFIG_WAN_ROUTER_DRIVERS=y
+# CONFIG_CYCLADES_SYNC is not set
+# CONFIG_LAPBETHER is not set
+# CONFIG_X25_ASY is not set
+
+#
+# ATM drivers
+#
+CONFIG_ATM_TCP=m
+# CONFIG_ATM_LANAI is not set
+# CONFIG_ATM_ENI is not set
+# CONFIG_ATM_FIRESTREAM is not set
+# CONFIG_ATM_ZATM is not set
+# CONFIG_ATM_NICSTAR is not set
+# CONFIG_ATM_IDT77252 is not set
+# CONFIG_ATM_AMBASSADOR is not set
+# CONFIG_ATM_HORIZON is not set
+# CONFIG_ATM_IA is not set
+# CONFIG_ATM_FORE200E_MAYBE is not set
+# CONFIG_ATM_HE is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_IDEDISK_STROKE is not set
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+CONFIG_BLK_DEV_IDEPCI=y
+# CONFIG_IDEPCI_SHARE_IRQ is not set
+# CONFIG_BLK_DEV_OFFBOARD is not set
+# CONFIG_BLK_DEV_GENERIC is not set
+# CONFIG_BLK_DEV_OPTI621 is not set
+# CONFIG_BLK_DEV_SL82C105 is not set
+CONFIG_BLK_DEV_IDEDMA_PCI=y
+# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
+# CONFIG_IDEDMA_PCI_AUTO is not set
+CONFIG_BLK_DEV_ADMA=y
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_BLK_DEV_ALI15X3 is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+CONFIG_BLK_DEV_CMD64X=y
+# CONFIG_BLK_DEV_TRIFLEX is not set
+# CONFIG_BLK_DEV_CY82C693 is not set
+# CONFIG_BLK_DEV_CS5520 is not set
+# CONFIG_BLK_DEV_CS5530 is not set
+# CONFIG_BLK_DEV_HPT34X is not set
+CONFIG_BLK_DEV_HPT366=y
+# CONFIG_BLK_DEV_SC1200 is not set
+# CONFIG_BLK_DEV_PIIX is not set
+# CONFIG_BLK_DEV_NS87415 is not set
+# CONFIG_BLK_DEV_PDC202XX_OLD is not set
+CONFIG_BLK_DEV_PDC202XX_NEW=y
+# CONFIG_PDC202XX_FORCE is not set
+# CONFIG_BLK_DEV_SVWKS is not set
+# CONFIG_BLK_DEV_SIIMAGE is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
+# CONFIG_BLK_DEV_TRM290 is not set
+# CONFIG_BLK_DEV_VIA82CXXX is not set
+CONFIG_BLK_DEV_IDEDMA=y
+# CONFIG_IDEDMA_IVB is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=2
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+CONFIG_IXP4XX_WATCHDOG=y
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+CONFIG_I2C_ALGOBIT=y
+# CONFIG_I2C_ALGOPCF is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+# CONFIG_I2C_ISA is not set
+CONFIG_I2C_IXP4XX=y
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_SCx200_ACB is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+
+#
+# Hardware Sensors Chip support
+#
+CONFIG_I2C_SENSOR=y
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+CONFIG_SENSORS_EEPROM=y
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+# CONFIG_EXT2_FS_SECURITY is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+CONFIG_FS_POSIX_ACL=y
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# Kernel hacking
+#
+CONFIG_FRAME_POINTER=y
+# CONFIG_DEBUG_USER is not set
+# CONFIG_DEBUG_INFO is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_WAITQ is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_ERRORS=y
+CONFIG_DEBUG_LL=y
+# CONFIG_DEBUG_ICEDCC is not set
+# CONFIG_DEBUG_BDI2000_XSCALE is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_HOTPLUG=y
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_KMOD is not set
+
+#
+# System Type
+#
+# CONFIG_ARCH_ADIFCC is not set
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+CONFIG_ARCH_PXA=y
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_VERSATILE_PB is not set
+
+#
+# CLPS711X/EP721X Implementations
+#
+
+#
+# Epxa10db
+#
+
+#
+# Footbridge Implementations
+#
+
+#
+# IOP3xx Implementation Options
+#
+# CONFIG_ARCH_IOP310 is not set
+# CONFIG_ARCH_IOP321 is not set
+
+#
+# IOP3xx Chipset Features
+#
+
+#
+# Intel PXA2xx Implementations
+#
+# CONFIG_ARCH_LUBBOCK is not set
+CONFIG_MACH_MAINSTONE=y
+# CONFIG_ARCH_PXA_IDP is not set
+CONFIG_PXA27x=y
+CONFIG_IWMMXT=y
+
+#
+# SA11x0 Implementations
+#
+
+#
+# TI OMAP Implementations
+#
+
+#
+# OMAP Core Type
+#
+
+#
+# OMAP Board Type
+#
+
+#
+# OMAP Feature Selections
+#
+
+#
+# S3C2410 Implementations
+#
+
+#
+# LH7A40X Implementations
+#
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5T=y
+CONFIG_CPU_TLB_V4WBI=y
+CONFIG_CPU_MINICACHE=y
+
+#
+# Processor Features
+#
+# CONFIG_ARM_THUMB is not set
+CONFIG_XSCALE_PMU=y
+
+#
+# General setup
+#
+# CONFIG_ZBOOT_ROM is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+
+#
+# PCMCIA/CardBus support
+#
+CONFIG_PCMCIA=y
+# CONFIG_PCMCIA_DEBUG is not set
+# CONFIG_TCIC is not set
+CONFIG_PCMCIA_PXA2XX=y
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_AOUT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_FW_LOADER is not set
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_PM is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="root=/dev/nfs ip=bootp console=ttyS0,115200 mem=64M"
+CONFIG_LEDS=y
+CONFIG_LEDS_TIMER=y
+CONFIG_LEDS_CPU=y
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_REDBOOT_PARTS=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+CONFIG_MTD_CFI_ADV_OPTIONS=y
+CONFIG_MTD_CFI_NOSWAP=y
+# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
+# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set
+CONFIG_MTD_CFI_GEOMETRY=y
+# CONFIG_MTD_CFI_B1 is not set
+# CONFIG_MTD_CFI_B2 is not set
+CONFIG_MTD_CFI_B4=y
+# CONFIG_MTD_CFI_B8 is not set
+# CONFIG_MTD_CFI_I1 is not set
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+# CONFIG_MTD_EDB7312 is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_RAM is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_PACKET is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_SMC91X=y
+
+#
+# Ethernet (1000 Mbit)
+#
+
+#
+# Ethernet (10000 Mbit)
+#
+
+#
+# Token Ring devices
+#
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_IDEDISK_STROKE is not set
+CONFIG_BLK_DEV_IDECS=y
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+# CONFIG_IDE_GENERIC is not set
+# CONFIG_BLK_DEV_IDEDMA is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_SERIO_CT82C710 is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_PXA=y
+CONFIG_SERIAL_PXA_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+CONFIG_NLS_ISO8859_1=y
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# Kernel hacking
+#
+CONFIG_FRAME_POINTER=y
+CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_WAITQ is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_ERRORS=y
+CONFIG_DEBUG_LL=y
+# CONFIG_DEBUG_ICEDCC is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# System Type
+#
+# CONFIG_ARCH_ADIFCC is not set
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_SHARK is not set
+CONFIG_ARCH_S3C2410=y
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_VERSATILE_PB is not set
+
+#
+# CLPS711X/EP721X Implementations
+#
+
+#
+# Epxa10db
+#
+
+#
+# Footbridge Implementations
+#
+
+#
+# IOP3xx Implementation Options
+#
+# CONFIG_ARCH_IOP310 is not set
+# CONFIG_ARCH_IOP321 is not set
+
+#
+# IOP3xx Chipset Features
+#
+
+#
+# Intel PXA250/210 Implementations
+#
+
+#
+# SA11x0 Implementations
+#
+
+#
+# TI OMAP Implementations
+#
+
+#
+# OMAP Core Type
+#
+
+#
+# OMAP Board Type
+#
+
+#
+# OMAP Feature Selections
+#
+
+#
+# S3C2410 Implementations
+#
+# CONFIG_ARCH_BAST is not set
+# CONFIG_ARCH_H1940 is not set
+CONFIG_ARCH_SMDK2410=y
+# CONFIG_MACH_VR1000 is not set
+
+#
+# LH7A40X Implementations
+#
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_ARM920T=y
+CONFIG_CPU_32v4=y
+CONFIG_CPU_ABRT_EV4T=y
+CONFIG_CPU_CACHE_V4WT=y
+CONFIG_CPU_COPY_V4WB=y
+CONFIG_CPU_TLB_V4WBI=y
+
+#
+# Processor Features
+#
+CONFIG_ARM_THUMB=y
+# CONFIG_CPU_ICACHE_DISABLE is not set
+# CONFIG_CPU_DCACHE_DISABLE is not set
+# CONFIG_CPU_DCACHE_WRITETHROUGH is not set
+
+#
+# General setup
+#
+# CONFIG_ZBOOT_ROM is not set
+CONFIG_ZBOOT_ROM_TEXT=0
+CONFIG_ZBOOT_ROM_BSS=0
+
+#
+# At least one math emulation must be selected
+#
+# CONFIG_FPE_NWFPE is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_BINFMT_ELF=y
+CONFIG_BINFMT_AOUT=y
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_PM is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="root=1f04 mem=32M"
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+# CONFIG_MTD_PARTITIONS is not set
+# CONFIG_MTD_CONCAT is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+# CONFIG_MTD_EDB7312 is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_PACKET is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+
+#
+# Ethernet (10000 Mbit)
+#
+
+#
+# Token Ring devices
+#
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_S3C2410=y
+CONFIG_SERIAL_S3C2410_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+CONFIG_ROMFS_FS=y
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+# CONFIG_JFFS2_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+# CONFIG_MSDOS_PARTITION is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_PCI_CONSOLE=y
+# CONFIG_FONTS is not set
+CONFIG_FONT_8x8=y
+CONFIG_FONT_8x16=y
+
+#
+# Logo configuration
+#
+# CONFIG_LOGO is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# Kernel hacking
+#
+CONFIG_FRAME_POINTER=y
+CONFIG_DEBUG_USER=y
+# CONFIG_DEBUG_INFO is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_WAITQ is not set
+# CONFIG_DEBUG_BUGVERBOSE is not set
+# CONFIG_DEBUG_ERRORS is not set
+CONFIG_DEBUG_LL=y
+# CONFIG_DEBUG_ICEDCC is not set
+CONFIG_DEBUG_LL_PRINTK=y
+CONFIG_DEBUG_S3C2410_PORT=y
+CONFIG_DEBUG_S3C2410_UART=0
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+CONFIG_LIBCRC32C=y
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa285/time.h
+ *
+ * Copyright (C) 1998 Russell King.
+ * Copyright (C) 1998 Phil Blundell
+ *
+ * CATS has a real-time clock, though the evaluation board doesn't.
+ *
+ * Changelog:
+ * 21-Mar-1998 RMK Created
+ * 27-Aug-1998 PJB CATS support
+ * 28-Dec-1998 APH Made leds optional
+ * 20-Jan-1999 RMK Started merge of EBSA285, CATS and NetWinder
+ * 16-Mar-1999 RMK More support for EBSA285-like machines with RTCs in
+ */
+
+#define RTC_PORT(x) (rtc_base+(x))
+#define RTC_ALWAYS_BCD 0
+
+#include <linux/timex.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <linux/mc146818rtc.h>
+#include <linux/bcd.h>
+
+#include <asm/hardware/dec21285.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/leds.h>
+#include <asm/mach-types.h>
+#include <asm/io.h>
+#include <asm/hardware/clps7111.h>
+
+#include <asm/mach/time.h>
+
+static int rtc_base;
+
+#define mSEC_10_from_14 ((14318180 + 100) / 200)
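+/*
+ * The ISA PIT runs at 14.31818MHz / 12 ~= 1.19318MHz, so dividing
+ * this constant by 6 (as the code below does) gives the number of
+ * PIT ticks per 10ms jiffy: 14318180 / 1200 ~= 11932.
+ */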
+
+static unsigned long isa_gettimeoffset(void)
+{
+ int count;
+
+ static int count_p = (mSEC_10_from_14/6); /* for the first call after boot */
+ static unsigned long jiffies_p = 0;
+
+ /*
+ * cache volatile jiffies temporarily; we have IRQs turned off.
+ */
+ unsigned long jiffies_t;
+
+ /* timer count may underflow right here */
+ outb_p(0x00, 0x43); /* latch the count ASAP */
+
+ count = inb_p(0x40); /* read the latched count */
+
+ /*
+ * We do this guaranteed double memory access instead of a _p
+ * postfix in the previous port access. Wheee, hackady hack
+ */
+ jiffies_t = jiffies;
+
+ count |= inb_p(0x40) << 8;
+
+ /* Detect timer underflows. If we haven't had a timer tick since
+ the last time we were called, and time is apparently going
+ backwards, the counter must have wrapped during this routine. */
+ if ((jiffies_t == jiffies_p) && (count > count_p))
+ count -= (mSEC_10_from_14/6);
+ else
+ jiffies_p = jiffies_t;
+
+ count_p = count;
+
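+	/*
+	 * Convert elapsed PIT ticks to microseconds: scale by the
+	 * jiffy length in usec (tick_nsec / 1000), then divide by
+	 * ticks-per-jiffy, rounding to the nearest usec.
+	 */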
+ count = (((mSEC_10_from_14/6)-1) - count) * (tick_nsec / 1000);
+ count = (count + (mSEC_10_from_14/6)/2) / (mSEC_10_from_14/6);
+
+ return count;
+}
+
+static irqreturn_t
+isa_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static unsigned long __init get_isa_cmos_time(void)
+{
+ unsigned int year, mon, day, hour, min, sec;
+ int i;
+
+	/* check to see if the RTC makes sense... */
+ if ((CMOS_READ(RTC_VALID) & RTC_VRT) == 0)
+ return mktime(1970, 1, 1, 0, 0, 0);
+
+ /* The Linux interpretation of the CMOS clock register contents:
+ * When the Update-In-Progress (UIP) flag goes from 1 to 0, the
+ * RTC registers show the second which has precisely just started.
+ * Let's hope other operating systems interpret the RTC the same way.
+ */
+ /* read RTC exactly on falling edge of update flag */
+ for (i = 0 ; i < 1000000 ; i++) /* may take up to 1 second... */
+ if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP)
+ break;
+
+ for (i = 0 ; i < 1000000 ; i++) /* must try at least 2.228 ms */
+ if (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
+ break;
+
+	do { /* Isn't this overkill? UIP above should guarantee consistency */
+ sec = CMOS_READ(RTC_SECONDS);
+ min = CMOS_READ(RTC_MINUTES);
+ hour = CMOS_READ(RTC_HOURS);
+ day = CMOS_READ(RTC_DAY_OF_MONTH);
+ mon = CMOS_READ(RTC_MONTH);
+ year = CMOS_READ(RTC_YEAR);
+ } while (sec != CMOS_READ(RTC_SECONDS));
+
+ if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(sec);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(year);
+ }
+ if ((year += 1900) < 1970)
+ year += 100;
+ return mktime(year, mon, day, hour, min, sec);
+}
+
+static int
+set_isa_cmos_time(void)
+{
+ int retval = 0;
+ int real_seconds, real_minutes, cmos_minutes;
+ unsigned char save_control, save_freq_select;
+ unsigned long nowtime = xtime.tv_sec;
+
+ save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
+ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT); /* stop and reset prescaler */
+ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+
+ cmos_minutes = CMOS_READ(RTC_MINUTES);
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ BCD_TO_BIN(cmos_minutes);
+
+ /*
+ * since we're only adjusting minutes and seconds,
+ * don't interfere with hour overflow. This avoids
+ * messing with unknown time zones but requires your
+ * RTC not to be off by more than 15 minutes
+ */
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
+ real_minutes += 30; /* correct for half hour time zone */
+ real_minutes %= 60;
+
+ if (abs(real_minutes - cmos_minutes) < 30) {
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BIN_TO_BCD(real_seconds);
+ BIN_TO_BCD(real_minutes);
+ }
+ CMOS_WRITE(real_seconds,RTC_SECONDS);
+ CMOS_WRITE(real_minutes,RTC_MINUTES);
+ } else
+ retval = -1;
+
+ /* The following flags have to be released exactly in this order,
+ * otherwise the DS12887 (popular MC146818A clone with integrated
+ * battery and quartz) will not reset the oscillator and will not
+ * update precisely 500 ms later. You won't find this mentioned in
+ * the Dallas Semiconductor data sheets, but who believes data
+ * sheets anyway ... -- Markus Kuhn
+ */
+ CMOS_WRITE(save_control, RTC_CONTROL);
+ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+
+ return retval;
+}
+
+
+static unsigned long timer1_latch;
+
+static unsigned long timer1_gettimeoffset (void)
+{
+ unsigned long value = timer1_latch - *CSR_TIMER1_VALUE;
+
+ return ((tick_nsec / 1000) * value) / timer1_latch;
+}
+
+static irqreturn_t
+timer1_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ *CSR_TIMER1_CLR = 0;
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction footbridge_timer_irq = {
+ .flags = SA_INTERRUPT
+};
+
+/*
+ * Set up timer interrupt.
+ */
+void __init footbridge_init_time(void)
+{
+ if (machine_is_co285() ||
+ machine_is_personal_server())
+ /*
+ * Add-in 21285s shouldn't access the RTC
+ */
+ rtc_base = 0;
+ else
+ rtc_base = 0x70;
+
+ if (rtc_base) {
+ int reg_d, reg_b;
+
+ /*
+ * Probe for the RTC.
+ */
+ reg_d = CMOS_READ(RTC_REG_D);
+
+ /*
+ * make sure the divider is set
+ */
+ CMOS_WRITE(RTC_REF_CLCK_32KHZ, RTC_REG_A);
+
+ /*
+ * Set control reg B
+ * (24 hour mode, update enabled)
+ */
+ reg_b = CMOS_READ(RTC_REG_B) & 0x7f;
+ reg_b |= 2;
+ CMOS_WRITE(reg_b, RTC_REG_B);
+
+ if ((CMOS_READ(RTC_REG_A) & 0x7f) == RTC_REF_CLCK_32KHZ &&
+ CMOS_READ(RTC_REG_B) == reg_b) {
+ struct timespec tv;
+
+ /*
+ * We have a RTC. Check the battery
+ */
+ if ((reg_d & 0x80) == 0)
+ printk(KERN_WARNING "RTC: *** warning: CMOS battery bad\n");
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = get_isa_cmos_time();
+ do_settimeofday(&tv);
+ set_rtc = set_isa_cmos_time;
+ } else
+ rtc_base = 0;
+ }
+
+ if (machine_is_ebsa285() ||
+ machine_is_co285() ||
+ machine_is_personal_server()) {
+ gettimeoffset = timer1_gettimeoffset;
+
+ timer1_latch = (mem_fclk_21285 + 8 * HZ) / (16 * HZ);
+
+ *CSR_TIMER1_CLR = 0;
+ *CSR_TIMER1_LOAD = timer1_latch;
+ *CSR_TIMER1_CNTL = TIMER_CNTL_ENABLE | TIMER_CNTL_AUTORELOAD | TIMER_CNTL_DIV16;
+
+ footbridge_timer_irq.name = "Timer1 Timer Tick";
+ footbridge_timer_irq.handler = timer1_interrupt;
+
+ setup_irq(IRQ_TIMER1, &footbridge_timer_irq);
+
+ } else {
+		/* enable PIT timer */
+		/* control word 0x34: channel 0, lobyte/hibyte access
+		 * (0x30), mode 2 rate generator (0x04) */
+ outb(0x34, 0x43);
+ outb((mSEC_10_from_14/6) & 0xFF, 0x40);
+ outb((mSEC_10_from_14/6) >> 8, 0x40);
+
+ gettimeoffset = isa_gettimeoffset;
+
+ footbridge_timer_irq.name = "ISA Timer Tick";
+ footbridge_timer_irq.handler = isa_timer_interrupt;
+
+ setup_irq(IRQ_ISA_TIMER, &footbridge_timer_irq);
+ }
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-integrator/clock.c
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+
+#include <asm/semaphore.h>
+#include <asm/hardware/clock.h>
+#include <asm/hardware/icst525.h>
+
+#include "clock.h"
+
+static LIST_HEAD(clocks);
+static DECLARE_MUTEX(clocks_sem);
+
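+/*
+ * Look up a clock by name, taking a reference on the owning module
+ * so it cannot be unloaded while the clock is in use.
+ */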
+struct clk *clk_get(struct device *dev, const char *id)
+{
+ struct clk *p, *clk = ERR_PTR(-ENOENT);
+
+ down(&clocks_sem);
+ list_for_each_entry(p, &clocks, node) {
+ if (strcmp(id, p->name) == 0 && try_module_get(p->owner)) {
+ clk = p;
+ break;
+ }
+ }
+ up(&clocks_sem);
+
+ return clk;
+}
+EXPORT_SYMBOL(clk_get);
+
+void clk_put(struct clk *clk)
+{
+ module_put(clk->owner);
+}
+EXPORT_SYMBOL(clk_put);
+
+int clk_enable(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_enable);
+
+void clk_disable(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_disable);
+
+int clk_use(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_use);
+
+void clk_unuse(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_unuse);
+
+unsigned long clk_get_rate(struct clk *clk)
+{
+ return clk->rate;
+}
+EXPORT_SYMBOL(clk_get_rate);
+
+long clk_round_rate(struct clk *clk, unsigned long rate)
+{
+ return rate;
+}
+EXPORT_SYMBOL(clk_round_rate);
+
+int clk_set_rate(struct clk *clk, unsigned long rate)
+{
+ int ret = -EIO;
+ if (clk->setvco) {
+ struct icst525_vco vco;
+
+ vco = icst525_khz_to_vco(clk->params, rate);
+ clk->rate = icst525_khz(clk->params, vco);
+
+ printk("Clock %s: setting VCO reg params: S=%d R=%d V=%d\n",
+ clk->name, vco.s, vco.r, vco.v);
+
+ clk->setvco(clk, vco);
+ ret = 0;
+ }
+	return ret;
+}
+EXPORT_SYMBOL(clk_set_rate);
+
+/*
+ * These are fixed clocks.
+ */
+static struct clk kmi_clk = {
+ .name = "KMIREFCLK",
+ .rate = 24000000,
+};
+
+static struct clk uart_clk = {
+ .name = "UARTCLK",
+ .rate = 14745600,
+};
+
+int clk_register(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_add(&clk->node, &clocks);
+ up(&clocks_sem);
+ return 0;
+}
+EXPORT_SYMBOL(clk_register);
+
+void clk_unregister(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_del(&clk->node);
+ up(&clocks_sem);
+}
+EXPORT_SYMBOL(clk_unregister);
+
+static int __init clk_init(void)
+{
+ clk_register(&kmi_clk);
+ clk_register(&uart_clk);
+ return 0;
+}
+arch_initcall(clk_init);
--- /dev/null
+/*
+ * linux/arch/arm/mach-integrator/clock.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+struct module;
+struct icst525_params;
+
+struct clk {
+ struct list_head node;
+ unsigned long rate;
+ struct module *owner;
+ const char *name;
+ const struct icst525_params *params;
+ void *data;
+ void (*setvco)(struct clk *, struct icst525_vco vco);
+};
+
+int clk_register(struct clk *clk);
+void clk_unregister(struct clk *clk);
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+
+obj-y += common.o common-pci.o
+
+obj-$(CONFIG_ARCH_IXDP4XX) += ixdp425-pci.o ixdp425-setup.o
+obj-$(CONFIG_ARCH_ADI_COYOTE) += coyote-pci.o coyote-setup.o
+obj-$(CONFIG_ARCH_PRPMC1100) += prpmc1100-pci.o prpmc1100-setup.o
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/common-pci.c
+ *
+ * IXP4XX PCI routines for all platforms
+ *
+ * Maintainer: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003 Greg Ungerer <gerg@snapgear.com>
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <asm/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/sizes.h>
+#include <asm/system.h>
+#include <asm/mach/pci.h>
+#include <asm/hardware.h>
+
+
+/*
+ * IXP4xx PCI read function is dependent on whether we are
+ * running A0 or B0 (AppleGate) silicon.
+ */
+int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data);
+
+/*
+ * Base address for PCI register region
+ */
+unsigned long ixp4xx_pci_reg_base = 0;
+
+/*
+ * PCI cfg and I/O routines are done by programming a
+ * command/byte enable register, and then reading/writing
+ * the data from a data register. We need to ensure
+ * these transactions are atomic or we will end up
+ * with corrupt data on the bus or in a driver.
+ */
+static spinlock_t ixp4xx_pci_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Read from PCI config space
+ */
+static void crp_read(u32 ad_cbe, u32 *data)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+ *PCI_CRP_AD_CBE = ad_cbe;
+ *data = *PCI_CRP_RDATA;
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+}
+
+/*
+ * Write to PCI config space
+ */
+static void crp_write(u32 ad_cbe, u32 data)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+ *PCI_CRP_AD_CBE = CRP_AD_CBE_WRITE | ad_cbe;
+ *PCI_CRP_WDATA = data;
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+}
+
+static inline int check_master_abort(void)
+{
+ /* check Master Abort bit after access */
+ unsigned long isr = *PCI_ISR;
+
+ if (isr & PCI_ISR_PFE) {
+ /* make sure the Master Abort bit is reset */
+ *PCI_ISR = PCI_ISR_PFE;
+ pr_debug("%s failed\n", __FUNCTION__);
+ return 1;
+ }
+
+ return 0;
+}
+
+int ixp4xx_pci_read_errata(u32 addr, u32 cmd, u32* data)
+{
+ unsigned long flags;
+ int retval = 0;
+ int i;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /*
+ * PCI workaround - only works if NP PCI space reads have
+ * no side effects! Read 8 times; the last one will be good.
+ */
+ for (i = 0; i < 8; i++) {
+ *PCI_NP_CBE = cmd;
+ *data = *PCI_NP_RDATA;
+ *data = *PCI_NP_RDATA;
+ }
+
+ if(check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
+int ixp4xx_pci_read_no_errata(u32 addr, u32 cmd, u32* data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /* set up and execute the read */
+ *PCI_NP_CBE = cmd;
+
+ /* the result of the read is now in NP_RDATA */
+ *data = *PCI_NP_RDATA;
+
+ if(check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
+int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /* set up the write */
+ *PCI_NP_CBE = cmd;
+
+ /* execute the write by writing to NP_WDATA */
+ *PCI_NP_WDATA = data;
+
+ if(check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
+static u32 ixp4xx_config_addr(u8 bus_num, u16 devfn, int where)
+{
+ u32 addr;
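+	/*
+	 * Type 0 (same-bus) cycles select a device by driving a single
+	 * AD line, AD[32 - slot]; type 1 cycles encode bus, device and
+	 * function in the standard PCI configuration address format.
+	 */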
+ if (!bus_num) {
+ /* type 0 */
+ addr = BIT(32-PCI_SLOT(devfn)) | ((PCI_FUNC(devfn)) << 8) |
+ (where & ~3);
+ } else {
+ /* type 1 */
+ addr = (bus_num << 16) | ((PCI_SLOT(devfn)) << 11) |
+ ((PCI_FUNC(devfn)) << 8) | (where & ~3) | 1;
+ }
+ return addr;
+}
+
+/*
+ * Mask table, bits to mask for quantity of size 1, 2 or 4 bytes.
+ * 0 and 3 are not valid indexes...
+ */
+static u32 bytemask[] = {
+ /*0*/ 0,
+ /*1*/ 0xff,
+ /*2*/ 0xffff,
+ /*3*/ 0,
+ /*4*/ 0xffffffff,
+};
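+/*
+ * Example: a 2-byte config read at offset 2 shifts the fetched dword
+ * right by 8 * (2 % 4) = 16 bits and masks it with bytemask[2] =
+ * 0xffff, leaving just the requested halfword.
+ */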
+
+static u32 local_byte_lane_enable_bits(u32 n, int size)
+{
+ if (size == 1)
+ return (0xf & ~BIT(n)) << CRP_AD_CBE_BESL;
+ if (size == 2)
+ return (0xf & ~(BIT(n) | BIT(n+1))) << CRP_AD_CBE_BESL;
+ if (size == 4)
+ return 0;
+ return 0xffffffff;
+}
+
+static int local_read_config(int where, int size, u32 *value)
+{
+ u32 n, data;
+ pr_debug("local_read_config from %d size %d\n", where, size);
+ n = where % 4;
+ crp_read(where & ~3, &data);
+ *value = (data >> (8*n)) & bytemask[size];
+ pr_debug("local_read_config read %#x\n", *value);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int local_write_config(int where, int size, u32 value)
+{
+ u32 n, byte_enables, data;
+ pr_debug("local_write_config %#x to %d size %d\n", value, where, size);
+ n = where % 4;
+ byte_enables = local_byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+ data = value << (8*n);
+ crp_write((where & ~3) | byte_enables, data);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static u32 byte_lane_enable_bits(u32 n, int size)
+{
+ if (size == 1)
+ return (0xf & ~BIT(n)) << 4;
+ if (size == 2)
+ return (0xf & ~(BIT(n) | BIT(n+1))) << 4;
+ if (size == 4)
+ return 0;
+ return 0xffffffff;
+}
+
+static int read_config(u8 bus_num, u16 devfn, int where, int size, u32 *value)
+{
+ u32 n, byte_enables, addr, data;
+
+ pr_debug("read_config from %d size %d dev %d:%d:%d\n", where, size,
+ bus_num, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+ *value = 0xffffffff;
+ n = where % 4;
+ byte_enables = byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ addr = ixp4xx_config_addr(bus_num, devfn, where);
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_CONFIGREAD, &data))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ *value = (data >> (8*n)) & bytemask[size];
+ pr_debug("read_config_byte read %#x\n", *value);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int write_config(u8 bus_num, u16 devfn, int where, int size, u32 value)
+{
+ u32 n, byte_enables, addr, data;
+
+ pr_debug("write_config_byte %#x to %d size %d dev %d:%d:%d\n", value, where,
+ size, bus_num, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+ n = where % 4;
+ byte_enables = byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ addr = ixp4xx_config_addr(bus_num, devfn, where);
+ data = value << (8*n);
+ if (ixp4xx_pci_write(addr, byte_enables | NP_CMD_CONFIGWRITE, data))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+/*
+ * Generalized PCI config access functions.
+ */
+static int ixp4xx_read_config(struct pci_bus *bus, unsigned int devfn,
+ int where, int size, u32 *value)
+{
+ if (bus->number && !PCI_SLOT(devfn))
+ return local_read_config(where, size, value);
+ return read_config(bus->number, devfn, where, size, value);
+}
+
+static int ixp4xx_write_config(struct pci_bus *bus, unsigned int devfn,
+ int where, int size, u32 value)
+{
+ if (bus->number && !PCI_SLOT(devfn))
+ return local_write_config(where, size, value);
+ return write_config(bus->number, devfn, where, size, value);
+}
+
+struct pci_ops ixp4xx_ops = {
+ .read = ixp4xx_read_config,
+ .write = ixp4xx_write_config,
+};
+
+
+/*
+ * PCI abort handler
+ */
+static int abort_handler(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+{
+ u32 isr, status;
+
+ isr = *PCI_ISR;
+ local_read_config(PCI_STATUS, 2, &status);
+ pr_debug("PCI: abort_handler addr = %#lx, isr = %#x, "
+ "status = %#x\n", addr, isr, status);
+
+ /* make sure the Master Abort bit is reset */
+ *PCI_ISR = PCI_ISR_PFE;
+ status |= PCI_STATUS_REC_MASTER_ABORT;
+ local_write_config(PCI_STATUS, 2, status);
+
+ /*
+ * If it was an imprecise abort, then we need to correct the
+ * return address to be _after_ the instruction.
+ */
+ if (fsr & (1 << 10))
+ regs->ARM_pc += 4;
+
+ return 0;
+}
+
+
+/*
+ * Set the DMA mask to 64MB on PCI devices; ignore all other devices.
+ */
+static int ixp4xx_pci_platform_notify(struct device *dev)
+{
+ if(dev->bus == &pci_bus_type) {
+ *dev->dma_mask = SZ_64M - 1;
+ dev->coherent_dma_mask = SZ_64M - 1;
+ dmabounce_register_dev(dev, 2048, 4096);
+ }
+ return 0;
+}
+
+static int ixp4xx_pci_platform_notify_remove(struct device *dev)
+{
+ if(dev->bus == &pci_bus_type) {
+ dmabounce_unregister_dev(dev);
+ }
+ return 0;
+}
+
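+/*
+ * Called by the dmabounce code: any PCI mapping that reaches or
+ * crosses the 64MB inbound window limit must be bounced.
+ */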
+int dma_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
+{
+ return (dev->bus == &pci_bus_type ) && ((dma_addr + size) >= SZ_64M);
+}
+
+void __init ixp4xx_pci_preinit(void)
+{
+ unsigned long processor_id;
+
+ asm("mrc p15, 0, %0, cr0, cr0, 0;" : "=r"(processor_id) :);
+
+ /*
+ * Determine which PCI read method to use
+ */
+ if (!(processor_id & 0xf)) {
+ printk("PCI: IXP4xx A0 silicon detected - "
+ "PCI Non-Prefetch Workaround Enabled\n");
+ ixp4xx_pci_read = ixp4xx_pci_read_errata;
+ } else
+ ixp4xx_pci_read = ixp4xx_pci_read_no_errata;
+
+
+ /* hook in our fault handler for PCI errors */
+ hook_fault_code(16+6, abort_handler, SIGBUS, "imprecise external abort");
+
+ pr_debug("setup PCI-AHB(inbound) and AHB-PCI(outbound) address mappings\n");
+
+ /*
+ * We use identity AHB->PCI address translation
+ * in the 0x48000000 to 0x4bffffff address space
+ */
+ *PCI_PCIMEMBASE = 0x48494A4B;
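+	/*
+	 * Each byte of PCIMEMBASE holds the top 8 address bits of one
+	 * 16MB window: 0x48, 0x49, 0x4a and 0x4b map the four windows
+	 * 1:1 onto 0x48000000-0x4bffffff.
+	 */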
+
+ /*
+ * We also use identity PCI->AHB address translation
+ * in 4 16MB BARs that begin at the physical memory start
+ */
+ *PCI_AHBMEMBASE = (PHYS_OFFSET & 0xFF000000) +
+ ((PHYS_OFFSET & 0xFF000000) >> 8) +
+ ((PHYS_OFFSET & 0xFF000000) >> 16) +
+ ((PHYS_OFFSET & 0xFF000000) >> 24) +
+ 0x00010203;
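+	/*
+	 * Worked example: with PHYS_OFFSET = 0x00000000 this writes
+	 * 0x00010203, so the four inbound BARs decode to AHB addresses
+	 * 0x00000000, 0x01000000, 0x02000000 and 0x03000000, matching
+	 * the BAR values programmed below when we are host.
+	 */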
+
+ if (*PCI_CSR & PCI_CSR_HOST) {
+ printk("PCI: IXP4xx is host\n");
+
+ pr_debug("setup BARs in controller\n");
+
+ /*
+ * We configure the PCI inbound memory windows to be
+ * 1:1 mapped to SDRAM
+ */
+ local_write_config(PCI_BASE_ADDRESS_0, 4, PHYS_OFFSET + 0x00000000);
+ local_write_config(PCI_BASE_ADDRESS_1, 4, PHYS_OFFSET + 0x01000000);
+ local_write_config(PCI_BASE_ADDRESS_2, 4, PHYS_OFFSET + 0x02000000);
+ local_write_config(PCI_BASE_ADDRESS_3, 4, PHYS_OFFSET + 0x03000000);
+
+ /*
+ * Enable CSR window at 0xff000000.
+ */
+ local_write_config(PCI_BASE_ADDRESS_4, 4, 0xff000008);
+
+ /*
+ * Enable the IO window to be way up high, at 0xfffffc00
+ */
+ local_write_config(PCI_BASE_ADDRESS_5, 4, 0xfffffc01);
+ } else {
+ printk("PCI: IXP4xx is target - No bus scan performed\n");
+ }
+
+ printk("PCI: IXP4xx Using %s access for memory space\n",
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+ "direct"
+#else
+ "indirect"
+#endif
+ );
+
+ pr_debug("clear error bits in ISR\n");
+ *PCI_ISR = PCI_ISR_PSE | PCI_ISR_PFE | PCI_ISR_PPE | PCI_ISR_AHBE;
+
+ /*
+ * Set Initialize Complete in PCI Control Register: allow IXP4XX to
+ * respond to PCI configuration cycles. Specify that the AHB bus is
+ * operating in big endian mode. Set up byte lane swapping between
+ * little-endian PCI and the big-endian AHB bus
+ */
+#ifdef __ARMEB__
+ *PCI_CSR = PCI_CSR_IC | PCI_CSR_ABE | PCI_CSR_PDS | PCI_CSR_ADS;
+#else
+ *PCI_CSR = PCI_CSR_IC;
+#endif
+
+ pr_debug("DONE\n");
+}
+
+int ixp4xx_setup(int nr, struct pci_sys_data *sys)
+{
+ struct resource *res;
+
+ if (nr >= 1)
+ return 0;
+
+ res = kmalloc(sizeof(*res) * 2, GFP_KERNEL);
+ if (res == NULL) {
+ /*
+ * If we're out of memory this early, something is wrong,
+ * so we might as well catch it here.
+ */
+ panic("PCI: unable to allocate resources?\n");
+ }
+ memset(res, 0, sizeof(*res) * 2);
+
+ local_write_config(PCI_COMMAND, 2, PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY);
+
+ res[0].name = "PCI I/O Space";
+ res[0].start = 0x00001000;
+ res[0].end = 0xffff0000;
+ res[0].flags = IORESOURCE_IO;
+
+ res[1].name = "PCI Memory Space";
+ res[1].start = 0x48000000;
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+ res[1].end = 0x4bffffff;
+#else
+ res[1].end = 0x4fffffff;
+#endif
+ res[1].flags = IORESOURCE_MEM;
+
+ request_resource(&ioport_resource, &res[0]);
+ request_resource(&iomem_resource, &res[1]);
+
+ sys->resource[0] = &res[0];
+ sys->resource[1] = &res[1];
+ sys->resource[2] = NULL;
+
+ platform_notify = ixp4xx_pci_platform_notify;
+ platform_notify_remove = ixp4xx_pci_platform_notify_remove;
+
+ return 1;
+}
+
+struct pci_bus *ixp4xx_scan_bus(int nr, struct pci_sys_data *sys)
+{
+ return pci_scan_bus(sys->busnr, &ixp4xx_ops, sys);
+}
+
+/*
+ * We override these so we properly do dmabounce; otherwise drivers
+ * are able to set the dma_mask to 0xffffffff and we can no longer
+ * trap bounces. :(
+ *
+ * We just return success on everything except masks smaller than
+ * 64MB, in which case we fail miserably and die since we can't
+ * handle that case.
+ */
+int
+pci_set_dma_mask(struct pci_dev *dev, u64 mask)
+{
+ if (mask >= SZ_64M - 1 )
+ return 0;
+
+ return -EIO;
+}
+
+int
+pci_dac_set_dma_mask(struct pci_dev *dev, u64 mask)
+{
+ if (mask >= SZ_64M - 1 )
+ return 0;
+
+ return -EIO;
+}
+
+int
+pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
+{
+ if (mask >= SZ_64M - 1 )
+ return 0;
+
+ return -EIO;
+}
+
+EXPORT_SYMBOL(pci_set_dma_mask);
+EXPORT_SYMBOL(pci_dac_set_dma_mask);
+EXPORT_SYMBOL(pci_set_consistent_dma_mask);
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/common.c
+ *
+ * Generic code shared across all IXP4XX platforms
+ *
+ * Maintainer: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright 2002 (c) Intel Corporation
+ * Copyright 2003-2004 (c) MontaVista Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/sched.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+#include <linux/bootmem.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+
+#include <asm/hardware.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/irq.h>
+
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+
+
+/*************************************************************************
+ * GPIO access functions
+ *************************************************************************/
+
+/*
+ * Configure GPIO line for input, interrupt, or output operation
+ *
+ * TODO: Enable/disable the irq_desc based on interrupt or output mode.
+ * TODO: Should these be named ixp4xx_gpio_?
+ */
+void gpio_line_config(u8 line, u32 style)
+{
+ u32 enable;
+ volatile u32 *int_reg;
+ u32 int_style;
+
+ enable = *IXP4XX_GPIO_GPOER;
+
+ if (style & IXP4XX_GPIO_OUT) {
+ enable &= ~((1) << line);
+ } else if (style & IXP4XX_GPIO_IN) {
+ enable |= ((1) << line);
+
+		switch (style & IXP4XX_GPIO_INTSTYLE_MASK) {
+ case (IXP4XX_GPIO_ACTIVE_HIGH):
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_HIGH;
+ break;
+ case (IXP4XX_GPIO_ACTIVE_LOW):
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_LOW;
+ break;
+ case (IXP4XX_GPIO_RISING_EDGE):
+ int_style = IXP4XX_GPIO_STYLE_RISING_EDGE;
+ break;
+ case (IXP4XX_GPIO_FALLING_EDGE):
+ int_style = IXP4XX_GPIO_STYLE_FALLING_EDGE;
+ break;
+ case (IXP4XX_GPIO_TRANSITIONAL):
+ int_style = IXP4XX_GPIO_STYLE_TRANSITIONAL;
+ break;
+ default:
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_HIGH;
+ break;
+ }
+
+ if (line >= 8) { /* pins 8-15 */
+ line -= 8;
+ int_reg = IXP4XX_GPIO_GPIT2R;
+	} else {		/* pins 0-7 */
+ int_reg = IXP4XX_GPIO_GPIT1R;
+ }
+
+ /* Clear the style for the appropriate pin */
+ *int_reg &= ~(IXP4XX_GPIO_STYLE_CLEAR <<
+ (line * IXP4XX_GPIO_STYLE_SIZE));
+
+ /* Set the new style */
+ *int_reg |= (int_style << (line * IXP4XX_GPIO_STYLE_SIZE));
+ }
+
+ *IXP4XX_GPIO_GPOER = enable;
+}
+
+EXPORT_SYMBOL(gpio_line_config);
+
+/*************************************************************************
+ * IXP4xx chipset I/O mapping
+ *************************************************************************/
+static struct map_desc ixp4xx_io_desc[] __initdata = {
+ { /* UART, Interrupt ctrl, GPIO, timers, NPEs, MACs, USB .... */
+ .virtual = IXP4XX_PERIPHERAL_BASE_VIRT,
+ .physical = IXP4XX_PERIPHERAL_BASE_PHYS,
+ .length = IXP4XX_PERIPHERAL_REGION_SIZE,
+ .type = MT_DEVICE
+ }, { /* Expansion Bus Config Registers */
+ .virtual = IXP4XX_EXP_CFG_BASE_VIRT,
+ .physical = IXP4XX_EXP_CFG_BASE_PHYS,
+ .length = IXP4XX_EXP_CFG_REGION_SIZE,
+ .type = MT_DEVICE
+ }, { /* PCI Registers */
+ .virtual = IXP4XX_PCI_CFG_BASE_VIRT,
+ .physical = IXP4XX_PCI_CFG_BASE_PHYS,
+ .length = IXP4XX_PCI_CFG_REGION_SIZE,
+ .type = MT_DEVICE
+ }
+};
+
+void __init ixp4xx_map_io(void)
+{
+ iotable_init(ixp4xx_io_desc, ARRAY_SIZE(ixp4xx_io_desc));
+}
+
+
+/*************************************************************************
+ * IXP4xx chipset IRQ handling
+ *
+ * TODO: GPIO IRQs should be marked invalid until the user of the IRQ
+ * (be it PCI or something else) configures that GPIO line
+ * as an IRQ. Also, we should use a different chip structure for
+ * level-based GPIO vs edge-based GPIO. Currently nobody needs this as
+ *	all HW that's publicly available uses level IRQs, so we'll
+ * worry about it if/when we have HW to test.
+ **************************************************************************/
+static void ixp4xx_irq_mask(unsigned int irq)
+{
+ *IXP4XX_ICMR &= ~(1 << irq);
+}
+
+static void ixp4xx_irq_mask_ack(unsigned int irq)
+{
+ ixp4xx_irq_mask(irq);
+}
+
+static void ixp4xx_irq_unmask(unsigned int irq)
+{
+ static int irq2gpio[NR_IRQS] = {
+ -1, -1, -1, -1, -1, -1, 0, 1,
+ -1, -1, -1, -1, -1, -1, -1, -1,
+ -1, -1, -1, 2, 3, 4, 5, 6,
+ 7, 8, 9, 10, 11, 12, -1, -1,
+ };
+ int line = irq2gpio[irq];
+
+ /*
+ * This only works for LEVEL gpio IRQs as per the IXP4xx developer's
+ * manual. If edge-triggered, need to move it to the mask_ack.
+ * Nobody seems to be using the edge-triggered mode on the GPIOs.
+ */
+ if (line >= 0)
+ gpio_line_isr_clear(line);
+
+ *IXP4XX_ICMR |= (1 << irq);
+}
+
+static struct irqchip ixp4xx_irq_chip = {
+ .ack = ixp4xx_irq_mask_ack,
+ .mask = ixp4xx_irq_mask,
+ .unmask = ixp4xx_irq_unmask,
+};
+
+void __init ixp4xx_init_irq(void)
+{
+ int i = 0;
+
+ /* Route all sources to IRQ instead of FIQ */
+ *IXP4XX_ICLR = 0x0;
+
+	/* Disable all interrupts */
+ *IXP4XX_ICMR = 0x0;
+
+	for (i = 0; i < NR_IRQS; i++) {
+ set_irq_chip(i, &ixp4xx_irq_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID);
+ }
+}
+
+
+/*************************************************************************
+ * IXP4xx timer tick
+ * We use OS timer1 on the CPU for the timer tick and the timestamp
+ * counter as a source of real clock ticks to account for missed jiffies.
+ *************************************************************************/
+
+static unsigned volatile last_jiffy_time;
+
+#define CLOCK_TICKS_PER_USEC (CLOCK_TICK_RATE / USEC_PER_SEC)
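+/*
+ * The timestamp counter is clocked at 66.66MHz on IXP4xx, so this
+ * works out to roughly 66 ticks per microsecond.
+ */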
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long ixp4xx_gettimeoffset(void)
+{
+ u32 elapsed;
+
+ elapsed = *IXP4XX_OSTS - last_jiffy_time;
+
+ return elapsed / CLOCK_TICKS_PER_USEC;
+}
+
+static irqreturn_t ixp4xx_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ /* Clear Pending Interrupt by writing '1' to it */
+ *IXP4XX_OSST = IXP4XX_OSST_TIMER_1_PEND;
+
+ /*
+ * Catch up with the real idea of time
+ */
+ do {
+ do_timer(regs);
+ last_jiffy_time += LATCH;
+ } while((*IXP4XX_OSTS - last_jiffy_time) > LATCH);
+
+ return IRQ_HANDLED;
+}
+
+extern unsigned long (*gettimeoffset)(void);
+
+static struct irqaction timer_irq = {
+ .name = "IXP4xx Timer Tick",
+ .flags = SA_INTERRUPT
+};
+
+void __init time_init(void)
+{
+ gettimeoffset = ixp4xx_gettimeoffset;
+ timer_irq.handler = ixp4xx_timer_interrupt;
+
+ /* Clear Pending Interrupt by writing '1' to it */
+ *IXP4XX_OSST = IXP4XX_OSST_TIMER_1_PEND;
+
+ /* Setup the Timer counter value */
+ *IXP4XX_OSRT1 = (LATCH & ~IXP4XX_OST_RELOAD_MASK) | IXP4XX_OST_ENABLE;
+
+ /* Reset time-stamp counter */
+ *IXP4XX_OSTS = 0;
+ last_jiffy_time = 0;
+
+ /* Connect the interrupt handler and enable the interrupt */
+ setup_irq(IRQ_IXP4XX_TIMER1, &timer_irq);
+}
+
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/coyote-pci.c
+ *
+ * PCI setup routines for ADI Engineering Coyote platform
+ *
+ * Copyright (C) 2002 Jungo Software Technologies.
+ * Copyright (C) 2003 MontaVista Software, Inc.
+ *
+ * Maintainer: Deepak Saxena <dsaxena@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/pci.h>
+#include <linux/init.h>
+
+#include <asm/mach-types.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+
+#include <asm/mach/pci.h>
+
+extern void ixp4xx_pci_preinit(void);
+extern int ixp4xx_setup(int nr, struct pci_sys_data *sys);
+extern struct pci_bus *ixp4xx_scan_bus(int nr, struct pci_sys_data *sys);
+
+void __init coyote_pci_preinit(void)
+{
+ gpio_line_config(COYOTE_PCI_SLOT0_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+
+ gpio_line_config(COYOTE_PCI_SLOT1_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+
+ gpio_line_isr_clear(COYOTE_PCI_SLOT0_PIN);
+ gpio_line_isr_clear(COYOTE_PCI_SLOT1_PIN);
+
+ ixp4xx_pci_preinit();
+}
+
+static int __init coyote_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ if (slot == COYOTE_PCI_SLOT0_DEVID)
+ return IRQ_COYOTE_PCI_SLOT0;
+ else if (slot == COYOTE_PCI_SLOT1_DEVID)
+ return IRQ_COYOTE_PCI_SLOT1;
+	else
+		return -1;
+}
+
+struct hw_pci coyote_pci __initdata = {
+ .nr_controllers = 1,
+ .preinit = coyote_pci_preinit,
+ .swizzle = pci_std_swizzle,
+ .setup = ixp4xx_setup,
+ .scan = ixp4xx_scan_bus,
+ .map_irq = coyote_map_irq,
+};
+
+int __init coyote_pci_init(void)
+{
+ if (machine_is_adi_coyote())
+ pci_common_init(&coyote_pci);
+ return 0;
+}
+
+subsys_initcall(coyote_pci_init);
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/ixdp425-pci.c
+ *
+ * IXDP425 board-level PCI initialization
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * Maintainer: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+
+#include <asm/mach/pci.h>
+#include <asm/irq.h>
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+
+void __init ixdp425_pci_preinit(void)
+{
+ gpio_line_config(IXDP425_PCI_INTA_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(IXDP425_PCI_INTB_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(IXDP425_PCI_INTC_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(IXDP425_PCI_INTD_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+
+ gpio_line_isr_clear(IXDP425_PCI_INTA_PIN);
+ gpio_line_isr_clear(IXDP425_PCI_INTB_PIN);
+ gpio_line_isr_clear(IXDP425_PCI_INTC_PIN);
+ gpio_line_isr_clear(IXDP425_PCI_INTD_PIN);
+
+ ixp4xx_pci_preinit();
+}
+
+static int __init ixdp425_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ static int pci_irq_table[IXDP425_PCI_IRQ_LINES] = {
+ IRQ_IXDP425_PCI_INTA,
+ IRQ_IXDP425_PCI_INTB,
+ IRQ_IXDP425_PCI_INTC,
+ IRQ_IXDP425_PCI_INTD
+ };
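+	/*
+	 * Standard rotating swizzle: slot 1/INTA -> INTA, slot 2/INTA
+	 * -> INTB, and so on, hence the (slot + pin - 2) % 4 index
+	 * below.
+	 */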
+
+ int irq = -1;
+
+ if (slot >= 1 && slot <= IXDP425_PCI_MAX_DEV &&
+ pin >= 1 && pin <= IXDP425_PCI_IRQ_LINES) {
+ irq = pci_irq_table[(slot + pin - 2) % 4];
+ }
+
+ return irq;
+}
+
+struct hw_pci ixdp425_pci __initdata = {
+ .nr_controllers = 1,
+ .preinit = ixdp425_pci_preinit,
+ .swizzle = pci_std_swizzle,
+ .setup = ixp4xx_setup,
+ .scan = ixp4xx_scan_bus,
+ .map_irq = ixdp425_map_irq,
+};
+
+int __init ixdp425_pci_init(void)
+{
+ if (machine_is_ixdp425() ||
+ machine_is_ixcdp1100() ||
+ machine_is_avila())
+ pci_common_init(&ixdp425_pci);
+ return 0;
+}
+
+subsys_initcall(ixdp425_pci_init);
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/prpmc1100-pci.c
+ *
+ * PrPMC1100 PCI initialization
+ *
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ * Based on IXDP425 code originally (C) Intel Corporation
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * PrPMC1100 PCI init code. GPIO usage is similar to that on
+ * IXDP425, but the IRQ routing is completely different and
+ * depends on what carrier you are using. This code is written
+ * to work on the Motorola PrPMC800 ATX carrier board.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+
+#include <asm/mach-types.h>
+#include <asm/irq.h>
+#include <asm/hardware.h>
+
+#include <asm/mach/pci.h>
+
+
+void __init prpmc1100_pci_preinit(void)
+{
+ gpio_line_config(PRPMC1100_PCI_INTA_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(PRPMC1100_PCI_INTB_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(PRPMC1100_PCI_INTC_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+ gpio_line_config(PRPMC1100_PCI_INTD_PIN,
+ IXP4XX_GPIO_IN | IXP4XX_GPIO_ACTIVE_LOW);
+
+ gpio_line_isr_clear(PRPMC1100_PCI_INTA_PIN);
+ gpio_line_isr_clear(PRPMC1100_PCI_INTB_PIN);
+ gpio_line_isr_clear(PRPMC1100_PCI_INTC_PIN);
+ gpio_line_isr_clear(PRPMC1100_PCI_INTD_PIN);
+
+ ixp4xx_pci_preinit();
+}
+
+
+static int __init prpmc1100_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int irq = -1;
+
+ static int pci_irq_table[][4] = {
+ { /* IDSEL 16 - PMC A1 */
+ IRQ_PRPMC1100_PCI_INTD,
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB,
+ IRQ_PRPMC1100_PCI_INTC
+ }, { /* IDSEL 17 - PRPMC-A-B */
+ IRQ_PRPMC1100_PCI_INTD,
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB,
+ IRQ_PRPMC1100_PCI_INTC
+ }, { /* IDSEL 18 - PMC A1-B */
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB,
+ IRQ_PRPMC1100_PCI_INTC,
+ IRQ_PRPMC1100_PCI_INTD
+ }, { /* IDSEL 19 - Unused */
+ 0, 0, 0, 0
+ }, { /* IDSEL 20 - P2P Bridge */
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB,
+ IRQ_PRPMC1100_PCI_INTC,
+ IRQ_PRPMC1100_PCI_INTD
+ }, { /* IDSEL 21 - PMC A2 */
+ IRQ_PRPMC1100_PCI_INTC,
+ IRQ_PRPMC1100_PCI_INTD,
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB
+ }, { /* IDSEL 22 - PMC A2-B */
+ IRQ_PRPMC1100_PCI_INTD,
+ IRQ_PRPMC1100_PCI_INTA,
+ IRQ_PRPMC1100_PCI_INTB,
+ IRQ_PRPMC1100_PCI_INTC
+ },
+ };
+
+ if (slot >= PRPMC1100_PCI_MIN_DEVID && slot <= PRPMC1100_PCI_MAX_DEVID
+ && pin >= 1 && pin <= PRPMC1100_PCI_IRQ_LINES) {
+ irq = pci_irq_table[slot - PRPMC1100_PCI_MIN_DEVID][pin - 1];
+ }
+
+ return irq;
+}
+
+
+struct hw_pci prpmc1100_pci __initdata = {
+ .nr_controllers = 1,
+ .preinit = prpmc1100_pci_preinit,
+ .swizzle = pci_std_swizzle,
+ .setup = ixp4xx_setup,
+ .scan = ixp4xx_scan_bus,
+ .map_irq = prpmc1100_map_irq,
+};
+
+int __init prpmc1100_pci_init(void)
+{
+ if (machine_is_prpmc1100())
+ pci_common_init(&prpmc1100_pci);
+ return 0;
+}
+
+subsys_initcall(prpmc1100_pci_init);
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/prpmc1100-setup.c
+ *
+ * Motorola PrPMC1100 board setup
+ *
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/serial.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+
+#ifdef __ARMEB__
+#define REG_OFFSET 3
+#else
+#define REG_OFFSET 0
+#endif
+
+/*
+ * Only one serial port is connected on the PrPMC1100
+ */
+static struct uart_port prpmc1100_serial_port = {
+ .membase = (char*)(IXP4XX_UART1_BASE_VIRT + REG_OFFSET),
+ .mapbase = (IXP4XX_UART1_BASE_PHYS),
+ .irq = IRQ_IXP4XX_UART1,
+ .flags = UPF_SKIP_TEST,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ .line = 0,
+ .type = PORT_XSCALE,
+ .fifosize = 32
+};
+
+void __init prpmc1100_map_io(void)
+{
+ early_serial_setup(&prpmc1100_serial_port);
+
+ ixp4xx_map_io();
+}
+
+static struct flash_platform_data prpmc1100_flash_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+};
+
+static struct resource prpmc1100_flash_resource = {
+ .start = PRPMC1100_FLASH_BASE,
+	.end		= PRPMC1100_FLASH_BASE + PRPMC1100_FLASH_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device prpmc1100_flash_device = {
+ .name = "IXP4XX-Flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &prpmc1100_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &prpmc1100_flash_resource,
+};
+
+static void __init prpmc1100_init(void)
+{
+ platform_add_device(&prpmc1100_flash_device);
+}
+
+MACHINE_START(PRPMC1100, "Motorola PrPMC1100")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(prpmc1100_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(prpmc1100_init)
+MACHINE_END
+
--- /dev/null
+/*
+ * arch/arm/mach-lh7a40x/time.c
+ *
+ * Copyright (C) 2004 Logic Product Development
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/leds.h>
+
+#include <asm/mach/time.h>
+
+#if HZ < 100
+# define TIMER_CONTROL TIMER_CONTROL1
+# define TIMER_LOAD TIMER_LOAD1
+# define TIMER_CONSTANT (508469/HZ)
+# define TIMER_MODE (TIMER_C_ENABLE | TIMER_C_PERIODIC | TIMER_C_508KHZ)
+# define TIMER_EOI TIMER_EOI1
+# define TIMER_IRQ IRQ_T1UI
+#else
+# define TIMER_CONTROL TIMER_CONTROL3
+# define TIMER_LOAD TIMER_LOAD3
+# define TIMER_CONSTANT (3686400/HZ)
+# define TIMER_MODE (TIMER_C_ENABLE | TIMER_C_PERIODIC)
+# define TIMER_EOI TIMER_EOI3
+# define TIMER_IRQ IRQ_T3UI
+#endif
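+/*
+ * Either way the reload constant is (timer clock) / HZ: timer 1
+ * counts at 508.469kHz, timer 3 at 3.6864MHz, giving one underflow
+ * interrupt per jiffy.
+ */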
+
+static irqreturn_t
+lh7a40x_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ TIMER_EOI = 0;
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction lh7a40x_timer_irq = {
+	.name		= "LH7A40x Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = lh7a40x_timer_interrupt
+};
+
+void __init lh7a40x_init_time(void)
+{
+ /* Stop/disable all timers */
+ TIMER_CONTROL1 = 0;
+ TIMER_CONTROL2 = 0;
+ TIMER_CONTROL3 = 0;
+
+ setup_irq (TIMER_IRQ, &lh7a40x_timer_irq);
+
+ TIMER_LOAD = TIMER_CONSTANT;
+ TIMER_CONTROL = TIMER_MODE;
+}
+
--- /dev/null
+/*
+ * arch/arm/mach-omap/time.c
+ *
+ * OMAP Timer Tick
+ *
+ * Copyright (C) 2000 RidgeRun, Inc.
+ * Author: Greg Lonnon <glonnon@ridgerun.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/leds.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
+#include <asm/arch/clocks.h>
+
+#ifndef __instrument
+#define __instrument
+#define __noinstrument __attribute__ ((no_instrument_function))
+#endif
+
+typedef struct {
+ u32 cntl; /* CNTL_TIMER, R/W */
+ u32 load_tim; /* LOAD_TIM, W */
+ u32 read_tim; /* READ_TIM, R */
+} mputimer_regs_t;
+
+#define mputimer_base(n) \
+ ((volatile mputimer_regs_t*)IO_ADDRESS(OMAP_MPUTIMER_BASE + \
+ (n)*OMAP_MPUTIMER_OFFSET))
+
+static inline unsigned long timer32k_read(int reg) {
+ unsigned long val;
+ val = omap_readw(reg + OMAP_32kHz_TIMER_BASE);
+ return val;
+}
+static inline void timer32k_write(int reg,int val) {
+ omap_writew(val, reg + OMAP_32kHz_TIMER_BASE);
+}
+
+/*
+ * How long is the timer interval? 100 HZ, right...
+ * IRQ rate = (TVR + 1) / 32768 seconds
+ * TVR = 32768 * IRQ_RATE -1
+ * IRQ_RATE = 1/100
+ * TVR = 326
+ */
+#define TIMER32k_PERIOD 326
+//#define TIMER32k_PERIOD 0x7ff
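+/*
+ * Note the truncation: 32768/100 - 1 = 326.68 rounds down to 326,
+ * so the actual interrupt rate is 32768/327 ~= 100.2Hz rather than
+ * exactly 100Hz.
+ */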
+
+static inline void start_timer32k(void) {
+ timer32k_write(TIMER32k_CR,
+ TIMER32k_TSS | TIMER32k_TRB |
+ TIMER32k_INT | TIMER32k_ARL);
+}
+
+#ifdef CONFIG_MACH_OMAP_PERSEUS2
+/*
+ * After programming PTV with 0 and setting the MPUTIM_CLOCK_ENABLE
+ * (external clock enable) bit, the timer count rate is 6.5 MHz (13
+ * MHZ input/2). !! The divider by 2 is undocumented !!
+ */
+#define MPUTICKS_PER_SEC (13000000/2)
+#else
+/*
+ * After programming PTV with 0, the timer count rate is 6 MHz.
+ * WARNING! this must be an even number, or machinecycles_to_usecs
+ * below will break.
+ */
+#define MPUTICKS_PER_SEC (12000000/2)
+#endif
+
+static int mputimer_started[3] = {0,0,0};
+
+static inline void __noinstrument start_mputimer(int n,
+ unsigned long load_val)
+{
+ volatile mputimer_regs_t* timer = mputimer_base(n);
+
+ mputimer_started[n] = 0;
+ timer->cntl = MPUTIM_CLOCK_ENABLE;
+ udelay(1);
+
+ timer->load_tim = load_val;
+ udelay(1);
+ timer->cntl = (MPUTIM_CLOCK_ENABLE | MPUTIM_AR | MPUTIM_ST);
+ mputimer_started[n] = 1;
+}
+
+static inline unsigned long __noinstrument
+read_mputimer(int n)
+{
+ volatile mputimer_regs_t* timer = mputimer_base(n);
+ return (mputimer_started[n] ? timer->read_tim : 0);
+}
+
+void __noinstrument start_mputimer1(unsigned long load_val)
+{
+ start_mputimer(0, load_val);
+}
+void __noinstrument start_mputimer2(unsigned long load_val)
+{
+ start_mputimer(1, load_val);
+}
+void __noinstrument start_mputimer3(unsigned long load_val)
+{
+ start_mputimer(2, load_val);
+}
+
+unsigned long __noinstrument read_mputimer1(void)
+{
+ return read_mputimer(0);
+}
+unsigned long __noinstrument read_mputimer2(void)
+{
+ return read_mputimer(1);
+}
+unsigned long __noinstrument read_mputimer3(void)
+{
+ return read_mputimer(2);
+}
+
+unsigned long __noinstrument do_getmachinecycles(void)
+{
+ return 0 - read_mputimer(0);
+}
+
+unsigned long __noinstrument machinecycles_to_usecs(unsigned long mputicks)
+{
+	/* compute in half-usec units, then round to the nearest usec */
+ return ((mputicks * 1000) / (MPUTICKS_PER_SEC / 2 / 1000) + 1) >> 1;
+}
+
+/*
+ * This marks the time of the last system timer interrupt
+ * that was *processed by the ISR* (timer 2).
+ */
+static unsigned long systimer_mark;
+
+static unsigned long omap_gettimeoffset(void)
+{
+ /* Return elapsed usecs since last system timer ISR */
+ return machinecycles_to_usecs(do_getmachinecycles() - systimer_mark);
+}
+
+static irqreturn_t
+omap_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long now, ilatency;
+
+ /*
+	 * Mark the time at which the timer interrupt occurred using
+ * timer1. We need to remove interrupt latency, which we can
+ * retrieve from the current system timer2 counter. Both the
+ * offset timer1 and the system timer2 are counting at 6MHz,
+ * so we're ok.
+ */
+ now = 0 - read_mputimer1();
+ ilatency = MPUTICKS_PER_SEC / 100 - read_mputimer2();
+ systimer_mark = now - ilatency;
+
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction omap_timer_irq = {
+ .name = "OMAP Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = omap_timer_interrupt
+};
+
+void __init omap_init_time(void)
+{
+ /* Since we don't call request_irq, we must init the structure */
+ gettimeoffset = omap_gettimeoffset;
+
+#ifdef OMAP1510_USE_32KHZ_TIMER
+ timer32k_write(TIMER32k_CR, 0x0);
+ timer32k_write(TIMER32k_TVR,TIMER32k_PERIOD);
+ setup_irq(INT_OS_32kHz_TIMER, &omap_timer_irq);
+ start_timer32k();
+#else
+ setup_irq(INT_TIMER2, &omap_timer_irq);
+ start_mputimer2(MPUTICKS_PER_SEC / 100 - 1);
+#endif
+}
+
--- /dev/null
+/*
+ * arch/arm/mach-pxa/time.c
+ *
+ * Author: Nicolas Pitre
+ * Created: Jun 15, 2001
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/leds.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
+
+
+static inline unsigned long pxa_get_rtc_time(void)
+{
+ return RCNR;
+}
+
+static int pxa_set_rtc(void)
+{
+ unsigned long current_time = xtime.tv_sec;
+
+ if (RTSR & RTSR_ALE) {
+ /* make sure not to forward the clock over an alarm */
+ unsigned long alarm = RTAR;
+ if (current_time >= alarm && alarm >= RCNR)
+ return -ERESTARTSYS;
+ }
+ RCNR = current_time;
+ return 0;
+}
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long pxa_gettimeoffset (void)
+{
+ long ticks_to_match, elapsed, usec;
+
+ /* Get ticks before next timer match */
+ ticks_to_match = OSMR0 - OSCR;
+
+ /* We need elapsed ticks since last match */
+ elapsed = LATCH - ticks_to_match;
+
+ /* don't get fooled by the workaround in pxa_timer_interrupt() */
+ if (elapsed <= 0)
+ return 0;
+
+ /* Now convert them to usec */
+ usec = (unsigned long)(elapsed * (tick_nsec / 1000))/LATCH;
+
+ return usec;
+}
+
+static irqreturn_t
+pxa_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int next_match;
+
+ /* Loop until we get ahead of the free running timer.
+ * This ensures an exact clock tick count and time accuracy.
+ * IRQs are disabled inside the loop to ensure coherence between
+ * lost_ticks (updated in do_timer()) and the match reg value, so we
+ * can use do_gettimeofday() from interrupt handlers.
+ *
+ * HACK ALERT: it seems that the PXA timer regs aren't updated right
+ * away in all cases when a write occurs. We therefore compare with
+ * 8 instead of 0 in the while() condition below to avoid missing a
+ * match if OSCR has already reached the next OSMR value.
+ * Experience has shown that up to 6 ticks are needed to work around
+ * this problem, but let's use 8 to be conservative. Note that this
+	 * affects things only when the timer IRQ has been delayed by nearly
+ * exactly one tick period which should be a pretty rare event.
+ */
+ do {
+ timer_tick(regs);
+ OSSR = OSSR_M0; /* Clear match on timer 0 */
+ next_match = (OSMR0 += LATCH);
+ } while( (signed long)(next_match - OSCR) <= 8 );
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction pxa_timer_irq = {
+ .name = "PXA Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = pxa_timer_interrupt
+};
+
+void __init pxa_init_time(void)
+{
+ struct timespec tv;
+
+ gettimeoffset = pxa_gettimeoffset;
+ set_rtc = pxa_set_rtc;
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = pxa_get_rtc_time();
+ do_settimeofday(&tv);
+
+ OSMR0 = 0; /* set initial match at 0 */
+ OSSR = 0xf; /* clear status on all timers */
+ setup_irq(IRQ_OST0, &pxa_timer_irq);
+ OIER |= OIER_E0; /* enable match on timer 0 to cause interrupts */
+ OSCR = 0; /* initialize free-running timer, force first match */
+}
+
--- /dev/null
+/* linux/arch/arm/mach-s3c2410/gpio.c
+ *
+ * Copyright (c) 2004 Simtec Electronics
+ * Ben Dooks <ben@simtec.co.uk>
+ *
+ * S3C2410 GPIO support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+
+#include <asm/arch/regs-gpio.h>
+
+void s3c2410_gpio_cfgpin(unsigned int pin, unsigned int function)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long shift = 1;
+ unsigned long mask = 3;
+ unsigned long con;
+ unsigned long flags;
+
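+ /* Bank A uses a single function bit per pin, while the other
+ * banks use a two-bit field per pin, hence the differing mask
+ * and shift set up below.
+ */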
+ if (pin < S3C2410_GPIO_BANKB) {
+ shift = 0;
+ mask = 1;
+ }
+
+ mask <<= S3C2410_GPIO_OFFSET(pin);
+
+ local_irq_save(flags);
+
+ con = __raw_readl(base + 0x00);
+
+ con &= ~(mask << shift); /* clear only this pin's function bits */
+ con |= function;
+
+ __raw_writel(con, base + 0x00);
+
+ local_irq_restore(flags);
+}
+
+void s3c2410_gpio_pullup(unsigned int pin, unsigned int to)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long offs = S3C2410_GPIO_OFFSET(pin);
+ unsigned long flags;
+ unsigned long up;
+
+ if (pin < S3C2410_GPIO_BANKB)
+ return;
+
+ local_irq_save(flags);
+
+ up = __raw_readl(base + 0x08);
+ up &= ~(1 << offs); /* clear the bit before inserting 'to' */
+ up |= to << offs;
+ __raw_writel(up, base + 0x08);
+
+ local_irq_restore(flags);
+}
+
+void s3c2410_gpio_setpin(unsigned int pin, unsigned int to)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long offs = S3C2410_GPIO_OFFSET(pin);
+ unsigned long flags;
+ unsigned long dat;
+
+ local_irq_save(flags);
+
+ dat = __raw_readl(base + 0x04);
+ dat &= ~(1 << offs); /* clear the bit before inserting 'to' */
+ dat |= to << offs;
+ __raw_writel(dat, base + 0x04);
+
+ local_irq_restore(flags);
+}
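+
+/* A minimal usage sketch for the helpers above, e.g. configuring a
+ * pin as a GPIO output and driving it high (the pin and function
+ * constants here are illustrative; the real names live in
+ * asm/arch/regs-gpio.h):
+ *
+ *	s3c2410_gpio_cfgpin(pin, function_output);
+ *	s3c2410_gpio_pullup(pin, 1);
+ *	s3c2410_gpio_setpin(pin, 1);
+ */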
--- /dev/null
+/***********************************************************************
+ *
+ * linux/arch/arm/mach-s3c2410/mach-smdk2410.c
+ *
+ * Copyright (C) 2004 by FS Forth-Systeme GmbH
+ * All rights reserved.
+ *
+ * $Id: mach-smdk2410.c,v 1.1 2004/05/11 14:15:38 mpietrek Exp $
+ * @Author: Jonas Dietsche
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ *
+ * @History:
+ * derived from linux/arch/arm/mach-s3c2410/mach-bast.c, written by
+ * Ben Dooks <ben@simtec.co.uk>
+ ***********************************************************************/
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/timer.h>
+#include <linux/init.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+
+#include <asm/arch/regs-serial.h>
+
+#include "s3c2410.h"
+
+
+static struct map_desc smdk2410_iodesc[] __initdata = {
+ /* nothing here yet */
+};
+
+#define UCON S3C2410_UCON_DEFAULT
+#define ULCON (S3C2410_LCON_CS8 | S3C2410_LCON_PNONE | S3C2410_LCON_STOPB)
+#define UFCON (S3C2410_UFCON_RXTRIG8 | S3C2410_UFCON_FIFOMODE)
+
+/* base baud rate for all our UARTs */
+static unsigned long smdk2410_serial_clock = 24*1000*1000;
+
+static struct s3c2410_uartcfg smdk2410_uartcfgs[] = {
+ [0] = {
+ .hwport = 0,
+ .flags = 0,
+ .clock = &smdk2410_serial_clock,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ },
+ [1] = {
+ .hwport = 1,
+ .flags = 0,
+ .clock = &smdk2410_serial_clock,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ },
+ [2] = {
+ .hwport = 2,
+ .flags = 0,
+ .clock = &smdk2410_serial_clock,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ }
+};
+
+
+void __init smdk2410_map_io(void)
+{
+ s3c2410_map_io(smdk2410_iodesc, ARRAY_SIZE(smdk2410_iodesc));
+ s3c2410_uartcfgs = smdk2410_uartcfgs;
+}
+
+void __init smdk2410_init_irq(void)
+{
+ s3c2410_init_irq();
+}
+
+MACHINE_START(SMDK2410, "SMDK2410") /* @TODO: request a new identifier and switch
+ * to SMDK2410 */
+ MAINTAINER("Jonas Dietsche")
+ BOOT_MEM(S3C2410_SDRAM_PA, S3C2410_PA_UART, S3C2410_VA_UART)
+ BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
+ MAPIO(smdk2410_map_io)
+ INITIRQ(smdk2410_init_irq)
+MACHINE_END
--- /dev/null
+/* linux/include/asm-arm/arch-s3c2410/time.h
+ *
+ * Copyright (C) 2003 Simtec Electronics <linux@simtec.co.uk>
+ * Ben Dooks, <ben@simtec.co.uk>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <asm/system.h>
+#include <asm/leds.h>
+#include <asm/mach-types.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/arch/map.h>
+#include <asm/arch/regs-timer.h>
+#include <asm/mach/time.h>
+
+static unsigned long timer_startval;
+static unsigned long timer_ticks_usec;
+
+#ifdef CONFIG_S3C2410_RTC
+extern void s3c2410_rtc_check(void);
+#endif
+
+/* with a 12MHz clock, we get 12 ticks per usec
+ */
+
+
+/***
+ * Returns microseconds since the last clock interrupt. Note that
+ * interrupts will have been disabled by do_gettimeofday() before
+ * entering here.
+ */
+static unsigned long s3c2410_gettimeoffset (void)
+{
+ unsigned long tdone;
+ unsigned long usec;
+
+ /* work out how many ticks have gone since last timer interrupt */
+
+ tdone = timer_startval - __raw_readl(S3C2410_TCNTO(4));
+
+ /* currently, tcnt is in 12MHz units, but this may change
+ * for non-bast machines...
+ */
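+ /* worked example: at 12 ticks per usec, tdone = 1200 ticks since
+ * the last interrupt converts to 1200 / 12 = 100 usec.
+ */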
+
+ usec = tdone / timer_ticks_usec;
+
+ return usec;
+}
+
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t
+s3c2410_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ timer_tick(regs);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction s3c2410_timer_irq = {
+ .name = "S3C2410 Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = s3c2410_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt, and return the current time in seconds.
+ *
+ * Currently we only use timer4, as it is the only timer which has no
+ * other function that can be exploited externally
+ */
+void __init s3c2410_init_time (void)
+{
+ unsigned long tcon;
+ unsigned long tcnt;
+ unsigned long tcfg1;
+ unsigned long tcfg0;
+
+ gettimeoffset = s3c2410_gettimeoffset;
+
+ tcnt = 0xffff; /* default value for tcnt */
+
+ /* read the current timer configuration bits */
+
+ tcon = __raw_readl(S3C2410_TCON);
+ tcfg1 = __raw_readl(S3C2410_TCFG1);
+ tcfg0 = __raw_readl(S3C2410_TCFG0);
+
+ /* configure the system for whichever machine is in use */
+
+ if (machine_is_bast() || machine_is_vr1000()) {
+ timer_ticks_usec = 12; /* timer is at 12MHz */
+ tcnt = (timer_ticks_usec * (1000*1000)) / HZ;
+ }
+
+ /* For the h1940, we use the pclk from the core to generate
+ * the timer values. Since 67.5MHz is not a value we can directly
+ * generate the timer value from, we need to pre-scale and
+ * divide before using it.
+ *
+ * The overall divisor to get 200Hz is 337500.
+ * We can fit tcnt if we pre-scale by 6, producing a tick rate
+ * of 11.25MHz and a tcnt of 56250.
+ */
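+ /* The TCFG0 prescaler field holds (divisor - 1) and MUX4_DIV2
+ * halves the clock again, so ((6 - 1) / 2) = 2 programs a
+ * divide-by-3 prescale which, combined with the divide-by-2
+ * mux, yields the overall divide-by-6.
+ */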
+
+ if (machine_is_h1940() || machine_is_smdk2410()) {
+ timer_ticks_usec = s3c2410_pclk / (1000*1000);
+ timer_ticks_usec /= 6;
+
+ tcfg1 &= ~S3C2410_TCFG1_MUX4_MASK;
+ tcfg1 |= S3C2410_TCFG1_MUX4_DIV2;
+
+ tcfg0 &= ~S3C2410_TCFG_PRESCALER1_MASK;
+ tcfg0 |= ((6 - 1) / 2) << S3C2410_TCFG_PRESCALER1_SHIFT;
+
+ tcnt = (s3c2410_pclk / 6) / HZ;
+ }
+
+
+ printk("setup_timer tcon=%08lx, tcnt %04lx, tcfg %08lx,%08lx\n",
+ tcon, tcnt, tcfg0, tcfg1);
+
+ /* check to see if timer is within 16bit range... */
+ if (tcnt > 0xffff) {
+ panic("setup_timer: HZ is too small, cannot configure timer!");
+ return;
+ }
+
+ __raw_writel(tcfg1, S3C2410_TCFG1);
+ __raw_writel(tcfg0, S3C2410_TCFG0);
+
+ timer_startval = tcnt;
+ __raw_writel(tcnt, S3C2410_TCNTB(4));
+
+ /* ensure timer is stopped... */
+
+ tcon &= ~(7<<20);
+ tcon |= S3C2410_TCON_T4RELOAD;
+ tcon |= S3C2410_TCON_T4MANUALUPD;
+
+ __raw_writel(tcon, S3C2410_TCON);
+ __raw_writel(tcnt, S3C2410_TCNTB(4));
+ __raw_writel(tcnt, S3C2410_TCMPB(4));
+
+ setup_irq(IRQ_TIMER4, &s3c2410_timer_irq);
+
+ /* start the timer running */
+ tcon |= S3C2410_TCON_T4START;
+ tcon &= ~S3C2410_TCON_T4MANUALUPD;
+ __raw_writel(tcon, S3C2410_TCON);
+}
+
--- /dev/null
+/*
+ * linux/arch/arm/mach-sa1100/collie.c
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * This file contains all Collie-specific tweaks.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * ChangeLog:
+ * 03-06-2004 John Lenz <jelenz@wisc.edu>
+ * 06-04-2002 Chris Larson <kergoth@digitalnemesis.net>
+ * 04-16-2001 Lineo Japan,Inc. ...
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/tty.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/timer.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/irq.h>
+#include <asm/setup.h>
+#include <asm/arch/collie.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/serial_sa1100.h>
+
+#include <asm/hardware/locomo.h>
+
+#include "generic.h"
+
+static void __init scoop_init(void)
+{
+
+#define COLLIE_SCP_INIT_DATA(adr,dat) (((adr)<<16)|(dat))
+#define COLLIE_SCP_INIT_DATA_END ((unsigned long)-1)
+ static const unsigned long scp_init[] = {
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0140), // 00
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0100),
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CDR, 0x0000), // 04
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CPR, 0x0000), // 0C
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CCR, 0x0000), // 10
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IMR, 0x0000), // 18
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x00FF), // 14
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_ISR, 0x0000), // 1C
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x0000),
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPCR, COLLIE_SCP_IO_DIR), // 20
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPWR, COLLIE_SCP_IO_OUT), // 24
+ COLLIE_SCP_INIT_DATA_END
+ };
+ int i;
+ for (i = 0; scp_init[i] != COLLIE_SCP_INIT_DATA_END; i++) {
+ int adr = scp_init[i] >> 16;
+ COLLIE_SCP_REG(adr) = scp_init[i] & 0xFFFF;
+ }
+
+}
+
+static struct resource locomo_resources[] = {
+ [0] = {
+ .start = 0x40000000,
+ .end = 0x40001fff,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_GPIO25,
+ .end = IRQ_GPIO25,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device locomo_device = {
+ .name = "locomo",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(locomo_resources),
+ .resource = locomo_resources,
+};
+
+static struct platform_device *devices[] __initdata = {
+ &locomo_device,
+};
+
+static void __init collie_init(void)
+{
+ int ret = 0;
+
+ /* cpu initialize */
+ GAFR = ( GPIO_SSP_TXD | \
+ GPIO_SSP_SCLK | GPIO_SSP_SFRM | GPIO_SSP_CLK | GPIO_TIC_ACK | \
+ GPIO_32_768kHz );
+
+ GPDR = ( GPIO_LDD8 | GPIO_LDD9 | GPIO_LDD10 | GPIO_LDD11 | GPIO_LDD12 | \
+ GPIO_LDD13 | GPIO_LDD14 | GPIO_LDD15 | GPIO_SSP_TXD | \
+ GPIO_SSP_SCLK | GPIO_SSP_SFRM | GPIO_SDLC_SCLK | \
+ GPIO_SDLC_AAF | GPIO_UART_SCLK1 | GPIO_32_768kHz );
+ GPLR = GPIO_GPIO18;
+
+ // PPC pin setting
+ PPDR = ( PPC_LDD0 | PPC_LDD1 | PPC_LDD2 | PPC_LDD3 | PPC_LDD4 | PPC_LDD5 | \
+ PPC_LDD6 | PPC_LDD7 | PPC_L_PCLK | PPC_L_LCLK | PPC_L_FCLK | PPC_L_BIAS | \
+ PPC_TXD1 | PPC_TXD2 | PPC_RXD2 | PPC_TXD3 | PPC_TXD4 | PPC_SCLK | PPC_SFRM );
+
+ PSDR = ( PPC_RXD1 | PPC_RXD2 | PPC_RXD3 | PPC_RXD4 );
+
+ GAFR |= GPIO_32_768kHz;
+ GPDR |= GPIO_32_768kHz;
+ TUCR = TUCR_32_768kHz;
+
+ scoop_init();
+
+ ret = platform_add_devices(devices, ARRAY_SIZE(devices));
+ if (ret) {
+ printk(KERN_WARNING "collie: Unable to register LoCoMo device\n");
+ }
+}
+
+static struct map_desc collie_io_desc[] __initdata = {
+ /* virtual physical length type */
+ {0xe8000000, 0x00000000, 0x02000000, MT_DEVICE}, /* 32M main flash (cs0) */
+ {0xea000000, 0x08000000, 0x02000000, MT_DEVICE}, /* 32M boot flash (cs1) */
+ {0xf0000000, 0x40000000, 0x01000000, MT_DEVICE}, /* 16M LOCOMO & SCOOP (cs4) */
+};
+
+static void __init collie_map_io(void)
+{
+ sa1100_map_io();
+ iotable_init(collie_io_desc, ARRAY_SIZE(collie_io_desc));
+}
+
+MACHINE_START(COLLIE, "Sharp-Collie")
+ BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
+ MAPIO(collie_map_io)
+ INITIRQ(sa1100_init_irq)
+ INIT_MACHINE(collie_init)
+MACHINE_END
--- /dev/null
+/*
+ * linux/arch/arm/mach-sa1100/time.c
+ *
+ * Copyright (C) 1998 Deborah Wallach.
+ * Twiddles (C) 1999 Hugo Fiennes <hugo@empeg.com>
+ *
+ * 2000/03/29 (C) Nicolas Pitre <nico@cam.org>
+ * Rewritten: big cleanup, much simpler, better HZ accuracy.
+ *
+ */
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/signal.h>
+
+#include <asm/mach/time.h>
+#include <asm/hardware.h>
+
+#define RTC_DEF_DIVIDER (32768 - 1)
+#define RTC_DEF_TRIM 0
+
+static unsigned long __init sa1100_get_rtc_time(void)
+{
+ /*
+ * According to the manual we should be able to let RTTR be zero
+ * and then a default divisor for a 32.768kHz clock is used.
+ * Apparently this doesn't work, at least for my SA1110 rev 5.
+ * If the clock divider is uninitialized then reset it to the
+ * default value to get the 1Hz clock.
+ */
+ if (RTTR == 0) {
+ RTTR = RTC_DEF_DIVIDER + (RTC_DEF_TRIM << 16);
+ printk(KERN_WARNING "Warning: uninitialized Real Time Clock\n");
+ /* The current RTC value probably doesn't make sense either */
+ RCNR = 0;
+ return 0;
+ }
+ return RCNR;
+}
+
+static int sa1100_set_rtc(void)
+{
+ unsigned long current_time = xtime.tv_sec;
+
+ if (RTSR & RTSR_ALE) {
+ /* make sure not to forward the clock over an alarm */
+ unsigned long alarm = RTAR;
+ if (current_time >= alarm && alarm >= RCNR)
+ return -ERESTARTSYS;
+ }
+ RCNR = current_time;
+ return 0;
+}
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long sa1100_gettimeoffset (void)
+{
+ unsigned long ticks_to_match, elapsed, usec;
+
+ /* Get ticks before next timer match */
+ ticks_to_match = OSMR0 - OSCR;
+
+ /* We need elapsed ticks since last match */
+ elapsed = LATCH - ticks_to_match;
+
+ /* Now convert them to usec */
+ usec = (unsigned long)(elapsed * (tick_nsec / 1000))/LATCH;
+
+ return usec;
+}
+
+/*
+ * We will be entered with IRQs enabled.
+ *
+ * Loop until we get ahead of the free running timer.
+ * This ensures an exact clock tick count and time accuracy.
+ * IRQs are disabled inside the loop to ensure coherence between
+ * lost_ticks (updated in do_timer()) and the match reg value, so we
+ * can use do_gettimeofday() from interrupt handlers.
+ */
+static irqreturn_t
+sa1100_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned int next_match;
+
+ do {
+ timer_tick(regs);
+ OSSR = OSSR_M0; /* Clear match on timer 0 */
+ next_match = (OSMR0 += LATCH);
+ } while ((signed long)(next_match - OSCR) <= 0);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction sa1100_timer_irq = {
+ .name = "SA11xx Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = sa1100_timer_interrupt
+};
+
+void __init sa1100_init_time(void)
+{
+ struct timespec tv;
+
+ gettimeoffset = sa1100_gettimeoffset;
+ set_rtc = sa1100_set_rtc;
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = sa1100_get_rtc_time();
+ do_settimeofday(&tv);
+
+ OSMR0 = 0; /* set initial match at 0 */
+ OSSR = 0xf; /* clear status on all timers */
+ setup_irq(IRQ_OST0, &sa1100_timer_irq);
+ OIER |= OIER_E0; /* enable match on timer 0 to cause interrupts */
+ OSCR = 0; /* initialize free-running timer, force first match */
+}
+
--- /dev/null
+/*
+ * linux/arch/arm/mach-versatile/clock.c
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+
+#include <asm/semaphore.h>
+#include <asm/hardware/clock.h>
+#include <asm/hardware/icst525.h>
+
+#include "clock.h"
+
+static LIST_HEAD(clocks);
+static DECLARE_MUTEX(clocks_sem);
+
+struct clk *clk_get(struct device *dev, const char *id)
+{
+ struct clk *p, *clk = ERR_PTR(-ENOENT);
+
+ down(&clocks_sem);
+ list_for_each_entry(p, &clocks, node) {
+ if (strcmp(id, p->name) == 0 && try_module_get(p->owner)) {
+ clk = p;
+ break;
+ }
+ }
+ up(&clocks_sem);
+
+ return clk;
+}
+EXPORT_SYMBOL(clk_get);
+
+void clk_put(struct clk *clk)
+{
+ module_put(clk->owner);
+}
+EXPORT_SYMBOL(clk_put);
+
+int clk_enable(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_enable);
+
+void clk_disable(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_disable);
+
+int clk_use(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_use);
+
+void clk_unuse(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_unuse);
+
+unsigned long clk_get_rate(struct clk *clk)
+{
+ return clk->rate;
+}
+EXPORT_SYMBOL(clk_get_rate);
+
+long clk_round_rate(struct clk *clk, unsigned long rate)
+{
+ return rate;
+}
+EXPORT_SYMBOL(clk_round_rate);
+
+int clk_set_rate(struct clk *clk, unsigned long rate)
+{
+ int ret = -EIO;
+#if 0 // Not yet
+ if (clk->setvco) {
+ struct icst525_vco vco;
+
+ vco = icst525_khz_to_vco(clk->params, rate);
+ clk->rate = icst525_khz(clk->params, vco);
+
+ printk("Clock %s: setting VCO reg params: S=%d R=%d V=%d\n",
+ clk->name, vco.s, vco.r, vco.v);
+
+ clk->setvco(clk, vco);
+ ret = 0;
+ }
+#endif
+ return ret;
+}
+EXPORT_SYMBOL(clk_set_rate);
+
+/*
+ * These are fixed clocks.
+ */
+static struct clk kmi_clk = {
+ .name = "KMIREFCLK",
+ .rate = 24000000,
+};
+
+static struct clk uart_clk = {
+ .name = "UARTCLK",
+ .rate = 24000000,
+};
+
+static struct clk mmci_clk = {
+ .name = "MCLK",
+ .rate = 33000000,
+};
+
+int clk_register(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_add(&clk->node, &clocks);
+ up(&clocks_sem);
+ return 0;
+}
+EXPORT_SYMBOL(clk_register);
+
+void clk_unregister(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_del(&clk->node);
+ up(&clocks_sem);
+}
+EXPORT_SYMBOL(clk_unregister);
+
+static int __init clk_init(void)
+{
+ clk_register(&kmi_clk);
+ clk_register(&uart_clk);
+ clk_register(&mmci_clk);
+ return 0;
+}
+arch_initcall(clk_init);
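+
+/* A minimal sketch of how a driver would use this clock API; the
+ * "UARTCLK" name is one of the fixed clocks registered above:
+ *
+ *	struct clk *clk = clk_get(dev, "UARTCLK");
+ *	if (!IS_ERR(clk)) {
+ *		clk_use(clk);
+ *		clk_enable(clk);
+ *		rate = clk_get_rate(clk);	(24000000 here)
+ *		...
+ *		clk_disable(clk);
+ *		clk_unuse(clk);
+ *		clk_put(clk);
+ *	}
+ */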
--- /dev/null
+/*
+ * linux/arch/arm/mach-versatile/clock.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+struct module;
+struct icst525_params;
+
+struct clk {
+ struct list_head node;
+ unsigned long rate;
+ struct module *owner;
+ const char *name;
+ const struct icst525_params *params;
+ void *data;
+ void (*setvco)(struct clk *, struct icst525_vco vco);
+};
+
+int clk_register(struct clk *clk);
+void clk_unregister(struct clk *clk);
--- /dev/null
+#
+# linux/arch/arm/vfp/Makefile
+#
+# Copyright (C) 2001 ARM Limited
+#
+
+# EXTRA_CFLAGS := -DDEBUG
+# EXTRA_AFLAGS := -DDEBUG
+
+obj-y += vfp.o
+
+vfp-$(CONFIG_VFP) += entry.o vfpmodule.o vfphw.o vfpsingle.o vfpdouble.o
--- /dev/null
+/*
+ * linux/arch/arm/vfp/entry.S
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Basic entry code, called from the kernel's undefined instruction trap.
+ * r0 = faulted instruction
+ * r5 = faulted PC+4
+ * r9 = successful return
+ * r10 = thread_info structure
+ * lr = failure return
+ */
+#include <linux/linkage.h>
+#include <linux/init.h>
+#include <asm/thread_info.h>
+#include <asm/vfpmacros.h>
+
+ .globl do_vfp
+do_vfp:
+ ldr r4, .LCvfp
+ add r10, r10, #TI_VFPSTATE @ r10 = workspace
+ ldr pc, [r4] @ call VFP entry point
+
+.LCvfp:
+ .word vfp_vector
+
+@ This code is called if the VFP does not exist. It needs to flag the
+@ failure to the VFP initialisation code.
+
+ __INIT
+ .globl vfp_testing_entry
+vfp_testing_entry:
+ ldr r0, VFP_arch_address
+ str r5, [r0] @ known non-zero value
+ mov pc, r9 @ we have handled the fault
+
+VFP_arch_address:
+ .word VFP_arch
+
+ __FINIT
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfp.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
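+/*
+ * "Jamming" (sticky) right shifts: any bits shifted out are OR-ed
+ * into the least significant bit of the result so that a later
+ * rounding step can still tell the value was inexact. For example,
+ * vfp_shiftright32jamming(0x80000001, 4) yields 0x08000001 rather
+ * than 0x08000000.
+ */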
+static inline u32 vfp_shiftright32jamming(u32 val, unsigned int shift)
+{
+ if (shift) {
+ if (shift < 32)
+ val = val >> shift | ((val << (32 - shift)) != 0);
+ else
+ val = val != 0;
+ }
+ return val;
+}
+
+static inline u64 vfp_shiftright64jamming(u64 val, unsigned int shift)
+{
+ if (shift) {
+ if (shift < 64)
+ val = val >> shift | ((val << (64 - shift)) != 0);
+ else
+ val = val != 0;
+ }
+ return val;
+}
+
+static inline u32 vfp_hi64to32jamming(u64 val)
+{
+ u32 v;
+
+ asm(
+ "cmp %Q1, #1 @ vfp_hi64to32jamming\n\t"
+ "movcc %0, %R1\n\t"
+ "orrcs %0, %R1, #1"
+ : "=r" (v) : "r" (val) : "cc");
+
+ return v;
+}
+
+static inline void add128(u64 *resh, u64 *resl, u64 nh, u64 nl, u64 mh, u64 ml)
+{
+ asm( "adds %Q0, %Q2, %Q4\n\t"
+ "adcs %R0, %R2, %R4\n\t"
+ "adcs %Q1, %Q3, %Q5\n\t"
+ "adc %R1, %R3, %R5"
+ : "=r" (nl), "=r" (nh)
+ : "0" (nl), "1" (nh), "r" (ml), "r" (mh)
+ : "cc");
+ *resh = nh;
+ *resl = nl;
+}
+
+static inline void sub128(u64 *resh, u64 *resl, u64 nh, u64 nl, u64 mh, u64 ml)
+{
+ asm( "subs %Q0, %Q2, %Q4\n\t"
+ "sbcs %R0, %R2, %R4\n\t"
+ "sbcs %Q1, %Q3, %Q5\n\t"
+ "sbc %R1, %R3, %R5\n\t"
+ : "=r" (nl), "=r" (nh)
+ : "0" (nl), "1" (nh), "r" (ml), "r" (mh)
+ : "cc");
+ *resh = nh;
+ *resl = nl;
+}
+
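+/*
+ * 64 x 64 -> 128 bit multiply built from four 32 x 32 -> 64 bit
+ * partial products (schoolbook multiplication): rl is the low
+ * product, rh the high product, and rma/rmb are the cross terms,
+ * which are summed and folded in with explicit carry propagation.
+ */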
+static inline void mul64to128(u64 *resh, u64 *resl, u64 n, u64 m)
+{
+ u32 nh, nl, mh, ml;
+ u64 rh, rma, rmb, rl;
+
+ nl = n;
+ ml = m;
+ rl = (u64)nl * ml;
+
+ nh = n >> 32;
+ rma = (u64)nh * ml;
+
+ mh = m >> 32;
+ rmb = (u64)nl * mh;
+ rma += rmb;
+
+ rh = (u64)nh * mh;
+ rh += ((u64)(rma < rmb) << 32) + (rma >> 32);
+
+ rma <<= 32;
+ rl += rma;
+ rh += (rl < rma);
+
+ *resl = rl;
+ *resh = rh;
+}
+
+static inline void shift64left(u64 *resh, u64 *resl, u64 n)
+{
+ *resh = n >> 63;
+ *resl = n << 1;
+}
+
+static inline u64 vfp_hi64multiply64(u64 n, u64 m)
+{
+ u64 rh, rl;
+ mul64to128(&rh, &rl, n, m);
+ return rh | (rl != 0);
+}
+
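+/*
+ * Estimate the 64-bit quotient of the 128-bit dividend nh:nl by the
+ * 64-bit divisor m, in the style of SoftFloat: derive each 32-bit
+ * quotient digit from the top words, multiply back, and adjust the
+ * digit until the partial remainder is non-negative. The estimate
+ * can be slightly off in the low bits; callers correct it against
+ * the true remainder.
+ */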
+static inline u64 vfp_estimate_div128to64(u64 nh, u64 nl, u64 m)
+{
+ u64 mh, ml, remh, reml, termh, terml, z;
+
+ if (nh >= m)
+ return ~0ULL;
+ mh = m >> 32;
+ z = (mh << 32 <= nh) ? 0xffffffff00000000ULL : (nh / mh) << 32;
+ mul64to128(&termh, &terml, m, z);
+ sub128(&remh, &reml, nh, nl, termh, terml);
+ ml = m << 32;
+ while ((s64)remh < 0) {
+ z -= 0x100000000ULL;
+ add128(&remh, &reml, remh, reml, mh, ml);
+ }
+ remh = (remh << 32) | (reml >> 32);
+ z |= (mh << 32 <= remh) ? 0xffffffff : remh / mh;
+ return z;
+}
+
+/*
+ * Operations on unpacked elements
+ */
+#define vfp_sign_negate(sign) (sign ^ 0x8000)
+
+/*
+ * Single-precision
+ */
+struct vfp_single {
+ s16 exponent;
+ u16 sign;
+ u32 significand;
+};
+
+extern s32 vfp_get_float(unsigned int reg);
+extern void vfp_put_float(unsigned int reg, s32 val);
+
+/*
+ * VFP_SINGLE_MANTISSA_BITS - number of bits in the mantissa
+ * VFP_SINGLE_EXPONENT_BITS - number of bits in the exponent
+ * VFP_SINGLE_LOW_BITS - number of low bits in the unpacked significand
+ * which are not propagated to the float upon packing.
+ */
+#define VFP_SINGLE_MANTISSA_BITS (23)
+#define VFP_SINGLE_EXPONENT_BITS (8)
+#define VFP_SINGLE_LOW_BITS (32 - VFP_SINGLE_MANTISSA_BITS - 2)
+#define VFP_SINGLE_LOW_BITS_MASK ((1 << VFP_SINGLE_LOW_BITS) - 1)
+
+/*
+ * The bit in an unpacked float which indicates that it is a quiet NaN
+ */
+#define VFP_SINGLE_SIGNIFICAND_QNAN (1 << (VFP_SINGLE_MANTISSA_BITS - 1 + VFP_SINGLE_LOW_BITS))
+
+/*
+ * Operations on packed single-precision numbers
+ */
+#define vfp_single_packed_sign(v) ((v) & 0x80000000)
+#define vfp_single_packed_negate(v) ((v) ^ 0x80000000)
+#define vfp_single_packed_abs(v) ((v) & ~0x80000000)
+#define vfp_single_packed_exponent(v) (((v) >> VFP_SINGLE_MANTISSA_BITS) & ((1 << VFP_SINGLE_EXPONENT_BITS) - 1))
+#define vfp_single_packed_mantissa(v) ((v) & ((1 << VFP_SINGLE_MANTISSA_BITS) - 1))
+
+/*
+ * Unpack a single-precision float. Note that this returns the magnitude
+ * of the single-precision float mantissa with the implicit leading 1,
+ * if present, aligned to bit 30.
+ */
+static inline void vfp_single_unpack(struct vfp_single *s, s32 val)
+{
+ u32 significand;
+
+ s->sign = vfp_single_packed_sign(val) >> 16;
+ s->exponent = vfp_single_packed_exponent(val);
+
+ significand = (u32) val;
+ significand = (significand << (32 - VFP_SINGLE_MANTISSA_BITS)) >> 2;
+ if (s->exponent && s->exponent != 255)
+ significand |= 0x40000000;
+ s->significand = significand;
+}
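+
+/*
+ * Worked example: unpacking 1.0f (0x3f800000) gives sign = 0,
+ * exponent = 127 and significand = 0x40000000, i.e. the implicit
+ * leading 1 placed at bit 30 with no fraction bits set.
+ */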
+
+/*
+ * Re-pack a single-precision float. This assumes that the float is
+ * already normalised such that the MSB is bit 30, _not_ bit 31.
+ */
+static inline s32 vfp_single_pack(struct vfp_single *s)
+{
+ u32 val;
+ val = (s->sign << 16) +
+ (s->exponent << VFP_SINGLE_MANTISSA_BITS) +
+ (s->significand >> VFP_SINGLE_LOW_BITS);
+ return (s32)val;
+}
+
+#define VFP_NUMBER (1<<0)
+#define VFP_ZERO (1<<1)
+#define VFP_DENORMAL (1<<2)
+#define VFP_INFINITY (1<<3)
+#define VFP_NAN (1<<4)
+#define VFP_NAN_SIGNAL (1<<5)
+
+#define VFP_QNAN (VFP_NAN)
+#define VFP_SNAN (VFP_NAN|VFP_NAN_SIGNAL)
+
+static inline int vfp_single_type(struct vfp_single *s)
+{
+ int type = VFP_NUMBER;
+ if (s->exponent == 255) {
+ if (s->significand == 0)
+ type = VFP_INFINITY;
+ else if (s->significand & VFP_SINGLE_SIGNIFICAND_QNAN)
+ type = VFP_QNAN;
+ else
+ type = VFP_SNAN;
+ } else if (s->exponent == 0) {
+ if (s->significand == 0)
+ type |= VFP_ZERO;
+ else
+ type |= VFP_DENORMAL;
+ }
+ return type;
+}
+
+#ifndef DEBUG
+#define vfp_single_normaliseround(sd,vsd,fpscr,except,func) __vfp_single_normaliseround(sd,vsd,fpscr,except)
+u32 __vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions);
+#else
+u32 vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions, const char *func);
+#endif
+
+/*
+ * Double-precision
+ */
+struct vfp_double {
+ s16 exponent;
+ u16 sign;
+ u64 significand;
+};
+
+extern u64 vfp_get_double(unsigned int reg);
+extern void vfp_put_double(unsigned int reg, u64 val);
+
+#define VFP_DOUBLE_MANTISSA_BITS (52)
+#define VFP_DOUBLE_EXPONENT_BITS (11)
+#define VFP_DOUBLE_LOW_BITS (64 - VFP_DOUBLE_MANTISSA_BITS - 2)
+#define VFP_DOUBLE_LOW_BITS_MASK ((1 << VFP_DOUBLE_LOW_BITS) - 1)
+
+/*
+ * The bit in an unpacked double which indicates that it is a quiet NaN
+ */
+#define VFP_DOUBLE_SIGNIFICAND_QNAN (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1 + VFP_DOUBLE_LOW_BITS))
+
+/*
+ * Operations on packed double-precision numbers
+ */
+#define vfp_double_packed_sign(v) ((v) & (1ULL << 63))
+#define vfp_double_packed_negate(v) ((v) ^ (1ULL << 63))
+#define vfp_double_packed_abs(v) ((v) & ~(1ULL << 63))
+#define vfp_double_packed_exponent(v) (((v) >> VFP_DOUBLE_MANTISSA_BITS) & ((1 << VFP_DOUBLE_EXPONENT_BITS) - 1))
+#define vfp_double_packed_mantissa(v) ((v) & ((1ULL << VFP_DOUBLE_MANTISSA_BITS) - 1))
+
+/*
+ * Unpack a double-precision float. Note that this returns the magnitude
+ * of the double-precision float mantissa with the implicit leading 1,
+ * if present, aligned to bit 62.
+ */
+static inline void vfp_double_unpack(struct vfp_double *s, s64 val)
+{
+ u64 significand;
+
+ s->sign = vfp_double_packed_sign(val) >> 48;
+ s->exponent = vfp_double_packed_exponent(val);
+
+ significand = (u64) val;
+ significand = (significand << (64 - VFP_DOUBLE_MANTISSA_BITS)) >> 2;
+ if (s->exponent && s->exponent != 2047)
+ significand |= (1ULL << 62);
+ s->significand = significand;
+}
+
+/*
+ * Re-pack a double-precision float. This assumes that the float is
+ * already normalised such that the MSB is bit 30, _not_ bit 31.
+ */
+static inline s64 vfp_double_pack(struct vfp_double *s)
+{
+ u64 val;
+ val = ((u64)s->sign << 48) +
+ ((u64)s->exponent << VFP_DOUBLE_MANTISSA_BITS) +
+ (s->significand >> VFP_DOUBLE_LOW_BITS);
+ return (s64)val;
+}
+
+static inline int vfp_double_type(struct vfp_double *s)
+{
+ int type = VFP_NUMBER;
+ if (s->exponent == 2047) {
+ if (s->significand == 0)
+ type = VFP_INFINITY;
+ else if (s->significand & VFP_DOUBLE_SIGNIFICAND_QNAN)
+ type = VFP_QNAN;
+ else
+ type = VFP_SNAN;
+ } else if (s->exponent == 0) {
+ if (s->significand == 0)
+ type |= VFP_ZERO;
+ else
+ type |= VFP_DENORMAL;
+ }
+ return type;
+}
+
+u32 vfp_double_normaliseround(int dd, struct vfp_double *vd, u32 fpscr, u32 exceptions, const char *func);
+
+/*
+ * System registers
+ */
+extern u32 vfp_get_sys(unsigned int reg);
+extern void vfp_put_sys(unsigned int reg, u32 val);
+
+u32 vfp_estimate_sqrt_significand(u32 exponent, u32 significand);
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpdouble.c
+ *
+ * This code is derived in part from John R. Hauser's SoftFloat library, which
+ * carries the following notice:
+ *
+ * ===========================================================================
+ * This C source file is part of the SoftFloat IEC/IEEE Floating-point
+ * Arithmetic Package, Release 2.
+ *
+ * Written by John R. Hauser. This work was made possible in part by the
+ * International Computer Science Institute, located at Suite 600, 1947 Center
+ * Street, Berkeley, California 94704. Funding was partially provided by the
+ * National Science Foundation under grant MIP-9311980. The original version
+ * of this code was written as part of a project to build a fixed-point vector
+ * processor in collaboration with the University of California at Berkeley,
+ * overseen by Profs. Nelson Morgan and John Wawrzynek. More information
+ * is available through the web page `http://HTTP.CS.Berkeley.EDU/~jhauser/
+ * arithmetic/softfloat.html'.
+ *
+ * THIS SOFTWARE IS DISTRIBUTED AS IS, FOR FREE. Although reasonable effort
+ * has been made to avoid it, THIS SOFTWARE MAY CONTAIN FAULTS THAT WILL AT
+ * TIMES RESULT IN INCORRECT BEHAVIOR. USE OF THIS SOFTWARE IS RESTRICTED TO
+ * PERSONS AND ORGANIZATIONS WHO CAN AND WILL TAKE FULL RESPONSIBILITY FOR ANY
+ * AND ALL LOSSES, COSTS, OR OTHER PROBLEMS ARISING FROM ITS USE.
+ *
+ * Derivative works are acceptable, even for commercial purposes, so long as
+ * (1) they include prominent notice that the work is derivative, and (2) they
+ * include prominent notice akin to these three paragraphs for those parts of
+ * this code that are retained.
+ * ===========================================================================
+ */
+#include <linux/kernel.h>
+#include <asm/bitops.h>
+#include <asm/ptrace.h>
+#include <asm/vfp.h>
+
+#include "vfpinstr.h"
+#include "vfp.h"
+
+static struct vfp_double vfp_double_default_qnan = {
+ .exponent = 2047,
+ .sign = 0,
+ .significand = VFP_DOUBLE_SIGNIFICAND_QNAN,
+};
+
+static void vfp_double_dump(const char *str, struct vfp_double *d)
+{
+ pr_debug("VFP: %s: sign=%d exponent=%d significand=%016llx\n",
+ str, d->sign != 0, d->exponent, d->significand);
+}
+
+static void vfp_double_normalise_denormal(struct vfp_double *vd)
+{
+ int bits = 31 - fls(vd->significand >> 32);
+ if (bits == 31)
+ bits = 62 - fls(vd->significand);
+
+ vfp_double_dump("normalise_denormal: in", vd);
+
+ if (bits) {
+ vd->exponent -= bits - 1;
+ vd->significand <<= bits;
+ }
+
+ vfp_double_dump("normalise_denormal: out", vd);
+}
+
+u32 vfp_double_normaliseround(int dd, struct vfp_double *vd, u32 fpscr, u32 exceptions, const char *func)
+{
+ u64 significand, incr;
+ int exponent, shift, underflow;
+ u32 rmode;
+
+ vfp_double_dump("pack: in", vd);
+
+ /*
+ * Infinities and NaNs are a special case.
+ */
+ if (vd->exponent == 2047 && (vd->significand == 0 || exceptions))
+ goto pack;
+
+ /*
+ * Special-case zero.
+ */
+ if (vd->significand == 0) {
+ vd->exponent = 0;
+ goto pack;
+ }
+
+ exponent = vd->exponent;
+ significand = vd->significand;
+
+ shift = 32 - fls(significand >> 32);
+ if (shift == 32)
+ shift = 64 - fls(significand);
+ if (shift) {
+ exponent -= shift;
+ significand <<= shift;
+ }
+
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: normalised", vd);
+#endif
+
+ /*
+ * Tiny number?
+ */
+ underflow = exponent < 0;
+ if (underflow) {
+ significand = vfp_shiftright64jamming(significand, -exponent);
+ exponent = 0;
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: tiny number", vd);
+#endif
+ if (!(significand & ((1ULL << (VFP_DOUBLE_LOW_BITS + 1)) - 1)))
+ underflow = 0;
+ }
+
+ /*
+ * Select the rounding increment. For round-to-nearest this is
+ * half a ULP, reduced by one when the result's LSB is zero so
+ * that exact ties round to even; round-to-zero truncates; the
+ * directed modes add just under one ULP when rounding away from
+ * zero.
+ */
+ incr = 0;
+ rmode = fpscr & FPSCR_RMODE_MASK;
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 1ULL << VFP_DOUBLE_LOW_BITS;
+ if ((significand & (1ULL << (VFP_DOUBLE_LOW_BITS + 1))) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vd->sign != 0))
+ incr = (1ULL << (VFP_DOUBLE_LOW_BITS + 1)) - 1;
+
+ pr_debug("VFP: rounding increment = 0x%08llx\n", incr);
+
+ /*
+ * Is our rounding going to overflow?
+ */
+ if ((significand + incr) < significand) {
+ exponent += 1;
+ significand = (significand >> 1) | (significand & 1);
+ incr >>= 1;
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: overflow", vd);
+#endif
+ }
+
+ /*
+ * If any of the low bits (which will be shifted out of the
+ * number) are non-zero, the result is inexact.
+ */
+ if (significand & ((1 << (VFP_DOUBLE_LOW_BITS + 1)) - 1))
+ exceptions |= FPSCR_IXC;
+
+ /*
+ * Do our rounding.
+ */
+ significand += incr;
+
+ /*
+ * Infinity?
+ */
+ if (exponent >= 2046) {
+ exceptions |= FPSCR_OFC | FPSCR_IXC;
+ if (incr == 0) {
+ vd->exponent = 2045;
+ vd->significand = 0x7fffffffffffffffULL;
+ } else {
+ vd->exponent = 2047; /* infinity */
+ vd->significand = 0;
+ }
+ } else {
+ if (significand >> (VFP_DOUBLE_LOW_BITS + 1) == 0)
+ exponent = 0;
+ if (exponent || significand > 0x8000000000000000ULL)
+ underflow = 0;
+ if (underflow)
+ exceptions |= FPSCR_UFC;
+ vd->exponent = exponent;
+ vd->significand = significand >> 1;
+ }
+
+ pack:
+ vfp_double_dump("pack: final", vd);
+ {
+ s64 d = vfp_double_pack(vd);
+ pr_debug("VFP: %s: d(d%d)=%016llx exceptions=%08x\n", func,
+ dd, d, exceptions);
+ vfp_put_double(dd, d);
+ }
+ return exceptions;
+}
+
+/*
+ * Propagate the NaN, setting exceptions if it is signalling.
+ * 'n' is always a NaN. 'm' may be a number, NaN or infinity.
+ */
+static u32
+vfp_propagate_nan(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ struct vfp_double *nan;
+ int tn, tm = 0;
+
+ tn = vfp_double_type(vdn);
+
+ if (vdm)
+ tm = vfp_double_type(vdm);
+
+ if (fpscr & FPSCR_DEFAULT_NAN)
+ /*
+ * Default NaN mode - always returns a quiet NaN
+ */
+ nan = &vfp_double_default_qnan;
+ else {
+ /*
+ * Contemporary mode - select the first signalling
+ * NAN, or if neither are signalling, the first
+ * quiet NAN.
+ */
+ if (tn == VFP_SNAN || (tm != VFP_SNAN && tn == VFP_QNAN))
+ nan = vdn;
+ else
+ nan = vdm;
+ /*
+ * Make the NaN quiet.
+ */
+ nan->significand |= VFP_DOUBLE_SIGNIFICAND_QNAN;
+ }
+
+ *vdd = *nan;
+
+ /*
+ * If one was a signalling NAN, raise invalid operation.
+ */
+ return tn == VFP_SNAN || tm == VFP_SNAN ? FPSCR_IOC : 0x100;
+}
+
+/*
+ * Extended operations
+ */
+static u32 vfp_double_fabs(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_double_packed_abs(vfp_get_double(dm)));
+ return 0;
+}
+
+static u32 vfp_double_fcpy(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_get_double(dm));
+ return 0;
+}
+
+static u32 vfp_double_fneg(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_double_packed_negate(vfp_get_double(dm)));
+ return 0;
+}
+
+static u32 vfp_double_fsqrt(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm, vdd;
+ int ret, tm;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ tm = vfp_double_type(&vdm);
+ if (tm & (VFP_NAN|VFP_INFINITY)) {
+ struct vfp_double *vdp = &vdd;
+
+ if (tm & VFP_NAN)
+ ret = vfp_propagate_nan(vdp, &vdm, NULL, fpscr);
+ else if (vdm.sign == 0) {
+ sqrt_copy:
+ vdp = &vdm;
+ ret = 0;
+ } else {
+ sqrt_invalid:
+ vdp = &vfp_double_default_qnan;
+ ret = FPSCR_IOC;
+ }
+ vfp_put_double(dd, vfp_double_pack(vdp));
+ return ret;
+ }
+
+ /*
+ * sqrt(+/- 0) == +/- 0
+ */
+ if (tm & VFP_ZERO)
+ goto sqrt_copy;
+
+ /*
+ * Normalise a denormalised number
+ */
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * sqrt(<0) = invalid
+ */
+ if (vdm.sign)
+ goto sqrt_invalid;
+
+ vfp_double_dump("sqrt", &vdm);
+
+ /*
+ * Estimate the square root.
+ */
+ vdd.sign = 0;
+ vdd.exponent = ((vdm.exponent - 1023) >> 1) + 1023;
+ vdd.significand = (u64)vfp_estimate_sqrt_significand(vdm.exponent, vdm.significand >> 32) << 31;
+
+ vfp_double_dump("sqrt estimate1", &vdd);
+
+ vdm.significand >>= 1 + (vdm.exponent & 1);
+ vdd.significand += 2 + vfp_estimate_div128to64(vdm.significand, 0, vdd.significand);
+
+ vfp_double_dump("sqrt estimate2", &vdd);
+
+ /*
+ * And now adjust.
+ */
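+ /* The estimate may be off in its low bits: form the remainder
+ * vdm - vdd^2 and step the estimate down until the remainder is
+ * non-negative, keeping any leftover remainder as a sticky bit
+ * for rounding.
+ */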
+ if ((vdd.significand & VFP_DOUBLE_LOW_BITS_MASK) <= 5) {
+ if (vdd.significand < 2) {
+ vdd.significand = ~0ULL;
+ } else {
+ u64 termh, terml, remh, reml;
+ vdm.significand <<= 2;
+ mul64to128(&termh, &terml, vdd.significand, vdd.significand);
+ sub128(&remh, &reml, vdm.significand, 0, termh, terml);
+ while ((s64)remh < 0) {
+ vdd.significand -= 1;
+ shift64left(&termh, &terml, vdd.significand);
+ terml |= 1;
+ add128(&remh, &reml, remh, reml, termh, terml);
+ }
+ vdd.significand |= (remh | reml) != 0;
+ }
+ }
+ vdd.significand = vfp_shiftright64jamming(vdd.significand, 1);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, 0, "fsqrt");
+}
+
+/*
+ * Equal := ZC
+ * Less than := N
+ * Greater than := C
+ * Unordered := CV
+ */
+static u32 vfp_compare(int dd, int signal_on_qnan, int dm, u32 fpscr)
+{
+ s64 d, m;
+ u32 ret = 0;
+
+ m = vfp_get_double(dm);
+ if (vfp_double_packed_exponent(m) == 2047 && vfp_double_packed_mantissa(m)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_double_packed_mantissa(m) & (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ d = vfp_get_double(dd);
+ if (vfp_double_packed_exponent(d) == 2047 && vfp_double_packed_mantissa(d)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_double_packed_mantissa(d) & (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (ret == 0) {
+ if (d == m || vfp_double_packed_abs(d | m) == 0) {
+ /*
+ * equal
+ */
+ ret |= FPSCR_Z | FPSCR_C;
+ } else if (vfp_double_packed_sign(d ^ m)) {
+ /*
+ * different signs
+ */
+ if (vfp_double_packed_sign(d))
+ /*
+ * d is negative, so d < m
+ */
+ ret |= FPSCR_N;
+ else
+ /*
+ * d is positive, so d > m
+ */
+ ret |= FPSCR_C;
+ } else if ((vfp_double_packed_sign(d) != 0) ^ (d < m)) {
+ /*
+ * d < m
+ */
+ ret |= FPSCR_N;
+ } else if ((vfp_double_packed_sign(d) != 0) ^ (d > m)) {
+ /*
+ * d > m
+ */
+ ret |= FPSCR_C;
+ }
+ }
+
+ return ret;
+}
+
+static u32 vfp_double_fcmp(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 0, dm, fpscr);
+}
+
+static u32 vfp_double_fcmpe(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 1, dm, fpscr);
+}
+
+static u32 vfp_double_fcmpz(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 0, -1, fpscr);
+}
+
+static u32 vfp_double_fcmpez(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 1, -1, fpscr);
+}
+
+static u32 vfp_double_fcvts(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ struct vfp_single vsd;
+ int tm;
+ u32 exceptions = 0;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ tm = vfp_double_type(&vdm);
+
+ /*
+ * If we have a signalling NaN, signal invalid operation.
+ */
+ if (tm == VFP_SNAN)
+ exceptions = FPSCR_IOC;
+
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ vsd.sign = vdm.sign;
+ vsd.significand = vfp_hi64to32jamming(vdm.significand);
+
+ /*
+ * If we have an infinity or a NaN, the exponent must be 255
+ */
+ if (tm & (VFP_INFINITY|VFP_NAN)) {
+ vsd.exponent = 255;
+ if (tm & VFP_NAN)
+ vsd.significand |= VFP_SINGLE_SIGNIFICAND_QNAN;
+ goto pack_nan;
+ } else if (tm & VFP_ZERO)
+ vsd.exponent = 0;
+ else
+ vsd.exponent = vdm.exponent - (1023 - 127);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fcvts");
+
+ pack_nan:
+ vfp_put_float(sd, vfp_single_pack(&vsd));
+ return exceptions;
+}
+
+static u32 vfp_double_fuito(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 m = vfp_get_float(dm);
+
+ vdm.sign = 0;
+ vdm.exponent = 1023 + 63 - 1;
+ vdm.significand = (u64)m;
+
+ return vfp_double_normaliseround(dd, &vdm, fpscr, 0, "fuito");
+}
+
+static u32 vfp_double_fsito(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 m = vfp_get_float(dm);
+
+ vdm.sign = (m & 0x80000000) >> 16;
+ vdm.exponent = 1023 + 63 - 1;
+ vdm.significand = vdm.sign ? -m : m;
+
+ return vfp_double_normaliseround(dd, &vdm, fpscr, 0, "fsito");
+}
+
+static u32 vfp_double_ftoui(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+ int tm;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ /*
+ * Do we have a denormalised number?
+ */
+ tm = vfp_double_type(&vdm);
+ if (tm & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (tm & VFP_NAN)
+ vdm.sign = 0;
+
+ if (vdm.exponent >= 1023 + 32) {
+ d = vdm.sign ? 0 : 0xffffffff;
+ exceptions = FPSCR_IOC;
+ } else if (vdm.exponent >= 1023 - 1) {
+ int shift = 1023 + 63 - vdm.exponent;
+ u64 rem, incr = 0;
+
+ /*
+ * 2^0 <= m < 2^32-2^8
+ */
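+ /* Split the significand into the integer part d and the
+ * discarded fraction rem, with rem aligned to the top of a
+ * 64-bit word so that rounding up can be detected by the
+ * overflow test (rem + incr) < rem below.
+ */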
+ d = (vdm.significand << 1) >> shift;
+ rem = vdm.significand << (65 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x8000000000000000ULL;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vdm.sign != 0)) {
+ incr = ~0ULL;
+ }
+
+ if ((rem + incr) < rem) {
+ if (d < 0xffffffff)
+ d += 1;
+ else
+ exceptions |= FPSCR_IOC;
+ }
+
+ if (d && vdm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+ } else {
+ d = 0;
+ if (vdm.exponent | vdm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vdm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vdm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ }
+ }
+ }
+
+ pr_debug("VFP: ftoui: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, d);
+
+ return exceptions;
+}
+
+static u32 vfp_double_ftouiz(int sd, int unused, int dm, u32 fpscr)
+{
+ return vfp_double_ftoui(sd, unused, dm, FPSCR_ROUND_TOZERO);
+}
+
+static u32 vfp_double_ftosi(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ vfp_double_dump("VDM", &vdm);
+
+ /*
+ * Do we have denormalised number?
+ */
+ if (vfp_double_type(&vdm) & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (vdm.exponent >= 1023 + 32) {
+ d = 0x7fffffff;
+ if (vdm.sign)
+ d = ~d;
+ exceptions |= FPSCR_IOC;
+ } else if (vdm.exponent >= 1023 - 1) {
+ int shift = 1023 + 63 - vdm.exponent; /* 58 */
+ u64 rem, incr = 0;
+
+ d = (vdm.significand << 1) >> shift;
+ rem = vdm.significand << (65 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x8000000000000000ULL;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vdm.sign != 0)) {
+ incr = ~0ULL;
+ }
+
+ if ((rem + incr) < rem && d < 0xffffffff)
+ d += 1;
+ if (d > 0x7fffffff + (vdm.sign != 0)) {
+ d = 0x7fffffff + (vdm.sign != 0);
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+
+ if (vdm.sign)
+ d = -d;
+ } else {
+ d = 0;
+ if (vdm.exponent | vdm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vdm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vdm.sign)
+ d = -1;
+ }
+ }
+
+ pr_debug("VFP: ftosi: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, (s32)d);
+
+ return exceptions;
+}
+
+static u32 vfp_double_ftosiz(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_double_ftosi(dd, unused, dm, FPSCR_ROUND_TOZERO);
+}
+
+
+static u32 (* const fop_extfns[32])(int dd, int unused, int dm, u32 fpscr) = {
+ [FEXT_TO_IDX(FEXT_FCPY)] = vfp_double_fcpy,
+ [FEXT_TO_IDX(FEXT_FABS)] = vfp_double_fabs,
+ [FEXT_TO_IDX(FEXT_FNEG)] = vfp_double_fneg,
+ [FEXT_TO_IDX(FEXT_FSQRT)] = vfp_double_fsqrt,
+ [FEXT_TO_IDX(FEXT_FCMP)] = vfp_double_fcmp,
+ [FEXT_TO_IDX(FEXT_FCMPE)] = vfp_double_fcmpe,
+ [FEXT_TO_IDX(FEXT_FCMPZ)] = vfp_double_fcmpz,
+ [FEXT_TO_IDX(FEXT_FCMPEZ)] = vfp_double_fcmpez,
+ [FEXT_TO_IDX(FEXT_FCVT)] = vfp_double_fcvts,
+ [FEXT_TO_IDX(FEXT_FUITO)] = vfp_double_fuito,
+ [FEXT_TO_IDX(FEXT_FSITO)] = vfp_double_fsito,
+ [FEXT_TO_IDX(FEXT_FTOUI)] = vfp_double_ftoui,
+ [FEXT_TO_IDX(FEXT_FTOUIZ)] = vfp_double_ftouiz,
+ [FEXT_TO_IDX(FEXT_FTOSI)] = vfp_double_ftosi,
+ [FEXT_TO_IDX(FEXT_FTOSIZ)] = vfp_double_ftosiz,
+};
+
+static u32
+vfp_double_fadd_nonnumber(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ struct vfp_double *vdp;
+ u32 exceptions = 0;
+ int tn, tm;
+
+ tn = vfp_double_type(vdn);
+ tm = vfp_double_type(vdm);
+
+ if (tn & tm & VFP_INFINITY) {
+ /*
+ * Two infinities. Are they different signs?
+ */
+ if (vdn->sign ^ vdm->sign) {
+ /*
+ * different signs -> invalid
+ */
+ exceptions = FPSCR_IOC;
+ vdp = &vfp_double_default_qnan;
+ } else {
+ /*
+ * same signs -> valid
+ */
+ vdp = vdn;
+ }
+ } else if (tn & VFP_INFINITY && tm & VFP_NUMBER) {
+ /*
+ * One infinity and one number -> infinity
+ */
+ vdp = vdn;
+ } else {
+ /*
+ * 'n' is a NaN of some type
+ */
+ return vfp_propagate_nan(vdd, vdn, vdm, fpscr);
+ }
+ *vdd = *vdp;
+ return exceptions;
+}
+
+static u32
+vfp_double_add(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ u32 exp_diff;
+ u64 m_sig;
+
+ if (vdn->significand & (1ULL << 63) ||
+ vdm->significand & (1ULL << 63)) {
+ pr_info("VFP: bad FP values in %s\n", __func__);
+ vfp_double_dump("VDN", vdn);
+ vfp_double_dump("VDM", vdm);
+ }
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vdn->exponent < vdm->exponent) {
+ struct vfp_double *t = vdn;
+ vdn = vdm;
+ vdm = t;
+ }
+
+ /*
+ * Is 'n' an infinity or a NaN? Note that 'm' may be a number,
+ * infinity or a NaN here.
+ */
+ if (vdn->exponent == 2047)
+ return vfp_double_fadd_nonnumber(vdd, vdn, vdm, fpscr);
+
+ /*
+ * We have two proper numbers, where 'vdn' is the larger magnitude.
+ *
+ * Copy 'n' to 'd' before doing the arithmetic.
+ */
+ *vdd = *vdn;
+
+ /*
+ * Align 'm' with the result.
+ */
+ exp_diff = vdn->exponent - vdm->exponent;
+ m_sig = vfp_shiftright64jamming(vdm->significand, exp_diff);
+
+ /*
+ * If the signs are different, we are really subtracting.
+ */
+ if (vdn->sign ^ vdm->sign) {
+ m_sig = vdn->significand - m_sig;
+ if ((s64)m_sig < 0) {
+ vdd->sign = vfp_sign_negate(vdd->sign);
+ m_sig = -m_sig;
+ }
+ } else {
+ m_sig += vdn->significand;
+ }
+ vdd->significand = m_sig;
+
+ return 0;
+}
+
+static u32
+vfp_double_multiply(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ vfp_double_dump("VDN", vdn);
+ vfp_double_dump("VDM", vdm);
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vdn->exponent < vdm->exponent) {
+ struct vfp_double *t = vdn;
+ vdn = vdm;
+ vdm = t;
+ pr_debug("VFP: swapping M <-> N\n");
+ }
+
+ vdd->sign = vdn->sign ^ vdm->sign;
+
+ /*
+ * If 'n' is an infinity or NaN, handle it. 'm' may be anything.
+ */
+ if (vdn->exponent == 2047) {
+ if (vdn->significand || (vdm->exponent == 2047 && vdm->significand))
+ return vfp_propagate_nan(vdd, vdn, vdm, fpscr);
+ if ((vdm->exponent | vdm->significand) == 0) {
+ *vdd = vfp_double_default_qnan;
+ return FPSCR_IOC;
+ }
+ vdd->exponent = vdn->exponent;
+ vdd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * If 'm' is zero, the result is always zero. In this case,
+ * 'n' may be zero or a number, but it doesn't matter which.
+ */
+ if ((vdm->exponent | vdm->significand) == 0) {
+ vdd->exponent = 0;
+ vdd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * We add 2 to the destination exponent because the unpacked
+ * significands carry their leading 1 at bit 62, so the high
+ * half of the 128-bit product comes out two bit positions low;
+ * normaliseround() re-normalises the result.
+ */
+ vdd->exponent = vdn->exponent + vdm->exponent - 1023 + 2;
+ vdd->significand = vfp_hi64multiply64(vdn->significand, vdm->significand);
+
+ vfp_double_dump("VDD", vdd);
+ return 0;
+}
+
+#define NEG_MULTIPLY (1 << 0)
+#define NEG_SUBTRACT (1 << 1)
+
+static u32
+vfp_double_multiply_accumulate(int dd, int dn, int dm, u32 fpscr, u32 negate, char *func)
+{
+ struct vfp_double vdd, vdp, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdp, &vdn, &vdm, fpscr);
+ if (negate & NEG_MULTIPLY)
+ vdp.sign = vfp_sign_negate(vdp.sign);
+
+ vfp_double_unpack(&vdn, vfp_get_double(dd));
+ if (negate & NEG_SUBTRACT)
+ vdn.sign = vfp_sign_negate(vdn.sign);
+
+ exceptions |= vfp_double_add(&vdd, &vdn, &vdp, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, func);
+}
+
+/*
+ * Standard operations
+ */
+
+/*
+ * sd = sd + (sn * sm)
+ */
+static u32 vfp_double_fmac(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, 0, "fmac");
+}
+
+/*
+ * sd = sd - (sn * sm)
+ */
+static u32 vfp_double_fnmac(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_MULTIPLY, "fnmac");
+}
+
+/*
+ * sd = -sd + (sn * sm)
+ */
+static u32 vfp_double_fmsc(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_SUBTRACT, "fmsc");
+}
+
+/*
+ * sd = -sd - (sn * sm)
+ */
+static u32 vfp_double_fnmsc(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_SUBTRACT | NEG_MULTIPLY, "fnmsc");
+}
+
+/*
+ * sd = sn * sm
+ */
+static u32 vfp_double_fmul(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdd, &vdn, &vdm, fpscr);
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fmul");
+}
+
+/*
+ * sd = -(sn * sm)
+ */
+static u32 vfp_double_fnmul(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdd, &vdn, &vdm, fpscr);
+ vdd.sign = vfp_sign_negate(vdd.sign);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fnmul");
+}
+
+/*
+ * sd = sn + sm
+ */
+static u32 vfp_double_fadd(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_add(&vdd, &vdn, &vdm, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fadd");
+}
+
+/*
+ * sd = sn - sm
+ */
+static u32 vfp_double_fsub(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * Subtraction is like addition, but with a negated operand.
+ */
+ vdm.sign = vfp_sign_negate(vdm.sign);
+
+ exceptions = vfp_double_add(&vdd, &vdn, &vdm, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fsub");
+}
+
+/*
+ * sd = sn / sm
+ */
+static u32 vfp_double_fdiv(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions = 0;
+ int tm, tn;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ vdd.sign = vdn.sign ^ vdm.sign;
+
+ tn = vfp_double_type(&vdn);
+ tm = vfp_double_type(&vdm);
+
+ /*
+ * Is n a NAN?
+ */
+ if (tn & VFP_NAN)
+ goto vdn_nan;
+
+ /*
+ * Is m a NAN?
+ */
+ if (tm & VFP_NAN)
+ goto vdm_nan;
+
+ /*
+ * If n and m are infinity, the result is invalid
+ * If n and m are zero, the result is invalid
+ */
+ if (tm & tn & (VFP_INFINITY|VFP_ZERO))
+ goto invalid;
+
+ /*
+ * If n is infinity, the result is infinity
+ */
+ if (tn & VFP_INFINITY)
+ goto infinity;
+
+ /*
+ * If m is zero, raise div0 exceptions
+ */
+ if (tm & VFP_ZERO)
+ goto divzero;
+
+ /*
+ * If m is infinity, or n is zero, the result is zero
+ */
+ if (tm & VFP_INFINITY || tn & VFP_ZERO)
+ goto zero;
+
+ if (tn & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdn);
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * Ok, we have two numbers, we can perform division.
+ */
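+ /* Both significands carry their leading 1 at bit 62, so the raw
+ * quotient n/m lies in (1/2, 2). The divisor is doubled and,
+ * when the quotient would reach 1 or more, the dividend is
+ * halved with the exponent bumped, keeping the 64-bit quotient
+ * estimate below in a range normaliseround() can handle.
+ */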
+ vdd.exponent = vdn.exponent - vdm.exponent + 1023 - 1;
+ vdm.significand <<= 1;
+ if (vdm.significand <= (2 * vdn.significand)) {
+ vdn.significand >>= 1;
+ vdd.exponent++;
+ }
+ vdd.significand = vfp_estimate_div128to64(vdn.significand, 0, vdm.significand);
+ if ((vdd.significand & 0x1ff) <= 2) {
+ u64 termh, terml, remh, reml;
+ mul64to128(&termh, &terml, vdm.significand, vdd.significand);
+ sub128(&remh, &reml, vdn.significand, 0, termh, terml);
+ while ((s64)remh < 0) {
+ vdd.significand -= 1;
+ add128(&remh, &reml, remh, reml, 0, vdm.significand);
+ }
+ vdd.significand |= (reml != 0);
+ }
+ return vfp_double_normaliseround(dd, &vdd, fpscr, 0, "fdiv");
+
+ vdn_nan:
+ exceptions = vfp_propagate_nan(&vdd, &vdn, &vdm, fpscr);
+ pack:
+ vfp_put_double(dd, vfp_double_pack(&vdd));
+ return exceptions;
+
+ vdm_nan:
+ exceptions = vfp_propagate_nan(&vdd, &vdm, &vdn, fpscr);
+ goto pack;
+
+ zero:
+ vdd.exponent = 0;
+ vdd.significand = 0;
+ goto pack;
+
+ divzero:
+ exceptions = FPSCR_DZC;
+ infinity:
+ vdd.exponent = 2047;
+ vdd.significand = 0;
+ goto pack;
+
+ invalid:
+ vfp_put_double(dd, vfp_double_pack(&vfp_double_default_qnan));
+ return FPSCR_IOC;
+}
+
+static u32 (* const fop_fns[16])(int dd, int dn, int dm, u32 fpscr) = {
+ [FOP_TO_IDX(FOP_FMAC)] = vfp_double_fmac,
+ [FOP_TO_IDX(FOP_FNMAC)] = vfp_double_fnmac,
+ [FOP_TO_IDX(FOP_FMSC)] = vfp_double_fmsc,
+ [FOP_TO_IDX(FOP_FNMSC)] = vfp_double_fnmsc,
+ [FOP_TO_IDX(FOP_FMUL)] = vfp_double_fmul,
+ [FOP_TO_IDX(FOP_FNMUL)] = vfp_double_fnmul,
+ [FOP_TO_IDX(FOP_FADD)] = vfp_double_fadd,
+ [FOP_TO_IDX(FOP_FSUB)] = vfp_double_fsub,
+ [FOP_TO_IDX(FOP_FDIV)] = vfp_double_fdiv,
+};
+
+#define FREG_BANK(x) ((x) & 0x0c)
+#define FREG_IDX(x) ((x) & 3)
+
+u32 vfp_double_cpdo(u32 inst, u32 fpscr)
+{
+ u32 op = inst & FOP_MASK;
+ u32 exceptions = 0;
+ unsigned int dd = vfp_get_sd(inst);
+ unsigned int dn = vfp_get_sn(inst);
+ unsigned int dm = vfp_get_sm(inst);
+ unsigned int vecitr, veclen, vecstride;
+ u32 (*fop)(int, int, s32, u32);
+
+ veclen = fpscr & FPSCR_LENGTH_MASK;
+ vecstride = (1 + ((fpscr & FPSCR_STRIDE_MASK) == FPSCR_STRIDE_MASK)) * 2;
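+ /*
+ * The FPSCR stride field selects a stride of 1 (00) or 2 (11); it is
+ * doubled here because this file addresses double registers by
+ * single-precision style indices, two per double.
+ */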
+
+ /*
+ * If destination bank is zero, vector length is always '1'.
+ * ARM DDI0100F C5.1.3, C5.3.2.
+ */
+ if (FREG_BANK(dd) == 0)
+ veclen = 0;
+
+ pr_debug("VFP: vecstride=%u veclen=%u\n", vecstride,
+ (veclen >> FPSCR_LENGTH_BIT) + 1);
+
+ fop = (op == FOP_EXT) ? fop_extfns[dn] : fop_fns[FOP_TO_IDX(op)];
+ if (!fop)
+ goto invalid;
+
+ for (vecitr = 0; vecitr <= veclen; vecitr += 1 << FPSCR_LENGTH_BIT) {
+ u32 except;
+
+ if (op == FOP_EXT)
+ pr_debug("VFP: itr%d (d%u.%u) = op[%u] (d%u.%u)\n",
+ vecitr >> FPSCR_LENGTH_BIT,
+ dd >> 1, dd & 1, dn,
+ dm >> 1, dm & 1);
+ else
+ pr_debug("VFP: itr%d (d%u.%u) = (d%u.%u) op[%u] (d%u.%u)\n",
+ vecitr >> FPSCR_LENGTH_BIT,
+ dd >> 1, dd & 1,
+ dn >> 1, dn & 1,
+ FOP_TO_IDX(op),
+ dm >> 1, dm & 1);
+
+ except = fop(dd, dn, dm, fpscr);
+ pr_debug("VFP: itr%d: exceptions=%08x\n",
+ vecitr >> FPSCR_LENGTH_BIT, except);
+
+ exceptions |= except;
+
+ /*
+ * This ensures that comparisons only operate on scalars;
+ * comparisons always return with one FPSCR status bit set.
+ */
+ if (except & (FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V))
+ break;
+
+ /*
+ * CHECK: It appears to be undefined whether we stop when
+ * we encounter an exception. We continue.
+ */
+
+ dd = FREG_BANK(dd) + ((FREG_IDX(dd) + vecstride) & 6);
+ dn = FREG_BANK(dn) + ((FREG_IDX(dn) + vecstride) & 6);
+ if (FREG_BANK(dm) != 0)
+ dm = FREG_BANK(dm) + ((FREG_IDX(dm) + vecstride) & 6);
+ }
+ return exceptions;
+
+ invalid:
+ return ~0;
+}
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfphw.S
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This code is called from the kernel's undefined instruction trap.
+ * r9 holds the return address for successful handling.
+ * lr holds the return address for unrecognised instructions.
+ * r10 points at the start of the private FP workspace in the thread structure
+ * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
+ */
+#include <asm/thread_info.h>
+#include <asm/vfpmacros.h>
+#include "../kernel/entry-header.S"
+
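+@ Debug print macros: the format string is assembled inline after the
+@ 'bl printk'.  'add r0, pc, #4' computes its address - pc reads as the
+@ address of the add plus 8, and the string sits one word beyond that,
+@ past the 'b 1f' which skips over it.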
+ .macro DBGSTR, str
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+ .macro DBGSTR1, str, arg
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ mov r1, \arg
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+ .macro DBGSTR3, str, arg1, arg2, arg3
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ mov r3, \arg3
+ mov r2, \arg2
+ mov r1, \arg1
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+
+@ VFP hardware support entry point.
+@
+@ r0 = faulted instruction
+@ r5 = faulted PC+4
+@ r9 = successful return
+@ r10 = vfp_state union
+@ lr = failure return
+
+ .globl vfp_support_entry
+vfp_support_entry:
+ DBGSTR3 "instr %08x pc %08x state %p", r0, r5, r10
+
+ VFPFMRX r1, FPEXC @ Is the VFP enabled?
+ DBGSTR1 "fpexc %08x", r1
+ tst r1, #FPEXC_ENABLE
+ bne look_for_VFP_exceptions @ VFP is already enabled
+
+ DBGSTR1 "enable %x", r10
+ ldr r3, last_VFP_context_address
+ orr r1, r1, #FPEXC_ENABLE @ user FPEXC has the enable bit set
+ ldr r4, [r3] @ last_VFP_context pointer
+ bic r2, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled
+ cmp r4, r10
+ beq check_for_exception @ we are returning to the same
+ @ process, so the registers are
+ @ still there. In this case, we do
+ @ not want to drop a pending exception.
+
+ VFPFMXR FPEXC, r2 @ enable VFP, disable any pending
+ @ exceptions, so we can get at the
+ @ rest of it
+
+ @ Save out the current registers to the old thread state
+
+ DBGSTR1 "save old state %p", r4
+ cmp r4, #0
+ beq no_old_VFP_process
+ VFPFMRX r2, FPSCR @ current status
+ VFPFMRX r6, FPINST @ FPINST (always there, rev0 onwards)
+ tst r1, #FPEXC_FPV2 @ is there an FPINST2 to read?
+ VFPFMRX r8, FPINST2, NE @ FPINST2 if needed - avoids reading
+ @ nonexistent reg on rev0
+ VFPFSTMIA r4 @ save the working registers
+ add r4, r4, #8*16+4
+ stmia r4, {r1, r2, r6, r8} @ save FPEXC, FPSCR, FPINST, FPINST2
+ @ and point r4 at the word at the
+ @ start of the register dump
+
+no_old_VFP_process:
+ DBGSTR1 "load state %p", r10
+ str r10, [r3] @ update the last_VFP_context pointer
+ @ Load the saved state back into the VFP
+ add r4, r10, #8*16+4
+ ldmia r4, {r1, r2, r6, r8} @ load FPEXC, FPSCR, FPINST, FPINST2
+ VFPFLDMIA r10 @ reload the working registers while
+ @ FPEXC is in a safe state
+ tst r1, #FPEXC_FPV2 @ is there an FPINST2 to write?
+ VFPFMXR FPINST2, r8, NE @ FPINST2 if needed - avoids writing
+ @ nonexistent reg on rev0
+ VFPFMXR FPINST, r6
+ VFPFMXR FPSCR, r2 @ restore status
+
+check_for_exception:
+ tst r1, #FPEXC_EXCEPTION
+ bne process_exception @ might as well handle the pending
+ @ exception before retrying; branch
+ @ out before setting an FPEXC that
+ @ stops us reading stuff
+ VFPFMXR FPEXC, r1 @ restore FPEXC last
+ sub r5, r5, #4
+ str r5, [sp, #S_PC] @ retry the instruction
+ mov pc, r9 @ we think we have handled things
+
+
+look_for_VFP_exceptions:
+ tst r1, #FPEXC_EXCEPTION
+ bne process_exception
+ VFPFMRX r2, FPSCR
+ tst r2, #FPSCR_IXE @ IXE doesn't set FPEXC_EXCEPTION !
+ bne process_exception
+
+ @ Fall through to hand on to the next handler - the coprocessor
+ @ instruction was not recognised by the VFP
+
+ DBGSTR "not VFP"
+ mov pc, lr
+
+process_exception:
+ DBGSTR "bounce"
+ sub r5, r5, #4
+ str r5, [sp, #S_PC] @ retry the instruction on exit from
+ @ the imprecise exception handling in
+ @ the support code
+ mov r2, sp @ nothing stacked - regdump is at TOS
+ mov lr, r9 @ setup for a return to the user code.
+
+ @ Now call the C code to package up the bounce to the support code
+ @ r0 holds the trigger instruction
+ @ r1 holds the FPEXC value
+ @ r2 pointer to register dump
+ b VFP9_bounce @ we have handled this - the support
+ @ code will raise an exception if
+ @ required. If not, the user code will
+ @ retry the faulted instruction
+
+last_VFP_context_address:
+ .word last_VFP_context
+
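+@ VFP register accessors.  'add pc, pc, r0, lsl #3' is a computed jump:
+@ pc reads as the address of the add plus 8, which is exactly where the
+@ table of two-instruction (8-byte) entries starts, so entry r0 is
+@ selected.  The 'mov r0, r0' after the add is never executed - it pads
+@ the word between the add and the table.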
+ .globl vfp_get_float
+vfp_get_float:
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mrc p10, 0, r0, c\dr, c0, 0 @ fmrs r0, s0
+ mov pc, lr
+ mrc p10, 0, r0, c\dr, c0, 4 @ fmrs r0, s1
+ mov pc, lr
+ .endr
+
+ .globl vfp_put_float
+vfp_put_float:
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mcr p10, 0, r1, c\dr, c0, 0 @ fmsr r0, s0
+ mov pc, lr
+ mcr p10, 0, r1, c\dr, c0, 4 @ fmsr r0, s1
+ mov pc, lr
+ .endr
+
+ .globl vfp_get_double
+vfp_get_double:
+ mov r0, r0, lsr #1
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mrrc p10, 1, r0, r1, c\dr @ fmrrd r0, r1, d\dr
+ mov pc, lr
+ .endr
+
+ .globl vfp_put_double
+vfp_put_double:
+ mov r0, r0, lsr #1
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mcrr p10, 1, r1, r2, c\dr @ fmrrd r1, r2, d\dr
+ mov pc, lr
+ .endr
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpinstr.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * VFP instruction masks.
+ */
+#define INST_CPRTDO(inst) (((inst) & 0x0f000000) == 0x0e000000)
+#define INST_CPRT(inst) ((inst) & (1 << 4))
+#define INST_CPRT_L(inst) ((inst) & (1 << 20))
+#define INST_CPRT_Rd(inst) (((inst) & (15 << 12)) >> 12)
+#define INST_CPRT_OP(inst) (((inst) >> 21) & 7)
+#define INST_CPNUM(inst) ((inst) & 0xf00)
+#define CPNUM(cp) ((cp) << 8)
+
+#define FOP_MASK (0x00b00040)
+#define FOP_FMAC (0x00000000)
+#define FOP_FNMAC (0x00000040)
+#define FOP_FMSC (0x00100000)
+#define FOP_FNMSC (0x00100040)
+#define FOP_FMUL (0x00200000)
+#define FOP_FNMUL (0x00200040)
+#define FOP_FADD (0x00300000)
+#define FOP_FSUB (0x00300040)
+#define FOP_FDIV (0x00800000)
+#define FOP_EXT (0x00b00040)
+
+#define FOP_TO_IDX(inst) ((inst & 0x00b00000) >> 20 | (inst & (1 << 6)) >> 4)
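+/*
+ * e.g. FOP_FSUB (0x00300040) -> 0x3 | (1 << 2) = 7 and
+ * FOP_FDIV (0x00800000) -> 8; all ops fit the 16-entry fop_fns[] tables.
+ */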
+
+#define FEXT_MASK (0x000f0080)
+#define FEXT_FCPY (0x00000000)
+#define FEXT_FABS (0x00000080)
+#define FEXT_FNEG (0x00010000)
+#define FEXT_FSQRT (0x00010080)
+#define FEXT_FCMP (0x00040000)
+#define FEXT_FCMPE (0x00040080)
+#define FEXT_FCMPZ (0x00050000)
+#define FEXT_FCMPEZ (0x00050080)
+#define FEXT_FCVT (0x00070080)
+#define FEXT_FUITO (0x00080000)
+#define FEXT_FSITO (0x00080080)
+#define FEXT_FTOUI (0x000c0000)
+#define FEXT_FTOUIZ (0x000c0080)
+#define FEXT_FTOSI (0x000d0000)
+#define FEXT_FTOSIZ (0x000d0080)
+
+#define FEXT_TO_IDX(inst) ((inst & 0x000f0000) >> 15 | (inst & (1 << 7)) >> 7)
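+/*
+ * e.g. FEXT_FSQRT (0x00010080) -> (1 << 1) | 1 = 3; all extended ops
+ * fit the 32-entry fop_extfns[] tables.
+ */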
+
+#define vfp_get_sd(inst) ((inst & 0x0000f000) >> 11 | (inst & (1 << 22)) >> 22)
+#define vfp_get_dd(inst) ((inst & 0x0000f000) >> 12)
+#define vfp_get_sm(inst) ((inst & 0x0000000f) << 1 | (inst & (1 << 5)) >> 5)
+#define vfp_get_dm(inst) ((inst & 0x0000000f))
+#define vfp_get_sn(inst) ((inst & 0x000f0000) >> 15 | (inst & (1 << 7)) >> 7)
+#define vfp_get_dn(inst) ((inst & 0x000f0000) >> 16)
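+/*
+ * Single registers combine a 4-bit field with an extra low bit,
+ * e.g. sd = (Fd << 1) | D, so Fd = 5 with D = 1 names s11; double
+ * registers use the 4-bit field directly.
+ */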
+
+#define vfp_single(inst) (((inst) & 0x0000f00) == 0xa00)
+
+#define FPSCR_N (1 << 31)
+#define FPSCR_Z (1 << 30)
+#define FPSCR_C (1 << 29)
+#define FPSCR_V (1 << 28)
+
+/*
+ * Since we aren't building with -mfpu=vfp, we need to code
+ * these instructions using their MRC/MCR equivalents.
+ */
+#define vfpreg(_vfp_) #_vfp_
+
+#define fmrx(_vfp_) ({ \
+ u32 __v; \
+ asm("mrc%? p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmrx %0, " #_vfp_ \
+ : "=r" (__v)); \
+ __v; \
+ })
+
+#define fmxr(_vfp_,_var_) \
+ asm("mcr%? p10, 7, %0, " vfpreg(_vfp_) ", cr0, 0 @ fmxr " #_vfp_ ", %0" \
+ : : "r" (_var_))
+
+u32 vfp_single_cpdo(u32 inst, u32 fpscr);
+u32 vfp_single_cprt(u32 inst, u32 fpscr, struct pt_regs *regs);
+
+u32 vfp_double_cpdo(u32 inst, u32 fpscr);
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpmodule.c
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <asm/vfp.h>
+
+#include "vfpinstr.h"
+#include "vfp.h"
+
+/*
+ * Our undef handlers (in entry.S)
+ */
+void vfp_testing_entry(void);
+void vfp_support_entry(void);
+
+void (*vfp_vector)(void) = vfp_testing_entry;
+union vfp_state *last_VFP_context;
+
+/*
+ * Dual-use variable.
+ * During startup: set to non-zero if the VFP checks fail.
+ * After startup: holds the VFP architecture version.
+ */
+unsigned int VFP_arch;
+
+/*
+ * Per-thread VFP initialisation.
+ */
+void vfp_flush_thread(union vfp_state *vfp)
+{
+ memset(vfp, 0, sizeof(union vfp_state));
+
+ vfp->hard.fpexc = FPEXC_ENABLE;
+ vfp->hard.fpscr = FPSCR_ROUND_NEAREST;
+
+ /*
+ * Disable VFP to ensure we initialise it first.
+ */
+ fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_ENABLE);
+
+ /*
+ * Ensure we don't try to overwrite our newly initialised
+ * state information on the first fault.
+ */
+ if (last_VFP_context == vfp)
+ last_VFP_context = NULL;
+}
+
+/*
+ * Per-thread VFP cleanup.
+ */
+void vfp_release_thread(union vfp_state *vfp)
+{
+ if (last_VFP_context == vfp)
+ last_VFP_context = NULL;
+}
+
+/*
+ * Raise a SIGFPE for the current process.
+ * sicode describes the signal being raised.
+ */
+void vfp_raise_sigfpe(unsigned int sicode, struct pt_regs *regs)
+{
+ siginfo_t info;
+
+ memset(&info, 0, sizeof(info));
+
+ info.si_signo = SIGFPE;
+ info.si_code = sicode;
+ info.si_addr = (void *)(instruction_pointer(regs) - 4);
+
+ /*
+ * These values match NWFPE's; it is not clear what they are
+ * actually used for
+ */
+ current->thread.error_code = 0;
+ current->thread.trap_no = 6;
+
+ force_sig_info(SIGFPE, &info, current);
+}
+
+static void vfp_panic(char *reason)
+{
+ int i;
+
+ printk(KERN_ERR "VFP: Error: %s\n", reason);
+ printk(KERN_ERR "VFP: EXC 0x%08x SCR 0x%08x INST 0x%08x\n",
+ fmrx(FPEXC), fmrx(FPSCR), fmrx(FPINST));
+ for (i = 0; i < 32; i += 2)
+ printk(KERN_ERR "VFP: s%2u: 0x%08x s%2u: 0x%08x\n",
+ i, vfp_get_float(i), i+1, vfp_get_float(i+1));
+}
+
+/*
+ * Process bitmask of exception conditions.
+ */
+static void vfp_raise_exceptions(u32 exceptions, u32 inst, u32 fpscr, struct pt_regs *regs)
+{
+ int si_code = 0;
+
+ pr_debug("VFP: raising exceptions %08x\n", exceptions);
+
+ if (exceptions == (u32)-1) {
+ vfp_panic("unhandled bounce");
+ vfp_raise_sigfpe(0, regs);
+ return;
+ }
+
+ /*
+ * If any of the status flags are set, update the FPSCR.
+ * Comparison instructions always return at least one of
+ * these flags set.
+ */
+ if (exceptions & (FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V))
+ fpscr &= ~(FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V);
+
+ fpscr |= exceptions;
+
+ fmxr(FPSCR, fpscr);
+
+#define RAISE(stat,en,sig) \
+ if (exceptions & stat && fpscr & en) \
+ si_code = sig;
+
+ /*
+ * These are arranged in priority order, least to highest.
+ */
+ RAISE(FPSCR_IXC, FPSCR_IXE, FPE_FLTRES);
+ RAISE(FPSCR_UFC, FPSCR_UFE, FPE_FLTUND);
+ RAISE(FPSCR_OFC, FPSCR_OFE, FPE_FLTOVF);
+ RAISE(FPSCR_IOC, FPSCR_IOE, FPE_FLTINV);
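+ /*
+ * e.g. an operation that is both inexact and invalid, with both
+ * traps enabled, reports FPE_FLTINV: the later RAISE overwrites
+ * si_code.
+ */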
+
+ if (si_code)
+ vfp_raise_sigfpe(si_code, regs);
+}
+
+/*
+ * Emulate a VFP instruction.
+ */
+static u32 vfp_emulate_instruction(u32 inst, u32 fpscr, struct pt_regs *regs)
+{
+ u32 exceptions = (u32)-1;
+
+ pr_debug("VFP: emulate: INST=0x%08x SCR=0x%08x\n", inst, fpscr);
+
+ if (INST_CPRTDO(inst)) {
+ if (!INST_CPRT(inst)) {
+ /*
+ * CPDO
+ */
+ if (vfp_single(inst)) {
+ exceptions = vfp_single_cpdo(inst, fpscr);
+ } else {
+ exceptions = vfp_double_cpdo(inst, fpscr);
+ }
+ } else {
+ /*
+ * A CPRT instruction can not appear in FPINST2, nor
+ * can it cause an exception. Therefore, we do not
+ * have to emulate it.
+ */
+ }
+ } else {
+ /*
+ * A CPDT instruction can not appear in FPINST2, nor can
+ * it cause an exception. Therefore, we do not have to
+ * emulate it.
+ */
+ }
+ return exceptions;
+}
+
+/*
+ * Package up a bounce condition.
+ */
+void VFP9_bounce(u32 trigger, u32 fpexc, struct pt_regs *regs)
+{
+ u32 fpscr, orig_fpscr, exceptions, inst;
+
+ pr_debug("VFP: bounce: trigger %08x fpexc %08x\n", trigger, fpexc);
+
+ /*
+ * Enable access to the VFP so we can handle the bounce.
+ */
+ fmxr(FPEXC, fpexc & ~(FPEXC_EXCEPTION|FPEXC_INV|FPEXC_UFC|FPEXC_IOC));
+
+ orig_fpscr = fpscr = fmrx(FPSCR);
+
+ /*
+ * If we are running with inexact exceptions enabled, we need to
+ * emulate the trigger instruction. Note that as we're emulating
+ * the trigger instruction, we need to increment PC.
+ */
+ if (fpscr & FPSCR_IXE) {
+ regs->ARM_pc += 4;
+ goto emulate;
+ }
+
+ barrier();
+
+ /*
+ * Modify fpscr to indicate the number of iterations remaining
+ */
+ if (fpexc & FPEXC_EXCEPTION) {
+ u32 len;
+
+ len = fpexc + (1 << FPEXC_LENGTH_BIT);
+
+ fpscr &= ~FPSCR_LENGTH_MASK;
+ fpscr |= (len & FPEXC_LENGTH_MASK) << (FPSCR_LENGTH_BIT - FPEXC_LENGTH_BIT);
+ }
+
+ /*
+ * Handle the first FP instruction. We used to take note of the
+ * FPEXC bounce reason, but this appears to be unreliable.
+ * Emulate the bounced instruction instead.
+ */
+ inst = fmrx(FPINST);
+ exceptions = vfp_emulate_instruction(inst, fpscr, regs);
+ if (exceptions)
+ vfp_raise_exceptions(exceptions, inst, orig_fpscr, regs);
+
+ /*
+ * If there isn't a second FP instruction, exit now.
+ */
+ if (!(fpexc & FPEXC_FPV2))
+ return;
+
+ /*
+ * The barrier() here prevents fpinst2 being read
+ * before the condition above.
+ */
+ barrier();
+ trigger = fmrx(FPINST2);
+ fpscr = fmrx(FPSCR);
+
+ emulate:
+ exceptions = vfp_emulate_instruction(trigger, fpscr, regs);
+ if (exceptions)
+ vfp_raise_exceptions(exceptions, trigger, orig_fpscr, regs);
+}
+
+/*
+ * VFP support code initialisation.
+ */
+static int __init vfp_init(void)
+{
+ unsigned int vfpsid;
+
+ /*
+ * First check that there is a VFP that we can use.
+ * The handler is already setup to just log calls, so
+ * we just need to read the VFPSID register.
+ */
+ vfpsid = fmrx(FPSID);
+
+ printk(KERN_INFO "VFP support v0.3: ");
+ if (VFP_arch) {
+ printk("not present\n");
+ } else if (vfpsid & FPSID_NODOUBLE) {
+ printk("no double precision support\n");
+ } else {
+ VFP_arch = (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT; /* Extract the architecture version */
+ printk("implementor %02x architecture %d part %02x variant %x rev %x\n",
+ (vfpsid & FPSID_IMPLEMENTER_MASK) >> FPSID_IMPLEMENTER_BIT,
+ (vfpsid & FPSID_ARCH_MASK) >> FPSID_ARCH_BIT,
+ (vfpsid & FPSID_PART_MASK) >> FPSID_PART_BIT,
+ (vfpsid & FPSID_VARIANT_MASK) >> FPSID_VARIANT_BIT,
+ (vfpsid & FPSID_REV_MASK) >> FPSID_REV_BIT);
+ vfp_vector = vfp_support_entry;
+ }
+ return 0;
+}
+
+late_initcall(vfp_init);
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpsingle.c
+ *
+ * This code is derived in part from John R. Hauser's SoftFloat library, which
+ * carries the following notice:
+ *
+ * ===========================================================================
+ * This C source file is part of the SoftFloat IEC/IEEE Floating-point
+ * Arithmetic Package, Release 2.
+ *
+ * Written by John R. Hauser. This work was made possible in part by the
+ * International Computer Science Institute, located at Suite 600, 1947 Center
+ * Street, Berkeley, California 94704. Funding was partially provided by the
+ * National Science Foundation under grant MIP-9311980. The original version
+ * of this code was written as part of a project to build a fixed-point vector
+ * processor in collaboration with the University of California at Berkeley,
+ * overseen by Profs. Nelson Morgan and John Wawrzynek. More information
+ * is available through the web page `http://HTTP.CS.Berkeley.EDU/~jhauser/
+ * arithmetic/softfloat.html'.
+ *
+ * THIS SOFTWARE IS DISTRIBUTED AS IS, FOR FREE. Although reasonable effort
+ * has been made to avoid it, THIS SOFTWARE MAY CONTAIN FAULTS THAT WILL AT
+ * TIMES RESULT IN INCORRECT BEHAVIOR. USE OF THIS SOFTWARE IS RESTRICTED TO
+ * PERSONS AND ORGANIZATIONS WHO CAN AND WILL TAKE FULL RESPONSIBILITY FOR ANY
+ * AND ALL LOSSES, COSTS, OR OTHER PROBLEMS ARISING FROM ITS USE.
+ *
+ * Derivative works are acceptable, even for commercial purposes, so long as
+ * (1) they include prominent notice that the work is derivative, and (2) they
+ * include prominent notice akin to these three paragraphs for those parts of
+ * this code that are retained.
+ * ===========================================================================
+ */
+#include <linux/kernel.h>
+#include <asm/bitops.h>
+#include <asm/ptrace.h>
+#include <asm/vfp.h>
+
+#include "vfpinstr.h"
+#include "vfp.h"
+
+static struct vfp_single vfp_single_default_qnan = {
+ .exponent = 255,
+ .sign = 0,
+ .significand = VFP_SINGLE_SIGNIFICAND_QNAN,
+};
+
+static void vfp_single_dump(const char *str, struct vfp_single *s)
+{
+ pr_debug("VFP: %s: sign=%d exponent=%d significand=%08x\n",
+ str, s->sign != 0, s->exponent, s->significand);
+}
+
+static void vfp_single_normalise_denormal(struct vfp_single *vs)
+{
+ int bits = 31 - fls(vs->significand);
+
+ vfp_single_dump("normalise_denormal: in", vs);
+
+ if (bits) {
+ vs->exponent -= bits - 1;
+ vs->significand <<= bits;
+ }
+
+ vfp_single_dump("normalise_denormal: out", vs);
+}
+
+#ifndef DEBUG
+#define vfp_single_normaliseround(sd,vsd,fpscr,except,func) __vfp_single_normaliseround(sd,vsd,fpscr,except)
+u32 __vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions)
+#else
+u32 vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions, const char *func)
+#endif
+{
+ u32 significand, incr, rmode;
+ int exponent, shift, underflow;
+
+ vfp_single_dump("pack: in", vs);
+
+ /*
+ * Infinities and NaNs are a special case.
+ */
+ if (vs->exponent == 255 && (vs->significand == 0 || exceptions))
+ goto pack;
+
+ /*
+ * Special-case zero.
+ */
+ if (vs->significand == 0) {
+ vs->exponent = 0;
+ goto pack;
+ }
+
+ exponent = vs->exponent;
+ significand = vs->significand;
+
+ /*
+ * Normalise first. Note that we shift the significand up to
+ * bit 31, so we have VFP_SINGLE_LOW_BITS + 1 below the least
+ * significant bit.
+ */
+ shift = 32 - fls(significand);
+ if (shift < 32 && shift) {
+ exponent -= shift;
+ significand <<= shift;
+ }
+
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: normalised", vs);
+#endif
+
+ /*
+ * Tiny number?
+ */
+ underflow = exponent < 0;
+ if (underflow) {
+ significand = vfp_shiftright32jamming(significand, -exponent);
+ exponent = 0;
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: tiny number", vs);
+#endif
+ if (!(significand & ((1 << (VFP_SINGLE_LOW_BITS + 1)) - 1)))
+ underflow = 0;
+ }
+
+ /*
+ * Select rounding increment.
+ */
+ incr = 0;
+ rmode = fpscr & FPSCR_RMODE_MASK;
+
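+ /*
+ * Round to nearest even: adding 1 << VFP_SINGLE_LOW_BITS rounds the
+ * guard bits up from the halfway point; on an exact tie, a result
+ * whose LSB is already zero gets an increment one smaller, so the
+ * tie rounds towards even.
+ */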
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 1 << VFP_SINGLE_LOW_BITS;
+ if ((significand & (1 << (VFP_SINGLE_LOW_BITS + 1))) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vs->sign != 0))
+ incr = (1 << (VFP_SINGLE_LOW_BITS + 1)) - 1;
+
+ pr_debug("VFP: rounding increment = 0x%08x\n", incr);
+
+ /*
+ * Is our rounding going to overflow?
+ */
+ if ((significand + incr) < significand) {
+ exponent += 1;
+ significand = (significand >> 1) | (significand & 1);
+ incr >>= 1;
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: overflow", vs);
+#endif
+ }
+
+ /*
+ * If any of the low bits (which will be shifted out of the
+ * number) are non-zero, the result is inexact.
+ */
+ if (significand & ((1 << (VFP_SINGLE_LOW_BITS + 1)) - 1))
+ exceptions |= FPSCR_IXC;
+
+ /*
+ * Do our rounding.
+ */
+ significand += incr;
+
+ /*
+ * Infinity?
+ */
+ if (exponent >= 254) {
+ exceptions |= FPSCR_OFC | FPSCR_IXC;
+ if (incr == 0) {
+ vs->exponent = 253;
+ vs->significand = 0x7fffffff;
+ } else {
+ vs->exponent = 255; /* infinity */
+ vs->significand = 0;
+ }
+ } else {
+ if (significand >> (VFP_SINGLE_LOW_BITS + 1) == 0)
+ exponent = 0;
+ if (exponent || significand > 0x80000000)
+ underflow = 0;
+ if (underflow)
+ exceptions |= FPSCR_UFC;
+ vs->exponent = exponent;
+ vs->significand = significand >> 1;
+ }
+
+ pack:
+ vfp_single_dump("pack: final", vs);
+ {
+ s32 d = vfp_single_pack(vs);
+ pr_debug("VFP: %s: d(s%d)=%08x exceptions=%08x\n", func,
+ sd, d, exceptions);
+ vfp_put_float(sd, d);
+ }
+
+ return exceptions;
+}
+
+/*
+ * Propagate the NaN, setting exceptions if it is signalling.
+ * 'n' is always a NaN. 'm' may be a number, NaN or infinity.
+ */
+static u32
+vfp_propagate_nan(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ struct vfp_single *nan;
+ int tn, tm = 0;
+
+ tn = vfp_single_type(vsn);
+
+ if (vsm)
+ tm = vfp_single_type(vsm);
+
+ if (fpscr & FPSCR_DEFAULT_NAN)
+ /*
+ * Default NaN mode - always returns a quiet NaN
+ */
+ nan = &vfp_single_default_qnan;
+ else {
+ /*
+ * Contemporary mode - select the first signalling
+ * NAN, or if neither are signalling, the first
+ * quiet NAN.
+ */
+ if (tn == VFP_SNAN || (tm != VFP_SNAN && tn == VFP_QNAN))
+ nan = vsn;
+ else
+ nan = vsm;
+ /*
+ * Make the NaN quiet.
+ */
+ nan->significand |= VFP_SINGLE_SIGNIFICAND_QNAN;
+ }
+
+ *vsd = *nan;
+
+ /*
+ * If one was a signalling NAN, raise invalid operation.
+ */
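+ /*
+ * 0x100 is not an FPSCR exception bit: it marks for the caller that
+ * a NaN was handled without raising an exception.
+ */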
+ return tn == VFP_SNAN || tm == VFP_SNAN ? FPSCR_IOC : 0x100;
+}
+
+
+/*
+ * Extended operations
+ */
+static u32 vfp_single_fabs(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, vfp_single_packed_abs(m));
+ return 0;
+}
+
+static u32 vfp_single_fcpy(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, m);
+ return 0;
+}
+
+static u32 vfp_single_fneg(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, vfp_single_packed_negate(m));
+ return 0;
+}
+
+static const u16 sqrt_oddadjust[] = {
+ 0x0004, 0x0022, 0x005d, 0x00b1, 0x011d, 0x019f, 0x0236, 0x02e0,
+ 0x039c, 0x0468, 0x0545, 0x0631, 0x072b, 0x0832, 0x0946, 0x0a67
+};
+
+static const u16 sqrt_evenadjust[] = {
+ 0x0a2d, 0x08af, 0x075a, 0x0629, 0x051a, 0x0429, 0x0356, 0x029e,
+ 0x0200, 0x0179, 0x0109, 0x00af, 0x0068, 0x0034, 0x0012, 0x0002
+};
+
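+/*
+ * Derived from SoftFloat's estimateSqrt32: a table-driven first guess
+ * (separate adjustment tables for odd and even exponents) refined by
+ * a Newton-Raphson style step, accurate enough for the caller's
+ * correction loop.
+ */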
+u32 vfp_estimate_sqrt_significand(u32 exponent, u32 significand)
+{
+ int index;
+ u32 z, a;
+
+ if ((significand & 0xc0000000) != 0x40000000) {
+ printk(KERN_WARNING "VFP: estimate_sqrt: invalid significand\n");
+ }
+
+ a = significand << 1;
+ index = (a >> 27) & 15;
+ if (exponent & 1) {
+ z = 0x4000 + (a >> 17) - sqrt_oddadjust[index];
+ z = ((a / z) << 14) + (z << 15);
+ a >>= 1;
+ } else {
+ z = 0x8000 + (a >> 17) - sqrt_evenadjust[index];
+ z = a / z + z;
+ z = (z >= 0x20000) ? 0xffff8000 : (z << 15);
+ if (z <= a)
+ return (s32)a >> 1;
+ }
+ return (u32)(((u64)a << 31) / z) + (z >> 1);
+}
+
+static u32 vfp_single_fsqrt(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm, vsd;
+ int ret, tm;
+
+ vfp_single_unpack(&vsm, m);
+ tm = vfp_single_type(&vsm);
+ if (tm & (VFP_NAN|VFP_INFINITY)) {
+ struct vfp_single *vsp = &vsd;
+
+ if (tm & VFP_NAN)
+ ret = vfp_propagate_nan(vsp, &vsm, NULL, fpscr);
+ else if (vsm.sign == 0) {
+ sqrt_copy:
+ vsp = &vsm;
+ ret = 0;
+ } else {
+ sqrt_invalid:
+ vsp = &vfp_single_default_qnan;
+ ret = FPSCR_IOC;
+ }
+ vfp_put_float(sd, vfp_single_pack(vsp));
+ return ret;
+ }
+
+ /*
+ * sqrt(+/- 0) == +/- 0
+ */
+ if (tm & VFP_ZERO)
+ goto sqrt_copy;
+
+ /*
+ * Normalise a denormalised number
+ */
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ /*
+ * sqrt(<0) = invalid
+ */
+ if (vsm.sign)
+ goto sqrt_invalid;
+
+ vfp_single_dump("sqrt", &vsm);
+
+ /*
+ * Estimate the square root.
+ */
+ vsd.sign = 0;
+ vsd.exponent = ((vsm.exponent - 127) >> 1) + 127;
+ vsd.significand = vfp_estimate_sqrt_significand(vsm.exponent, vsm.significand) + 2;
+
+ vfp_single_dump("sqrt estimate", &vsd);
+
+ /*
+ * And now adjust.
+ */
+ if ((vsd.significand & VFP_SINGLE_LOW_BITS_MASK) <= 5) {
+ if (vsd.significand < 2) {
+ vsd.significand = 0xffffffff;
+ } else {
+ u64 term;
+ s64 rem;
+ vsm.significand <<= !(vsm.exponent & 1);
+ term = (u64)vsd.significand * vsd.significand;
+ rem = ((u64)vsm.significand << 32) - term;
+
+ pr_debug("VFP: term=%016llx rem=%016llx\n", term, rem);
+
+ while (rem < 0) {
+ vsd.significand -= 1;
+ rem += ((u64)vsd.significand << 1) | 1;
+ }
+ vsd.significand |= rem != 0;
+ }
+ }
+ vsd.significand = vfp_shiftright32jamming(vsd.significand, 1);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, 0, "fsqrt");
+}
+
+/*
+ * Equal := ZC
+ * Less than := N
+ * Greater than := C
+ * Unordered := CV
+ */
+static u32 vfp_compare(int sd, int signal_on_qnan, s32 m, u32 fpscr)
+{
+ s32 d;
+ u32 ret = 0;
+
+ d = vfp_get_float(sd);
+ if (vfp_single_packed_exponent(m) == 255 && vfp_single_packed_mantissa(m)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_single_packed_mantissa(m) & (1 << (VFP_SINGLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (vfp_single_packed_exponent(d) == 255 && vfp_single_packed_mantissa(d)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_single_packed_mantissa(d) & (1 << (VFP_SINGLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (ret == 0) {
+ if (d == m || vfp_single_packed_abs(d | m) == 0) {
+ /*
+ * equal
+ */
+ ret |= FPSCR_Z | FPSCR_C;
+ } else if (vfp_single_packed_sign(d ^ m)) {
+ /*
+ * different signs
+ */
+ if (vfp_single_packed_sign(d))
+ /*
+ * d is negative, so d < m
+ */
+ ret |= FPSCR_N;
+ else
+ /*
+ * d is positive, so d > m
+ */
+ ret |= FPSCR_C;
+ } else if ((vfp_single_packed_sign(d) != 0) ^ (d < m)) {
+ /*
+ * d < m
+ */
+ ret |= FPSCR_N;
+ } else if ((vfp_single_packed_sign(d) != 0) ^ (d > m)) {
+ /*
+ * d > m
+ */
+ ret |= FPSCR_C;
+ }
+ }
+ return ret;
+}
+
+static u32 vfp_single_fcmp(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 0, m, fpscr);
+}
+
+static u32 vfp_single_fcmpe(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 1, m, fpscr);
+}
+
+static u32 vfp_single_fcmpz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 0, 0, fpscr);
+}
+
+static u32 vfp_single_fcmpez(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 1, 0, fpscr);
+}
+
+static u32 vfp_single_fcvtd(int dd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ struct vfp_double vdd;
+ int tm;
+ u32 exceptions = 0;
+
+ vfp_single_unpack(&vsm, m);
+
+ tm = vfp_single_type(&vsm);
+
+ /*
+ * If we have a signalling NaN, signal invalid operation.
+ */
+ if (tm == VFP_SNAN)
+ exceptions = FPSCR_IOC;
+
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ vdd.sign = vsm.sign;
+ vdd.significand = (u64)vsm.significand << 32;
+
+ /*
+ * If we have an infinity or NaN, the exponent must be 2047.
+ */
+ if (tm & (VFP_INFINITY|VFP_NAN)) {
+ vdd.exponent = 2047;
+ if (tm & VFP_NAN)
+ vdd.significand |= VFP_DOUBLE_SIGNIFICAND_QNAN;
+ goto pack_nan;
+ } else if (tm & VFP_ZERO)
+ vdd.exponent = 0;
+ else
+ vdd.exponent = vsm.exponent + (1023 - 127);
+
+ /*
+ * Technically, if bit 0 of dd is set, this is an invalid
+ * instruction. However, we ignore this for efficiency.
+ */
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fcvtd");
+
+ pack_nan:
+ vfp_put_double(dd, vfp_double_pack(&vdd));
+ return exceptions;
+}
+
+static u32 vfp_single_fuito(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vs;
+
+ vs.sign = 0;
+ vs.exponent = 127 + 31 - 1;
+ vs.significand = (u32)m;
+
+ return vfp_single_normaliseround(sd, &vs, fpscr, 0, "fuito");
+}
+
+static u32 vfp_single_fsito(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vs;
+
+ vs.sign = (m & 0x80000000) >> 16;
+ vs.exponent = 127 + 31 - 1;
+ vs.significand = vs.sign ? -m : m;
+
+ return vfp_single_normaliseround(sd, &vs, fpscr, 0, "fsito");
+}
+
+static u32 vfp_single_ftoui(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+ int tm;
+
+ vfp_single_unpack(&vsm, m);
+ vfp_single_dump("VSM", &vsm);
+
+ /*
+ * Do we have a denormalised number?
+ */
+ tm = vfp_single_type(&vsm);
+ if (tm & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (tm & VFP_NAN)
+ vsm.sign = 0;
+
+ if (vsm.exponent >= 127 + 32) {
+ d = vsm.sign ? 0 : 0xffffffff;
+ exceptions = FPSCR_IOC;
+ } else if (vsm.exponent >= 127 - 1) {
+ int shift = 127 + 31 - vsm.exponent;
+ u32 rem, incr = 0;
+
+ /*
+ * 2^0 <= m < 2^32-2^8
+ */
+ d = (vsm.significand << 1) >> shift;
+ rem = vsm.significand << (33 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x80000000;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vsm.sign != 0)) {
+ incr = ~0;
+ }
+
+ if ((rem + incr) < rem) {
+ if (d < 0xffffffff)
+ d += 1;
+ else
+ exceptions |= FPSCR_IOC;
+ }
+
+ if (d && vsm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+ } else {
+ d = 0;
+ if (vsm.exponent | vsm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vsm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vsm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ }
+ }
+ }
+
+ pr_debug("VFP: ftoui: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, d);
+
+ return exceptions;
+}
+
+static u32 vfp_single_ftouiz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_single_ftoui(sd, unused, m, FPSCR_ROUND_TOZERO);
+}
+
+static u32 vfp_single_ftosi(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+
+ vfp_single_unpack(&vsm, m);
+ vfp_single_dump("VSM", &vsm);
+
+ /*
+ * Do we have a denormalised number?
+ */
+ if (vfp_single_type(&vsm) & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (vsm.exponent >= 127 + 32) {
+ /*
+ * m >= 2^31-2^7: invalid
+ */
+ d = 0x7fffffff;
+ if (vsm.sign)
+ d = ~d;
+ exceptions |= FPSCR_IOC;
+ } else if (vsm.exponent >= 127 - 1) {
+ int shift = 127 + 31 - vsm.exponent;
+ u32 rem, incr = 0;
+
+ /* 2^0 <= m <= 2^31-2^7 */
+ d = (vsm.significand << 1) >> shift;
+ rem = vsm.significand << (33 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x80000000;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vsm.sign != 0)) {
+ incr = ~0;
+ }
+
+ if ((rem + incr) < rem && d < 0xffffffff)
+ d += 1;
+ if (d > 0x7fffffff + (vsm.sign != 0)) {
+ d = 0x7fffffff + (vsm.sign != 0);
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+
+ if (vsm.sign)
+ d = -d;
+ } else {
+ d = 0;
+ if (vsm.exponent | vsm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vsm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vsm.sign)
+ d = -1;
+ }
+ }
+
+ pr_debug("VFP: ftosi: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, (s32)d);
+
+ return exceptions;
+}
+
+static u32 vfp_single_ftosiz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_single_ftosi(sd, unused, m, FPSCR_ROUND_TOZERO);
+}
+
+static u32 (* const fop_extfns[32])(int sd, int unused, s32 m, u32 fpscr) = {
+ [FEXT_TO_IDX(FEXT_FCPY)] = vfp_single_fcpy,
+ [FEXT_TO_IDX(FEXT_FABS)] = vfp_single_fabs,
+ [FEXT_TO_IDX(FEXT_FNEG)] = vfp_single_fneg,
+ [FEXT_TO_IDX(FEXT_FSQRT)] = vfp_single_fsqrt,
+ [FEXT_TO_IDX(FEXT_FCMP)] = vfp_single_fcmp,
+ [FEXT_TO_IDX(FEXT_FCMPE)] = vfp_single_fcmpe,
+ [FEXT_TO_IDX(FEXT_FCMPZ)] = vfp_single_fcmpz,
+ [FEXT_TO_IDX(FEXT_FCMPEZ)] = vfp_single_fcmpez,
+ [FEXT_TO_IDX(FEXT_FCVT)] = vfp_single_fcvtd,
+ [FEXT_TO_IDX(FEXT_FUITO)] = vfp_single_fuito,
+ [FEXT_TO_IDX(FEXT_FSITO)] = vfp_single_fsito,
+ [FEXT_TO_IDX(FEXT_FTOUI)] = vfp_single_ftoui,
+ [FEXT_TO_IDX(FEXT_FTOUIZ)] = vfp_single_ftouiz,
+ [FEXT_TO_IDX(FEXT_FTOSI)] = vfp_single_ftosi,
+ [FEXT_TO_IDX(FEXT_FTOSIZ)] = vfp_single_ftosiz,
+};
+
+
+static u32
+vfp_single_fadd_nonnumber(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ struct vfp_single *vsp;
+ u32 exceptions = 0;
+ int tn, tm;
+
+ tn = vfp_single_type(vsn);
+ tm = vfp_single_type(vsm);
+
+ if (tn & tm & VFP_INFINITY) {
+ /*
+ * Two infinities. Are they different signs?
+ */
+ if (vsn->sign ^ vsm->sign) {
+ /*
+ * different signs -> invalid
+ */
+ exceptions = FPSCR_IOC;
+ vsp = &vfp_single_default_qnan;
+ } else {
+ /*
+ * same signs -> valid
+ */
+ vsp = vsn;
+ }
+ } else if (tn & VFP_INFINITY && tm & VFP_NUMBER) {
+ /*
+ * One infinity and one number -> infinity
+ */
+ vsp = vsn;
+ } else {
+ /*
+ * 'n' is a NaN of some type
+ */
+ return vfp_propagate_nan(vsd, vsn, vsm, fpscr);
+ }
+ *vsd = *vsp;
+ return exceptions;
+}
+
+static u32
+vfp_single_add(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ u32 exp_diff, m_sig;
+
+ if (vsn->significand & 0x80000000 ||
+ vsm->significand & 0x80000000) {
+ pr_info("VFP: bad FP values in %s\n", __func__);
+ vfp_single_dump("VSN", vsn);
+ vfp_single_dump("VSM", vsm);
+ }
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vsn->exponent < vsm->exponent) {
+ struct vfp_single *t = vsn;
+ vsn = vsm;
+ vsm = t;
+ }
+
+ /*
+ * Is 'n' an infinity or a NaN? Note that 'm' may be a number,
+ * infinity or a NaN here.
+ */
+ if (vsn->exponent == 255)
+ return vfp_single_fadd_nonnumber(vsd, vsn, vsm, fpscr);
+
+ /*
+ * We have two proper numbers, where 'vsn' is the larger magnitude.
+ *
+ * Copy 'n' to 'd' before doing the arithmetic.
+ */
+ *vsd = *vsn;
+
+ /*
+ * Align both numbers.
+ */
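+ /*
+ * vfp_shiftright32jamming ORs any bits shifted out into the result's
+ * LSB ("jamming"), preserving the sticky information needed for
+ * correct rounding: e.g. 0x2A >> 2 jams to 0xB rather than 0xA.
+ */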
+ exp_diff = vsn->exponent - vsm->exponent;
+ m_sig = vfp_shiftright32jamming(vsm->significand, exp_diff);
+
+ /*
+ * If the signs are different, we are really subtracting.
+ */
+ if (vsn->sign ^ vsm->sign) {
+ m_sig = vsn->significand - m_sig;
+ if ((s32)m_sig < 0) {
+ vsd->sign = vfp_sign_negate(vsd->sign);
+ m_sig = -m_sig;
+ } else if (m_sig == 0) {
+ vsd->sign = (fpscr & FPSCR_RMODE_MASK) ==
+ FPSCR_ROUND_MINUSINF ? 0x8000 : 0;
+ }
+ } else {
+ m_sig = vsn->significand + m_sig;
+ }
+ vsd->significand = m_sig;
+
+ return 0;
+}
+
+static u32
+vfp_single_multiply(struct vfp_single *vsd, struct vfp_single *vsn, struct vfp_single *vsm, u32 fpscr)
+{
+ vfp_single_dump("VSN", vsn);
+ vfp_single_dump("VSM", vsm);
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vsn->exponent < vsm->exponent) {
+ struct vfp_single *t = vsn;
+ vsn = vsm;
+ vsm = t;
+ pr_debug("VFP: swapping M <-> N\n");
+ }
+
+ vsd->sign = vsn->sign ^ vsm->sign;
+
+ /*
+ * If 'n' is an infinity or NaN, handle it. 'm' may be anything.
+ */
+ if (vsn->exponent == 255) {
+ if (vsn->significand || (vsm->exponent == 255 && vsm->significand))
+ return vfp_propagate_nan(vsd, vsn, vsm, fpscr);
+ if ((vsm->exponent | vsm->significand) == 0) {
+ *vsd = vfp_single_default_qnan;
+ return FPSCR_IOC;
+ }
+ vsd->exponent = vsn->exponent;
+ vsd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * If 'm' is zero, the result is always zero. In this case,
+ * 'n' may be zero or a number, but it doesn't matter which.
+ */
+ if ((vsm->exponent | vsm->significand) == 0) {
+ vsd->exponent = 0;
+ vsd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * We add 2 to the destination exponent for the same reason as
+ * the addition case - though this time we have +1 from each
+ * input operand.
+ */
+ vsd->exponent = vsn->exponent + vsm->exponent - 127 + 2;
+ vsd->significand = vfp_hi64to32jamming((u64)vsn->significand * vsm->significand);
+
+ vfp_single_dump("VSD", vsd);
+ return 0;
+}
+
+#define NEG_MULTIPLY (1 << 0)
+#define NEG_SUBTRACT (1 << 1)
+
+static u32
+vfp_single_multiply_accumulate(int sd, int sn, s32 m, u32 fpscr, u32 negate, char *func)
+{
+ struct vfp_single vsd, vsp, vsn, vsm;
+ u32 exceptions;
+ s32 v;
+
+ v = vfp_get_float(sn);
+ pr_debug("VFP: s%u = %08x\n", sn, v);
+ vfp_single_unpack(&vsn, v);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsp, &vsn, &vsm, fpscr);
+ if (negate & NEG_MULTIPLY)
+ vsp.sign = vfp_sign_negate(vsp.sign);
+
+ v = vfp_get_float(sd);
+ pr_debug("VFP: s%u = %08x\n", sd, v);
+ vfp_single_unpack(&vsn, v);
+ if (negate & NEG_SUBTRACT)
+ vsn.sign = vfp_sign_negate(vsn.sign);
+
+ exceptions |= vfp_single_add(&vsd, &vsn, &vsp, fpscr);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, func);
+}
+
+/*
+ * Standard operations
+ */
+
+/*
+ * sd = sd + (sn * sm)
+ */
+static u32 vfp_single_fmac(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, 0, "fmac");
+}
+
+/*
+ * sd = sd - (sn * sm)
+ */
+static u32 vfp_single_fnmac(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_MULTIPLY, "fnmac");
+}
+
+/*
+ * sd = -sd + (sn * sm)
+ */
+static u32 vfp_single_fmsc(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_SUBTRACT, "fmsc");
+}
+
+/*
+ * sd = -sd - (sn * sm)
+ */
+static u32 vfp_single_fnmsc(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_SUBTRACT | NEG_MULTIPLY, "fnmsc");
+}
+
+/*
+ * sd = sn * sm
+ */
+static u32 vfp_single_fmul(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsd, &vsn, &vsm, fpscr);
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fmul");
+}
+
+/*
+ * sd = -(sn * sm)
+ */
+static u32 vfp_single_fnmul(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsd, &vsn, &vsm, fpscr);
+ vsd.sign = vfp_sign_negate(vsd.sign);
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fnmul");
+}
+
+/*
+ * sd = sn + sm
+ */
+static u32 vfp_single_fadd(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ /*
+ * Unpack and normalise denormals.
+ */
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_add(&vsd, &vsn, &vsm, fpscr);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fadd");
+}
+
+/*
+ * sd = sn - sm
+ */
+static u32 vfp_single_fsub(int sd, int sn, s32 m, u32 fpscr)
+{
+ /*
+ * Subtraction is addition with one sign inverted.
+ */
+ return vfp_single_fadd(sd, sn, vfp_single_packed_negate(m), fpscr);
+}
+
+/*
+ * sd = sn / sm
+ */
+static u32 vfp_single_fdiv(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions = 0;
+ s32 n = vfp_get_float(sn);
+ int tm, tn;
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ vfp_single_unpack(&vsm, m);
+
+ vsd.sign = vsn.sign ^ vsm.sign;
+
+ tn = vfp_single_type(&vsn);
+ tm = vfp_single_type(&vsm);
+
+ /*
+ * Is n a NAN?
+ */
+ if (tn & VFP_NAN)
+ goto vsn_nan;
+
+ /*
+ * Is m a NAN?
+ */
+ if (tm & VFP_NAN)
+ goto vsm_nan;
+
+ /*
+ * If n and m are infinity, the result is invalid
+ * If n and m are zero, the result is invalid
+ */
+ if (tm & tn & (VFP_INFINITY|VFP_ZERO))
+ goto invalid;
+
+ /*
+ * If n is infinity, the result is infinity
+ */
+ if (tn & VFP_INFINITY)
+ goto infinity;
+
+ /*
+ * If m is zero, raise div0 exception
+ */
+ if (tm & VFP_ZERO)
+ goto divzero;
+
+ /*
+ * If m is infinity, or n is zero, the result is zero
+ */
+ if (tm & VFP_INFINITY || tn & VFP_ZERO)
+ goto zero;
+
+ if (tn & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsn);
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ /*
+ * Ok, we have two numbers, we can perform division.
+ */
+ vsd.exponent = vsn.exponent - vsm.exponent + 127 - 1;
+ vsm.significand <<= 1;
+ if (vsm.significand <= (2 * vsn.significand)) {
+ vsn.significand >>= 1;
+ vsd.exponent++;
+ }
+ vsd.significand = ((u64)vsn.significand << 32) / vsm.significand;
+ if ((vsd.significand & 0x3f) == 0)
+ vsd.significand |= ((u64)vsm.significand * vsd.significand != (u64)vsn.significand << 32);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, 0, "fdiv");
+
+ vsn_nan:
+ exceptions = vfp_propagate_nan(&vsd, &vsn, &vsm, fpscr);
+ pack:
+ vfp_put_float(sd, vfp_single_pack(&vsd));
+ return exceptions;
+
+ vsm_nan:
+ exceptions = vfp_propagate_nan(&vsd, &vsm, &vsn, fpscr);
+ goto pack;
+
+ zero:
+ vsd.exponent = 0;
+ vsd.significand = 0;
+ goto pack;
+
+ divzero:
+ exceptions = FPSCR_DZC;
+ infinity:
+ vsd.exponent = 255;
+ vsd.significand = 0;
+ goto pack;
+
+ invalid:
+ vfp_put_float(sd, vfp_single_pack(&vfp_single_default_qnan));
+ return FPSCR_IOC;
+}
+
+static u32 (* const fop_fns[16])(int sd, int sn, s32 m, u32 fpscr) = {
+ [FOP_TO_IDX(FOP_FMAC)] = vfp_single_fmac,
+ [FOP_TO_IDX(FOP_FNMAC)] = vfp_single_fnmac,
+ [FOP_TO_IDX(FOP_FMSC)] = vfp_single_fmsc,
+ [FOP_TO_IDX(FOP_FNMSC)] = vfp_single_fnmsc,
+ [FOP_TO_IDX(FOP_FMUL)] = vfp_single_fmul,
+ [FOP_TO_IDX(FOP_FNMUL)] = vfp_single_fnmul,
+ [FOP_TO_IDX(FOP_FADD)] = vfp_single_fadd,
+ [FOP_TO_IDX(FOP_FSUB)] = vfp_single_fsub,
+ [FOP_TO_IDX(FOP_FDIV)] = vfp_single_fdiv,
+};
+
+#define FREG_BANK(x) ((x) & 0x18)
+#define FREG_IDX(x) ((x) & 7)
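+/*
+ * The 32 single registers form four banks of eight: FREG_BANK gives
+ * the bank base (0, 8, 16 or 24) and FREG_IDX the offset within it,
+ * so the vector iteration below wraps around within a bank.
+ */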
+
+u32 vfp_single_cpdo(u32 inst, u32 fpscr)
+{
+ u32 op = inst & FOP_MASK;
+ u32 exceptions = 0;
+ unsigned int sd = vfp_get_sd(inst);
+ unsigned int sn = vfp_get_sn(inst);
+ unsigned int sm = vfp_get_sm(inst);
+ unsigned int vecitr, veclen, vecstride;
+ u32 (*fop)(int, int, s32, u32);
+
+ veclen = fpscr & FPSCR_LENGTH_MASK;
+ vecstride = 1 + ((fpscr & FPSCR_STRIDE_MASK) == FPSCR_STRIDE_MASK);
+
+ /*
+ * If destination bank is zero, vector length is always '1'.
+ * ARM DDI0100F C5.1.3, C5.3.2.
+ */
+ if (FREG_BANK(sd) == 0)
+ veclen = 0;
+
+ pr_debug("VFP: vecstride=%u veclen=%u\n", vecstride,
+ (veclen >> FPSCR_LENGTH_BIT) + 1);
+
+ fop = (op == FOP_EXT) ? fop_extfns[sn] : fop_fns[FOP_TO_IDX(op)];
+ if (!fop)
+ goto invalid;
+
+ for (vecitr = 0; vecitr <= veclen; vecitr += 1 << FPSCR_LENGTH_BIT) {
+ s32 m = vfp_get_float(sm);
+ u32 except;
+
+ if (op == FOP_EXT)
+ pr_debug("VFP: itr%d (s%u) = op[%u] (s%u=%08x)\n",
+ vecitr >> FPSCR_LENGTH_BIT, sd, sn, sm, m);
+ else
+ pr_debug("VFP: itr%d (s%u) = (s%u) op[%u] (s%u=%08x)\n",
+ vecitr >> FPSCR_LENGTH_BIT, sd, sn,
+ FOP_TO_IDX(op), sm, m);
+
+ except = fop(sd, sn, m, fpscr);
+ pr_debug("VFP: itr%d: exceptions=%08x\n",
+ vecitr >> FPSCR_LENGTH_BIT, except);
+
+ exceptions |= except;
+
+ /*
+ * This ensures that comparisons only operate on scalars;
+ * comparisons always return with one FPSCR status bit set.
+ */
+ if (except & (FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V))
+ break;
+
+ /*
+ * CHECK: It appears to be undefined whether we stop when
+ * we encounter an exception. We continue.
+ */
+
+ sd = FREG_BANK(sd) + ((FREG_IDX(sd) + vecstride) & 7);
+ sn = FREG_BANK(sn) + ((FREG_IDX(sn) + vecstride) & 7);
+ if (FREG_BANK(sm) != 0)
+ sm = FREG_BANK(sm) + ((FREG_IDX(sm) + vecstride) & 7);
+ }
+ return exceptions;
+
+ invalid:
+ return (u32)-1;
+}
--- /dev/null
+/* $Id: ide.c,v 1.1 2004/01/22 08:22:58 starvik Exp $
+ *
+ * Etrax specific IDE functions, like init and PIO-mode setting etc.
+ * Almost the entire generic ide.c is reused for the rest of the Etrax ATA driver.
+ * Copyright (c) 2000-2004 Axis Communications AB
+ *
+ * Authors: Bjorn Wesen (initial version)
+ * Mikael Starvik (pio setup stuff, Linux 2.6 port)
+ */
+
+/* Regarding DMA:
+ *
+ * There are two forms of DMA - "DMA handshaking" between the interface and the drive,
+ * and DMA between the memory and the interface. We can ALWAYS use the latter, since it
+ * is built into the Etrax. However, only some drives support DMA-mode handshaking on the
+ * ATA bus. The normal PC driver and Triton interface disable memory-interface DMA when
+ * the device can't do DMA handshaking, for some stupid reason. We don't need to do that.
+ */
+
+#undef REALLY_SLOW_IO /* most systems can safely undef this */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/timer.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/ide.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/arch/svinto.h>
+#include <asm/dma.h>
+
+/* number of Etrax DMA descriptors */
+#define MAX_DMA_DESCRS 64
+
+/* number of times to poll the busy flags when reading/writing IDE registers;
+ * this can't be too high because a hung hard disk might cause the watchdog
+ * to trigger (sometimes INB and OUTB are called with IRQs disabled)
+ */
+
+#define IDE_REGISTER_TIMEOUT 300
+
+#ifdef CONFIG_ETRAX_IDE_CSE1_16_RESET
+/* address where the memory-mapped IDE reset bit lives, if used */
+static volatile unsigned long *reset_addr;
+#endif
+
+static int e100_read_command = 0;
+
+#define LOWDB(x)
+#define D(x)
+
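+/*
+ * All ATA register accesses go through the memory-mapped
+ * R_ATA_CTRL_DATA word, which carries the register address and the
+ * data together; R_ATA_STATUS_DATA supplies the busy/tr_rdy/dav
+ * handshake flags polled below.
+ */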
+void
+etrax100_ide_outw(unsigned short data, ide_ioreg_t reg) {
+ int timeleft;
+ LOWDB(printk("ow: data 0x%x, reg 0x%x\n", data, reg));
+
+ /* Note that timeouts are not really handled: we stop waiting, but we
+ * don't notify anybody.
+ */
+
+ timeleft = IDE_REGISTER_TIMEOUT;
+ /* wait for busy flag */
+ while(timeleft && (*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy)))
+ timeleft--;
+
+ /*
+ * Fall through at a timeout, so the ongoing command will be
+ * aborted by the write below, which is expected to be a dummy
+ * command to the command register. This happens when a faulty
+ * drive times out on a command. See comment on timeout in
+ * INB.
+ */
+ if(!timeleft)
+ printk("ATA timeout reg 0x%lx := 0x%x\n", reg, data);
+
+ *R_ATA_CTRL_DATA = reg | data; /* write data to the drive's register */
+
+ timeleft = IDE_REGISTER_TIMEOUT;
+ /* wait for transmitter ready */
+ while(timeleft && !(*R_ATA_STATUS_DATA &
+ IO_MASK(R_ATA_STATUS_DATA, tr_rdy)))
+ timeleft--;
+}
+
+void
+etrax100_ide_outb(unsigned char data, ide_ioreg_t reg)
+{
+ etrax100_ide_outw(data, reg);
+}
+
+void
+etrax100_ide_outbsync(ide_drive_t *drive, u8 addr, unsigned long port)
+{
+ etrax100_ide_outw(addr, port);
+}
+
+unsigned short
+etrax100_ide_inw(ide_ioreg_t reg) {
+ int status;
+ int timeleft;
+
+ timeleft = IDE_REGISTER_TIMEOUT;
+ /* wait for busy flag */
+ while(timeleft && (*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy)))
+ timeleft--;
+
+ if(!timeleft) {
+ /*
+ * If we're asked to read the status register, like for
+ * example when a command does not complete for an
+ * extended time, but the ATA interface is stuck in a
+ * busy state at the *ETRAX* ATA interface level (as has
+ * happened repeatedly with at least one bad disk), then
+ * the best thing to do is to pretend that we read
+ * "busy" in the status register, so the IDE driver will
+ * time-out, abort the ongoing command and perform a
+ * reset sequence. Note that the subsequent OUT_BYTE
+ * call will also timeout on busy, but as long as the
+ * write is still performed, everything will be fine.
+ */
+ if ((reg & IO_MASK (R_ATA_CTRL_DATA, addr))
+ == IO_FIELD (R_ATA_CTRL_DATA, addr, IDE_STATUS_OFFSET))
+ return BUSY_STAT;
+ else
+ /* For other rare cases we assume 0 is good enough. */
+ return 0;
+ }
+
+ *R_ATA_CTRL_DATA = reg | IO_STATE(R_ATA_CTRL_DATA, rw, read); /* read data */
+
+ timeleft = IDE_REGISTER_TIMEOUT;
+ /* wait for available */
+ while(timeleft && !((status = *R_ATA_STATUS_DATA) &
+ IO_MASK(R_ATA_STATUS_DATA, dav)))
+ timeleft--;
+
+ if(!timeleft)
+ return 0;
+
+ LOWDB(printk("inb: 0x%x from reg 0x%x\n", status & 0xff, reg));
+
+ return (unsigned short)status;
+}
+
+unsigned char
+etrax100_ide_inb(ide_ioreg_t reg)
+{
+ return (unsigned char)etrax100_ide_inw(reg);
+}
+
+/* PIO timing (in R_ATA_CONFIG)
+ *
+ *                    _____________________________
+ * ADDRESS : ________/
+ *
+ *                        _______________
+ * DIOR    : ____________/               \__________
+ *
+ *                           _______________
+ * DATA    : XXXXXXXXXXXXXXXX_______________XXXXXXXX
+ *
+ *
+ * DIOR is unbuffered while address and data are buffered.
+ * This creates two problems:
+ * 1. The DIOR pulse is too early (because it is unbuffered)
+ * 2. The rise time of DIOR is long
+ *
+ * There are at least three plausible solutions:
+ * 1. Use a pad capable of larger currents in Etrax
+ * 2. Use an external buffer
+ * 3. Make the strobe pulse longer
+ *
+ * Some of the strobe timings below are modified to compensate
+ * for this. This implies a slight performance decrease.
+ *
+ * THIS SHOULD NEVER BE CHANGED!
+ *
+ * TODO: Is this still true for the latest LX boards?
+ */
+
+#define ATA_DMA2_STROBE 4
+#define ATA_DMA2_HOLD 0
+#define ATA_DMA1_STROBE 4
+#define ATA_DMA1_HOLD 1
+#define ATA_DMA0_STROBE 12
+#define ATA_DMA0_HOLD 9
+#define ATA_PIO4_SETUP 1
+#define ATA_PIO4_STROBE 5
+#define ATA_PIO4_HOLD 0
+#define ATA_PIO3_SETUP 1
+#define ATA_PIO3_STROBE 5
+#define ATA_PIO3_HOLD 1
+#define ATA_PIO2_SETUP 1
+#define ATA_PIO2_STROBE 6
+#define ATA_PIO2_HOLD 2
+#define ATA_PIO1_SETUP 2
+#define ATA_PIO1_STROBE 11
+#define ATA_PIO1_HOLD 4
+#define ATA_PIO0_SETUP 4
+#define ATA_PIO0_STROBE 19
+#define ATA_PIO0_HOLD 4
+
+static int e100_dma_check (ide_drive_t *drive);
+static int e100_dma_begin (ide_drive_t *drive);
+static int e100_dma_end (ide_drive_t *drive);
+static int e100_dma_read (ide_drive_t *drive);
+static int e100_dma_write (ide_drive_t *drive);
+static void e100_ide_input_data (ide_drive_t *drive, void *, unsigned int);
+static void e100_ide_output_data (ide_drive_t *drive, void *, unsigned int);
+static void e100_atapi_input_bytes(ide_drive_t *drive, void *, unsigned int);
+static void e100_atapi_output_bytes(ide_drive_t *drive, void *, unsigned int);
+static int e100_dma_off (ide_drive_t *drive);
+static int e100_dma_verbose (ide_drive_t *drive);
+
+
+/*
+ * good_dma_drives() lists the model names (from "hdparm -i")
+ * of drives which do not support mword2 DMA but which are
+ * known to work fine with this interface under Linux.
+ */
+
+const char *good_dma_drives[] = {"Micropolis 2112A",
+ "CONNER CTMA 4000",
+ "CONNER CTT8000-A",
+ NULL};
+
+static void tune_e100_ide(ide_drive_t *drive, byte pio)
+{
+ pio = 4;
+ /* pio = ide_get_best_pio_mode(drive, pio, 4, NULL); */
+
+ /* set pio mode! */
+
+ switch(pio) {
+ case 0:
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO0_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO0_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO0_HOLD ) );
+ break;
+ case 1:
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO1_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO1_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO1_HOLD ) );
+ break;
+ case 2:
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO2_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO2_HOLD ) );
+ break;
+ case 3:
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO3_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO3_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO3_HOLD ) );
+ break;
+ case 4:
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO4_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO4_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO4_HOLD ) );
+ break;
+ }
+}
+
+void __init
+init_e100_ide (void)
+{
+ volatile unsigned int dummy;
+ int h;
+
+ printk("ide: ETRAX 100LX built-in ATA DMA controller\n");
+
+ /* first fill in some stuff in the ide_hwifs fields */
+
+ for(h = 0; h < MAX_HWIFS; h++) {
+ ide_hwif_t *hwif = &ide_hwifs[h];
+ hwif->mmio = 2;
+ hwif->chipset = ide_etrax100;
+ hwif->tuneproc = &tune_e100_ide;
+ hwif->ata_input_data = &e100_ide_input_data;
+ hwif->ata_output_data = &e100_ide_output_data;
+ hwif->atapi_input_bytes = &e100_atapi_input_bytes;
+ hwif->atapi_output_bytes = &e100_atapi_output_bytes;
+ hwif->ide_dma_check = &e100_dma_check;
+ hwif->ide_dma_end = &e100_dma_end;
+ hwif->ide_dma_write = &e100_dma_write;
+ hwif->ide_dma_read = &e100_dma_read;
+ hwif->ide_dma_begin = &e100_dma_begin;
+ hwif->OUTB = &etrax100_ide_outb;
+ hwif->OUTW = &etrax100_ide_outw;
+ hwif->OUTBSYNC = &etrax100_ide_outbsync;
+ hwif->INB = &etrax100_ide_inb;
+ hwif->INW = &etrax100_ide_inw;
+ hwif->ide_dma_off_quietly = &e100_dma_off;
+ hwif->ide_dma_verbose = &e100_dma_verbose;
+ hwif->sg_table =
+ kmalloc(sizeof(struct scatterlist) * PRD_ENTRIES, GFP_KERNEL);
+ }
+
+ /* actually reset and configure the etrax100 ide/ata interface */
+
+ *R_ATA_CTRL_DATA = 0;
+ *R_ATA_TRANSFER_CNT = 0;
+ *R_ATA_CONFIG = 0;
+
+ genconfig_shadow = (genconfig_shadow &
+ ~IO_MASK(R_GEN_CONFIG, dma2) &
+ ~IO_MASK(R_GEN_CONFIG, dma3) &
+ ~IO_MASK(R_GEN_CONFIG, ata)) |
+ ( IO_STATE( R_GEN_CONFIG, dma3, ata ) |
+ IO_STATE( R_GEN_CONFIG, dma2, ata ) |
+ IO_STATE( R_GEN_CONFIG, ata, select ) );
+
+ *R_GEN_CONFIG = genconfig_shadow;
+
+ /* pull the chosen /reset-line low */
+
+#ifdef CONFIG_ETRAX_IDE_G27_RESET
+ REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 27, 0);
+#endif
+#ifdef CONFIG_ETRAX_IDE_CSE1_16_RESET
+ REG_SHADOW_SET(port_cse1_addr, port_cse1_shadow, 16, 0);
+#endif
+#ifdef CONFIG_ETRAX_IDE_CSP0_8_RESET
+ REG_SHADOW_SET(port_csp0_addr, port_csp0_shadow, 8, 0);
+#endif
+#ifdef CONFIG_ETRAX_IDE_PB7_RESET
+ port_pb_dir_shadow = port_pb_dir_shadow |
+ IO_STATE(R_PORT_PB_DIR, dir7, output);
+ *R_PORT_PB_DIR = port_pb_dir_shadow;
+ REG_SHADOW_SET(R_PORT_PB_DATA, port_pb_data_shadow, 7, 1);
+#endif
+
+ /* wait some */
+
+ udelay(25);
+
+ /* de-assert bus-reset */
+
+#ifdef CONFIG_ETRAX_IDE_CSE1_16_RESET
+ REG_SHADOW_SET(port_cse1_addr, port_cse1_shadow, 16, 1);
+#endif
+#ifdef CONFIG_ETRAX_IDE_CSP0_8_RESET
+ REG_SHADOW_SET(port_csp0_addr, port_csp0_shadow, 8, 1);
+#endif
+#ifdef CONFIG_ETRAX_IDE_G27_RESET
+ REG_SHADOW_SET(R_PORT_G_DATA, port_g_data_shadow, 27, 1);
+#endif
+
+ /* make a dummy read to set the ata controller in a proper state */
+ dummy = *R_ATA_STATUS_DATA;
+
+ *R_ATA_CONFIG = ( IO_FIELD( R_ATA_CONFIG, enable, 1 ) |
+ IO_FIELD( R_ATA_CONFIG, dma_strobe, ATA_DMA2_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, dma_hold, ATA_DMA2_HOLD ) |
+ IO_FIELD( R_ATA_CONFIG, pio_setup, ATA_PIO4_SETUP ) |
+ IO_FIELD( R_ATA_CONFIG, pio_strobe, ATA_PIO4_STROBE ) |
+ IO_FIELD( R_ATA_CONFIG, pio_hold, ATA_PIO4_HOLD ) );
+
+ *R_ATA_CTRL_DATA = ( IO_STATE( R_ATA_CTRL_DATA, rw, read) |
+ IO_FIELD( R_ATA_CTRL_DATA, addr, 1 ) );
+
+ while(*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy)); /* wait for busy flag*/
+
+ *R_IRQ_MASK0_SET = ( IO_STATE( R_IRQ_MASK0_SET, ata_irq0, set ) |
+ IO_STATE( R_IRQ_MASK0_SET, ata_irq1, set ) |
+ IO_STATE( R_IRQ_MASK0_SET, ata_irq2, set ) |
+ IO_STATE( R_IRQ_MASK0_SET, ata_irq3, set ) );
+
+ printk("ide: waiting %d seconds for drives to regain consciousness\n",
+ CONFIG_ETRAX_IDE_DELAY);
+
+ h = jiffies + (CONFIG_ETRAX_IDE_DELAY * HZ);
+ while(time_before(jiffies, h)) /* nothing */ ;
+
+ /* reset the dma channels we will use */
+
+ RESET_DMA(ATA_TX_DMA_NBR);
+ RESET_DMA(ATA_RX_DMA_NBR);
+ WAIT_DMA(ATA_TX_DMA_NBR);
+ WAIT_DMA(ATA_RX_DMA_NBR);
+
+}
+
+static int e100_dma_off (ide_drive_t *drive)
+{
+ return 0;
+}
+
+static int e100_dma_verbose (ide_drive_t *drive)
+{
+ printk(", DMA(mode 2)");
+ return 0;
+}
+
+static etrax_dma_descr mydescr;
+
+/* direction flag set by e100_dma_read/write and consumed by
+ * e100_dma_begin() when an ATAPI device starts the DMA itself */
+static int e100_read_command;
+
+/*
+ * The following routines are mainly used by the ATAPI drivers.
+ *
+ * These routines will round up any request for an odd number of bytes,
+ * so if an odd bytecount is specified, be sure that there's at least one
+ * extra byte allocated for the buffer.
+ */
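+/*
+ * Example: a 17-byte ATAPI transfer is rounded up and moved as 18
+ * bytes, so the caller's buffer must provide at least one spare byte
+ * beyond an odd bytecount.
+ */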
+static void
+e100_atapi_input_bytes (ide_drive_t *drive, void *buffer, unsigned int bytecount)
+{
+ ide_ioreg_t data_reg = IDE_DATA_REG;
+
+ D(printk("atapi_input_bytes, dreg 0x%x, buffer 0x%x, count %d\n",
+ data_reg, buffer, bytecount));
+
+ if(bytecount & 1) {
+ printk("warning, odd bytecount in cdrom_in_bytes = %d.\n", bytecount);
+ bytecount++; /* round up to an even count */
+ }
+
+ /* make sure the DMA channel is available */
+ RESET_DMA(ATA_RX_DMA_NBR);
+ WAIT_DMA(ATA_RX_DMA_NBR);
+
+ /* setup DMA descriptor */
+
+ mydescr.sw_len = bytecount;
+ mydescr.ctrl = d_eol;
+ mydescr.buf = virt_to_phys(buffer);
+
+ /* start the dma channel */
+
+ *R_DMA_CH3_FIRST = virt_to_phys(&mydescr);
+ *R_DMA_CH3_CMD = IO_STATE(R_DMA_CH3_CMD, cmd, start);
+
+ /* initiate a multi word dma read using PIO handshaking */
+
+ *R_ATA_TRANSFER_CNT = IO_FIELD(R_ATA_TRANSFER_CNT, count, bytecount >> 1);
+
+ *R_ATA_CTRL_DATA = data_reg |
+ IO_STATE(R_ATA_CTRL_DATA, rw, read) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, pio) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ /* wait for completion */
+
+ LED_DISK_READ(1);
+ WAIT_DMA(ATA_RX_DMA_NBR);
+ LED_DISK_READ(0);
+
+#if 0
+ /* old polled transfer code
+ * this should be moved into a new function that can do polled
+ * transfers if DMA is not available
+ */
+
+ /* initiate a multi word read */
+
+ *R_ATA_TRANSFER_CNT = wcount << 1;
+
+ *R_ATA_CTRL_DATA = data_reg |
+ IO_STATE(R_ATA_CTRL_DATA, rw, read) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, register) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, pio) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ /* svinto has a latency until the busy bit actually is set */
+
+ nop(); nop();
+ nop(); nop();
+ nop(); nop();
+ nop(); nop();
+ nop(); nop();
+
+ /* unit should be busy during multi transfer */
+ while((status = *R_ATA_STATUS_DATA) & IO_MASK(R_ATA_STATUS_DATA, busy)) {
+ while(!(status & IO_MASK(R_ATA_STATUS_DATA, dav)))
+ status = *R_ATA_STATUS_DATA;
+ *ptr++ = (unsigned short)(status & 0xffff);
+ }
+#endif
+}
+
+static void
+e100_atapi_output_bytes (ide_drive_t *drive, void *buffer, unsigned int bytecount)
+{
+ ide_ioreg_t data_reg = IDE_DATA_REG;
+
+ D(printk("atapi_output_bytes, dreg 0x%x, buffer 0x%x, count %d\n",
+ data_reg, buffer, bytecount));
+
+ if(bytecount & 1) {
+ printk("odd bytecount %d in atapi_out_bytes!\n", bytecount);
+ bytecount++;
+ }
+
+ /* make sure the DMA channel is available */
+ RESET_DMA(ATA_TX_DMA_NBR);
+ WAIT_DMA(ATA_TX_DMA_NBR);
+
+ /* setup DMA descriptor */
+
+ mydescr.sw_len = bytecount;
+ mydescr.ctrl = d_eol;
+ mydescr.buf = virt_to_phys(buffer);
+
+ /* start the dma channel */
+
+ *R_DMA_CH2_FIRST = virt_to_phys(&mydescr);
+ *R_DMA_CH2_CMD = IO_STATE(R_DMA_CH2_CMD, cmd, start);
+
+ /* initiate a multi word dma write using PIO handshaking */
+
+ *R_ATA_TRANSFER_CNT = IO_FIELD(R_ATA_TRANSFER_CNT, count, bytecount >> 1);
+
+ *R_ATA_CTRL_DATA = data_reg |
+ IO_STATE(R_ATA_CTRL_DATA, rw, write) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, pio) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ /* wait for completion */
+
+ LED_DISK_WRITE(1);
+ WAIT_DMA(ATA_TX_DMA_NBR);
+ LED_DISK_WRITE(0);
+
+#if 0
+ /* old polled write code - see comment in input_bytes */
+
+ /* wait for busy flag */
+ while(*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy));
+
+ /* initiate a multi word write */
+
+ *R_ATA_TRANSFER_CNT = bytecount >> 1;
+
+ ctrl = data_reg |
+ IO_STATE(R_ATA_CTRL_DATA, rw, write) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, register) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, pio) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ LED_DISK_WRITE(1);
+
+ /* Etrax will set busy = 1 until the multi pio transfer has finished
+ * and tr_rdy = 1 after each successful word transfer.
+ * When the last byte has been transferred Etrax will first set tr_tdy = 1
+ * and then busy = 0 (not in the same cycle). If we read busy before it
+ * has been set to 0 we will think that we should transfer more bytes
+ * and then tr_rdy would be 0 forever. This is solved by checking busy
+ * in the inner loop.
+ */
+
+ do {
+ *R_ATA_CTRL_DATA = ctrl | *ptr++;
+ while(!(*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, tr_rdy)) &&
+ (*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy)));
+ } while(*R_ATA_STATUS_DATA & IO_MASK(R_ATA_STATUS_DATA, busy));
+
+ LED_DISK_WRITE(0);
+#endif
+
+}
+
+/*
+ * This is used for most PIO data transfers *from* the IDE interface
+ */
+static void
+e100_ide_input_data (ide_drive_t *drive, void *buffer, unsigned int wcount)
+{
+ e100_atapi_input_bytes(drive, buffer, wcount << 2);
+}
+
+/*
+ * This is used for most PIO data transfers *to* the IDE interface
+ */
+static void
+e100_ide_output_data (ide_drive_t *drive, void *buffer, unsigned int wcount)
+{
+ e100_atapi_output_bytes(drive, buffer, wcount << 2);
+}
+
+/* only one ATA DMA transfer is ever in flight on the chip, so these can be kept statically */
+static etrax_dma_descr ata_descrs[MAX_DMA_DESCRS];
+static unsigned int ata_tot_size;
+
+/*
+ * e100_ide_build_dmatable() prepares a dma request.
+ * Returns 0 if all went okay, returns 1 otherwise.
+ */
+static int e100_ide_build_dmatable (ide_drive_t *drive)
+{
+ ide_hwif_t *hwif = HWIF(drive);
+ struct scatterlist* sg;
+ struct request *rq = HWGROUP(drive)->rq;
+ unsigned long size, addr;
+ unsigned int count = 0;
+ int i = 0;
+
+ sg = hwif->sg_table;
+
+ ata_tot_size = 0;
+
+ if (HWGROUP(drive)->rq->flags & REQ_DRIVE_TASKFILE) {
+ u8 *virt_addr = rq->buffer;
+ int sector_count = rq->nr_sectors;
+ memset(&sg[0], 0, sizeof(*sg));
+ sg[0].page = virt_to_page(virt_addr);
+ sg[0].offset = offset_in_page(virt_addr);
+ sg[0].length = sector_count * SECTOR_SIZE;
+ hwif->sg_nents = i = 1;
+ }
+ else
+ {
+ hwif->sg_nents = i = blk_rq_map_sg(drive->queue, rq, hwif->sg_table);
+ }
+
+
+ while(i) {
+ /*
+ * Determine addr and size of next buffer area. We assume that
+ * individual virtual buffers are always composed linearly in
+ * physical memory. For example, we assume that any 8kB buffer
+ * is always composed of two adjacent physical 4kB pages rather
+ * than two possibly non-adjacent physical 4kB pages.
+ */
+ /* group sequential buffers into one large buffer */
+ addr = page_to_phys(sg->page) + sg->offset;
+ size = sg_dma_len(sg);
+ while (sg++, --i) {
+ if ((addr + size) != page_to_phys(sg->page) + sg->offset)
+ break;
+ size += sg_dma_len(sg);
+ }
+
+ /* did we run out of descriptors? */
+
+ if(count >= MAX_DMA_DESCRS) {
+ printk("%s: too few DMA descriptors\n", drive->name);
+ return 1;
+ }
+
+ /* however, this case is more difficult - R_ATA_TRANSFER_CNT cannot be more
+ than 65536 words per transfer, so in that case we need to either
+ 1) use a DMA interrupt to re-trigger R_ATA_TRANSFER_CNT and continue with
+ the descriptors, or
+ 2) simply do the request here, and get dma_intr to only ide_end_request on
+ those blocks that were actually set up for transfer.
+ */
+
+ if(ata_tot_size + size > 131072) {
+ printk("too large total ATA DMA request, %d + %d!\n", ata_tot_size, (int)size);
+ return 1;
+ }
+
+ /* If size > 65536 it has to be split across two descriptors. Since we don't
+ handle size > 131072, a single split is always enough */
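+ /* Worked example: a (hypothetical) 100000-byte request emits one
+ sw_len = 0 (i.e. 65536-byte) descriptor in the branch below and
+ leaves size = 34464 bytes for the descriptor built after it */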
+
+ if(size > 65536) {
+ /* ok we want to do IO at addr, size bytes. set up a new descriptor entry */
+ ata_descrs[count].sw_len = 0; /* 0 means 65536, this is a 16-bit field */
+ ata_descrs[count].ctrl = 0;
+ ata_descrs[count].buf = addr;
+ ata_descrs[count].next = virt_to_phys(&ata_descrs[count + 1]);
+ count++;
+ ata_tot_size += 65536;
+ /* size and addr now refer to the data not yet handled */
+ size -= 65536;
+ addr += 65536;
+ }
+ /* ok we want to do IO at addr, size bytes. set up a new descriptor entry */
+ if(size == 65536) {
+ ata_descrs[count].sw_len = 0; /* 0 means 65536, this is a 16-bit field */
+ } else {
+ ata_descrs[count].sw_len = size;
+ }
+ ata_descrs[count].ctrl = 0;
+ ata_descrs[count].buf = addr;
+ ata_descrs[count].next = virt_to_phys(&ata_descrs[count + 1]);
+ count++;
+ ata_tot_size += size;
+ }
+
+ if (count) {
+ /* set the end-of-list flag on the last descriptor */
+ ata_descrs[count - 1].ctrl |= d_eol;
+ /* return and say all is ok */
+ return 0;
+ }
+
+ printk("%s: empty DMA table?\n", drive->name);
+ return 1; /* let the PIO routines handle this weirdness */
+}
+
+static int config_drive_for_dma (ide_drive_t *drive)
+{
+ const char **list;
+ struct hd_driveid *id = drive->id;
+
+ if (id && (id->capability & 1)) {
+ /* Enable DMA on any drive that supports mword2 DMA */
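+ /* id->dma_mword is ATA identify word 63: bit 2 means multiword
+ DMA mode 2 is supported, bit 10 means it is the currently
+ selected mode, hence the 0x404 test below */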
+ if ((id->field_valid & 2) && (id->dma_mword & 0x404) == 0x404) {
+ drive->using_dma = 1;
+ return 0; /* DMA enabled */
+ }
+
+ /* Consult the list of known "good" drives */
+ list = good_dma_drives;
+ while (*list) {
+ if (!strcmp(*list++,id->model)) {
+ drive->using_dma = 1;
+ return 0; /* DMA enabled */
+ }
+ }
+ }
+ return 1; /* DMA not enabled */
+}
+
+/*
+ * etrax_dma_intr() is the handler for disk read/write DMA interrupts
+ */
+static ide_startstop_t etrax_dma_intr (ide_drive_t *drive)
+{
+ int i, dma_stat;
+ byte stat;
+
+ LED_DISK_READ(0);
+ LED_DISK_WRITE(0);
+
+ dma_stat = HWIF(drive)->ide_dma_end(drive);
+ stat = HWIF(drive)->INB(IDE_STATUS_REG); /* get drive status */
+ if (OK_STAT(stat,DRIVE_READY,drive->bad_wstat|DRQ_STAT)) {
+ if (!dma_stat) {
+ struct request *rq;
+ rq = HWGROUP(drive)->rq;
+ for (i = rq->nr_sectors; i > 0;) {
+ i -= rq->current_nr_sectors;
+ DRIVER(drive)->end_request(drive, 1, rq->nr_sectors);
+ }
+ return ide_stopped;
+ }
+ printk("%s: bad DMA status\n", drive->name);
+ }
+ return DRIVER(drive)->error(drive, "dma_intr", stat);
+}
+
+/*
+ * The functions below initiate and abort DMA read/write operations on a drive.
+ *
+ * The caller is assumed to have selected the drive and programmed the drive's
+ * sector address using CHS or LBA. All that remains is to prepare for DMA
+ * and then issue the actual read/write DMA/PIO command to the drive.
+ *
+ * For ATAPI devices, we just prepare for DMA and return. The caller should
+ * then issue the packet command to the drive and call us again with
+ * ide_dma_begin afterwards.
+ *
+ * Returns 0 if all went well.
+ * Returns 1 if DMA read/write could not be started, in which case
+ * the caller should revert to PIO for the current request.
+ */
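+
+/*
+ * Sketch of the resulting call sequences (for reference):
+ *
+ *   disk:  ide_dma_read/write() -> e100_start_dma(drive, 0, rw),
+ *          which issues the READ/WRITE DMA command itself
+ *   ATAPI: ide_dma_read/write() records the direction and returns;
+ *          the caller issues the packet command, then
+ *          ide_dma_begin() -> e100_start_dma(drive, 1, rw)
+ */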
+
+static int e100_dma_check(ide_drive_t *drive)
+{
+ return config_drive_for_dma (drive);
+}
+
+static int e100_dma_end(ide_drive_t *drive)
+{
+ /* TODO: check if something went wrong with the DMA */
+ return 0;
+}
+
+static int e100_start_dma(ide_drive_t *drive, int atapi, int reading)
+{
+ if(reading) {
+
+ RESET_DMA(ATA_RX_DMA_NBR); /* sometimes the DMA channel gets stuck so we need to do this */
+ WAIT_DMA(ATA_RX_DMA_NBR);
+
+ /* set up the Etrax DMA descriptors */
+
+ if(e100_ide_build_dmatable (drive))
+ return 1;
+
+ if(!atapi) {
+ /* set the irq handler which will finish the request when DMA is done */
+
+ ide_set_handler(drive, &etrax_dma_intr, WAIT_CMD, NULL);
+
+ /* issue cmd to drive */
+ if ((HWGROUP(drive)->rq->cmd == IDE_DRIVE_TASKFILE) &&
+ (drive->addressing == 1)) {
+ ide_task_t *args = HWGROUP(drive)->rq->special;
+ etrax100_ide_outb(args->tfRegister[IDE_COMMAND_OFFSET], IDE_COMMAND_REG);
+ } else if (drive->addressing) {
+ etrax100_ide_outb(WIN_READDMA_EXT, IDE_COMMAND_REG);
+ } else {
+ etrax100_ide_outb(WIN_READDMA, IDE_COMMAND_REG);
+ }
+ }
+
+ /* begin DMA */
+
+ /* need to do this before RX DMA due to a chip bug
+ * it is enough to just flush the part of the cache that
+ * corresponds to the buffers we start, but since HD transfers
+ * usually are more than 8 kB, it is easier to optimize for the
+ * normal case and just flush the entire cache. it's the only
+ * way to be sure! (OB movie quote)
+ */
+ flush_etrax_cache();
+ *R_DMA_CH3_FIRST = virt_to_phys(ata_descrs);
+ *R_DMA_CH3_CMD = IO_STATE(R_DMA_CH3_CMD, cmd, start);
+
+ /* initiate a multi word dma read using DMA handshaking */
+
+ *R_ATA_TRANSFER_CNT =
+ IO_FIELD(R_ATA_TRANSFER_CNT, count, ata_tot_size >> 1);
+
+ *R_ATA_CTRL_DATA =
+ IO_FIELD(R_ATA_CTRL_DATA, data, IDE_DATA_REG) |
+ IO_STATE(R_ATA_CTRL_DATA, rw, read) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ LED_DISK_READ(1);
+
+ D(printk("dma read of %d bytes.\n", ata_tot_size));
+
+ } else {
+ /* writing */
+
+ RESET_DMA(ATA_TX_DMA_NBR); /* sometimes the DMA channel gets stuck so we need to do this */
+ WAIT_DMA(ATA_TX_DMA_NBR);
+
+ /* set up the Etrax DMA descriptors */
+
+ if(e100_ide_build_dmatable (drive))
+ return 1;
+
+ if(!atapi) {
+ /* set the irq handler which will finish the request when DMA is done */
+
+ ide_set_handler(drive, &etrax_dma_intr, WAIT_CMD, NULL);
+
+ /* issue cmd to drive */
+ if ((HWGROUP(drive)->rq->cmd == IDE_DRIVE_TASKFILE) &&
+ (drive->addressing == 1)) {
+ ide_task_t *args = HWGROUP(drive)->rq->special;
+ etrax100_ide_outb(args->tfRegister[IDE_COMMAND_OFFSET], IDE_COMMAND_REG);
+ } else if (drive->addressing) {
+ etrax100_ide_outb(WIN_WRITEDMA_EXT, IDE_COMMAND_REG);
+ } else {
+ etrax100_ide_outb(WIN_WRITEDMA, IDE_COMMAND_REG);
+ }
+ }
+
+ /* begin DMA */
+
+ *R_DMA_CH2_FIRST = virt_to_phys(ata_descrs);
+ *R_DMA_CH2_CMD = IO_STATE(R_DMA_CH2_CMD, cmd, start);
+
+ /* initiate a multi word dma write using DMA handshaking */
+
+ *R_ATA_TRANSFER_CNT =
+ IO_FIELD(R_ATA_TRANSFER_CNT, count, ata_tot_size >> 1);
+
+ *R_ATA_CTRL_DATA =
+ IO_FIELD(R_ATA_CTRL_DATA, data, IDE_DATA_REG) |
+ IO_STATE(R_ATA_CTRL_DATA, rw, write) |
+ IO_STATE(R_ATA_CTRL_DATA, src_dst, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, handsh, dma) |
+ IO_STATE(R_ATA_CTRL_DATA, multi, on) |
+ IO_STATE(R_ATA_CTRL_DATA, dma_size, word);
+
+ LED_DISK_WRITE(1);
+
+ D(printk("dma write of %d bytes.\n", ata_tot_size));
+ }
+ return 0;
+}
+
+static int e100_dma_write(ide_drive_t *drive)
+{
+ e100_read_command = 0;
+ /* ATAPI devices (not disks) first call ide_dma_read/write to set the
+ * direction, then call ide_dma_begin after issuing the appropriate drive
+ * command themselves, which actually starts the chipset DMA. So we just
+ * return here if we're not a disk drive.
+ */
+ if (drive->media != ide_disk)
+ return 0;
+ return e100_start_dma(drive, 0, 0);
+}
+
+static int e100_dma_read(ide_drive_t *drive)
+{
+ e100_read_command = 1;
+ /* ATAPI devices (not disks) first call ide_dma_read/write to set the
+ * direction, then call ide_dma_begin after issuing the appropriate drive
+ * command themselves, which actually starts the chipset DMA. So we just
+ * return here if we're not a disk drive.
+ */
+ if (drive->media != ide_disk)
+ return 0;
+ return e100_start_dma(drive, 0, 1);
+}
+
+static int e100_dma_begin(ide_drive_t *drive)
+{
+ /* begin DMA, used by ATAPI devices which want to issue the
+ * appropriate IDE command themselves.
+ *
+ * they have already called ide_dma_read/write to set the
+ * static direction flag, and now call ide_dma_begin to do
+ * the real work. we therefore tell e100_start_dma() not to
+ * issue any IDE command itself.
+ */
+ return e100_start_dma(drive, 1, e100_read_command);
+}
--- /dev/null
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/user.h>
+#include <linux/elfcore.h>
+#include <linux/sched.h>
+#include <linux/in6.h>
+#include <linux/interrupt.h>
+#include <linux/smp_lock.h>
+#include <linux/pm.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+
+#include <asm/semaphore.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+#include <asm/io.h>
+#include <asm/hardirq.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/pgtable.h>
+#include <asm/fasttimer.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern unsigned long get_cmos_time(void);
+extern void __Udiv(void);
+extern void __Umod(void);
+extern void __Div(void);
+extern void __Mod(void);
+extern void __ashrdi3(void);
+extern void iounmap(void *addr);
+
+/* Platform dependent support */
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(kernel_thread);
+EXPORT_SYMBOL(get_cmos_time);
+EXPORT_SYMBOL(loops_per_usec);
+
+/* String functions */
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(strpbrk);
+EXPORT_SYMBOL(strstr);
+EXPORT_SYMBOL(strcpy);
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strlen);
+EXPORT_SYMBOL(strcat);
+EXPORT_SYMBOL(strncat);
+EXPORT_SYMBOL(strncmp);
+EXPORT_SYMBOL(strncpy);
+
+/* Math functions */
+EXPORT_SYMBOL(__Udiv);
+EXPORT_SYMBOL(__Umod);
+EXPORT_SYMBOL(__Div);
+EXPORT_SYMBOL(__Mod);
+EXPORT_SYMBOL(__ashrdi3);
+
+/* Memory functions */
+EXPORT_SYMBOL(__ioremap);
+EXPORT_SYMBOL(iounmap);
+
+/* Semaphore functions */
+EXPORT_SYMBOL(__up);
+EXPORT_SYMBOL(__down);
+EXPORT_SYMBOL(__down_interruptible);
+EXPORT_SYMBOL(__down_trylock);
+
+/* Export shadow registers for the CPU I/O pins */
+EXPORT_SYMBOL(genconfig_shadow);
+EXPORT_SYMBOL(port_pa_data_shadow);
+EXPORT_SYMBOL(port_pa_dir_shadow);
+EXPORT_SYMBOL(port_pb_data_shadow);
+EXPORT_SYMBOL(port_pb_dir_shadow);
+EXPORT_SYMBOL(port_pb_config_shadow);
+EXPORT_SYMBOL(port_g_data_shadow);
+
+/* Userspace access functions */
+EXPORT_SYMBOL(__copy_user_zeroing);
+EXPORT_SYMBOL(__copy_user);
+
+/* Cache flush functions */
+EXPORT_SYMBOL(flush_etrax_cache);
+EXPORT_SYMBOL(prepare_rx_descriptor);
+
+#undef memcpy
+#undef memset
+extern void * memset(void *, int, __kernel_size_t);
+extern void * memcpy(void *, const void *, __kernel_size_t);
+EXPORT_SYMBOL_NOVERS(memcpy);
+EXPORT_SYMBOL_NOVERS(memset);
+
+#ifdef CONFIG_ETRAX_FAST_TIMER
+/* Fast timer functions */
+EXPORT_SYMBOL(fast_timer_list);
+EXPORT_SYMBOL(start_one_shot_timer);
+EXPORT_SYMBOL(del_fast_timer);
+EXPORT_SYMBOL(schedule_usleep);
+#endif
+
--- /dev/null
+#
+# i386/crypto/Makefile
+#
+# Arch-specific CryptoAPI modules.
+#
+
+obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
+
+aes-i586-y := aes-i586-asm.o aes.o
--- /dev/null
+// -------------------------------------------------------------------------
+// Copyright (c) 2001, Dr Brian Gladman < >, Worcester, UK.
+// All rights reserved.
+//
+// LICENSE TERMS
+//
+// The free distribution and use of this software in both source and binary
+// form is allowed (with or without changes) provided that:
+//
+// 1. distributions of this source code include the above copyright
+// notice, this list of conditions and the following disclaimer;
+//
+// 2. distributions in binary form include the above copyright
+// notice, this list of conditions and the following disclaimer
+// in the documentation and/or other associated materials;
+//
+// 3. the copyright holder's name is not used to endorse products
+// built using this software without specific written permission.
+//
+//
+// ALTERNATIVELY, provided that this notice is retained in full, this product
+// may be distributed under the terms of the GNU General Public License (GPL),
+// in which case the provisions of the GPL apply INSTEAD OF those given above.
+//
+// Copyright (c) 2004 Linus Torvalds <torvalds@osdl.org>
+// Copyright (c) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+
+// DISCLAIMER
+//
+// This software is provided 'as is' with no explicit or implied warranties
+// in respect of its properties including, but not limited to, correctness
+// and fitness for purpose.
+// -------------------------------------------------------------------------
+// Issue Date: 29/07/2002
+
+.file "aes-i586-asm.S"
+.text
+
+// aes_rval aes_enc_blk(const unsigned char in_blk[], unsigned char out_blk[], const aes_ctx cx[1]);
+// aes_rval aes_dec_blk(const unsigned char in_blk[], unsigned char out_blk[], const aes_ctx cx[1]);
+
+#define tlen 1024 // length of each of 4 'xor' arrays (256 32-bit words)
+
+// offsets to parameters with one register pushed onto stack
+
+#define in_blk 8 // input byte array address parameter
+#define out_blk 12 // output byte array address parameter
+#define ctx 16 // AES context structure
+
+// offsets in context structure
+
+#define ekey 0 // encryption key schedule base address
+#define nrnd 256 // number of rounds
+#define dkey 260 // decryption key schedule base address
+
+// register mapping for encrypt and decrypt subroutines
+
+#define r0 eax
+#define r1 ebx
+#define r2 ecx
+#define r3 edx
+#define r4 esi
+#define r5 edi
+#define r6 ebp
+
+#define eaxl al
+#define eaxh ah
+#define ebxl bl
+#define ebxh bh
+#define ecxl cl
+#define ecxh ch
+#define edxl dl
+#define edxh dh
+
+#define _h(reg) reg##h
+#define h(reg) _h(reg)
+
+#define _l(reg) reg##l
+#define l(reg) _l(reg)
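+
+// two macro levels are needed so the r0..r6 mappings expand before the
+// token paste: h(r0) -> _h(eax) -> eaxh -> ah, and l(r2) -> ecxl -> cl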
+
+// This macro takes a 32-bit word representing a column and uses
+// each of its four bytes to index into four tables of 256 32-bit
+// words to obtain values that are then xored into the appropriate
+// output registers r0, r1, r4 or r5.
+
+// Parameters:
+// a1 out_state[0]
+// a2 out_state[1]
+// a3 out_state[2]
+// a4 out_state[3]
+// a5 table base address
+// a6 input register for the round (destroyed)
+// a7 scratch register for the round
+
+#define do_col(a1, a2, a3, a4, a5, a6, a7) \
+ movzx %l(a6),%a7; \
+ xor a5(,%a7,4),%a1; \
+ movzx %h(a6),%a7; \
+ shr $16,%a6; \
+ xor a5+tlen(,%a7,4),%a2; \
+ movzx %l(a6),%a7; \
+ movzx %h(a6),%a6; \
+ xor a5+2*tlen(,%a7,4),%a3; \
+ xor a5+3*tlen(,%a6,4),%a4;
+
+// initialise output registers from the key schedule
+
+#define do_fcol(a1, a2, a3, a4, a5, a6, a7, a8) \
+ mov 0 a8,%a1; \
+ movzx %l(a6),%a7; \
+ mov 12 a8,%a2; \
+ xor a5(,%a7,4),%a1; \
+ mov 4 a8,%a4; \
+ movzx %h(a6),%a7; \
+ shr $16,%a6; \
+ xor a5+tlen(,%a7,4),%a2; \
+ movzx %l(a6),%a7; \
+ movzx %h(a6),%a6; \
+ xor a5+3*tlen(,%a6,4),%a4; \
+ mov %a3,%a6; \
+ mov 8 a8,%a3; \
+ xor a5+2*tlen(,%a7,4),%a3;
+
+// initialise output registers from the key schedule (inverse cipher variant of do_fcol)
+
+#define do_icol(a1, a2, a3, a4, a5, a6, a7, a8) \
+ mov 0 a8,%a1; \
+ movzx %l(a6),%a7; \
+ mov 4 a8,%a2; \
+ xor a5(,%a7,4),%a1; \
+ mov 12 a8,%a4; \
+ movzx %h(a6),%a7; \
+ shr $16,%a6; \
+ xor a5+tlen(,%a7,4),%a2; \
+ movzx %l(a6),%a7; \
+ movzx %h(a6),%a6; \
+ xor a5+3*tlen(,%a6,4),%a4; \
+ mov %a3,%a6; \
+ mov 8 a8,%a3; \
+ xor a5+2*tlen(,%a7,4),%a3;
+
+
+// original Gladman had conditional saves to MMX regs.
+#define save(a1, a2) \
+ mov %a2,4*a1(%esp)
+
+#define restore(a1, a2) \
+ mov 4*a2(%esp),%a1
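+
+// save(0,r1) stores r1 at (%esp) and save(1,r5) at 4(%esp); these two
+// slots are the 8 bytes reserved by "sub $8,%esp" in the entry code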
+
+// This macro performs a forward encryption cycle. It is entered with
+// the previous round's column values in r0, r1, r4 and r5 and exits
+// with the final values in the same registers, using the stack for
+// temporary storage (the original Gladman code could also use the MMX
+// registers mm0-mm1; that variant was dropped, see save()/restore() above)
+
+#define fwd_rnd(arg, table) \
+ /* save current column values on the stack */ \
+ mov %r0,%r2; \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_fcol(r0,r5,r4,r1,table, r2,r3, arg); \
+ do_col (r4,r1,r0,r5,table, r2,r3); \
+ restore(r2,0); \
+ do_col (r1,r0,r5,r4,table, r2,r3); \
+ restore(r2,1); \
+ do_col (r5,r4,r1,r0,table, r2,r3);
+
+// This macro performs an inverse encryption cycle. It is entered with
+// the previous round's column values in r0, r1, r4 and r5 and exits
+// with the final values in the same registers, again using the stack
+// for temporary storage
+
+#define inv_rnd(arg, table) \
+ /* save current column values on the stack */ \
+ mov %r0,%r2; \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_icol(r0,r1,r4,r5, table, r2,r3, arg); \
+ do_col (r4,r5,r0,r1, table, r2,r3); \
+ restore(r2,0); \
+ do_col (r1,r4,r5,r0, table, r2,r3); \
+ restore(r2,1); \
+ do_col (r5,r0,r1,r4, table, r2,r3);
+
+// AES (Rijndael) Encryption Subroutine
+
+.global aes_enc_blk
+
+.extern ft_tab
+.extern fl_tab
+
+.align 4
+
+aes_enc_blk:
+ push %ebp
+ mov ctx(%esp),%ebp // pointer to context
+ xor %eax,%eax
+
+// CAUTION: the order and the values used in these assignments
+// rely on the register mappings
+
+1: push %ebx
+ mov in_blk+4(%esp),%r2
+ push %esi
+ mov nrnd(%ebp),%r3 // number of rounds
+ push %edi
+ lea ekey(%ebp),%r6 // key pointer
+
+// input four columns and xor in first round key
+
+ mov (%r2),%r0
+ mov 4(%r2),%r1
+ mov 8(%r2),%r4
+ mov 12(%r2),%r5
+ xor (%r6),%r0
+ xor 4(%r6),%r1
+ xor 8(%r6),%r4
+ xor 12(%r6),%r5
+
+ sub $8,%esp // space for register saves on stack
+ add $16,%r6 // increment to next round key
+ sub $10,%r3
+ je 4f // 10 rounds for 128-bit key
+ add $32,%r6
+ sub $2,%r3
+ je 3f // 12 rounds for 192-bit key
+ add $32,%r6
+
+2: fwd_rnd( -64(%r6) ,ft_tab) // 14 rounds for 256-bit key
+ fwd_rnd( -48(%r6) ,ft_tab)
+3: fwd_rnd( -32(%r6) ,ft_tab) // 12 rounds for 192-bit key
+ fwd_rnd( -16(%r6) ,ft_tab)
+4: fwd_rnd( (%r6) ,ft_tab) // 10 rounds for 128-bit key
+ fwd_rnd( +16(%r6) ,ft_tab)
+ fwd_rnd( +32(%r6) ,ft_tab)
+ fwd_rnd( +48(%r6) ,ft_tab)
+ fwd_rnd( +64(%r6) ,ft_tab)
+ fwd_rnd( +80(%r6) ,ft_tab)
+ fwd_rnd( +96(%r6) ,ft_tab)
+ fwd_rnd(+112(%r6) ,ft_tab)
+ fwd_rnd(+128(%r6) ,ft_tab)
+ fwd_rnd(+144(%r6) ,fl_tab) // last round uses a different table
+
+// move final values to the output array. CAUTION: the
+// order of these assignments relies on the register mappings
+
+ add $8,%esp
+ mov out_blk+12(%esp),%r6
+ mov %r5,12(%r6)
+ pop %edi
+ mov %r4,8(%r6)
+ pop %esi
+ mov %r1,4(%r6)
+ pop %ebx
+ mov %r0,(%r6)
+ pop %ebp
+ mov $1,%eax
+ ret
+
+// AES (Rijndael) Decryption Subroutine
+
+.global aes_dec_blk
+
+.extern it_tab
+.extern il_tab
+
+.align 4
+
+aes_dec_blk:
+ push %ebp
+ mov ctx(%esp),%ebp // pointer to context
+ xor %eax,%eax
+
+// CAUTION: the order and the values used in these assignments
+// rely on the register mappings
+
+1: push %ebx
+ mov in_blk+4(%esp),%r2
+ push %esi
+ mov nrnd(%ebp),%r3 // number of rounds
+ push %edi
+ lea dkey(%ebp),%r6 // key pointer
+ mov %r3,%r0
+ shl $4,%r0
+ add %r0,%r6
+
+// input four columns and xor in first round key
+
+ mov (%r2),%r0
+ mov 4(%r2),%r1
+ mov 8(%r2),%r4
+ mov 12(%r2),%r5
+ xor (%r6),%r0
+ xor 4(%r6),%r1
+ xor 8(%r6),%r4
+ xor 12(%r6),%r5
+
+ sub $8,%esp // space for register saves on stack
+ sub $16,%r6 // increment to next round key
+ sub $10,%r3
+ je 4f // 10 rounds for 128-bit key
+ sub $32,%r6
+ sub $2,%r3
+ je 3f // 12 rounds for 192-bit key
+ sub $32,%r6
+
+2: inv_rnd( +64(%r6), it_tab) // 14 rounds for 256-bit key
+ inv_rnd( +48(%r6), it_tab)
+3: inv_rnd( +32(%r6), it_tab) // 12 rounds for 192-bit key
+ inv_rnd( +16(%r6), it_tab)
+4: inv_rnd( (%r6), it_tab) // 10 rounds for 128-bit key
+ inv_rnd( -16(%r6), it_tab)
+ inv_rnd( -32(%r6), it_tab)
+ inv_rnd( -48(%r6), it_tab)
+ inv_rnd( -64(%r6), it_tab)
+ inv_rnd( -80(%r6), it_tab)
+ inv_rnd( -96(%r6), it_tab)
+ inv_rnd(-112(%r6), it_tab)
+ inv_rnd(-128(%r6), it_tab)
+ inv_rnd(-144(%r6), il_tab) // last round uses a different table
+
+// move final values to the output array. CAUTION: the
+// order of these assignments relies on the register mappings
+
+ add $8,%esp
+ mov out_blk+12(%esp),%r6
+ mov %r5,12(%r6)
+ pop %edi
+ mov %r4,8(%r6)
+ pop %esi
+ mov %r1,4(%r6)
+ pop %ebx
+ mov %r0,(%r6)
+ pop %ebp
+ mov $1,%eax
+ ret
+
--- /dev/null
+/*
+ *
+ * Glue Code for optimized 586 assembler version of AES
+ *
+ * Copyright (c) 2002, Dr Brian Gladman <>, Worcester, UK.
+ * All rights reserved.
+ *
+ * LICENSE TERMS
+ *
+ * The free distribution and use of this software in both source and binary
+ * form is allowed (with or without changes) provided that:
+ *
+ * 1. distributions of this source code include the above copyright
+ * notice, this list of conditions and the following disclaimer;
+ *
+ * 2. distributions in binary form include the above copyright
+ * notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other associated materials;
+ *
+ * 3. the copyright holder's name is not used to endorse products
+ * built using this software without specific written permission.
+ *
+ * ALTERNATIVELY, provided that this notice is retained in full, this product
+ * may be distributed under the terms of the GNU General Public License (GPL),
+ * in which case the provisions of the GPL apply INSTEAD OF those given above.
+ *
+ * DISCLAIMER
+ *
+ * This software is provided 'as is' with no explicit or implied warranties
+ * in respect of its properties, including, but not limited to, correctness
+ * and/or fitness for purpose.
+ *
+ * Copyright (c) 2003, Adam J. Richter <adam@yggdrasil.com> (conversion to
+ * 2.5 API).
+ * Copyright (c) 2003, 2004 Fruhwirth Clemens <clemens@endorphin.org>
+ * Copyright (c) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <linux/linkage.h>
+
+asmlinkage void aes_enc_blk(const u8 *src, u8 *dst, void *ctx);
+asmlinkage void aes_dec_blk(const u8 *src, u8 *dst, void *ctx);
+
+#define AES_MIN_KEY_SIZE 16
+#define AES_MAX_KEY_SIZE 32
+#define AES_BLOCK_SIZE 16
+#define AES_KS_LENGTH (4 * AES_BLOCK_SIZE)
+#define RC_LENGTH 29
+
+struct aes_ctx {
+ u32 ekey[AES_KS_LENGTH];
+ u32 rounds;
+ u32 dkey[AES_KS_LENGTH];
+};
+
+#define WPOLY 0x011b
+#define u32_in(x) le32_to_cpu(*(const u32 *)(x))
+#define bytes2word(b0, b1, b2, b3) \
+ (((u32)(b3) << 24) | ((u32)(b2) << 16) | ((u32)(b1) << 8) | (b0))
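+/* e.g. bytes2word(0x01, 0x02, 0x03, 0x04) == 0x04030201: b0 becomes the
+   least significant byte, matching the little-endian u32_in() above */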
+
+/* define the finite field multiplies required for Rijndael */
+#define f2(x) ((x) ? pow[log[x] + 0x19] : 0)
+#define f3(x) ((x) ? pow[log[x] + 0x01] : 0)
+#define f9(x) ((x) ? pow[log[x] + 0xc7] : 0)
+#define fb(x) ((x) ? pow[log[x] + 0x68] : 0)
+#define fd(x) ((x) ? pow[log[x] + 0xee] : 0)
+#define fe(x) ((x) ? pow[log[x] + 0xdf] : 0)
+#define fi(x) ((x) ? pow[255 - log[x]]: 0)
+
+static inline u32 upr(u32 x, int n)
+{
+ return (x << 8 * n) | (x >> (32 - 8 * n));
+}
+
+static inline u8 bval(u32 x, int n)
+{
+ return x >> 8 * n;
+}
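+
+/*
+ * upr() rotates a 32-bit word left by n bytes (n = 1..3 here) and
+ * bval() extracts byte n, e.g. bval(0x04030201, 1) == 0x02; both feed
+ * the table generation and lookups below.
+ */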
+
+/* The forward and inverse affine transformations used in the S-box */
+#define fwd_affine(x) \
+ (w = (u32)x, w ^= (w<<1)^(w<<2)^(w<<3)^(w<<4), 0x63^(u8)(w^(w>>8)))
+
+#define inv_affine(x) \
+ (w = (u32)x, w = (w<<1)^(w<<3)^(w<<6), 0x05^(u8)(w^(w>>8)))
+
+static u32 rcon_tab[RC_LENGTH];
+
+u32 ft_tab[4][256];
+u32 fl_tab[4][256];
+u32 ls_tab[4][256];
+u32 im_tab[4][256];
+u32 il_tab[4][256];
+u32 it_tab[4][256];
+
+void gen_tabs(void)
+{
+ u32 i, w;
+ u8 pow[512], log[256];
+
+ /*
+ * log and power tables for GF(2^8) finite field with
+ * WPOLY as modular polynomial - the simplest primitive
+ * root is 0x03, used here to generate the tables.
+ */
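+ /*
+ * Because pow[] is replicated (pow[i + 255] == pow[i]), a field
+ * multiply needs no explicit mod-255 reduction: a * b ==
+ * pow[log[a] + log[b]] for nonzero a and b. The f2()/f3()/...
+ * macros above rely on exactly this, e.g. f2(x) uses the offset
+ * 0x19 because log(0x02) == 0x19 for generator 0x03.
+ */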
+ i = 0; w = 1;
+
+ do {
+ pow[i] = (u8)w;
+ pow[i + 255] = (u8)w;
+ log[w] = (u8)i++;
+ w ^= (w << 1) ^ (w & 0x80 ? WPOLY : 0);
+ } while (w != 1);
+
+ for(i = 0, w = 1; i < RC_LENGTH; ++i) {
+ rcon_tab[i] = bytes2word(w, 0, 0, 0);
+ w = f2(w);
+ }
+
+ for(i = 0; i < 256; ++i) {
+ u8 b;
+
+ b = fwd_affine(fi((u8)i));
+ w = bytes2word(f2(b), b, b, f3(b));
+
+ /* tables for a normal encryption round */
+ ft_tab[0][i] = w;
+ ft_tab[1][i] = upr(w, 1);
+ ft_tab[2][i] = upr(w, 2);
+ ft_tab[3][i] = upr(w, 3);
+ w = bytes2word(b, 0, 0, 0);
+
+ /*
+ * tables for last encryption round
+ * (may also be used in the key schedule)
+ */
+ fl_tab[0][i] = w;
+ fl_tab[1][i] = upr(w, 1);
+ fl_tab[2][i] = upr(w, 2);
+ fl_tab[3][i] = upr(w, 3);
+
+ /*
+ * table for key schedule if fl_tab above is
+ * not of the required form
+ */
+ ls_tab[0][i] = w;
+ ls_tab[1][i] = upr(w, 1);
+ ls_tab[2][i] = upr(w, 2);
+ ls_tab[3][i] = upr(w, 3);
+
+ b = fi(inv_affine((u8)i));
+ w = bytes2word(fe(b), f9(b), fd(b), fb(b));
+
+ /* tables for the inverse mix column operation */
+ im_tab[0][b] = w;
+ im_tab[1][b] = upr(w, 1);
+ im_tab[2][b] = upr(w, 2);
+ im_tab[3][b] = upr(w, 3);
+
+ /* tables for a normal decryption round */
+ it_tab[0][i] = w;
+ it_tab[1][i] = upr(w,1);
+ it_tab[2][i] = upr(w,2);
+ it_tab[3][i] = upr(w,3);
+
+ w = bytes2word(b, 0, 0, 0);
+
+ /* tables for last decryption round */
+ il_tab[0][i] = w;
+ il_tab[1][i] = upr(w,1);
+ il_tab[2][i] = upr(w,2);
+ il_tab[3][i] = upr(w,3);
+ }
+}
+
+#define four_tables(x,tab,vf,rf,c) \
+( tab[0][bval(vf(x,0,c),rf(0,c))] ^ \
+ tab[1][bval(vf(x,1,c),rf(1,c))] ^ \
+ tab[2][bval(vf(x,2,c),rf(2,c))] ^ \
+ tab[3][bval(vf(x,3,c),rf(3,c))] \
+)
+
+#define vf1(x,r,c) (x)
+#define rf1(r,c) (r)
+#define rf2(r,c) ((r-c)&3)
+
+#define inv_mcol(x) four_tables(x,im_tab,vf1,rf1,0)
+#define ls_box(x,c) four_tables(x,fl_tab,vf1,rf2,c)
+
+#define ff(x) inv_mcol(x)
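+
+/*
+ * ff() applies the inverse MixColumns transform to a round-key word;
+ * the decryption schedule built below stores InvMixColumns of the
+ * encryption round keys (the "equivalent inverse cipher" form), which
+ * lets the decrypt path reuse the same column-table round structure.
+ */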
+
+#define ke4(k,i) \
+{ \
+ k[4*(i)+4] = ss[0] ^= ls_box(ss[3],3) ^ rcon_tab[i]; \
+ k[4*(i)+5] = ss[1] ^= ss[0]; \
+ k[4*(i)+6] = ss[2] ^= ss[1]; \
+ k[4*(i)+7] = ss[3] ^= ss[2]; \
+}
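+
+/*
+ * One ke4() step is the FIPS-197 key expansion for a 128-bit key: the
+ * first new word is w[i-4] ^ SubWord(RotWord(w[i-1])) ^ Rcon[i]
+ * (ls_box(ss[3],3) performs the combined byte rotation and S-box
+ * substitution), and each following word xors in its predecessor;
+ * ke6()/ke8() below do the same for 192- and 256-bit keys.
+ */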
+
+#define kel4(k,i) \
+{ \
+ k[4*(i)+4] = ss[0] ^= ls_box(ss[3],3) ^ rcon_tab[i]; \
+ k[4*(i)+5] = ss[1] ^= ss[0]; \
+ k[4*(i)+6] = ss[2] ^= ss[1]; k[4*(i)+7] = ss[3] ^= ss[2]; \
+}
+
+#define ke6(k,i) \
+{ \
+ k[6*(i)+ 6] = ss[0] ^= ls_box(ss[5],3) ^ rcon_tab[i]; \
+ k[6*(i)+ 7] = ss[1] ^= ss[0]; \
+ k[6*(i)+ 8] = ss[2] ^= ss[1]; \
+ k[6*(i)+ 9] = ss[3] ^= ss[2]; \
+ k[6*(i)+10] = ss[4] ^= ss[3]; \
+ k[6*(i)+11] = ss[5] ^= ss[4]; \
+}
+
+#define kel6(k,i) \
+{ \
+ k[6*(i)+ 6] = ss[0] ^= ls_box(ss[5],3) ^ rcon_tab[i]; \
+ k[6*(i)+ 7] = ss[1] ^= ss[0]; \
+ k[6*(i)+ 8] = ss[2] ^= ss[1]; \
+ k[6*(i)+ 9] = ss[3] ^= ss[2]; \
+}
+
+#define ke8(k,i) \
+{ \
+ k[8*(i)+ 8] = ss[0] ^= ls_box(ss[7],3) ^ rcon_tab[i]; \
+ k[8*(i)+ 9] = ss[1] ^= ss[0]; \
+ k[8*(i)+10] = ss[2] ^= ss[1]; \
+ k[8*(i)+11] = ss[3] ^= ss[2]; \
+ k[8*(i)+12] = ss[4] ^= ls_box(ss[3],0); \
+ k[8*(i)+13] = ss[5] ^= ss[4]; \
+ k[8*(i)+14] = ss[6] ^= ss[5]; \
+ k[8*(i)+15] = ss[7] ^= ss[6]; \
+}
+
+#define kel8(k,i) \
+{ \
+ k[8*(i)+ 8] = ss[0] ^= ls_box(ss[7],3) ^ rcon_tab[i]; \
+ k[8*(i)+ 9] = ss[1] ^= ss[0]; \
+ k[8*(i)+10] = ss[2] ^= ss[1]; \
+ k[8*(i)+11] = ss[3] ^= ss[2]; \
+}
+
+#define kdf4(k,i) \
+{ \
+ ss[0] = ss[0] ^ ss[2] ^ ss[1] ^ ss[3]; \
+ ss[1] = ss[1] ^ ss[3]; \
+ ss[2] = ss[2] ^ ss[3]; \
+ ss[3] = ss[3]; /* no-op, kept for symmetry with the lines above */ \
+ ss[4] = ls_box(ss[(i+3) % 4], 3) ^ rcon_tab[i]; \
+ ss[i % 4] ^= ss[4]; \
+ ss[4] ^= k[4*(i)]; \
+ k[4*(i)+4] = ff(ss[4]); \
+ ss[4] ^= k[4*(i)+1]; \
+ k[4*(i)+5] = ff(ss[4]); \
+ ss[4] ^= k[4*(i)+2]; \
+ k[4*(i)+6] = ff(ss[4]); \
+ ss[4] ^= k[4*(i)+3]; \
+ k[4*(i)+7] = ff(ss[4]); \
+}
+
+#define kd4(k,i) \
+{ \
+ ss[4] = ls_box(ss[(i+3) % 4], 3) ^ rcon_tab[i]; \
+ ss[i % 4] ^= ss[4]; \
+ ss[4] = ff(ss[4]); \
+ k[4*(i)+4] = ss[4] ^= k[4*(i)]; \
+ k[4*(i)+5] = ss[4] ^= k[4*(i)+1]; \
+ k[4*(i)+6] = ss[4] ^= k[4*(i)+2]; \
+ k[4*(i)+7] = ss[4] ^= k[4*(i)+3]; \
+}
+
+#define kdl4(k,i) \
+{ \
+ ss[4] = ls_box(ss[(i+3) % 4], 3) ^ rcon_tab[i]; \
+ ss[i % 4] ^= ss[4]; \
+ k[4*(i)+4] = (ss[0] ^= ss[1]) ^ ss[2] ^ ss[3]; \
+ k[4*(i)+5] = ss[1] ^ ss[3]; \
+ k[4*(i)+6] = ss[0]; \
+ k[4*(i)+7] = ss[1]; \
+}
+
+#define kdf6(k,i) \
+{ \
+ ss[0] ^= ls_box(ss[5],3) ^ rcon_tab[i]; \
+ k[6*(i)+ 6] = ff(ss[0]); \
+ ss[1] ^= ss[0]; \
+ k[6*(i)+ 7] = ff(ss[1]); \
+ ss[2] ^= ss[1]; \
+ k[6*(i)+ 8] = ff(ss[2]); \
+ ss[3] ^= ss[2]; \
+ k[6*(i)+ 9] = ff(ss[3]); \
+ ss[4] ^= ss[3]; \
+ k[6*(i)+10] = ff(ss[4]); \
+ ss[5] ^= ss[4]; \
+ k[6*(i)+11] = ff(ss[5]); \
+}
+
+#define kd6(k,i) \
+{ \
+ ss[6] = ls_box(ss[5],3) ^ rcon_tab[i]; \
+ ss[0] ^= ss[6]; ss[6] = ff(ss[6]); \
+ k[6*(i)+ 6] = ss[6] ^= k[6*(i)]; \
+ ss[1] ^= ss[0]; \
+ k[6*(i)+ 7] = ss[6] ^= k[6*(i)+ 1]; \
+ ss[2] ^= ss[1]; \
+ k[6*(i)+ 8] = ss[6] ^= k[6*(i)+ 2]; \
+ ss[3] ^= ss[2]; \
+ k[6*(i)+ 9] = ss[6] ^= k[6*(i)+ 3]; \
+ ss[4] ^= ss[3]; \
+ k[6*(i)+10] = ss[6] ^= k[6*(i)+ 4]; \
+ ss[5] ^= ss[4]; \
+ k[6*(i)+11] = ss[6] ^= k[6*(i)+ 5]; \
+}
+
+#define kdl6(k,i) \
+{ \
+ ss[0] ^= ls_box(ss[5],3) ^ rcon_tab[i]; \
+ k[6*(i)+ 6] = ss[0]; \
+ ss[1] ^= ss[0]; \
+ k[6*(i)+ 7] = ss[1]; \
+ ss[2] ^= ss[1]; \
+ k[6*(i)+ 8] = ss[2]; \
+ ss[3] ^= ss[2]; \
+ k[6*(i)+ 9] = ss[3]; \
+}
+
+#define kdf8(k,i) \
+{ \
+ ss[0] ^= ls_box(ss[7],3) ^ rcon_tab[i]; \
+ k[8*(i)+ 8] = ff(ss[0]); \
+ ss[1] ^= ss[0]; \
+ k[8*(i)+ 9] = ff(ss[1]); \
+ ss[2] ^= ss[1]; \
+ k[8*(i)+10] = ff(ss[2]); \
+ ss[3] ^= ss[2]; \
+ k[8*(i)+11] = ff(ss[3]); \
+ ss[4] ^= ls_box(ss[3],0); \
+ k[8*(i)+12] = ff(ss[4]); \
+ ss[5] ^= ss[4]; \
+ k[8*(i)+13] = ff(ss[5]); \
+ ss[6] ^= ss[5]; \
+ k[8*(i)+14] = ff(ss[6]); \
+ ss[7] ^= ss[6]; \
+ k[8*(i)+15] = ff(ss[7]); \
+}
+
+#define kd8(k,i) \
+{ \
+ u32 __g = ls_box(ss[7],3) ^ rcon_tab[i]; \
+ ss[0] ^= __g; \
+ __g = ff(__g); \
+ k[8*(i)+ 8] = __g ^= k[8*(i)]; \
+ ss[1] ^= ss[0]; \
+ k[8*(i)+ 9] = __g ^= k[8*(i)+ 1]; \
+ ss[2] ^= ss[1]; \
+ k[8*(i)+10] = __g ^= k[8*(i)+ 2]; \
+ ss[3] ^= ss[2]; \
+ k[8*(i)+11] = __g ^= k[8*(i)+ 3]; \
+ __g = ls_box(ss[3],0); \
+ ss[4] ^= __g; \
+ __g = ff(__g); \
+ k[8*(i)+12] = __g ^= k[8*(i)+ 4]; \
+ ss[5] ^= ss[4]; \
+ k[8*(i)+13] = __g ^= k[8*(i)+ 5]; \
+ ss[6] ^= ss[5]; \
+ k[8*(i)+14] = __g ^= k[8*(i)+ 6]; \
+ ss[7] ^= ss[6]; \
+ k[8*(i)+15] = __g ^= k[8*(i)+ 7]; \
+}
+
+#define kdl8(k,i) \
+{ \
+ ss[0] ^= ls_box(ss[7],3) ^ rcon_tab[i]; \
+ k[8*(i)+ 8] = ss[0]; \
+ ss[1] ^= ss[0]; \
+ k[8*(i)+ 9] = ss[1]; \
+ ss[2] ^= ss[1]; \
+ k[8*(i)+10] = ss[2]; \
+ ss[3] ^= ss[2]; \
+ k[8*(i)+11] = ss[3]; \
+}
+
+static int
+aes_set_key(void *ctx_arg, const u8 *in_key, unsigned int key_len, u32 *flags)
+{
+ int i;
+ u32 ss[8];
+ struct aes_ctx *ctx = ctx_arg;
+
+ /* encryption schedule */
+
+ ctx->ekey[0] = ss[0] = u32_in(in_key);
+ ctx->ekey[1] = ss[1] = u32_in(in_key + 4);
+ ctx->ekey[2] = ss[2] = u32_in(in_key + 8);
+ ctx->ekey[3] = ss[3] = u32_in(in_key + 12);
+
+ switch(key_len) {
+ case 16:
+ for (i = 0; i < 9; i++)
+ ke4(ctx->ekey, i);
+ kel4(ctx->ekey, 9);
+ ctx->rounds = 10;
+ break;
+
+ case 24:
+ ctx->ekey[4] = ss[4] = u32_in(in_key + 16);
+ ctx->ekey[5] = ss[5] = u32_in(in_key + 20);
+ for (i = 0; i < 7; i++)
+ ke6(ctx->ekey, i);
+ kel6(ctx->ekey, 7);
+ ctx->rounds = 12;
+ break;
+
+ case 32:
+ ctx->ekey[4] = ss[4] = u32_in(in_key + 16);
+ ctx->ekey[5] = ss[5] = u32_in(in_key + 20);
+ ctx->ekey[6] = ss[6] = u32_in(in_key + 24);
+ ctx->ekey[7] = ss[7] = u32_in(in_key + 28);
+ for (i = 0; i < 6; i++)
+ ke8(ctx->ekey, i);
+ kel8(ctx->ekey, 6);
+ ctx->rounds = 14;
+ break;
+
+ default:
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ /* decryption schedule */
+
+ ctx->dkey[0] = ss[0] = u32_in(in_key);
+ ctx->dkey[1] = ss[1] = u32_in(in_key + 4);
+ ctx->dkey[2] = ss[2] = u32_in(in_key + 8);
+ ctx->dkey[3] = ss[3] = u32_in(in_key + 12);
+
+ switch (key_len) {
+ case 16:
+ kdf4(ctx->dkey, 0);
+ for (i = 1; i < 9; i++)
+ kd4(ctx->dkey, i);
+ kdl4(ctx->dkey, 9);
+ break;
+
+ case 24:
+ ctx->dkey[4] = ff(ss[4] = u32_in(in_key + 16));
+ ctx->dkey[5] = ff(ss[5] = u32_in(in_key + 20));
+ kdf6(ctx->dkey, 0);
+ for (i = 1; i < 7; i++)
+ kd6(ctx->dkey, i);
+ kdl6(ctx->dkey, 7);
+ break;
+
+ case 32:
+ ctx->dkey[4] = ff(ss[4] = u32_in(in_key + 16));
+ ctx->dkey[5] = ff(ss[5] = u32_in(in_key + 20));
+ ctx->dkey[6] = ff(ss[6] = u32_in(in_key + 24));
+ ctx->dkey[7] = ff(ss[7] = u32_in(in_key + 28));
+ kdf8(ctx->dkey, 0);
+ for (i = 1; i < 6; i++)
+ kd8(ctx->dkey, i);
+ kdl8(ctx->dkey, 6);
+ break;
+ }
+ return 0;
+}
+
+static inline void aes_encrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ aes_enc_blk(src, dst, ctx);
+}
+static inline void aes_decrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ aes_dec_blk(src, dst, ctx);
+}
+
+
+static struct crypto_alg aes_alg = {
+ .cra_name = "aes",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct aes_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
+ .cra_u = {
+ .cipher = {
+ .cia_min_keysize = AES_MIN_KEY_SIZE,
+ .cia_max_keysize = AES_MAX_KEY_SIZE,
+ .cia_setkey = aes_set_key,
+ .cia_encrypt = aes_encrypt,
+ .cia_decrypt = aes_decrypt
+ }
+ }
+};
+
+static int __init aes_init(void)
+{
+ gen_tabs();
+ return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_fini(void)
+{
+ crypto_unregister_alg(&aes_alg);
+}
+
+module_init(aes_init);
+module_exit(aes_fini);
+
+MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, i586 asm optimized");
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Fruhwirth Clemens, James Morris, Brian Gladman, Adam Richter");
+MODULE_ALIAS("aes");
--- /dev/null
+#include <linux/bitops.h>
+#include <linux/module.h>
+
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+int find_next_bit(const unsigned long *addr, int size, int offset)
+{
+ const unsigned long *p = addr + (offset >> 5);
+ int set = 0, bit = offset & 31, res;
+
+ if (bit) {
+ /*
+ * Look for nonzero in the first 32 bits:
+ */
+ __asm__("bsfl %1,%0\n\t"
+ "jne 1f\n\t"
+ "movl $32, %0\n"
+ "1:"
+ : "=r" (set)
+ : "r" (*p >> bit));
+ if (set < (32 - bit))
+ return set + offset;
+ set = 32 - bit;
+ p++;
+ }
+ /*
+ * No set bit yet, search remaining full words for a bit
+ */
+ res = find_first_bit (p, size - 32 * (p - addr));
+ return (offset + set + res);
+}
+EXPORT_SYMBOL(find_next_bit);
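+
+/*
+ * Usage sketch (hypothetical caller): visit every set bit in a bitmap
+ * of NBITS bits.
+ *
+ *   for (i = find_next_bit(map, NBITS, 0); i < NBITS;
+ *        i = find_next_bit(map, NBITS, i + 1))
+ *           handle_bit(i);
+ */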
+
+/**
+ * find_next_zero_bit - find the next zero bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+int find_next_zero_bit(const unsigned long *addr, int size, int offset)
+{
+ unsigned long * p = ((unsigned long *) addr) + (offset >> 5);
+ int set = 0, bit = offset & 31, res;
+
+ if (bit) {
+ /*
+ * Look for zero in the first 32 bits.
+ */
+ __asm__("bsfl %1,%0\n\t"
+ "jne 1f\n\t"
+ "movl $32, %0\n"
+ "1:"
+ : "=r" (set)
+ : "r" (~(*p >> bit)));
+ if (set < (32 - bit))
+ return set + offset;
+ set = 32 - bit;
+ p++;
+ }
+ /*
+ * No zero bit yet, search the remaining full words for a zero
+ */
+ res = find_first_zero_bit (p, size - 32 * (p - (unsigned long *) addr));
+ return (offset + set + res);
+}
+EXPORT_SYMBOL(find_next_zero_bit);
--- /dev/null
+/*
+ * APIC driver for the Unisys ES7000 chipset.
+ */
+#define APIC_DEFINITION 1
+#include <linux/config.h>
+#include <linux/threads.h>
+#include <linux/cpumask.h>
+#include <asm/mpspec.h>
+#include <asm/genapic.h>
+#include <asm/fixmap.h>
+#include <asm/apicdef.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/smp.h>
+#include <linux/init.h>
+#include <asm/mach-es7000/mach_apicdef.h>
+#include <asm/mach-es7000/mach_apic.h>
+#include <asm/mach-es7000/mach_ipi.h>
+#include <asm/mach-es7000/mach_mpparse.h>
+#include <asm/mach-es7000/mach_wakecpu.h>
+
+static __init int probe_es7000(void)
+{
+ /* probed later in mptable/ACPI hooks */
+ return 0;
+}
+
+struct genapic apic_es7000 = APIC_INIT("es7000", probe_es7000);
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+# CONFIG_CLEAN_COMPILE is not set
+# CONFIG_STANDALONE is not set
+CONFIG_BROKEN=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=16
+# CONFIG_HOTPLUG is not set
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_OBSOLETE_MODPARM=y
+CONFIG_MODVERSIONS=y
+CONFIG_KMOD=y
+CONFIG_STOP_MACHINE=y
+
+#
+# Processor type and features
+#
+CONFIG_IA64=y
+CONFIG_64BIT=y
+CONFIG_MMU=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_TIME_INTERPOLATION=y
+CONFIG_EFI=y
+# CONFIG_IA64_GENERIC is not set
+# CONFIG_IA64_DIG is not set
+# CONFIG_IA64_HP_ZX1 is not set
+# CONFIG_IA64_SGI_SN2 is not set
+CONFIG_IA64_HP_SIM=y
+# CONFIG_ITANIUM is not set
+CONFIG_MCKINLEY=y
+# CONFIG_IA64_PAGE_SIZE_4KB is not set
+# CONFIG_IA64_PAGE_SIZE_8KB is not set
+# CONFIG_IA64_PAGE_SIZE_16KB is not set
+CONFIG_IA64_PAGE_SIZE_64KB=y
+CONFIG_IA64_L1_CACHE_SHIFT=7
+# CONFIG_MCKINLEY_ASTEP_SPECIFIC is not set
+# CONFIG_VIRTUAL_MEM_MAP is not set
+# CONFIG_IA64_CYCLONE is not set
+CONFIG_FORCE_MAX_ZONEORDER=18
+CONFIG_SMP=y
+CONFIG_NR_CPUS=64
+CONFIG_PREEMPT=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_IA32_SUPPORT=y
+CONFIG_COMPAT=y
+# CONFIG_PERFMON is not set
+CONFIG_IA64_PALINFO=m
+
+#
+# Firmware Drivers
+#
+CONFIG_EFI_VARS=y
+# CONFIG_SMBIOS is not set
+CONFIG_BINFMT_ELF=y
+CONFIG_BINFMT_MISC=y
+
+#
+# Power management and ACPI
+#
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+# CONFIG_DEBUG_DRIVER is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+# CONFIG_CHR_DEV_ST is not set
+# CONFIG_CHR_DEV_OSST is not set
+# CONFIG_BLK_DEV_SR is not set
+# CONFIG_CHR_DEV_SG is not set
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+
+#
+# SCSI Transport Attributes
+#
+CONFIG_SCSI_SPI_ATTRS=y
+# CONFIG_SCSI_FC_ATTRS is not set
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_EATA_PIO is not set
+# CONFIG_SCSI_DEBUG is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+# CONFIG_UNIX is not set
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_NETDEVICES is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+CONFIG_EFI_RTC=y
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+# CONFIG_TMPFS is not set
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+CONFIG_NFS_DIRECTIO=y
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+# CONFIG_NFSD_V4 is not set
+# CONFIG_NFSD_TCP is not set
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_EXPORTFS=y
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+CONFIG_EFI_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+
+#
+# HP Simulator drivers
+#
+CONFIG_HP_SIMETH=y
+CONFIG_HP_SIMSERIAL=y
+CONFIG_HP_SIMSERIAL_CONSOLE=y
+CONFIG_HP_SIMSCSI=y
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_IA64_GRANULE_16MB is not set
+CONFIG_IA64_GRANULE_64MB=y
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_IA64_PRINT_HAZARDS is not set
+# CONFIG_DISABLE_VHPT is not set
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+# CONFIG_IA64_DEBUG_CMPXCHG is not set
+# CONFIG_IA64_DEBUG_IRQ is not set
+CONFIG_DEBUG_INFO=y
+CONFIG_SYSVIPC_COMPAT=y
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
--- /dev/null
+/*
+ * arch/ia64/dig/topology.c
+ * Populate driverfs with topology information.
+ * Derived entirely from i386/mach-default.c
+ * Intel Corporation - Ashok Raj
+ */
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/notifier.h>
+#include <linux/cpu.h>
+#include <asm/cpu.h>
+
+static DEFINE_PER_CPU(struct ia64_cpu, cpu_devices);
+
+/*
+ * First Pass: simply borrowed code for now. Later should hook into
+ * hotplug notification for node/cpu/memory as applicable
+ */
+
+static int arch_register_cpu(int num)
+{
+ struct node *parent = NULL;
+
+#ifdef CONFIG_NUMA
+ //parent = &node_devices[cpu_to_node(num)].node;
+#endif
+
+ return register_cpu(&per_cpu(cpu_devices,num).cpu, num, parent);
+}
+
+static int __init topology_init(void)
+{
+ int i;
+
+ for_each_cpu(i) {
+ arch_register_cpu(i);
+ }
+ return 0;
+}
+
+subsys_initcall(topology_init);
--- /dev/null
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <asm/intrinsics.h>
+#include <linux/module.h>
+#include <asm/bitops.h>
+
+/*
+ * Find next zero bit in a bitmap reasonably efficiently..
+ */
+
+int __find_next_zero_bit(void *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
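+	/* Partial first word: force the bits below offset to 1 so they are skipped. */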
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (64-offset);
+ if (size < 64)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while (size & ~63UL) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* any bits zero? */
+ return result + size; /* nope */
+found_middle:
+ return result + ffz(tmp);
+}
+EXPORT_SYMBOL(__find_next_zero_bit);
+
+/*
+ * Find next bit in a bitmap reasonably efficiently..
+ */
+int __find_next_bit(const void *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = ((unsigned long *) addr) + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
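+	/* Partial first word: clear the bits below offset so they are ignored. */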
+ if (offset) {
+ tmp = *(p++);
+ tmp &= ~0UL << offset;
+ if (size < 64)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while (size & ~63UL) {
+ if ((tmp = *(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+found_first:
+ tmp &= ~0UL >> (64-size);
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __ffs(tmp);
+}
+EXPORT_SYMBOL(__find_next_bit);
--- /dev/null
+/*
+ * arch/mips/au1000/common/cputable.c
+ *
+ * Copyright (C) 2004 Dan Malek (dan@embeddededge.com)
+ * Copied from PowerPC and updated for Alchemy Au1xxx processors.
+ *
+ * Copyright (C) 2001 Ben. Herrenschmidt (benh@kernel.crashing.org)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/string.h>
+#include <linux/sched.h>
+#include <linux/threads.h>
+#include <linux/init.h>
+#include <asm/mach-au1x00/au1000.h>
+
+struct cpu_spec* cur_cpu_spec[NR_CPUS];
+
+/* With some thought, we can probably use the mask to reduce the
+ * size of the table.
+ */
+struct cpu_spec cpu_specs[] = {
+ { 0xffffffff, 0x00030100, "Au1000 DA", 1, 0 },
+ { 0xffffffff, 0x00030201, "Au1000 HA", 1, 0 },
+ { 0xffffffff, 0x00030202, "Au1000 HB", 1, 0 },
+ { 0xffffffff, 0x00030203, "Au1000 HC", 1, 1 },
+ { 0xffffffff, 0x00030204, "Au1000 HD", 1, 1 },
+ { 0xffffffff, 0x01030200, "Au1500 AB", 1, 1 },
+ { 0xffffffff, 0x01030201, "Au1500 AC", 0, 1 },
+ { 0xffffffff, 0x01030202, "Au1500 AD", 0, 1 },
+ { 0xffffffff, 0x02030200, "Au1100 AB", 1, 1 },
+ { 0xffffffff, 0x02030201, "Au1100 BA", 1, 1 },
+ { 0xffffffff, 0x02030202, "Au1100 BC", 1, 1 },
+ { 0xffffffff, 0x02030203, "Au1100 BD", 0, 1 },
+ { 0xffffffff, 0x02030204, "Au1100 BE", 0, 1 },
+ { 0xffffffff, 0x03030200, "Au1550 AA", 0, 1 },
+ { 0x00000000, 0x00000000, "Unknown Au1xxx", 1, 0 },
+};
+
+void
+set_cpuspec(void)
+{
+ struct cpu_spec *sp;
+ u32 prid;
+
+ prid = read_c0_prid();
+ sp = cpu_specs;
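+	/* The mask-0 catch-all entry at the end of the table guarantees a match. */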
+ while ((prid & sp->prid_mask) != sp->prid_value)
+ sp++;
+ cur_cpu_spec[0] = sp;
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_MIPS=y
+CONFIG_MIPS64=y
+CONFIG_64BIT=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# Machine selection
+#
+# CONFIG_MACH_JAZZ is not set
+# CONFIG_MACH_VR41XX is not set
+# CONFIG_MIPS_COBALT is not set
+# CONFIG_MACH_DECSTATION is not set
+# CONFIG_MIPS_EV64120 is not set
+# CONFIG_MIPS_EV96100 is not set
+# CONFIG_MIPS_IVR is not set
+# CONFIG_LASAT is not set
+# CONFIG_MIPS_ITE8172 is not set
+# CONFIG_MIPS_ATLAS is not set
+# CONFIG_MIPS_MALTA is not set
+# CONFIG_MIPS_SEAD is not set
+# CONFIG_MOMENCO_OCELOT is not set
+CONFIG_MOMENCO_OCELOT_G=y
+# CONFIG_MOMENCO_OCELOT_C is not set
+# CONFIG_MOMENCO_JAGUAR_ATX is not set
+# CONFIG_PMC_YOSEMITE is not set
+# CONFIG_DDB5074 is not set
+# CONFIG_DDB5476 is not set
+# CONFIG_DDB5477 is not set
+# CONFIG_NEC_OSPREY is not set
+# CONFIG_SGI_IP22 is not set
+# CONFIG_SGI_IP27 is not set
+# CONFIG_SGI_IP32 is not set
+# CONFIG_SIBYTE_SB1xxx_SOC is not set
+# CONFIG_SNI_RM200_PCI is not set
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_DMA_NONCOHERENT=y
+# CONFIG_CPU_LITTLE_ENDIAN is not set
+CONFIG_IRQ_CPU=y
+CONFIG_IRQ_CPU_RM7K=y
+CONFIG_PCI_MARVELL=y
+CONFIG_SWAP_IO_SPACE=y
+# CONFIG_SYSCLK_75 is not set
+# CONFIG_SYSCLK_83 is not set
+CONFIG_SYSCLK_100=y
+CONFIG_MIPS_L1_CACHE_SHIFT=5
+# CONFIG_FB is not set
+
+#
+# CPU selection
+#
+# CONFIG_CPU_MIPS32 is not set
+# CONFIG_CPU_MIPS64 is not set
+# CONFIG_CPU_R3000 is not set
+# CONFIG_CPU_TX39XX is not set
+# CONFIG_CPU_VR41XX is not set
+# CONFIG_CPU_R4300 is not set
+# CONFIG_CPU_R4X00 is not set
+# CONFIG_CPU_TX49XX is not set
+# CONFIG_CPU_R5000 is not set
+# CONFIG_CPU_R5432 is not set
+# CONFIG_CPU_R6000 is not set
+# CONFIG_CPU_NEVADA is not set
+# CONFIG_CPU_R8000 is not set
+# CONFIG_CPU_R10000 is not set
+CONFIG_CPU_RM7000=y
+# CONFIG_CPU_RM9000 is not set
+# CONFIG_CPU_SB1 is not set
+CONFIG_PAGE_SIZE_4KB=y
+# CONFIG_PAGE_SIZE_8KB is not set
+# CONFIG_PAGE_SIZE_16KB is not set
+# CONFIG_PAGE_SIZE_64KB is not set
+CONFIG_BOARD_SCACHE=y
+CONFIG_RM7000_CPU_SCACHE=y
+CONFIG_CPU_HAS_PREFETCH=y
+CONFIG_CPU_HAS_LLSC=y
+CONFIG_CPU_HAS_LLDSCD=y
+CONFIG_CPU_HAS_SYNC=y
+# CONFIG_PREEMPT is not set
+# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
+
+#
+# Bus options (PCI, PCMCIA, EISA, ISA, TC)
+#
+CONFIG_HW_HAS_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+CONFIG_MMU=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+CONFIG_MIPS32_COMPAT=y
+CONFIG_COMPAT=y
+CONFIG_MIPS32_O32=y
+CONFIG_MIPS32_N32=y
+CONFIG_BINFMT_ELF32=y
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+# CONFIG_BLK_DEV_RAM is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_PACKET is not set
+CONFIG_NETLINK_DEV=y
+CONFIG_UNIX=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_GALILEO_64240_ETH=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_PCI is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PCIPS2 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+CONFIG_DEVPTS_FS_XATTR=y
+CONFIG_DEVPTS_FS_SECURITY=y
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+CONFIG_NFSD=y
+# CONFIG_NFSD_V3 is not set
+# CONFIG_NFSD_TCP is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_EXPORTFS=y
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Kernel hacking
+#
+CONFIG_CROSSCOMPILE=y
+CONFIG_CMDLINE=""
+# CONFIG_DEBUG_KERNEL is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC16=y
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
--- /dev/null
+#include <linux/module.h>
+#include <linux/spinlock.h>
+
+static LIST_HEAD(dbe_list);
+static spinlock_t dbe_lock = SPIN_LOCK_UNLOCKED;
+
+/* Given an address, look for it in the module exception tables. */
+const struct exception_table_entry *search_module_dbetables(unsigned long addr)
+{
+ unsigned long flags;
+ const struct exception_table_entry *e = NULL;
+ struct mod_arch_specific *dbe;
+
+ spin_lock_irqsave(&dbe_lock, flags);
+ list_for_each_entry(dbe, &dbe_list, dbe_list) {
+ e = search_extable(dbe->dbe_start, dbe->dbe_end - 1, addr);
+ if (e)
+ break;
+ }
+ spin_unlock_irqrestore(&dbe_lock, flags);
+
+	/* If we found one, we are currently running inside that module,
+	   hence it cannot be unloaded and no refcnt is needed. */
+ return e;
+}
+
+/* Put in dbe list if necessary. */
+int module_finalize(const Elf_Ehdr *hdr,
+ const Elf_Shdr *sechdrs,
+ struct module *me)
+{
+ const Elf_Shdr *s;
+ char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ INIT_LIST_HEAD(&me->arch.dbe_list);
+ for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
+ if (strcmp("__dbe_table", secstrings + s->sh_name) != 0)
+ continue;
+ me->arch.dbe_start = (void *)s->sh_addr;
+ me->arch.dbe_end = (void *)s->sh_addr + s->sh_size;
+ spin_lock_irq(&dbe_lock);
+ list_add(&me->arch.dbe_list, &dbe_list);
+ spin_unlock_irq(&dbe_lock);
+ }
+ return 0;
+}
+
+void module_arch_cleanup(struct module *mod)
+{
+ spin_lock_irq(&dbe_lock);
+ list_del(&mod->arch.dbe_list);
+ spin_unlock_irq(&dbe_lock);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle ralf@gnu.org
+ * Carsten Langgaard, carstenl@mips.com
+ * Copyright (C) 2002 MIPS Technologies, Inc. All rights reserved.
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+
+#include <asm/cpu.h>
+#include <asm/bootinfo.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+
+extern void except_vec0_generic(void);
+extern void except_vec1_r8k(void);
+
+#define TFP_TLB_SIZE 384
+#define TFP_TLB_SET_SHIFT 7
+
+/* CP0 hazard avoidance. */
+#define BARRIER __asm__ __volatile__(".set noreorder\n\t" \
+ "nop; nop; nop; nop; nop; nop;\n\t" \
+ ".set reorder\n\t")
+
+void local_flush_tlb_all(void)
+{
+ unsigned long flags;
+ unsigned long old_ctx;
+ int entry;
+
+ local_irq_save(flags);
+ /* Save old context and create impossible VPN2 value */
+ old_ctx = read_c0_entryhi();
+ write_c0_entrylo(0);
+
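+	/* Point each entry at a distinct CKSEG0 address: an impossible VPN2. */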
+ for (entry = 0; entry < TFP_TLB_SIZE; entry++) {
+ write_c0_tlbset(entry >> TFP_TLB_SET_SHIFT);
+ write_c0_vaddr(entry << PAGE_SHIFT);
+ write_c0_entryhi(CKSEG0 + (entry << (PAGE_SHIFT + 1)));
+ mtc0_tlbw_hazard();
+ tlb_write();
+ }
+ tlbw_use_hazard();
+ write_c0_entryhi(old_ctx);
+ local_irq_restore(flags);
+}
+
+void local_flush_tlb_mm(struct mm_struct *mm)
+{
+ int cpu = smp_processor_id();
+
+ if (cpu_context(cpu, mm) != 0)
+ drop_mmu_context(mm,cpu);
+}
+
+void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ int cpu = smp_processor_id();
+ unsigned long flags;
+ int oldpid, newpid, size;
+
+ if (!cpu_context(cpu, mm))
+ return;
+
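+	/* Count the pages, then halve: each TLB entry maps an even/odd page pair. */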
+ size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ size = (size + 1) >> 1;
+
+ local_irq_save(flags);
+
+ if (size > TFP_TLB_SIZE / 2) {
+ drop_mmu_context(mm, cpu);
+ goto out_restore;
+ }
+
+ oldpid = read_c0_entryhi();
+ newpid = cpu_asid(cpu, mm);
+
+ write_c0_entrylo(0);
+
+ start &= PAGE_MASK;
+ end += (PAGE_SIZE - 1);
+ end &= PAGE_MASK;
+ while (start < end) {
+ signed long idx;
+
+ write_c0_vaddr(start);
+ write_c0_entryhi(start);
+ start += PAGE_SIZE;
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ continue;
+
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+ }
+ write_c0_entryhi(oldpid);
+
+out_restore:
+ local_irq_restore(flags);
+}
+
+/* Usable for KV1 addresses only! */
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+ unsigned long flags;
+ int size;
+
+ size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ size = (size + 1) >> 1;
+
+ if (size > TFP_TLB_SIZE / 2) {
+ local_flush_tlb_all();
+ return;
+ }
+
+ local_irq_save(flags);
+
+ write_c0_entrylo(0);
+
+ start &= PAGE_MASK;
+ end += (PAGE_SIZE - 1);
+ end &= PAGE_MASK;
+ while (start < end) {
+ signed long idx;
+
+ write_c0_vaddr(start);
+ write_c0_entryhi(start);
+ start += PAGE_SIZE;
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ continue;
+
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+ }
+
+ local_irq_restore(flags);
+}
+
+void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ int cpu = smp_processor_id();
+ unsigned long flags;
+ int oldpid, newpid;
+ signed long idx;
+
+ if (!cpu_context(cpu, vma->vm_mm))
+ return;
+
+ newpid = cpu_asid(cpu, vma->vm_mm);
+ page &= PAGE_MASK;
+ local_irq_save(flags);
+ oldpid = read_c0_entryhi();
+ write_c0_vaddr(page);
+ write_c0_entryhi(newpid);
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ goto finish;
+
+ write_c0_entrylo(0);
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+
+finish:
+ write_c0_entryhi(oldpid);
+ local_irq_restore(flags);
+}
+
+/*
+ * We will need multiple versions of update_mmu_cache(), one that just
+ * updates the TLB with the new pte(s), and another which also checks
+ * for the R4k "end of page" hardware bug and does what is needed.
+ */
+void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
+{
+ unsigned long flags;
+ pgd_t *pgdp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+ int pid;
+
+ /*
+	 * Handle the debugger faulting in pages for the debuggee.
+ */
+ if (current->active_mm != vma->vm_mm)
+ return;
+
+ pid = read_c0_entryhi() & ASID_MASK;
+
+ local_irq_save(flags);
+ address &= PAGE_MASK;
+ write_c0_vaddr(address);
+ write_c0_entryhi(pid);
+ pgdp = pgd_offset(vma->vm_mm, address);
+ pmdp = pmd_offset(pgdp, address);
+ ptep = pte_offset_map(pmdp, address);
+ tlb_probe();
+
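+	/* The >> 6 converts the pte to EntryLo format, as in the TLB refill handlers. */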
+ write_c0_entrylo(pte_val(*ptep++) >> 6);
+ tlb_write();
+
+ write_c0_entryhi(pid);
+ local_irq_restore(flags);
+}
+
+static void __init probe_tlb(unsigned long config)
+{
+	struct cpuinfo_mips *c = &current_cpu_data;
+
+ c->tlbsize = 3 * 128; /* 3 sets each 128 entries */
+}
+
+void __init tlb_init(void)
+{
+ unsigned int config = read_c0_config();
+ unsigned long status;
+
+ probe_tlb(config);
+
+ status = read_c0_status();
+ status &= ~(ST0_UPS | ST0_KPS);
+#ifdef CONFIG_PAGE_SIZE_4KB
+ status |= (TFP_PAGESIZE_4K << 32) | (TFP_PAGESIZE_4K << 36);
+#elif defined(CONFIG_PAGE_SIZE_8KB)
+ status |= (TFP_PAGESIZE_8K << 32) | (TFP_PAGESIZE_8K << 36);
+#elif defined(CONFIG_PAGE_SIZE_16KB)
+ status |= (TFP_PAGESIZE_16K << 32) | (TFP_PAGESIZE_16K << 36);
+#elif defined(CONFIG_PAGE_SIZE_64KB)
+ status |= (TFP_PAGESIZE_64K << 32) | (TFP_PAGESIZE_64K << 36);
+#endif
+ write_c0_status(status);
+
+ write_c0_wired(0);
+
+ local_flush_tlb_all();
+
+ memcpy((void *)(CKSEG0 + 0x00), &except_vec0_generic, 0x80);
+ memcpy((void *)(CKSEG0 + 0x80), except_vec1_r8k, 0x80);
+ flush_icache_range(CKSEG0 + 0x80, CKSEG0 + 0x100);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1999 Ralf Baechle
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ */
+#include <linux/init.h>
+#include <asm/mipsregs.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+
+ .macro __BUILD_cli
+ CLI
+ .endm
+
+ .macro __BUILD_sti
+ STI
+ .endm
+
+ .macro __BUILD_kmode
+ KMODE
+ .endm
+
+ .macro tlb_handler name interruptible writebit
+ NESTED(__\name, PT_SIZE, sp)
+ SAVE_ALL
+ dmfc0 a2, CP0_BADVADDR
+ __BUILD_\interruptible
+ li a1, \writebit
+ sd a2, PT_BVADDR(sp)
+ move a0, sp
+ jal do_page_fault
+ j ret_from_exception
+ END(__\name)
+ .endm
+
+ tlb_handler xtlb_mod kmode 1
+ tlb_handler xtlb_tlbl kmode 0
+ tlb_handler xtlb_tlbs kmode 1
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1999 Ralf Baechle
+ * Copyright (C) 1999 Silicon Graphics, Inc.
+ */
+#include <linux/init.h>
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+#include <asm/war.h>
+
+ .macro __BUILD_cli
+ CLI
+ .endm
+
+ .macro __BUILD_sti
+ STI
+ .endm
+
+ .macro __BUILD_kmode
+ KMODE
+ .endm
+
+ .macro tlb_handler name interruptible writebit
+ NESTED(__\name, PT_SIZE, sp)
+ SAVE_ALL
+ dmfc0 a2, CP0_BADVADDR
+ __BUILD_\interruptible
+ li a1, \writebit
+ sd a2, PT_BVADDR(sp)
+ move a0, sp
+ jal do_page_fault
+ j ret_from_exception
+ END(__\name)
+ .endm
+
+ .macro tlb_handler_m3 name interruptible writebit
+ NESTED(__\name, PT_SIZE, sp)
+ dmfc0 k0, CP0_BADVADDR
+ dmfc0 k1, CP0_ENTRYHI
+ xor k0, k1
+ dsrl k0, k0, PAGE_SHIFT + 1
+ bnez k0, 1f
+ SAVE_ALL
+ dmfc0 a2, CP0_BADVADDR
+ __BUILD_\interruptible
+ li a1, \writebit
+ sd a2, PT_BVADDR(sp)
+ move a0, sp
+ jal do_page_fault
+1:
+ j ret_from_exception
+ END(__\name)
+ .endm
+
+ tlb_handler xtlb_mod kmode 1
+#if BCM1250_M3_WAR
+ tlb_handler_m3 xtlb_tlbl kmode 0
+#else
+ tlb_handler xtlb_tlbl kmode 0
+#endif
+ tlb_handler xtlb_tlbs kmode 1
--- /dev/null
+/*
+ * TLB exception handling code for R2000/R3000.
+ *
+ * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse
+ *
+ * Multi-CPU abstraction reworking:
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ *
+ * Further modifications to make this work:
+ * Copyright (c) 1998 Harald Koerfgen
+ * Copyright (c) 1998, 1999 Gleb Raiko & Vladimir Roganov
+ * Copyright (c) 2001 Ralf Baechle
+ * Copyright (c) 2001 MIPS Technologies, Inc.
+ */
+#include <linux/init.h>
+#include <asm/asm.h>
+#include <asm/cachectl.h>
+#include <asm/fpregdef.h>
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/pgtable-bits.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+
+#define TLB_OPTIMIZE /* If you are paranoid, disable this. */
+
+ .text
+ .set mips1
+ .set noreorder
+
+ __INIT
+
+ /* TLB refill, R[23]00 version */
+ LEAF(except_vec0_r2300)
+ .set noat
+ .set mips1
+ mfc0 k0, CP0_BADVADDR
+ lw k1, pgd_current # get pgd pointer
+ srl k0, k0, 22
+ sll k0, k0, 2
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+ and k0, k0, 0xffc
+ addu k1, k1, k0
+ lw k0, (k1)
+ nop
+ mtc0 k0, CP0_ENTRYLO0
+ mfc0 k1, CP0_EPC
+ tlbwr
+ jr k1
+ rfe
+ END(except_vec0_r2300)
+
+ __FINIT
+
+ /* ABUSE of CPP macros 101. */
+
+ /* After this macro runs, the pte faulted on is
+ * in register PTE, a ptr into the table in which
+ * the pte belongs is in PTR.
+ */
+#define LOAD_PTE(pte, ptr) \
+ mfc0 pte, CP0_BADVADDR; \
+ lw ptr, pgd_current; \
+ srl pte, pte, 22; \
+ sll pte, pte, 2; \
+ addu ptr, ptr, pte; \
+ mfc0 pte, CP0_CONTEXT; \
+ lw ptr, (ptr); \
+ andi pte, pte, 0xffc; \
+ addu ptr, ptr, pte; \
+ lw pte, (ptr); \
+ nop;
+
+	/* This reloads the pte at PTR into ENTRYLO0.  The
+	 * R[23]00 TLB has only a single ENTRYLO register,
+	 * so there is no odd pte to load.
+ */
+#define PTE_RELOAD(ptr) \
+ lw ptr, (ptr) ; \
+ nop ; \
+ mtc0 ptr, CP0_ENTRYLO0; \
+ nop;
+
+#define DO_FAULT(write) \
+ .set noat; \
+ .set macro; \
+ SAVE_ALL; \
+ mfc0 a2, CP0_BADVADDR; \
+ KMODE; \
+ .set at; \
+ move a0, sp; \
+ jal do_page_fault; \
+ li a1, write; \
+ j ret_from_exception; \
+ nop; \
+ .set noat; \
+ .set nomacro;
+
+	/* Check if the PTE is present; if not, jump to LABEL.
+	 * PTR points to the page table where this PTE is located;
+	 * when the macro is done executing, PTE will be restored
+	 * with its original value.
+ */
+#define PTE_PRESENT(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ bnez pte, label; \
+ .set push; \
+ .set reorder; \
+ lw pte, (ptr); \
+ .set pop;
+
+ /* Make PTE valid, store result in PTR. */
+#define PTE_MAKEVALID(pte, ptr) \
+ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \
+ sw pte, (ptr);
+
+ /* Check if PTE can be written to, if not branch to LABEL.
+	 * In either case, restore PTE with the value from PTR when done.
+ */
+#define PTE_WRITABLE(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ bnez pte, label; \
+ .set push; \
+ .set reorder; \
+ lw pte, (ptr); \
+ .set pop;
+
+
+ /* Make PTE writable, update software status bits as well,
+ * then store at PTR.
+ */
+#define PTE_MAKEWRITE(pte, ptr) \
+ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \
+ _PAGE_VALID | _PAGE_DIRTY); \
+ sw pte, (ptr);
+
+/*
+ * The index register may have the probe fail bit set,
+ * because we would trap on access to kseg2, i.e. without a refill.
+ */
+#define TLB_WRITE(reg) \
+ mfc0 reg, CP0_INDEX; \
+ nop; \
+ bltz reg, 1f; \
+ nop; \
+ tlbwi; \
+ j 2f; \
+ nop; \
+1: tlbwr; \
+2:
+
+#define RET(reg) \
+ mfc0 reg, CP0_EPC; \
+ nop; \
+ jr reg; \
+ rfe
+
+ .set noreorder
+
+ .align 5
+NESTED(handle_tlbl, PT_SIZE, sp)
+ .set noat
+
+#ifdef TLB_OPTIMIZE
+ /* Test present bit in entry. */
+ LOAD_PTE(k0, k1)
+ tlbp
+ PTE_PRESENT(k0, k1, nopage_tlbl)
+ PTE_MAKEVALID(k0, k1)
+ PTE_RELOAD(k1)
+ TLB_WRITE(k0)
+ RET(k0)
+nopage_tlbl:
+#endif
+
+ DO_FAULT(0)
+END(handle_tlbl)
+
+NESTED(handle_tlbs, PT_SIZE, sp)
+ .set noat
+
+#ifdef TLB_OPTIMIZE
+ LOAD_PTE(k0, k1)
+ tlbp # find faulting entry
+ PTE_WRITABLE(k0, k1, nopage_tlbs)
+ PTE_MAKEWRITE(k0, k1)
+ PTE_RELOAD(k1)
+ TLB_WRITE(k0)
+ RET(k0)
+nopage_tlbs:
+#endif
+
+ DO_FAULT(1)
+END(handle_tlbs)
+
+ .align 5
+NESTED(handle_mod, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ LOAD_PTE(k0, k1)
+ tlbp # find faulting entry
+ andi k0, k0, _PAGE_WRITE
+ beqz k0, nowrite_mod
+ .set push
+ .set reorder
+ lw k0, (k1)
+ .set pop
+
+ /* Present and writable bits set, set accessed and dirty bits. */
+ PTE_MAKEWRITE(k0, k1)
+
+ /* Now reload the entry into the tlb. */
+ PTE_RELOAD(k1)
+ tlbwi
+ RET(k0)
+#endif
+
+nowrite_mod:
+ DO_FAULT(1)
+END(handle_mod)
--- /dev/null
+/*
+ * TLB exception handling code for r4k.
+ *
+ * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse
+ *
+ * Multi-cpu abstraction and reworking:
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ *
+ * Carsten Langgaard, carstenl@mips.com
+ * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved.
+ */
+#include <linux/init.h>
+#include <linux/config.h>
+
+#include <asm/asm.h>
+#include <asm/offset.h>
+#include <asm/cachectl.h>
+#include <asm/fpregdef.h>
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/pgtable-bits.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+#include <asm/war.h>
+
+#define TLB_OPTIMIZE /* If you are paranoid, disable this. */
+
+#ifdef CONFIG_64BIT_PHYS_ADDR
+#define PTE_L ld
+#define PTE_S sd
+#define PTE_SRL dsrl
+#define P_MTC0 dmtc0
+#define PTE_SIZE 8
+#define PTEP_INDX_MSK 0xff0
+#define PTE_INDX_MSK 0xff8
+#define PTE_INDX_SHIFT 9
+#else
+#define PTE_L lw
+#define PTE_S sw
+#define PTE_SRL srl
+#define P_MTC0 mtc0
+#define PTE_SIZE 4
+#define PTEP_INDX_MSK 0xff8
+#define PTE_INDX_MSK 0xffc
+#define PTE_INDX_SHIFT 10
+#endif
+
+/*
+ * ABUSE of CPP macros 101.
+ *
+ * After this macro runs, the pte faulted on is
+ * in register PTE, a ptr into the table in which
+ * the pte belongs is in PTR.
+ */
+
+#ifdef CONFIG_SMP
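+/*
+ * On SMP the CPU number is assumed to be kept in the upper bits of
+ * CP0_CONTEXT, so that (context >> 23) << 2 is the byte offset of
+ * this CPU's slot in pgd_current[].
+ */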
+#define GET_PGD(scratch, ptr) \
+ mfc0 ptr, CP0_CONTEXT; \
+ la scratch, pgd_current;\
+ srl ptr, 23; \
+ sll ptr, 2; \
+ addu ptr, scratch, ptr; \
+ lw ptr, (ptr);
+#else
+#define GET_PGD(scratch, ptr) \
+ lw ptr, pgd_current;
+#endif
+
+#define LOAD_PTE(pte, ptr) \
+ GET_PGD(pte, ptr) \
+ mfc0 pte, CP0_BADVADDR; \
+ srl pte, pte, _PGDIR_SHIFT; \
+ sll pte, pte, 2; \
+ addu ptr, ptr, pte; \
+ mfc0 pte, CP0_BADVADDR; \
+ lw ptr, (ptr); \
+ srl pte, pte, PTE_INDX_SHIFT; \
+ and pte, pte, PTE_INDX_MSK; \
+ addu ptr, ptr, pte; \
+ PTE_L pte, (ptr);
+
+ /* This places the even/odd pte pair in the page
+ * table at PTR into ENTRYLO0 and ENTRYLO1 using
+ * TMP as a scratch register.
+ */
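+	/* The ori/xori pair rounds PTR down to the even pte of the pair. */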
+#define PTE_RELOAD(ptr, tmp) \
+ ori ptr, ptr, PTE_SIZE; \
+ xori ptr, ptr, PTE_SIZE; \
+ PTE_L tmp, PTE_SIZE(ptr); \
+ PTE_L ptr, 0(ptr); \
+ PTE_SRL tmp, tmp, 6; \
+ P_MTC0 tmp, CP0_ENTRYLO1; \
+ PTE_SRL ptr, ptr, 6; \
+ P_MTC0 ptr, CP0_ENTRYLO0;
+
+#define DO_FAULT(write) \
+ .set noat; \
+ SAVE_ALL; \
+ mfc0 a2, CP0_BADVADDR; \
+ KMODE; \
+ .set at; \
+ move a0, sp; \
+ jal do_page_fault; \
+ li a1, write; \
+ j ret_from_exception; \
+ nop; \
+ .set noat;
+
+	/* Check if the PTE is present; if not, jump to LABEL.
+	 * PTR points to the page table where this PTE is located;
+	 * when the macro is done executing, PTE will be restored
+	 * with its original value.
+ */
+#define PTE_PRESENT(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ bnez pte, label; \
+ PTE_L pte, (ptr);
+
+ /* Make PTE valid, store result in PTR. */
+#define PTE_MAKEVALID(pte, ptr) \
+ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \
+ PTE_S pte, (ptr);
+
+ /* Check if PTE can be written to, if not branch to LABEL.
+	 * In either case, restore PTE with the value from PTR when done.
+ */
+#define PTE_WRITABLE(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ bnez pte, label; \
+ PTE_L pte, (ptr);
+
+ /* Make PTE writable, update software status bits as well,
+ * then store at PTR.
+ */
+#define PTE_MAKEWRITE(pte, ptr) \
+ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \
+ _PAGE_VALID | _PAGE_DIRTY); \
+ PTE_S pte, (ptr);
+
+ __INIT
+
+#ifdef CONFIG_64BIT_PHYS_ADDR
+#define GET_PTE_OFF(reg)
+#elif defined(CONFIG_CPU_VR41XX)
+#define GET_PTE_OFF(reg) srl reg, reg, 3
+#else
+#define GET_PTE_OFF(reg) srl reg, reg, 1
+#endif
+
+/*
+ * These handlers must be written in a relocatable manner
+ * because, based upon the cpu type, an arbitrary one of the
+ * following pieces of code will be copied to the KSEG0
+ * vector location.
+ */
+ /* TLB refill, EXL == 0, R4xx0, non-R4600 version */
+ .set noreorder
+ .set noat
+ LEAF(except_vec0_r4000)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR # Get faulting address
+ srl k0, k0, _PGDIR_SHIFT # get pgd only bits
+
+ sll k0, k0, 2
+ addu k1, k1, k0 # add in pgd offset
+ mfc0 k0, CP0_CONTEXT # get context reg
+ lw k1, (k1)
+ GET_PTE_OFF(k0) # get pte offset
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0 # add in offset
+ PTE_L k0, 0(k1) # get even pte
+ PTE_L k1, PTE_SIZE(k1) # get odd pte
+ PTE_SRL k0, k0, 6 # convert to entrylo0
+ P_MTC0 k0, CP0_ENTRYLO0 # load it
+ PTE_SRL k1, k1, 6 # convert to entrylo1
+ P_MTC0 k1, CP0_ENTRYLO1 # load it
+ mtc0_tlbw_hazard
+ tlbwr # write random tlb entry
+ tlbw_eret_hazard
+ eret # return from trap
+ END(except_vec0_r4000)
+
+ /* TLB refill, EXL == 0, R4600 version */
+ LEAF(except_vec0_r4600)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR
+ srl k0, k0, _PGDIR_SHIFT
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+ GET_PTE_OFF(k0) # get pte offset
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0
+ PTE_L k0, 0(k1)
+ PTE_L k1, PTE_SIZE(k1)
+ PTE_SRL k0, k0, 6
+ P_MTC0 k0, CP0_ENTRYLO0
+ PTE_SRL k1, k1, 6
+ P_MTC0 k1, CP0_ENTRYLO1
+ nop
+ tlbwr
+ nop
+ eret
+ END(except_vec0_r4600)
+
+ /* TLB refill, EXL == 0, R52x0 "Nevada" version */
+ /*
+	 * This version has a bug workaround for the Nevada.  It seems
+	 * as if under certain circumstances the move from cp0_context
+	 * might produce a bogus result when the mfc0 instruction and
+	 * its consumer are in different cachelines, or when a load
+	 * instruction, probably any memory reference, is between them.
+	 * This is potentially slower than the R4000 version, so we use
+	 * this special version.
+ */
+ .set noreorder
+ .set noat
+ LEAF(except_vec0_nevada)
+ .set mips3
+ mfc0 k0, CP0_BADVADDR # Get faulting address
+ srl k0, k0, _PGDIR_SHIFT # get pgd only bits
+ lw k1, pgd_current # get pgd pointer
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0 # add in pgd offset
+ lw k1, (k1)
+ mfc0 k0, CP0_CONTEXT # get context reg
+ GET_PTE_OFF(k0) # get pte offset
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0 # add in offset
+ PTE_L k0, 0(k1) # get even pte
+ PTE_L k1, PTE_SIZE(k1) # get odd pte
+ PTE_SRL k0, k0, 6 # convert to entrylo0
+ P_MTC0 k0, CP0_ENTRYLO0 # load it
+ PTE_SRL k1, k1, 6 # convert to entrylo1
+ P_MTC0 k1, CP0_ENTRYLO1 # load it
+ nop # QED specified nops
+ nop
+ tlbwr # write random tlb entry
+ nop # traditional nop
+ eret # return from trap
+ END(except_vec0_nevada)
+
+ /* TLB refill, EXL == 0, SB1 with M3 errata handling version */
+ LEAF(except_vec0_sb1)
+#if BCM1250_M3_WAR
+ mfc0 k0, CP0_BADVADDR
+ mfc0 k1, CP0_ENTRYHI
+ xor k0, k1
+ srl k0, k0, PAGE_SHIFT+1
+ bnez k0, 1f
+#endif
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR # Get faulting address
+ srl k0, k0, _PGDIR_SHIFT # get pgd only bits
+ sll k0, k0, 2
+ addu k1, k1, k0 # add in pgd offset
+ mfc0 k0, CP0_CONTEXT # get context reg
+ lw k1, (k1)
+ GET_PTE_OFF(k0) # get pte offset
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0 # add in offset
+ PTE_L k0, 0(k1) # get even pte
+ PTE_L k1, PTE_SIZE(k1) # get odd pte
+ PTE_SRL k0, k0, 6 # convert to entrylo0
+ P_MTC0 k0, CP0_ENTRYLO0 # load it
+ PTE_SRL k1, k1, 6 # convert to entrylo1
+ P_MTC0 k1, CP0_ENTRYLO1 # load it
+ tlbwr # write random tlb entry
+1: eret # return from trap
+ END(except_vec0_sb1)
+
+ /* TLB refill, EXL == 0, R4[40]00/R5000 badvaddr hwbug version */
+ LEAF(except_vec0_r45k_bvahwbug)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR
+ srl k0, k0, _PGDIR_SHIFT
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+#ifndef CONFIG_64BIT_PHYS_ADDR
+ srl k0, k0, 1
+#endif
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0
+ PTE_L k0, 0(k1)
+ PTE_L k1, PTE_SIZE(k1)
+ nop /* XXX */
+ tlbp
+ PTE_SRL k0, k0, 6
+ P_MTC0 k0, CP0_ENTRYLO0
+ PTE_SRL k1, k1, 6
+ mfc0 k0, CP0_INDEX
+ P_MTC0 k1, CP0_ENTRYLO1
+ bltzl k0, 1f
+ tlbwr
+1:
+ nop
+ eret
+ END(except_vec0_r45k_bvahwbug)
+
+#ifdef CONFIG_SMP
+ /* TLB refill, EXL == 0, R4000 MP badvaddr hwbug version */
+ LEAF(except_vec0_r4k_mphwbug)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR
+ srl k0, k0, _PGDIR_SHIFT
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+#ifndef CONFIG_64BIT_PHYS_ADDR
+ srl k0, k0, 1
+#endif
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0
+ PTE_L k0, 0(k1)
+ PTE_L k1, PTE_SIZE(k1)
+ nop /* XXX */
+ tlbp
+ PTE_SRL k0, k0, 6
+ P_MTC0 k0, CP0_ENTRYLO0
+ PTE_SRL k1, k1, 6
+ mfc0 k0, CP0_INDEX
+ P_MTC0 k1, CP0_ENTRYLO1
+ bltzl k0, 1f
+ tlbwr
+1:
+ nop
+ eret
+ END(except_vec0_r4k_mphwbug)
+#endif
+
+ /* TLB refill, EXL == 0, R4000 UP 250MHZ entrylo[01] hwbug version */
+ LEAF(except_vec0_r4k_250MHZhwbug)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR
+ srl k0, k0, _PGDIR_SHIFT
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+#ifndef CONFIG_64BIT_PHYS_ADDR
+ srl k0, k0, 1
+#endif
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0
+ PTE_L k0, 0(k1)
+ PTE_L k1, PTE_SIZE(k1)
+ PTE_SRL k0, k0, 6
+ P_MTC0 zero, CP0_ENTRYLO0
+ P_MTC0 k0, CP0_ENTRYLO0
+ PTE_SRL k1, k1, 6
+ P_MTC0 zero, CP0_ENTRYLO1
+ P_MTC0 k1, CP0_ENTRYLO1
+ b 1f
+ tlbwr
+1:
+ nop
+ eret
+ END(except_vec0_r4k_250MHZhwbug)
+
+#ifdef CONFIG_SMP
+ /* TLB refill, EXL == 0, R4000 MP 250MHZ entrylo[01]+badvaddr bug version */
+ LEAF(except_vec0_r4k_MP250MHZhwbug)
+ .set mips3
+ GET_PGD(k0, k1) # get pgd pointer
+ mfc0 k0, CP0_BADVADDR
+ srl k0, k0, _PGDIR_SHIFT
+	sll	k0, k0, 2		# log2(sizeof(pgd_t))
+ addu k1, k1, k0
+ mfc0 k0, CP0_CONTEXT
+ lw k1, (k1)
+#ifndef CONFIG_64BIT_PHYS_ADDR
+ srl k0, k0, 1
+#endif
+ and k0, k0, PTEP_INDX_MSK
+ addu k1, k1, k0
+ PTE_L k0, 0(k1)
+ PTE_L k1, PTE_SIZE(k1)
+ nop /* XXX */
+ tlbp
+ PTE_SRL k0, k0, 6
+ P_MTC0 zero, CP0_ENTRYLO0
+ P_MTC0 k0, CP0_ENTRYLO0
+ mfc0 k0, CP0_INDEX
+ PTE_SRL k1, k1, 6
+ P_MTC0 zero, CP0_ENTRYLO1
+ P_MTC0 k1, CP0_ENTRYLO1
+ bltzl k0, 1f
+ tlbwr
+1:
+ nop
+ eret
+ END(except_vec0_r4k_MP250MHZhwbug)
+#endif
+
+ __FINIT
+
+ .set noreorder
+
+/*
+ * From the IDT errata for the QED RM5230 (Nevada), processor revision 1.0:
+ * 2. A timing hazard exists for the TLBP instruction.
+ *
+ * stalling_instruction
+ * TLBP
+ *
+ * The JTLB is being read for the TLBP throughout the stall generated by the
+ * previous instruction. This is not really correct as the stalling instruction
+ * can modify the address used to access the JTLB. The failure symptom is that
+ * the TLBP instruction will use an address created for the stalling instruction
+ * and not the address held in C0_ENHI and thus report the wrong results.
+ *
+ * The software work-around is to not allow the instruction preceding the TLBP
+ * to stall - make it an NOP or some other instruction guaranteed not to stall.
+ *
+ * Errata 2 will not be fixed. This errata is also on the R5000.
+ *
+ * As if we MIPS hackers wouldn't know how to nop pipelines happy ...
+ */
+#define R5K_HAZARD nop
+
+ /*
+	 * Note: on many R4k variants, TLB probes cannot be executed out
+	 * of the instruction cache, else you get bogus results.
+ */
+ .align 5
+ NESTED(handle_tlbl, PT_SIZE, sp)
+ .set noat
+#if BCM1250_M3_WAR
+ mfc0 k0, CP0_BADVADDR
+ mfc0 k1, CP0_ENTRYHI
+ xor k0, k1
+ srl k0, k0, PAGE_SHIFT+1
+ beqz k0, 1f
+ nop
+ .set mips3
+ eret
+ .set mips0
+1:
+#endif
+invalid_tlbl:
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ /* Test present bit in entry. */
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp
+ PTE_PRESENT(k0, k1, nopage_tlbl)
+ PTE_MAKEVALID(k0, k1)
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nopage_tlbl:
+ DO_FAULT(0)
+ END(handle_tlbl)
+
+ .align 5
+ NESTED(handle_tlbs, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ li k0,0
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp # find faulting entry
+ PTE_WRITABLE(k0, k1, nopage_tlbs)
+ PTE_MAKEWRITE(k0, k1)
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nopage_tlbs:
+ DO_FAULT(1)
+ END(handle_tlbs)
+
+ .align 5
+ NESTED(handle_mod, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp # find faulting entry
+ andi k0, k0, _PAGE_WRITE
+ beqz k0, nowrite_mod
+ PTE_L k0, (k1)
+
+ /* Present and writable bits set, set accessed and dirty bits. */
+ PTE_MAKEWRITE(k0, k1)
+
+ /* Now reload the entry into the tlb. */
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nowrite_mod:
+ DO_FAULT(1)
+ END(handle_mod)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000 Silicon Graphics, Inc.
+ * Written by Ulf Carlsson (ulfc@engr.sgi.com)
+ * Copyright (C) 2002 Maciej W. Rozycki
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/threads.h>
+
+#include <asm/asm.h>
+#include <asm/hazards.h>
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+#include <asm/stackframe.h>
+#include <asm/war.h>
+
+#define _VMALLOC_START 0xc000000000000000
+
+ /*
+ * After this macro runs we have a pointer to the pte of the address
+ * that caused the fault in PTR.
+ */
+ .macro LOAD_PTE2, ptr, tmp, kaddr
+#ifdef CONFIG_SMP
+ dmfc0 \ptr, CP0_CONTEXT
+ dmfc0 \tmp, CP0_BADVADDR
+ dsra \ptr, 23 # get pgd_current[cpu]
+#else
+ dmfc0 \tmp, CP0_BADVADDR
+ dla \ptr, pgd_current
+#endif
+ bltz \tmp, \kaddr
+ ld \ptr, (\ptr)
+ dsrl \tmp, (_PGDIR_SHIFT-3) # get pgd offset in bytes
+ andi \tmp, ((_PTRS_PER_PGD - 1)<<3)
+ daddu \ptr, \tmp # add in pgd offset
+ dmfc0 \tmp, CP0_BADVADDR
+ ld \ptr, (\ptr) # get pmd pointer
+ dsrl \tmp, (_PMD_SHIFT-3) # get pmd offset in bytes
+ andi \tmp, ((_PTRS_PER_PMD - 1)<<3)
+ daddu \ptr, \tmp # add in pmd offset
+ dmfc0 \tmp, CP0_XCONTEXT
+ ld \ptr, (\ptr) # get pte pointer
+ andi \tmp, 0xff0 # get pte offset
+ daddu \ptr, \tmp
+ .endm
+
+
+ /*
+ * Ditto for the kernel table.
+ */
+ .macro LOAD_KPTE2, ptr, tmp, not_vmalloc
+ /*
+ * First, determine that the address is in/above vmalloc range.
+ */
+ dmfc0 \tmp, CP0_BADVADDR
+ dli \ptr, _VMALLOC_START
+
+ /*
+ * Now find offset into kptbl.
+ */
+ dsubu \tmp, \tmp, \ptr
+ dla \ptr, kptbl
+ dsrl \tmp, (_PAGE_SHIFT+1) # get vpn2
+ dsll \tmp, 4 # byte offset of pte
+ daddu \ptr, \ptr, \tmp
+
+ /*
+ * Determine that fault address is within vmalloc range.
+ */
+ dla \tmp, ekptbl
+ slt \tmp, \ptr, \tmp
+ beqz \tmp, \not_vmalloc # not vmalloc
+ nop
+ .endm
+
+
+ /*
+ * This places the even/odd pte pair in the page table at the pte
+ * entry pointed to by PTE into ENTRYLO0 and ENTRYLO1.
+ */
+ .macro PTE_RELOAD, pte0, pte1
+ dsrl \pte0, 6 # convert to entrylo0
+ dmtc0 \pte0, CP0_ENTRYLO0 # load it
+ dsrl \pte1, 6 # convert to entrylo1
+ dmtc0 \pte1, CP0_ENTRYLO1 # load it
+ .endm
+
+
+ .text
+ .set noreorder
+ .set mips3
+
+ __INIT
+
+ /*
+ * TLB refill handlers for the R4000 and SB1.
+ * Attention: We may only use 32 instructions / 128 bytes.
+ */
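+	/*
+	 * The full handlers do not fit in the 128-byte vector slot,
+	 * so the vectors below merely jump to them.
+	 */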
+ .align 5
+LEAF(except_vec1_r4k)
+ .set noat
+ dla k0, handle_vec1_r4k
+ jr k0
+ nop
+END(except_vec1_r4k)
+
+LEAF(except_vec1_sb1)
+#if BCM1250_M3_WAR
+ dmfc0 k0, CP0_BADVADDR
+ dmfc0 k1, CP0_ENTRYHI
+ xor k0, k1
+ dsrl k0, k0, _PAGE_SHIFT+1
+ bnez k0, 1f
+#endif
+ .set noat
+ dla k0, handle_vec1_r4k
+ jr k0
+ nop
+
+1: eret
+ nop
+END(except_vec1_sb1)
+
+ __FINIT
+
+ .align 5
+LEAF(handle_vec1_r4k)
+ .set noat
+ LOAD_PTE2 k1 k0 9f
+ ld k0, 0(k1) # get even pte
+ ld k1, 8(k1) # get odd pte
+ PTE_RELOAD k0 k1
+ mtc0_tlbw_hazard
+ tlbwr
+ tlbw_eret_hazard
+ eret
+
+9: # handle the vmalloc range
+ LOAD_KPTE2 k1 k0 invalid_vmalloc_address
+ ld k0, 0(k1) # get even pte
+ ld k1, 8(k1) # get odd pte
+ PTE_RELOAD k0 k1
+ mtc0_tlbw_hazard
+ tlbwr
+ tlbw_eret_hazard
+ eret
+END(handle_vec1_r4k)
+
+
+ __INIT
+
+ /*
+ * TLB refill handler for the R10000.
+ * Attention: We may only use 32 instructions / 128 bytes.
+ */
+ .align 5
+LEAF(except_vec1_r10k)
+ .set noat
+ dla k0, handle_vec1_r10k
+ jr k0
+ nop
+END(except_vec1_r10k)
+
+ __FINIT
+
+ .align 5
+LEAF(handle_vec1_r10k)
+ .set noat
+ LOAD_PTE2 k1 k0 9f
+ ld k0, 0(k1) # get even pte
+ ld k1, 8(k1) # get odd pte
+ PTE_RELOAD k0 k1
+ nop
+ tlbwr
+ eret
+
+9: # handle the vmalloc range
+ LOAD_KPTE2 k1 k0 invalid_vmalloc_address
+ ld k0, 0(k1) # get even pte
+ ld k1, 8(k1) # get odd pte
+ PTE_RELOAD k0 k1
+ nop
+ tlbwr
+ eret
+END(handle_vec1_r10k)
+
+
+ .align 5
+LEAF(invalid_vmalloc_address)
+ .set noat
+ SAVE_ALL
+ CLI
+ dmfc0 t0, CP0_BADVADDR
+ sd t0, PT_BVADDR(sp)
+ move a0, sp
+ jal show_regs
+ PANIC("Invalid kernel address")
+END(invalid_vmalloc_address)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Marvell MV64340 interrupt fixup code.
+ *
+ * Marvell wants an NDA for their docs so this was written without
+ * documentation. You've been warned.
+ *
+ * Copyright (C) 2004 Ralf Baechle
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/mipsregs.h>
+#include <asm/pci_channel.h>
+
+/*
+ * WARNING: Example of how _NOT_ to do it.
+ */
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1)
+ return 3; /* PCI-X A */
+ if (bus == 0 && slot == 2)
+ return 4; /* PCI-X B */
+ if (bus == 1 && slot == 1)
+ return 5; /* PCI A */
+ if (bus == 1 && slot == 2)
+ return 6; /* PCI B */
+
+	return 0;
+	panic("Whooops in pcibios_map_irq");
+}
+
+struct pci_fixup pcibios_fixups[] = {
+ {0}
+};
--- /dev/null
+/*
+ * fixup-mpc30x.c, The Victor MP-C303/304 specific PCI fixups.
+ *
+ * Copyright (C) 2002,2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/vr41xx/mpc30x.h>
+#include <asm/vr41xx/vrc4173.h>
+
+static const int internal_func_irqs[] __initdata = {
+ VRC4173_CASCADE_IRQ,
+ VRC4173_AC97_IRQ,
+ VRC4173_USB_IRQ,
+};
+
+static char irq_tab_mpc30x[] __initdata = {
+ [12] = VRC4173_PCMCIA1_IRQ,
+ [13] = VRC4173_PCMCIA2_IRQ,
+ [29] = MQ200_IRQ,
+};
+
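+/*
+ * Slot 30 carries the on-board VRC4173 companion chip, so its
+ * interrupt is selected per PCI function; other slots map by slot
+ * number through irq_tab_mpc30x[].
+ */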
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ if (slot == 30)
+ return internal_func_irqs[PCI_FUNC(dev->devfn)];
+
+ return irq_tab_mpc30x[slot];
+}
+
+struct pci_fixup pcibios_fixups[] __initdata = {
+ { .pass = 0, },
+};
--- /dev/null
+/*
+ * Copyright 2002 Momentum Computer Inc.
+ * Author: Matthew Dharm <mdharm@momenco.com>
+ *
+ * Based on work for the Linux port to the Ocelot board, which is
+ * Copyright 2001 MontaVista Software Inc.
+ * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
+ *
+ * arch/mips/momentum/ocelot_g/pci.c
+ * Board-specific PCI routines for mv64340 controller.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1)
+ return 2; /* PCI-X A */
+ if (bus == 1 && slot == 1)
+ return 12; /* PCI-X B */
+ if (bus == 1 && slot == 2)
+ return 4; /* PCI B */
+
+	return 0;
+	panic("Whooops in pcibios_map_irq");
+}
+
+struct pci_fixup pcibios_fixups[] = {
+ {0}
+};
--- /dev/null
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * Copyright (C) 2004 Ralf Baechle (ralf@linux-mips.org)
+ */
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1) /* Intel 82543 Gigabit MAC */
+ return 2; /* irq_nr is 2 for INT0 */
+
+ if (bus == 0 && slot == 2) /* Intel 82543 Gigabit MAC */
+ return 3; /* irq_nr is 3 for INT1 */
+
+ if (bus == 1 && slot == 3) /* Intel 21555 bridge */
+		return 5;	/* irq_nr is 5 for INT6 */
+
+ if (bus == 1 && slot == 4) /* PMC Slot */
+ return 9; /* irq_nr is 9 for INT7 */
+
+ return -1;
+}
+
+struct pci_fixup pcibios_fixups[] = {
+ {0}
+};
--- /dev/null
+/*
+ * fixup-tb0219.c, The TANBAC TB0219 specific PCI fixups.
+ *
+ * Copyright (C) 2003 Megasolution Inc. <matsu@megasolution.jp>
+ * Copyright (C) 2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/vr41xx/tb0219.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int irq = -1;
+
+ switch (slot) {
+ case 12:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT1_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT1_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT1_IRQ;
+ break;
+ case 13:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT2_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT2_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT2_IRQ;
+ break;
+ case 14:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT3_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT3_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT3_IRQ;
+ break;
+ default:
+ break;
+ }
+
+ return irq;
+}
+
+struct pci_fixup pcibios_fixups[] __initdata = {
+ { .pass = 0, },
+};
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003, 2004 Ralf Baechle (ralf@linux-mips.org)
+ */
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+
+#include <asm/marvell.h>
+
+static int mv_read_config(struct pci_bus *bus, unsigned int devfn,
+ int where, int size, u32 * val)
+{
+ struct mv_pci_controller *mvbc = bus->sysdata;
+ unsigned long address_reg, data_reg;
+ u32 address;
+
+ address_reg = mvbc->config_addr;
+ data_reg = mvbc->config_vreg;
+
+	/* Accessing device 31 has crashed these Marvell controllers
+	   for years.  Will they ever make sane controllers ... */
+ if (PCI_SLOT(devfn) == 31)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
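+	/* Config address: enable bit 31 | bus | devfn | dword-aligned register. */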
+ address = (bus->number << 16) | (devfn << 8) |
+ (where & 0xfc) | 0x80000000;
+
+ /* start the configuration cycle */
+ MV_WRITE(address_reg, address);
+
+ switch (size) {
+ case 1:
+ *val = MV_READ_8(data_reg + (where & 0x3));
+ break;
+
+ case 2:
+ *val = MV_READ_16(data_reg + (where & 0x3));
+ break;
+
+ case 4:
+ *val = MV_READ(data_reg);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int mv_write_config(struct pci_bus *bus, unsigned int devfn,
+ int where, int size, u32 val)
+{
+ struct mv_pci_controller *mvbc = bus->sysdata;
+ unsigned long address_reg, data_reg;
+ u32 address;
+
+ address_reg = mvbc->config_addr;
+ data_reg = mvbc->config_vreg;
+
+	/* Accessing device 31 has crashed these Marvell controllers
+	   for years.  Will they ever make sane controllers ... */
+ if (PCI_SLOT(devfn) == 31)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ address = (bus->number << 16) | (devfn << 8) |
+ (where & 0xfc) | 0x80000000;
+
+ /* start the configuration cycle */
+ MV_WRITE(address_reg, address);
+
+ switch (size) {
+ case 1:
+ MV_WRITE_8(data_reg + (where & 0x3), val);
+ break;
+
+ case 2:
+ MV_WRITE_16(data_reg + (where & 0x3), val);
+ break;
+
+ case 4:
+ MV_WRITE(data_reg, val);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+struct pci_ops mv_pci_ops = {
+ .read = mv_read_config,
+ .write = mv_write_config
+};
--- /dev/null
+/*
+ * Copyright 2003 PMC-Sierra
+ * Author: Manish Lachwani (lachwani@pmc-sierra.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <asm/io.h>
+
+#include <asm/titan_dep.h>
+
+static int titan_ht_config_read_dword(struct pci_bus *bus, unsigned int devfn,
+ int offset, u32 * val)
+{
+ volatile uint32_t address;
+ int busno;
+
+ busno = bus->number;
+
+ address = (busno << 16) | (devfn << 8) | (offset & 0xfc) | 0x80000000;
+ if (busno != 0)
+ address |= 1;
+
+	/*
+	 * RM9000 HT erratum: when issuing back-to-back HT config
+	 * transactions, issue a BIU sync before and after the HT cycle.
+	 */
+
+ *(volatile int32_t *) 0xfb0000f0 |= 0x2;
+
+ udelay(30);
+
+ *(volatile int32_t *) 0xfb0006f8 = address;
+ *(val) = *(volatile int32_t *) 0xfb0006fc;
+
+ udelay(30);
+
+	*(volatile int32_t *) 0xfb0000f0 |= 0x2;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int titan_ht_config_read(struct pci_bus *bus, unsigned int devfn,
+ int offset, int size, u32 * val)
+{
+ uint32_t dword;
+
+ titan_ht_config_read_dword(bus, devfn, offset, &dword);
+
+	dword >>= ((offset & 3) << 3);
+	/* keep only the low (size * 8) bits */
+	dword &= 0xffffffffU >> ((4 - size) << 3);
+	*val = dword;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static inline int titan_ht_config_write_dword(struct pci_bus *bus,
+ unsigned int devfn, int offset, u32 val)
+{
+ volatile uint32_t address;
+ int busno;
+
+ busno = bus->number;
+
+ address = (busno << 16) | (devfn << 8) | (offset & 0xfc) | 0x80000000;
+ if (busno != 0)
+ address |= 1;
+
+ *(volatile int32_t *) 0xfb0000f0 |= 0x2;
+
+ udelay(30);
+
+ *(volatile int32_t *) 0xfb0006f8 = address;
+ *(volatile int32_t *) 0xfb0006fc = val;
+
+ udelay(30);
+
+ *(volatile int32_t *) 0xfb0000f0 |= 0x2;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int titan_ht_config_write(struct pci_bus *bus, unsigned int devfn,
+ int offset, int size, u32 val)
+{
+ uint32_t val1, val2, mask;
+
+ titan_ht_config_read_dword(bus, devfn, offset, &val2);
+
+	/* merge the new bytes into the dword read back above */
+	mask = 0xffffffffU >> ((4 - size) << 3);
+	val1 = (val & mask) << ((offset & 3) << 3);
+	val2 &= ~(mask << ((offset & 3) << 3));
+
+ titan_ht_config_write_dword(bus, devfn, offset, val1 | val2);
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+struct pci_ops titan_ht_pci_ops = {
+ .read = titan_ht_config_read,
+ .write = titan_ht_config_write,
+};
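+
+/*
+ * Worked example of the sub-dword masking above (illustration only):
+ * a 16-bit write at offset 2 gives shift = (2 & 3) << 3 = 16 and
+ * mask = 0xffffffff >> ((4 - 2) << 3) = 0x0000ffff, so the dword
+ * written back is (old & ~(0xffff << 16)) | ((val & 0xffff) << 16).
+ */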
--- /dev/null
+/*
+ * ops-vr41xx.c, PCI configuration routines for the PCIU of NEC VR4100 series.
+ *
+ * Copyright (C) 2001-2003 MontaVista Software Inc.
+ * Author: Yoichi Yuasa <yyuasa@mvista.com or source@mvista.com>
+ * Copyright (C) 2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+/*
+ * Changes:
+ * MontaVista Software Inc. <yyuasa@mvista.com> or <source@mvista.com>
+ * - New creation, NEC VR4122 and VR4131 are supported.
+ */
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include <asm/io.h>
+
+#define PCICONFDREG KSEG1ADDR(0x0f000c14)
+#define PCICONFAREG KSEG1ADDR(0x0f000c18)
+
+static inline int set_pci_configuration_address(unsigned char number,
+ unsigned int devfn, int where)
+{
+ if (number == 0) {
+ /*
+ * Type 0 configuration
+ */
+ if (PCI_SLOT(devfn) < 11 || where > 0xff)
+ return -EINVAL;
+
+ writel((1U << PCI_SLOT(devfn)) | (PCI_FUNC(devfn) << 8) |
+ (where & 0xfc), PCICONFAREG);
+ } else {
+ /*
+ * Type 1 configuration
+ */
+ if (where > 0xff)
+ return -EINVAL;
+
+ writel(((uint32_t)number << 16) | ((devfn & 0xff) << 8) |
+ (where & 0xfc) | 1U, PCICONFAREG);
+ }
+
+ return 0;
+}
+
+static int pci_config_read(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, uint32_t *val)
+{
+ uint32_t data;
+
+ *val = 0xffffffffU;
+ if (set_pci_configuration_address(bus->number, devfn, where) < 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ data = readl(PCICONFDREG);
+
+ switch (size) {
+ case 1:
+ *val = (data >> ((where & 3) << 3)) & 0xffU;
+ break;
+ case 2:
+ *val = (data >> ((where & 2) << 3)) & 0xffffU;
+ break;
+ case 4:
+ *val = data;
+ break;
+ default:
+ return PCIBIOS_FUNC_NOT_SUPPORTED;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int pci_config_write(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, uint32_t val)
+{
+ uint32_t data;
+ int shift;
+
+ if (set_pci_configuration_address(bus->number, devfn, where) < 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ data = readl(PCICONFDREG);
+
+ switch (size) {
+ case 1:
+ shift = (where & 3) << 3;
+ data &= ~(0xffU << shift);
+ data |= ((val & 0xffU) << shift);
+ break;
+ case 2:
+ shift = (where & 2) << 3;
+ data &= ~(0xffffU << shift);
+ data |= ((val & 0xffffU) << shift);
+ break;
+ case 4:
+ data = val;
+ break;
+ default:
+ return PCIBIOS_FUNC_NOT_SUPPORTED;
+ }
+
+ writel(data, PCICONFDREG);
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+struct pci_ops vr41xx_pci_ops = {
+ .read = pci_config_read,
+ .write = pci_config_write,
+};
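+
+/*
+ * Encoding example (values assumed for illustration): reading config
+ * register 0x10 of the device in slot 12, function 0 on bus 0 is a
+ * type 0 cycle, so PCICONFAREG is set to (1 << 12) | (0 << 8) | 0x10.
+ * The same register behind a bridge on bus 2 is a type 1 cycle:
+ * (2 << 16) | (devfn << 8) | 0x10 | 1.
+ */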
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle
+ *
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <asm/gt64240.h>
+#include <asm/pci_channel.h>
+
+extern struct pci_ops titan_pci_ops;
+
+static struct resource py_mem_resource = {
+ "Titan PCI MEM", 0xe0000000UL, 0xe3ffffffUL, IORESOURCE_MEM
+};
+
+static struct resource py_io_resource = {
+ "Titan IO MEM", 0x00000000UL, 0x00ffffffUL, IORESOURCE_IO,
+};
+
+static struct pci_controller py_controller = {
+ .pci_ops = &titan_pci_ops,
+ .mem_resource = &py_mem_resource,
+ .mem_offset = 0x10000000UL,
+ .io_resource = &py_io_resource,
+ .io_offset = 0x00000000UL
+};
+
+static int __init pmc_yosemite_setup(void)
+{
+	register_pci_controller(&py_controller);
+
+	return 0;
+}
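+
+/*
+ * Nothing in this file hooks the setup routine into the boot sequence;
+ * presumably it runs as an initcall.  A plausible hookup (an assumption,
+ * not confirmed by this patch) would be:
+ *
+ *	arch_initcall(pmc_yosemite_setup);
+ */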
--- /dev/null
+/*
+ * Copyright 2003 PMC-Sierra
+ * Author: Manish Lachwani (lachwani@pmc-sierra.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+/*
+ * KGDB support for the Yosemite board.  A single serial port is shared
+ * between KGDB and the console, since the second serial port appears to
+ * have a problem.  One IRQ is allocated for both ports, so the interrupt
+ * routing code needs to figure out whether the interrupt came from
+ * channel A or B.
+ */
+
+#include <linux/config.h>
+
+#if defined(CONFIG_KGDB)
+#include <asm/serial.h>
+
+/*
+ * Baud rate, parity, data and stop bit settings for the serial port
+ * on the Yosemite.  The early printk patch has already been applied,
+ * so we should be all set to go.
+ */
+#define YOSEMITE_BAUD_2400 2400
+#define YOSEMITE_BAUD_4800 4800
+#define YOSEMITE_BAUD_9600 9600
+#define YOSEMITE_BAUD_19200 19200
+#define YOSEMITE_BAUD_38400 38400
+#define YOSEMITE_BAUD_57600 57600
+#define YOSEMITE_BAUD_115200 115200
+
+#define YOSEMITE_PARITY_NONE 0
+#define YOSEMITE_PARITY_ODD 0x08
+#define YOSEMITE_PARITY_EVEN 0x18
+#define YOSEMITE_PARITY_MARK 0x28
+#define YOSEMITE_PARITY_SPACE 0x38
+
+#define YOSEMITE_DATA_5BIT 0x0
+#define YOSEMITE_DATA_6BIT 0x1
+#define YOSEMITE_DATA_7BIT 0x2
+#define YOSEMITE_DATA_8BIT 0x3
+
+#define YOSEMITE_STOP_1BIT 0x0
+#define YOSEMITE_STOP_2BIT 0x4
+
+/* Register spacing: the UART registers sit at consecutive byte offsets */
+#define SERIAL_REG_OFS		0x1
+
+#define SERIAL_RCV_BUFFER 0x0
+#define SERIAL_TRANS_HOLD 0x0
+#define SERIAL_SEND_BUFFER 0x0
+#define SERIAL_INTR_ENABLE (1 * SERIAL_REG_OFS)
+#define SERIAL_INTR_ID (2 * SERIAL_REG_OFS)
+#define SERIAL_DATA_FORMAT (3 * SERIAL_REG_OFS)
+#define SERIAL_LINE_CONTROL (3 * SERIAL_REG_OFS)
+#define SERIAL_MODEM_CONTROL (4 * SERIAL_REG_OFS)
+#define SERIAL_RS232_OUTPUT (4 * SERIAL_REG_OFS)
+#define SERIAL_LINE_STATUS (5 * SERIAL_REG_OFS)
+#define SERIAL_MODEM_STATUS (6 * SERIAL_REG_OFS)
+#define SERIAL_RS232_INPUT (6 * SERIAL_REG_OFS)
+#define SERIAL_SCRATCH_PAD (7 * SERIAL_REG_OFS)
+
+#define SERIAL_DIVISOR_LSB (0 * SERIAL_REG_OFS)
+#define SERIAL_DIVISOR_MSB (1 * SERIAL_REG_OFS)
+
+/*
+ * Functions to READ and WRITE to serial port 0
+ */
+#define SERIAL_READ(ofs) (*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE + ofs)))
+
+#define SERIAL_WRITE(ofs, val) ((*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE + ofs))) = val)
+
+/*
+ * Functions to READ and WRITE to serial port 1
+ */
+#define SERIAL_READ_1(ofs)	(*((volatile unsigned char*) \
+				(TITAN_SERIAL_BASE_1 + ofs)))
+
+#define SERIAL_WRITE_1(ofs, val) ((*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE_1 + ofs))) = val)
+
+/*
+ * Second serial port initialization
+ */
+void init_second_port(void)
+{
+ /* Disable Interrupts */
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x0);
+ SERIAL_WRITE_1(SERIAL_INTR_ENABLE, 0x0);
+
+ {
+ unsigned int divisor;
+
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x80);
+ divisor = TITAN_SERIAL_BASE_BAUD / YOSEMITE_BAUD_115200;
+ SERIAL_WRITE_1(SERIAL_DIVISOR_LSB, divisor & 0xff);
+
+ SERIAL_WRITE_1(SERIAL_DIVISOR_MSB,
+ (divisor & 0xff00) >> 8);
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x0);
+ }
+
+ SERIAL_WRITE_1(SERIAL_DATA_FORMAT, YOSEMITE_DATA_8BIT |
+ YOSEMITE_PARITY_NONE | YOSEMITE_STOP_1BIT);
+
+ /* Enable Interrupts */
+ SERIAL_WRITE_1(SERIAL_INTR_ENABLE, 0xf);
+}
+
+/* Initialize the serial port for KGDB debugging */
+void debugInit(unsigned int baud, unsigned char data, unsigned char parity,
+ unsigned char stop)
+{
+ /* Disable Interrupts */
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x0);
+ SERIAL_WRITE(SERIAL_INTR_ENABLE, 0x0);
+
+ {
+ unsigned int divisor;
+
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x80);
+
+ divisor = TITAN_SERIAL_BASE_BAUD / baud;
+ SERIAL_WRITE(SERIAL_DIVISOR_LSB, divisor & 0xff);
+
+ SERIAL_WRITE(SERIAL_DIVISOR_MSB, (divisor & 0xff00) >> 8);
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x0);
+ }
+
+ SERIAL_WRITE(SERIAL_DATA_FORMAT, data | parity | stop);
+}
+
+static int remoteDebugInitialized = 0;
+
+unsigned char getDebugChar(void)
+{
+ if (!remoteDebugInitialized) {
+ remoteDebugInitialized = 1;
+ debugInit(YOSEMITE_BAUD_115200,
+ YOSEMITE_DATA_8BIT,
+ YOSEMITE_PARITY_NONE, YOSEMITE_STOP_1BIT);
+ }
+
+ while ((SERIAL_READ(SERIAL_LINE_STATUS) & 0x1) == 0);
+ return SERIAL_READ(SERIAL_RCV_BUFFER);
+}
+
+int putDebugChar(unsigned char byte)
+{
+ if (!remoteDebugInitialized) {
+ remoteDebugInitialized = 1;
+ debugInit(YOSEMITE_BAUD_115200,
+ YOSEMITE_DATA_8BIT,
+ YOSEMITE_PARITY_NONE, YOSEMITE_STOP_1BIT);
+ }
+
+ while ((SERIAL_READ(SERIAL_LINE_STATUS) & 0x20) == 0);
+ SERIAL_WRITE(SERIAL_SEND_BUFFER, byte);
+
+ return 1;
+}
+#endif
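+
+/*
+ * Hypothetical usage sketch (not part of this patch): the KGDB stub
+ * drives the line with the two primitives above, e.g. a trivial echo
+ * loop.  Both routines lazily call debugInit() at 115200 8N1 on first
+ * use, so no explicit setup is needed before the first character.
+ */
+#if 0
+static void echo_forever(void)
+{
+	for (;;)
+		putDebugChar(getDebugChar());
+}
+#endif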
--- /dev/null
+/*
+ * Copyright (C) 2003 PMC-Sierra Inc.
+ * Author: Manish Lachwani (lachwani@pmc-sierra.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+/*
+ * Detailed Description:
+ *
+ * This block implements the I2C interface to slave devices such as the
+ * Atmel 24C32 EEPROM and the MAX 1619 sensor device. The I2C master
+ * interface can be controlled by the SCMB block, and the SCMB block
+ * kicks in only when using the Ethernet mode of operation and __not__
+ * the SysAD mode.
+ *
+ * The SCMB controls two modes: MDIO and I2C. The MDIO mode is used to
+ * communicate with the Quad-PHY from Marvell; the I2C mode is used to
+ * communicate with the I2C slave devices. It seems that the driver does
+ * not explicitly deal with the control of the SDA and SCL serial lines:
+ * the driver sets the slave address, drives the command and then the
+ * data, and the SCMB then controls the two serial lines as required.
+ *
+ * The documents are very unclear about this, hence I took some time out
+ * to write this description to give an idea of how the I2C can actually
+ * work. Currently, this Linux driver won't be integrated into the
+ * generic Linux I2C framework. Finally, the I2C interface is also known
+ * as the 2BI interface; 2BI means 2-bit interface, referring to the SDA
+ * and SCL serial lines respectively.
+ *
+ * - Manish Lachwani (12/09/2003)
+ */
+
+#include "i2c-yosemite.h"
+
+/*
+ * Poll the I2C interface for the BUSY bit.
+ */
+static int titan_i2c_poll(void)
+{
+ int i = 0;
+ unsigned long val = 0;
+
+ for (i = 0; i < TITAN_I2C_MAX_POLL; i++) {
+ val = TITAN_I2C_READ(TITAN_I2C_COMMAND);
+
+ if (!(val & 0x8000))
+ return 0;
+ }
+
+ return TITAN_I2C_ERR_TIMEOUT;
+}
+
+/*
+ * Execute the I2C command
+ */
+int titan_i2c_xfer(unsigned int slave_addr, titan_i2c_command * cmd,
+ int size, unsigned int *addr)
+{
+	int loop = 0, bytes = 0, i;
+ unsigned int *write_data, data, *read_data;
+ unsigned long reg_val, val;
+
+ write_data = cmd->data;
+ read_data = addr;
+
+ TITAN_I2C_WRITE(TITAN_I2C_SLAVE_ADDRESS, slave_addr);
+
+ if (cmd->type == TITAN_I2C_CMD_WRITE)
+ loop = cmd->write_size;
+ else
+ loop = size;
+
+ while (loop > 0) {
+ if ((cmd->type == TITAN_I2C_CMD_WRITE) ||
+ (cmd->type == TITAN_I2C_CMD_READ_WRITE)) {
+
+ reg_val = TITAN_I2C_DATA;
+ for (i = 0; i < TITAN_I2C_MAX_WORDS_PER_RW;
+ ++i, write_data += 2, reg_val += 4) {
+				/* pack up to two bytes per data word,
+				   mirroring the read path below */
+				data = 0;
+				if (bytes < cmd->write_size) {
+					data = write_data[0] & 0xff;
+					++bytes;
+				}
+
+				if (bytes < cmd->write_size) {
+					data |= (write_data[1] & 0xff) << 8;
+					++bytes;
+				}
+
+				TITAN_I2C_WRITE(reg_val, data);
+ }
+ }
+
+ TITAN_I2C_WRITE(TITAN_I2C_COMMAND,
+ (unsigned int) (cmd->type << 13));
+ if (titan_i2c_poll() != TITAN_I2C_ERR_OK)
+ return TITAN_I2C_ERR_TIMEOUT;
+
+ if ((cmd->type == TITAN_I2C_CMD_READ) ||
+ (cmd->type == TITAN_I2C_CMD_READ_WRITE)) {
+
+ reg_val = TITAN_I2C_DATA;
+ for (i = 0; i < TITAN_I2C_MAX_WORDS_PER_RW;
+ ++i, read_data += 2, reg_val += 4) {
+ data = TITAN_I2C_READ(reg_val);
+
+ if (bytes < size) {
+ read_data[0] = data & 0xff;
+ ++bytes;
+ }
+
+ if (bytes < size) {
+ read_data[1] =
+ ((data >> 8) & 0xff);
+ ++bytes;
+ }
+ }
+ }
+
+ loop -= (TITAN_I2C_MAX_WORDS_PER_RW * 2);
+ }
+
+ /*
+ * Read the Interrupt status and then return the appropriate error code
+ */
+
+ val = TITAN_I2C_READ(TITAN_I2C_INTERRUPTS);
+ if (val & 0x0020)
+ return TITAN_I2C_ERR_ARB_LOST;
+
+ if (val & 0x0040)
+ return TITAN_I2C_ERR_NO_RESP;
+
+ if (val & 0x0080)
+ return TITAN_I2C_ERR_DATA_COLLISION;
+
+ return TITAN_I2C_ERR_OK;
+}
+
+/*
+ * Init the I2C subsystem of the PMC-Sierra Yosemite board
+ */
+int titan_i2c_init(titan_i2c_config * config)
+{
+ unsigned int val;
+
+ /*
+ * Reset the SCMB and program into the I2C mode
+ */
+ TITAN_I2C_WRITE(TITAN_I2C_SCMB_CONTROL, 0xA000);
+ TITAN_I2C_WRITE(TITAN_I2C_SCMB_CONTROL, 0x2000);
+
+ /*
+ * Configure the filtera and clka values
+ */
+ val = TITAN_I2C_READ(TITAN_I2C_SCMB_CLOCK_A);
+	val = (val & ~0xF000) | ((config->filtera << 12) & 0xF000);
+	val = (val & ~0x03FF) | (config->clka & 0x03FF);
+ TITAN_I2C_WRITE(TITAN_I2C_SCMB_CLOCK_A, val);
+
+ /*
+ * Configure the filterb and clkb values
+ */
+ val = TITAN_I2C_READ(TITAN_I2C_SCMB_CLOCK_B);
+	val = (val & ~0xF000) | ((config->filterb << 12) & 0xF000);
+	val = (val & ~0x03FF) | (config->clkb & 0x03FF);
+ TITAN_I2C_WRITE(TITAN_I2C_SCMB_CLOCK_B, val);
+
+ return TITAN_I2C_ERR_OK;
+}
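+
+/*
+ * A minimal, hypothetical caller (slave address and clock values are
+ * assumptions for illustration): program the SCMB clocks, then issue a
+ * read transfer against the Atmel 24C32 EEPROM at slave address 0x50.
+ */
+#if 0
+static int example_eeprom_read(unsigned int *buf, int len)
+{
+	titan_i2c_config config = {
+		.filtera = 0x7, .clka = 0x100,
+		.filterb = 0x7, .clkb = 0x100,
+	};
+	titan_i2c_command cmd = {
+		.type = TITAN_I2C_CMD_READ,
+	};
+
+	if (titan_i2c_init(&config) != TITAN_I2C_ERR_OK)
+		return -1;
+
+	return titan_i2c_xfer(0x50, &cmd, len, buf);
+}
+#endif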
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001, 2002, 2004 Ralf Baechle
+ */
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/termios.h>
+#include <linux/sched.h>
+#include <linux/tty.h>
+
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <asm/serial.h>
+#include <asm/io.h>
+
+/* SUPERIO uart register map */
+struct yo_uartregs {
+ union {
+ volatile u8 rbr; /* read only, DLAB == 0 */
+ volatile u8 thr; /* write only, DLAB == 0 */
+ volatile u8 dll; /* DLAB == 1 */
+ } u1;
+ union {
+ volatile u8 ier; /* DLAB == 0 */
+ volatile u8 dlm; /* DLAB == 1 */
+ } u2;
+ union {
+ volatile u8 iir; /* read only */
+ volatile u8 fcr; /* write only */
+ } u3;
+ volatile u8 iu_lcr;
+ volatile u8 iu_mcr;
+ volatile u8 iu_lsr;
+ volatile u8 iu_msr;
+ volatile u8 iu_scr;
+};
+
+#define iu_rbr u1.rbr
+#define iu_thr u1.thr
+#define iu_dll u1.dll
+#define iu_ier u2.ier
+#define iu_dlm u2.dlm
+#define iu_iir u3.iir
+#define iu_fcr u3.fcr
+
+extern unsigned long uart_base;
+
+#define IO_BASE_64 0x9000000000000000ULL
+
+static unsigned char readb_outer_space(unsigned long phys)
+{
+ unsigned long long vaddr = IO_BASE_64 | phys;
+ unsigned char res;
+ unsigned int sr;
+
+ sr = read_c0_status();
+ write_c0_status((sr | ST0_KX) & ~ ST0_IE);
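+	/* hazard barrier: no-ops to let the CP0 status write settle */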
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ __asm__ __volatile__ (
+ " .set mips3 \n"
+ " ld %0, (%0) \n"
+ " lbu %0, (%0) \n"
+ " .set mips0 \n"
+ : "=r" (res)
+ : "0" (&vaddr));
+
+ write_c0_status(sr);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ return res;
+}
+
+static void writeb_outer_space(unsigned long phys, unsigned char c)
+{
+ unsigned long long vaddr = IO_BASE_64 | phys;
+ unsigned long tmp;
+ unsigned int sr;
+
+ sr = read_c0_status();
+ write_c0_status((sr | ST0_KX) & ~ ST0_IE);
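+	/* hazard barrier: no-ops to let the CP0 status write settle */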
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ __asm__ __volatile__ (
+ " .set mips3 \n"
+ " ld %0, (%1) \n"
+ " sb %2, (%0) \n"
+ " .set mips0 \n"
+ : "=r" (tmp)
+ : "r" (&vaddr), "r" (c));
+
+ write_c0_status(sr);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+}
+
+static inline struct yo_uartregs *console_uart(void)
+{
+ return (struct yo_uartregs *) (uart_base + 8);
+}
+
+void prom_putchar(char c)
+{
+ unsigned long lsr = 0xfd000008UL + offsetof(struct yo_uartregs, iu_lsr);
+ unsigned long thr = 0xfd000008UL + offsetof(struct yo_uartregs, iu_thr);
+
+ while ((readb_outer_space(lsr) & 0x20) == 0);
+ writeb_outer_space(thr, c);
+}
+
+char __init prom_getchar(void)
+{
+ return 0;
+}
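+
+/*
+ * Sketch of an early-boot string printer built on prom_putchar()
+ * (illustration only, not part of this patch):
+ */
+#if 0
+static void __init prom_puts(const char *s)
+{
+	while (*s) {
+		if (*s == '\n')
+			prom_putchar('\r');	/* raw UARTs want CRLF */
+		prom_putchar(*s++);
+	}
+}
+#endif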
--- /dev/null
+/*
+ * tb0219.c, Setup for the TANBAC TB0219
+ *
+ * Copyright (C) 2003 Megasolution Inc. <matsu@megasolution.jp>
+ * Copyright (C) 2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/reboot.h>
+
+#define TB0219_RESET_REGS KSEG1ADDR(0x0a00000e)
+
+#define tb0219_hard_reset() writew(0, TB0219_RESET_REGS)
+
+static void tanbac_tb0219_restart(char *command)
+{
+ local_irq_disable();
+ tb0219_hard_reset();
+ while (1);
+}
+
+static int __init tanbac_tb0219_setup(void)
+{
+ _machine_restart = tanbac_tb0219_restart;
+
+ return 0;
+}
+
+early_initcall(tanbac_tb0219_setup);
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_PARISC=y
+CONFIG_MMU=y
+CONFIG_STACK_GROWSUP=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+# CONFIG_CLEAN_COMPILE is not set
+# CONFIG_STANDALONE is not set
+CONFIG_BROKEN=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=16
+CONFIG_HOTPLUG=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+CONFIG_KMOD=y
+
+#
+# Processor type and features
+#
+# CONFIG_PA7000 is not set
+# CONFIG_PA7100LC is not set
+# CONFIG_PA7200 is not set
+# CONFIG_PA7300LC is not set
+CONFIG_PA8X00=y
+CONFIG_PA20=y
+CONFIG_PREFETCH=y
+CONFIG_PARISC64=y
+CONFIG_64BIT=y
+# CONFIG_SMP is not set
+CONFIG_DISCONTIGMEM=y
+# CONFIG_PREEMPT is not set
+CONFIG_COMPAT=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, GSC, ISA)
+#
+# CONFIG_GSC is not set
+CONFIG_PCI=y
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+CONFIG_PCI_LBA=y
+CONFIG_IOSAPIC=y
+CONFIG_IOMMU_SBA=y
+# CONFIG_SUPERIO is not set
+CONFIG_CHASSIS_LCD_LED=y
+# CONFIG_PDC_CHASSIS is not set
+
+#
+# PCMCIA/CardBus support
+#
+CONFIG_PCMCIA=m
+CONFIG_PCMCIA_DEBUG=y
+CONFIG_YENTA=m
+CONFIG_CARDBUS=y
+# CONFIG_I82092 is not set
+# CONFIG_TCIC is not set
+
+#
+# PCI Hotplug Support
+#
+# CONFIG_HOTPLUG_PCI is not set
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+# CONFIG_FW_LOADER is not set
+CONFIG_DEBUG_DRIVER=y
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+CONFIG_BLK_DEV_UMEM=m
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=6144
+CONFIG_BLK_DEV_INITRD=y
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+CONFIG_SCSI=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=y
+# CONFIG_CHR_DEV_OSST is not set
+CONFIG_BLK_DEV_SR=y
+# CONFIG_BLK_DEV_SR_VENDOR is not set
+CONFIG_CHR_DEV_SG=y
+
+#
+# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
+#
+CONFIG_SCSI_MULTI_LUN=y
+# CONFIG_SCSI_CONSTANTS is not set
+# CONFIG_SCSI_LOGGING is not set
+
+#
+# SCSI Transport Attributes
+#
+CONFIG_SCSI_SPI_ATTRS=y
+CONFIG_SCSI_FC_ATTRS=m
+
+#
+# SCSI low-level drivers
+#
+# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
+# CONFIG_SCSI_ACARD is not set
+# CONFIG_SCSI_AACRAID is not set
+# CONFIG_SCSI_AIC7XXX is not set
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_ADVANSYS is not set
+# CONFIG_SCSI_MEGARAID is not set
+# CONFIG_SCSI_SATA is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_SCSI_CPQFCTS is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_EATA_PIO is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
+CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
+CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
+CONFIG_SCSI_SYM53C8XX_IOMAPPED=y
+# CONFIG_SCSI_IPR is not set
+# CONFIG_SCSI_PCI2000 is not set
+# CONFIG_SCSI_PCI2220I is not set
+# CONFIG_SCSI_QLOGIC_ISP is not set
+CONFIG_SCSI_QLOGIC_FC=m
+# CONFIG_SCSI_QLOGIC_FC_FIRMWARE is not set
+CONFIG_SCSI_QLOGIC_1280=m
+CONFIG_SCSI_QLA2XXX=y
+# CONFIG_SCSI_QLA21XX is not set
+# CONFIG_SCSI_QLA22XX is not set
+CONFIG_SCSI_QLA2300=m
+CONFIG_SCSI_QLA2322=m
+CONFIG_SCSI_QLA6312=m
+CONFIG_SCSI_QLA6322=m
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_DC390T is not set
+CONFIG_SCSI_DEBUG=m
+
+#
+# PCMCIA SCSI adapter support
+#
+# CONFIG_PCMCIA_FDOMAIN is not set
+# CONFIG_PCMCIA_QLOGIC is not set
+# CONFIG_PCMCIA_SYM53C500 is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+# CONFIG_MD_RAID5 is not set
+# CONFIG_MD_RAID6 is not set
+# CONFIG_MD_MULTIPATH is not set
+# CONFIG_BLK_DEV_DM is not set
+
+#
+# Fusion MPT device support
+#
+CONFIG_FUSION=m
+CONFIG_FUSION_MAX_SGE=40
+CONFIG_FUSION_ISENSE=m
+CONFIG_FUSION_CTL=m
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+CONFIG_PACKET_MMAP=y
+CONFIG_NETLINK_DEV=y
+CONFIG_UNIX=y
+CONFIG_NET_KEY=m
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+# CONFIG_INET_IPCOMP is not set
+
+#
+# IP: Virtual Server Configuration
+#
+# CONFIG_IP_VS is not set
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_IP_NF_CONNTRACK=m
+CONFIG_IP_NF_FTP=m
+CONFIG_IP_NF_IRC=m
+CONFIG_IP_NF_TFTP=m
+CONFIG_IP_NF_AMANDA=m
+CONFIG_IP_NF_QUEUE=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_LIMIT=m
+CONFIG_IP_NF_MATCH_IPRANGE=m
+CONFIG_IP_NF_MATCH_MAC=m
+CONFIG_IP_NF_MATCH_PKTTYPE=m
+CONFIG_IP_NF_MATCH_MARK=m
+CONFIG_IP_NF_MATCH_MULTIPORT=m
+CONFIG_IP_NF_MATCH_TOS=m
+CONFIG_IP_NF_MATCH_RECENT=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_DSCP=m
+CONFIG_IP_NF_MATCH_AH_ESP=m
+CONFIG_IP_NF_MATCH_LENGTH=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_MATCH_TCPMSS=m
+CONFIG_IP_NF_MATCH_HELPER=m
+CONFIG_IP_NF_MATCH_STATE=m
+CONFIG_IP_NF_MATCH_CONNTRACK=m
+CONFIG_IP_NF_MATCH_OWNER=m
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_NAT_NEEDED=y
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+CONFIG_IP_NF_TARGET_REDIRECT=m
+CONFIG_IP_NF_TARGET_NETMAP=m
+CONFIG_IP_NF_TARGET_SAME=m
+# CONFIG_IP_NF_NAT_LOCAL is not set
+CONFIG_IP_NF_NAT_SNMP_BASIC=m
+CONFIG_IP_NF_NAT_IRC=m
+CONFIG_IP_NF_NAT_FTP=m
+CONFIG_IP_NF_NAT_TFTP=m
+CONFIG_IP_NF_NAT_AMANDA=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_TOS=m
+CONFIG_IP_NF_TARGET_ECN=m
+CONFIG_IP_NF_TARGET_DSCP=m
+CONFIG_IP_NF_TARGET_MARK=m
+CONFIG_IP_NF_TARGET_CLASSIFY=m
+CONFIG_IP_NF_TARGET_LOG=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_IP_NF_TARGET_TCPMSS=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+CONFIG_IP_NF_ARP_MANGLE=m
+# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
+# CONFIG_IP_NF_COMPAT_IPFWADM is not set
+CONFIG_IP_NF_TARGET_NOTRACK=m
+CONFIG_IP_NF_RAW=m
+CONFIG_XFRM=y
+CONFIG_XFRM_USER=m
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+CONFIG_LLC=m
+CONFIG_LLC2=m
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+CONFIG_NET_PKTGEN=m
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=m
+CONFIG_BONDING=m
+# CONFIG_EQUALIZER is not set
+CONFIG_TUN=m
+# CONFIG_ETHERTAP is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=m
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+CONFIG_NET_VENDOR_3COM=y
+CONFIG_VORTEX=m
+CONFIG_TYPHOON=m
+
+#
+# Tulip family network device support
+#
+CONFIG_NET_TULIP=y
+CONFIG_DE2104X=y
+CONFIG_TULIP=y
+# CONFIG_TULIP_MWI is not set
+CONFIG_TULIP_MMIO=y
+# CONFIG_TULIP_NAPI is not set
+# CONFIG_DE4X5 is not set
+# CONFIG_WINBOND_840 is not set
+# CONFIG_DM9102 is not set
+CONFIG_PCMCIA_XIRCOM=m
+CONFIG_PCMCIA_XIRTULIP=m
+CONFIG_HP100=m
+CONFIG_NET_PCI=y
+CONFIG_PCNET32=m
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+CONFIG_EEPRO100=m
+# CONFIG_EEPRO100_PIO is not set
+CONFIG_E100=m
+CONFIG_E100_NAPI=y
+# CONFIG_FEALNX is not set
+CONFIG_NATSEMI=m
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+CONFIG_8139TOO=m
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_8139_OLD_RX_RESET is not set
+# CONFIG_SIS900 is not set
+CONFIG_EPIC100=m
+# CONFIG_SUNDANCE is not set
+CONFIG_VIA_RHINE=m
+CONFIG_VIA_RHINE_MMIO=y
+
+#
+# Ethernet (1000 Mbit)
+#
+CONFIG_ACENIC=m
+CONFIG_ACENIC_OMIT_TIGON_I=y
+CONFIG_DL2K=m
+CONFIG_E1000=m
+CONFIG_E1000_NAPI=y
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+CONFIG_TIGON3=m
+
+#
+# Ethernet (10000 Mbit)
+#
+CONFIG_IXGB=m
+CONFIG_IXGB_NAPI=y
+CONFIG_S2IO=m
+CONFIG_S2IO_NAPI=y
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+CONFIG_PCMCIA_WAVELAN=m
+CONFIG_PCMCIA_NETWAVE=m
+
+#
+# Wireless 802.11 Frequency Hopping cards support
+#
+# CONFIG_PCMCIA_RAYCS is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+# CONFIG_AIRO is not set
+CONFIG_HERMES=m
+CONFIG_PLX_HERMES=m
+CONFIG_TMD_HERMES=m
+CONFIG_PCI_HERMES=m
+# CONFIG_ATMEL is not set
+
+#
+# Wireless 802.11b Pcmcia/Cardbus cards support
+#
+CONFIG_PCMCIA_HERMES=m
+CONFIG_AIRO_CS=m
+# CONFIG_PCMCIA_WL3501 is not set
+
+#
+# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
+#
+# CONFIG_PRISM54 is not set
+CONFIG_NET_WIRELESS=y
+
+#
+# PCMCIA network device support
+#
+CONFIG_NET_PCMCIA=y
+CONFIG_PCMCIA_3C589=m
+CONFIG_PCMCIA_3C574=m
+# CONFIG_PCMCIA_FMVJ18X is not set
+# CONFIG_PCMCIA_PCNET is not set
+# CONFIG_PCMCIA_NMCLAN is not set
+CONFIG_PCMCIA_SMC91C92=m
+CONFIG_PCMCIA_XIRC2PS=m
+# CONFIG_PCMCIA_AXNET is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+CONFIG_PPP=m
+# CONFIG_PPP_MULTILINK is not set
+# CONFIG_PPP_FILTER is not set
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_BSDCOMP=m
+# CONFIG_PPPOE is not set
+# CONFIG_SLIP is not set
+# CONFIG_NET_FC is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+# CONFIG_SERIAL_8250_CS is not set
+CONFIG_SERIAL_8250_NR_UARTS=8
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_MANY_PORTS=y
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+# CONFIG_SERIAL_8250_DETECT_IRQ is not set
+# CONFIG_SERIAL_8250_MULTIPORT is not set
+# CONFIG_SERIAL_8250_RSA is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_MUX is not set
+CONFIG_PDC_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+CONFIG_GEN_RTC=y
+CONFIG_GEN_RTC_X=y
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+CONFIG_RAW_DRIVER=y
+CONFIG_MAX_RAW_DEVS=256
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Console display driver support
+#
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE_COLUMNS=160
+CONFIG_DUMMY_CONSOLE_ROWS=64
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+# CONFIG_EXT3_FS_XATTR is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+# CONFIG_REISERFS_FS is not set
+CONFIG_JFS_FS=m
+# CONFIG_JFS_POSIX_ACL is not set
+# CONFIG_JFS_DEBUG is not set
+# CONFIG_JFS_STATISTICS is not set
+CONFIG_XFS_FS=m
+# CONFIG_XFS_RT is not set
+# CONFIG_XFS_QUOTA is not set
+# CONFIG_XFS_SECURITY is not set
+# CONFIG_XFS_POSIX_ACL is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+CONFIG_ISO9660_FS=y
+CONFIG_JOLIET=y
+# CONFIG_ZISOFS is not set
+CONFIG_UDF_FS=m
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=m
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLBFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+CONFIG_UFS_FS=m
+# CONFIG_UFS_FS_WRITE is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+CONFIG_NFS_V4=y
+CONFIG_NFS_DIRECTIO=y
+CONFIG_NFSD=m
+CONFIG_NFSD_V3=y
+CONFIG_NFSD_V4=y
+CONFIG_NFSD_TCP=y
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_EXPORTFS=m
+CONFIG_SUNRPC=y
+CONFIG_SUNRPC_GSS=y
+CONFIG_RPCSEC_GSS_KRB5=y
+CONFIG_SMB_FS=m
+CONFIG_SMB_NLS_DEFAULT=y
+CONFIG_SMB_NLS_REMOTE="cp437"
+CONFIG_CIFS=m
+# CONFIG_CIFS_STATS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+CONFIG_NLS_CODEPAGE_437=m
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+CONFIG_NLS_CODEPAGE_850=m
+CONFIG_NLS_CODEPAGE_852=m
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+CONFIG_NLS_CODEPAGE_863=m
+# CONFIG_NLS_CODEPAGE_864 is not set
+CONFIG_NLS_CODEPAGE_865=m
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+CONFIG_NLS_ISO8859_1=m
+CONFIG_NLS_ISO8859_2=m
+CONFIG_NLS_ISO8859_3=m
+CONFIG_NLS_ISO8859_4=m
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+CONFIG_NLS_ISO8859_15=m
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+CONFIG_NLS_UTF8=m
+
+#
+# Profiling support
+#
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=m
+
+#
+# Kernel hacking
+#
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_FRAME_POINTER is not set
+# CONFIG_DEBUG_INFO is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+CONFIG_CRYPTO=y
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_NULL=m
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_SHA1=m
+CONFIG_CRYPTO_SHA256=m
+CONFIG_CRYPTO_SHA512=m
+CONFIG_CRYPTO_DES=y
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_TWOFISH=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_AES=m
+CONFIG_CRYPTO_CAST5=m
+CONFIG_CRYPTO_CAST6=m
+# CONFIG_CRYPTO_ARC4 is not set
+CONFIG_CRYPTO_DEFLATE=m
+# CONFIG_CRYPTO_MICHAEL_MIC is not set
+CONFIG_CRYPTO_CRC32C=m
+CONFIG_CRYPTO_TEST=m
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+CONFIG_LIBCRC32C=m
+CONFIG_ZLIB_INFLATE=m
+CONFIG_ZLIB_DEFLATE=m
--- /dev/null
+/*
+ * Kernel unwinding support
+ *
+ * (c) 2002-2004 Randolph Chung <tausq@debian.org>
+ *
+ * Derived partially from the IA64 implementation. The PA-RISC
+ * Runtime Architecture Document is also a useful reference to
+ * understand what is happening here
+ */
+
+/*
+ * J. David Anglin writes:
+ *
+ * "You have to adjust the current sp to that at the begining of the function.
+ * There can be up to two stack additions to allocate the frame in the
+ * prologue. Similar things happen in the epilogue. In the presence of
+ * interrupts, you have to be concerned about where you are in the function
+ * and what stack adjustments have taken place."
+ *
+ * For now these cases are not handled, but they should be!
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+
+#include <asm/uaccess.h>
+
+#include <asm/unwind.h>
+
+/* #define DEBUG 1 */
+#ifdef DEBUG
+#define dbg(x...) printk(x)
+#else
+#define dbg(x...)
+#endif
+
+extern const struct unwind_table_entry __start___unwind[];
+extern const struct unwind_table_entry __stop___unwind[];
+
+static spinlock_t unwind_lock = SPIN_LOCK_UNLOCKED;
+/*
+ * the kernel unwind block is not dynamically allocated so that
+ * we can call unwind_init as early in the bootup process as
+ * possible (before the slab allocator is initialized)
+ */
+static struct unwind_table kernel_unwind_table;
+static struct unwind_table *unwind_tables, *unwind_tables_end;
+
+
+static inline const struct unwind_table_entry *
+find_unwind_entry_in_table(const struct unwind_table *table, unsigned long addr)
+{
+	const struct unwind_table_entry *e = NULL;
+	unsigned long lo, hi, mid;
+
+	addr -= table->base_addr;
+
+	for (lo = 0, hi = table->length; lo < hi; )
+	{
+		mid = (lo + hi) / 2;
+		if (addr < table->table[mid].region_start)
+			hi = mid;
+		else if (addr > table->table[mid].region_end)
+			lo = mid + 1;
+		else {
+			/* only return an entry that actually matches */
+			e = &table->table[mid];
+			break;
+		}
+	}
+
+ return e;
+}
+
+static inline const struct unwind_table_entry *
+find_unwind_entry(unsigned long addr)
+{
+ struct unwind_table *table = unwind_tables;
+ const struct unwind_table_entry *e = NULL;
+
+ if (addr >= kernel_unwind_table.start &&
+ addr <= kernel_unwind_table.end)
+ e = find_unwind_entry_in_table(&kernel_unwind_table, addr);
+ else
+ for (; table; table = table->next)
+ {
+ if (addr >= table->start &&
+ addr <= table->end)
+ e = find_unwind_entry_in_table(table, addr);
+ if (e)
+ break;
+ }
+
+ return e;
+}
+
+static void
+unwind_table_init(struct unwind_table *table, const char *name,
+ unsigned long base_addr, unsigned long gp,
+ const void *table_start, const void *table_end)
+{
+ const struct unwind_table_entry *start = table_start;
+ const struct unwind_table_entry *end = table_end - 1;
+
+ table->name = name;
+ table->base_addr = base_addr;
+ table->gp = gp;
+ table->start = base_addr + start->region_start;
+ table->end = base_addr + end->region_end;
+ table->table = (struct unwind_table_entry *)table_start;
+	table->length = end - start + 1;	/* entry count, inclusive of "end" */
+ table->next = NULL;
+}
+
+void *
+unwind_table_add(const char *name, unsigned long base_addr,
+ unsigned long gp,
+ const void *start, const void *end)
+{
+ struct unwind_table *table;
+ unsigned long flags;
+
+ table = kmalloc(sizeof(struct unwind_table), GFP_USER);
+	if (table == NULL)
+		return NULL;
+ unwind_table_init(table, name, base_addr, gp, start, end);
+ spin_lock_irqsave(&unwind_lock, flags);
+ if (unwind_tables)
+ {
+ unwind_tables_end->next = table;
+ unwind_tables_end = table;
+ }
+ else
+ {
+ unwind_tables = unwind_tables_end = table;
+ }
+ spin_unlock_irqrestore(&unwind_lock, flags);
+
+ return table;
+}
+
+/* Called from setup_arch to import the kernel unwind info */
+static int unwind_init(void)
+{
+ long start, stop;
+ register unsigned long gp __asm__ ("r27");
+
+ start = (long)&__start___unwind[0];
+ stop = (long)&__stop___unwind[0];
+
+ printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n",
+ start, stop,
+ (stop - start) / sizeof(struct unwind_table_entry));
+
+ unwind_table_init(&kernel_unwind_table, "kernel", KERNEL_START,
+ gp,
+ &__start___unwind[0], &__stop___unwind[0]);
+#if 0
+ {
+ int i;
+ for (i = 0; i < 10; i++)
+ {
+ printk("region 0x%x-0x%x\n",
+ __start___unwind[i].region_start,
+ __start___unwind[i].region_end);
+ }
+ }
+#endif
+ return 0;
+}
+
+static void unwind_frame_regs(struct unwind_frame_info *info)
+{
+ const struct unwind_table_entry *e;
+ unsigned long npc;
+ unsigned int insn;
+ long frame_size = 0;
+ int looking_for_rp, rpoffset = 0;
+
+ e = find_unwind_entry(info->ip);
+ if (!e) {
+ unsigned long sp;
+ extern char _stext[], _etext[];
+
+ dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip);
+
+ /* Since we are doing the unwinding blind, we don't know if
+ we are adjusting the stack correctly or extracting the rp
+ correctly. The rp is checked to see if it belongs to the
+ kernel text section, if not we assume we don't have a
+ correct stack frame and we continue to unwind the stack.
+ This is not quite correct, and will fail for loadable
+ modules. */
+ sp = info->sp & ~63;
+ do {
+ info->prev_sp = sp - 64;
+
+ /* FIXME: what happens if we unwind too far so that
+ sp no longer falls in a mapped kernel page? */
+#ifndef __LP64__
+ info->prev_ip = *(unsigned long *)(info->prev_sp - 20);
+#else
+ info->prev_ip = *(unsigned long *)(info->prev_sp - 16);
+#endif
+
+ sp = info->prev_sp;
+ } while (info->prev_ip < (unsigned long)_stext ||
+ info->prev_ip > (unsigned long)_etext);
+ } else {
+
+ dbg("e->start = 0x%x, e->end = 0x%x, Save_SP = %d, Save_RP = %d size = %u\n",
+ e->region_start, e->region_end, e->Save_SP, e->Save_RP, e->Total_frame_size);
+
+ looking_for_rp = e->Save_RP;
+
+ for (npc = e->region_start;
+ (frame_size < (e->Total_frame_size << 3) || looking_for_rp) &&
+ npc < info->ip;
+ npc += 4) {
+
+ insn = *(unsigned int *)npc;
+
+ if ((insn & 0xffffc000) == 0x37de0000 ||
+ (insn & 0xffe00000) == 0x6fc00000) {
+ /* ldo X(sp), sp, or stwm X,D(sp) */
+ frame_size += (insn & 0x1 ? -1 << 13 : 0) |
+ ((insn & 0x3fff) >> 1);
+ } else if ((insn & 0xffe00008) == 0x7ec00008) {
+ /* std,ma X,D(sp) */
+ frame_size += (insn & 0x1 ? -1 << 13 : 0) |
+ (((insn >> 4) & 0x3ff) << 3);
+ } else if (insn == 0x6bc23fd9) {
+ /* stw rp,-20(sp) */
+ rpoffset = 20;
+ looking_for_rp = 0;
+ } else if (insn == 0x0fc212c1) {
+ /* std rp,-16(sr0,sp) */
+ rpoffset = 16;
+ looking_for_rp = 0;
+ }
+ }
+
+ info->prev_sp = info->sp - frame_size;
+ if (rpoffset)
+ info->prev_ip = *(unsigned long *)(info->prev_sp - rpoffset);
+ }
+}
+
+void unwind_frame_init(struct unwind_frame_info *info, struct task_struct *t,
+ struct pt_regs *regs)
+{
+ memset(info, 0, sizeof(struct unwind_frame_info));
+ info->t = t;
+ info->sp = regs->ksp;
+ info->ip = regs->kpc;
+
+ dbg("(%d) Start unwind from sp=%08lx ip=%08lx\n", (int)t->pid, info->sp, info->ip);
+}
+
+void unwind_frame_init_from_blocked_task(struct unwind_frame_info *info, struct task_struct *t)
+{
+ struct pt_regs *regs = &t->thread.regs;
+ unwind_frame_init(info, t, regs);
+}
+
+int unwind_once(struct unwind_frame_info *next_frame)
+{
+ unwind_frame_regs(next_frame);
+
+ if (next_frame->prev_sp == 0 ||
+ next_frame->prev_ip == 0)
+ return -1;
+
+ next_frame->sp = next_frame->prev_sp;
+ next_frame->ip = next_frame->prev_ip;
+ next_frame->prev_sp = 0;
+ next_frame->prev_ip = 0;
+
+ dbg("(%d) Continue unwind to sp=%08lx ip=%08lx\n", (int)next_frame->t->pid, next_frame->sp, next_frame->ip);
+
+ return 0;
+}
+
+int unwind_to_user(struct unwind_frame_info *info)
+{
+ int ret;
+
+ do {
+ ret = unwind_once(info);
+ } while (!ret && !(info->ip & 3));
+
+ return ret;
+}
+
+module_init(unwind_init);
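+
+/*
+ * Typical use of the unwinder (a sketch, not part of this patch): walk
+ * the frames of a blocked task until unwind_once() reports failure.
+ */
+#if 0
+static void show_trace_example(struct task_struct *t)
+{
+	struct unwind_frame_info info;
+
+	unwind_frame_init_from_blocked_task(&info, t);
+	while (unwind_once(&info) >= 0)
+		printk("  ip: %08lx sp: %08lx\n", info.ip, info.sp);
+}
+#endif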
--- /dev/null
+/*
+ * Debugging versions of SMP locking primitives.
+ *
+ * Copyright (C) 2004 Thibaut VARENE <varenet@esiee.fr>
+ *
+ * Some code stolen from alpha & sparc64 ;)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <asm/system.h>
+#include <asm/hardirq.h> /* in_interrupt() */
+
+#undef INIT_STUCK
+#define INIT_STUCK	(1L << 30)
+
+#ifdef CONFIG_DEBUG_SPINLOCK
+
+void _dbg_spin_lock(spinlock_t * lock, const char *base_file, int line_no)
+{
+ volatile unsigned int *a;
+ long stuck = INIT_STUCK;
+ void *inline_pc = __builtin_return_address(0);
+ unsigned long started = jiffies;
+ int printed = 0;
+ int cpu = smp_processor_id();
+
+try_again:
+
+ /* Do the actual locking */
+	/* <T-Bone> ggg: we can't get stuck on the outer loop?
+	 * <ggg> T-Bone: We can hit the outer loop
+	 * a lot if multiple CPUs are constantly racing for a lock
+ * and the backplane is NOT fair about which CPU sees
+ * the update first. But it won't hang since every failed
+ * attempt will drop us back into the inner loop and
+ * decrement `stuck'.
+ * <ggg> K-class and some of the others are NOT fair in the HW
+ * implementation so we could see false positives.
+ * But fixing the lock contention is easier than
+ * fixing the HW to be fair.
+ * <tausq> __ldcw() returns 1 if we get the lock; otherwise we
+ * spin until the value of the lock changes, or we time out.
+ */
+ a = __ldcw_align(lock);
+ while (stuck && (__ldcw(a) == 0))
+ while ((*a == 0) && --stuck);
+
+ if (unlikely(stuck <= 0)) {
+ printk(KERN_WARNING
+ "%s:%d: spin_lock(%s/%p) stuck in %s at %p(%d)"
+ " owned by %s:%d in %s at %p(%d)\n",
+ base_file, line_no, lock->module, lock,
+ current->comm, inline_pc, cpu,
+ lock->bfile, lock->bline, lock->task->comm,
+ lock->previous, lock->oncpu);
+ stuck = INIT_STUCK;
+ printed = 1;
+ goto try_again;
+ }
+
+ /* Exiting. Got the lock. */
+ lock->oncpu = cpu;
+ lock->previous = inline_pc;
+ lock->task = current;
+ lock->bfile = (char *)base_file;
+ lock->bline = line_no;
+
+ if (unlikely(printed)) {
+ printk(KERN_WARNING
+ "%s:%d: spin_lock grabbed in %s at %p(%d) %ld ticks\n",
+ base_file, line_no, current->comm, inline_pc,
+ cpu, jiffies - started);
+ }
+}
+
+void _dbg_spin_unlock(spinlock_t * lock, const char *base_file, int line_no)
+{
+	volatile unsigned int *a;
+
+	CHECK_LOCK(lock);
+	a = __ldcw_align(lock);
+ if (unlikely((*a != 0) && lock->babble)) {
+ lock->babble--;
+ printk(KERN_WARNING
+ "%s:%d: spin_unlock(%s:%p) not locked\n",
+ base_file, line_no, lock->module, lock);
+ }
+ *a = 1;
+}
+
+int _dbg_spin_trylock(spinlock_t * lock, const char *base_file, int line_no)
+{
+ int ret;
+ volatile unsigned int *a = __ldcw_align(lock);
+ if ((ret = (__ldcw(a) != 0))) {
+ lock->oncpu = smp_processor_id();
+ lock->previous = __builtin_return_address(0);
+ lock->task = current;
+ } else {
+ lock->bfile = (char *)base_file;
+ lock->bline = line_no;
+ }
+ return ret;
+}
+
+#endif /* CONFIG_DEBUG_SPINLOCK */
+
+#ifdef CONFIG_DEBUG_RWLOCK
+
+/* Interrupts trouble detailed explanation, thx Grant:
+ *
+ * o writer (wants to modify data) attempts to acquire the rwlock
+ * o He gets the write lock.
+ * o Interrupts are still enabled, so we take an interrupt with the
+ *   writer still holding the lock.
+ * o The interrupt handler tries to acquire the rwlock for read.
+ * o Deadlock, since the writer can't release it at this point.
+ *
+ * In general, any use of spinlocks that competes between "base"
+ * level and interrupt level code will risk deadlock. Interrupts
+ * need to be disabled in the base level routines to avoid it.
+ * Or more precisely, only the IRQ the base level routine
+ * is competing with for the lock. But it's more efficient/faster
+ * to just disable all interrupts on that CPU to guarantee
+ * once it gets the lock it can release it quickly too.
+ */
+
+void _dbg_write_lock(rwlock_t *rw, const char *bfile, int bline)
+{
+ void *inline_pc = __builtin_return_address(0);
+ unsigned long started = jiffies;
+ long stuck = INIT_STUCK;
+ int printed = 0;
+ int cpu = smp_processor_id();
+
+	if (unlikely(in_interrupt())) {	/* acquiring write lock in interrupt context, bad idea */
+		printk(KERN_WARNING "write_lock caller: %s:%d, called from interrupt context\n", bfile, bline);
+ BUG();
+ }
+
+ /* Note: if interrupts are disabled (which is most likely), the printk
+ will never show on the console. We might need a polling method to flush
+ the dmesg buffer anyhow. */
+
+retry:
+ _raw_spin_lock(&rw->lock);
+
+ if(rw->counter != 0) {
+ /* this basically never happens */
+ _raw_spin_unlock(&rw->lock);
+
+ stuck--;
+ if ((unlikely(stuck <= 0)) && (rw->counter < 0)) {
+ printk(KERN_WARNING
+ "%s:%d: write_lock stuck on writer"
+ " in %s at %p(%d) %ld ticks\n",
+ bfile, bline, current->comm, inline_pc,
+ cpu, jiffies - started);
+ stuck = INIT_STUCK;
+ printed = 1;
+ }
+ else if (unlikely(stuck <= 0)) {
+ printk(KERN_WARNING
+ "%s:%d: write_lock stuck on reader"
+ " in %s at %p(%d) %ld ticks\n",
+ bfile, bline, current->comm, inline_pc,
+ cpu, jiffies - started);
+ stuck = INIT_STUCK;
+ printed = 1;
+ }
+
+ while(rw->counter != 0);
+
+ goto retry;
+ }
+
+ /* got it. now leave without unlocking */
+ rw->counter = -1; /* remember we are locked */
+
+ if (unlikely(printed)) {
+ printk(KERN_WARNING
+ "%s:%d: write_lock grabbed in %s at %p(%d) %ld ticks\n",
+ bfile, bline, current->comm, inline_pc,
+ cpu, jiffies - started);
+ }
+}
+
+void _dbg_read_lock(rwlock_t * rw, const char *bfile, int bline)
+{
+#if 0
+ void *inline_pc = __builtin_return_address(0);
+ unsigned long started = jiffies;
+ int cpu = smp_processor_id();
+#endif
+ unsigned long flags;
+
+ local_irq_save(flags);
+ _raw_spin_lock(&rw->lock);
+
+ rw->counter++;
+#if 0
+ printk(KERN_WARNING
+ "%s:%d: read_lock grabbed in %s at %p(%d) %ld ticks\n",
+ bfile, bline, current->comm, inline_pc,
+ cpu, jiffies - started);
+#endif
+ _raw_spin_unlock(&rw->lock);
+ local_irq_restore(flags);
+}
+
+#endif /* CONFIG_DEBUG_RWLOCK */
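+
+/*
+ * Hypothetical wiring sketch (an assumption, not shown in this patch):
+ * with CONFIG_DEBUG_SPINLOCK the generic lock macros would expand to
+ * the checked variants, passing the call site for the diagnostics
+ * above, e.g.:
+ *
+ *	#define spin_lock(lock)	_dbg_spin_lock(lock, __FILE__, __LINE__)
+ */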
--- /dev/null
+/*
+ * arch/ppc/boot/simple/mpc52xx_tty.c
+ *
+ * Minimal serial functions needed to send messages out a MPC52xx
+ * Programmable Serial Controller (PSC).
+ *
+ * Author: Dale Farnsworth <dfarnsworth@mvista.com>
+ *
+ * 2003-2004 (c) MontaVista, Software, Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is licensed
+ * "as is" without any warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/uaccess.h>
+#include <asm/mpc52xx.h>
+#include <asm/mpc52xx_psc.h>
+#include <asm/serial.h>
+#include <asm/time.h>
+
+#if MPC52xx_PF_CONSOLE_PORT == 0
+#define MPC52xx_CONSOLE MPC52xx_PSC1
+#define MPC52xx_PSC_CONFIG_SHIFT 0
+#elif MPC52xx_PF_CONSOLE_PORT == 1
+#define MPC52xx_CONSOLE MPC52xx_PSC2
+#define MPC52xx_PSC_CONFIG_SHIFT 4
+#elif MPC52xx_PF_CONSOLE_PORT == 2
+#define MPC52xx_CONSOLE MPC52xx_PSC3
+#define MPC52xx_PSC_CONFIG_SHIFT 8
+#else
+#error "MPC52xx_PF_CONSOLE_PORT not defined"
+#endif
+
+static struct mpc52xx_psc *psc = (struct mpc52xx_psc *)MPC52xx_CONSOLE;
+
+/* The time base counts at the system bus (XLB) clock frequency divided
+ * by four. The most accurate reference available is the RTC, so we speed
+ * the RTC up 64x, count time base ticks during one RTC tick (now 1/64
+ * second) and multiply by 256 (64 ticks/s times 4) to get the system bus
+ * clock frequency; the IPB clock is half of that if so configured.
+ */
+int
+mpc52xx_ipbfreq(void)
+{
+ struct mpc52xx_rtc *rtc = (struct mpc52xx_rtc*)MPC52xx_RTC;
+ struct mpc52xx_cdm *cdm = (struct mpc52xx_cdm*)MPC52xx_CDM;
+ int current_time, previous_time;
+ int tbl_start, tbl_end;
+ int xlbfreq, ipbfreq;
+
+ out_be32(&rtc->dividers, 0x8f1f0000); /* Set RTC 64x faster */
+ previous_time = in_be32(&rtc->time);
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_start = get_tbl();
+ previous_time = current_time;
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_end = get_tbl();
+ out_be32(&rtc->dividers, 0xffff0000); /* Restore RTC */
+
+ xlbfreq = (tbl_end - tbl_start) << 8;
+ ipbfreq = (in_8(&cdm->ipb_clk_sel) & 1) ? xlbfreq / 2 : xlbfreq;
+
+ return ipbfreq;
+}
+
+unsigned long
+serial_init(int ignored, void *ignored2)
+{
+ struct mpc52xx_gpio *gpio = (struct mpc52xx_gpio *)MPC52xx_GPIO;
+ int divisor;
+ int mode1;
+ int mode2;
+ u32 val32;
+
+ static int been_here = 0;
+
+ if (been_here)
+ return 0;
+
+ been_here = 1;
+
+ val32 = in_be32(&gpio->port_config);
+ val32 &= ~(0x7 << MPC52xx_PSC_CONFIG_SHIFT);
+ val32 |= MPC52xx_GPIO_PSC_CONFIG_UART_WITHOUT_CD
+ << MPC52xx_PSC_CONFIG_SHIFT;
+ out_be32(&gpio->port_config, val32);
+
+ out_8(&psc->command, MPC52xx_PSC_RST_TX
+ | MPC52xx_PSC_RX_DISABLE | MPC52xx_PSC_TX_ENABLE);
+ out_8(&psc->command, MPC52xx_PSC_RST_RX);
+
+ out_be32(&psc->sicr, 0x0);
+ out_be16(&psc->mpc52xx_psc_clock_select, 0xdd00);
+ out_be16(&psc->tfalarm, 0xf8);
+
+ out_8(&psc->command, MPC52xx_PSC_SEL_MODE_REG_1
+ | MPC52xx_PSC_RX_ENABLE
+ | MPC52xx_PSC_TX_ENABLE);
+
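+ /* The requested baud rate corresponds to a 16-bit counter value of
+ * ipbfreq / (baud * 32); dividing by (baud * 16) first, then adding
+ * one and halving, rounds that quotient to the nearest integer. */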
+ divisor = ((mpc52xx_ipbfreq()
+ / (CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD * 16)) + 1) >> 1;
+
+ mode1 = MPC52xx_PSC_MODE_8_BITS | MPC52xx_PSC_MODE_PARNONE
+ | MPC52xx_PSC_MODE_ERR;
+ mode2 = MPC52xx_PSC_MODE_ONE_STOP;
+
+ out_8(&psc->ctur, divisor>>8);
+ out_8(&psc->ctlr, divisor);
+ out_8(&psc->command, MPC52xx_PSC_SEL_MODE_REG_1);
+ out_8(&psc->mode, mode1);
+ out_8(&psc->mode, mode2);
+
+ return 0; /* ignored */
+}
+
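+/*
+ * Polled transmit: wait for the transmitter to drain both before and
+ * after writing, so each character is fully shifted out before the
+ * caller can touch the PSC again.
+ */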
+void
+serial_putc(void *ignored, const char c)
+{
+ serial_init(0, 0);
+
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP)) ;
+ out_8(&psc->mpc52xx_psc_buffer_8, c);
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP)) ;
+}
+
+char
+serial_getc(void *ignored)
+{
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_RXRDY)) ;
+
+ return in_8(&psc->mpc52xx_psc_buffer_8);
+}
+
+int
+serial_tstc(void *ignored)
+{
+ return (in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_RXRDY) != 0;
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_MMU=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_PPC=y
+CONFIG_PPC32=y
+CONFIG_GENERIC_NVRAM=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+# CONFIG_KALLSYMS is not set
+CONFIG_FUTEX=y
+# CONFIG_EPOLL is not set
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# Processor
+#
+CONFIG_6xx=y
+# CONFIG_40x is not set
+# CONFIG_44x is not set
+# CONFIG_POWER3 is not set
+# CONFIG_POWER4 is not set
+# CONFIG_8xx is not set
+# CONFIG_CPU_FREQ is not set
+CONFIG_EMBEDDEDBOOT=y
+CONFIG_PPC_STD_MMU=y
+
+#
+# Platform options
+#
+# CONFIG_PPC_MULTIPLATFORM is not set
+# CONFIG_APUS is not set
+# CONFIG_WILLOW is not set
+# CONFIG_PCORE is not set
+# CONFIG_POWERPMC250 is not set
+# CONFIG_EV64260 is not set
+# CONFIG_SPRUCE is not set
+# CONFIG_LOPEC is not set
+# CONFIG_MCPN765 is not set
+# CONFIG_MVME5100 is not set
+# CONFIG_PPLUS is not set
+# CONFIG_PRPMC750 is not set
+# CONFIG_PRPMC800 is not set
+# CONFIG_SANDPOINT is not set
+# CONFIG_ADIR is not set
+# CONFIG_K2 is not set
+# CONFIG_PAL4 is not set
+# CONFIG_GEMINI is not set
+# CONFIG_EST8260 is not set
+# CONFIG_SBC82xx is not set
+# CONFIG_SBS8260 is not set
+# CONFIG_RPX6 is not set
+# CONFIG_TQM8260 is not set
+CONFIG_ADS8272=y
+CONFIG_PQ2ADS=y
+CONFIG_8260=y
+CONFIG_8272=y
+CONFIG_CPM2=y
+# CONFIG_PC_KEYBOARD is not set
+CONFIG_SERIAL_CONSOLE=y
+# CONFIG_SMP is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_HIGHMEM is not set
+CONFIG_KERNEL_ELF=y
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+# CONFIG_CMDLINE_BOOL is not set
+
+#
+# Bus options
+#
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+# CONFIG_PCI_LEGACY_PROC is not set
+# CONFIG_PCI_NAMES is not set
+
+#
+# Advanced setup
+#
+# CONFIG_ADVANCED_OPTIONS is not set
+
+#
+# Default settings for advanced configuration options are used
+#
+CONFIG_HIGHMEM_START=0xfe000000
+CONFIG_LOWMEM_SIZE=0x30000000
+CONFIG_KERNEL_START=0xc0000000
+CONFIG_TASK_SIZE=0x80000000
+CONFIG_BOOT_LOAD=0x00400000
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=32768
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Macintosh device drivers
+#
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
+# CONFIG_OAKNET is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_PCI is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+CONFIG_GEN_RTC=y
+# CONFIG_GEN_RTC_X is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+# CONFIG_MSDOS_PARTITION is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+# CONFIG_SCC_ENET is not set
+CONFIG_FEC_ENET=y
+# CONFIG_USE_MDIO is not set
+
+#
+# CPM2 Options
+#
+CONFIG_SCC_CONSOLE=y
+CONFIG_FCC1_ENET=y
+# CONFIG_FCC2_ENET is not set
+# CONFIG_FCC3_ENET is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_DEBUG_KERNEL is not set
+# CONFIG_KGDB_CONSOLE is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_MMU=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_PPC=y
+CONFIG_PPC32=y
+CONFIG_GENERIC_NVRAM=y
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_ALL is not set
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_MODULE_FORCE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+CONFIG_MODVERSIONS=y
+CONFIG_KMOD=y
+#
+# Processor
+#
+CONFIG_6xx=y
+# CONFIG_40x is not set
+# CONFIG_44x is not set
+# CONFIG_POWER3 is not set
+# CONFIG_POWER4 is not set
+# CONFIG_8xx is not set
+# CONFIG_E500 is not set
+# CONFIG_ALTIVEC is not set
+# CONFIG_TAU is not set
+# CONFIG_CPU_FREQ is not set
+CONFIG_FSL_OCP=y
+CONFIG_PPC_STD_MMU=y
+#
+# Platform options
+#
+# CONFIG_PPC_MULTIPLATFORM is not set
+# CONFIG_APUS is not set
+# CONFIG_WILLOW is not set
+# CONFIG_PCORE is not set
+# CONFIG_POWERPMC250 is not set
+# CONFIG_EV64260 is not set
+# CONFIG_SPRUCE is not set
+# CONFIG_LOPEC is not set
+# CONFIG_MCPN765 is not set
+# CONFIG_MVME5100 is not set
+# CONFIG_PPLUS is not set
+# CONFIG_PRPMC750 is not set
+# CONFIG_PRPMC800 is not set
+# CONFIG_SANDPOINT is not set
+# CONFIG_ADIR is not set
+# CONFIG_K2 is not set
+# CONFIG_PAL4 is not set
+# CONFIG_GEMINI is not set
+# CONFIG_EST8260 is not set
+# CONFIG_SBC82xx is not set
+# CONFIG_SBS8260 is not set
+# CONFIG_RPX6 is not set
+# CONFIG_TQM8260 is not set
+# CONFIG_ADS8272 is not set
+CONFIG_LITE5200=y
+CONFIG_PPC_MPC52xx=y
+# CONFIG_SMP is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_HIGHMEM is not set
+CONFIG_KERNEL_ELF=y
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="console=ttyS0 root=/dev/ram0 rw"
+#
+# Bus options
+#
+CONFIG_GENERIC_ISA_DMA=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+# CONFIG_PCI_LEGACY_PROC is not set
+# CONFIG_PCI_NAMES is not set
+#
+# Advanced setup
+#
+CONFIG_ADVANCED_OPTIONS=y
+CONFIG_HIGHMEM_START=0xfe000000
+# CONFIG_LOWMEM_SIZE_BOOL is not set
+CONFIG_LOWMEM_SIZE=0x30000000
+# CONFIG_KERNEL_START_BOOL is not set
+CONFIG_KERNEL_START=0xc0000000
+# CONFIG_TASK_SIZE_BOOL is not set
+CONFIG_TASK_SIZE=0x80000000
+# CONFIG_BOOT_LOAD_BOOL is not set
+CONFIG_BOOT_LOAD=0x00800000
+#
+# Device Drivers
+#
+#
+# Generic Driver Options
+#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+# CONFIG_DEBUG_DRIVER is not set
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+#
+# Plug and Play support
+#
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_SX8 is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_LBD is not set
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+#
+# Fusion MPT device support
+#
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+#
+# Macintosh device drivers
+#
+#
+# Networking support
+#
+# CONFIG_NET is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+#
+# ISDN subsystem
+#
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+#
+# Input device support
+#
+CONFIG_INPUT=y
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_EVBUG=y
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PCIPS2 is not set
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_SERIAL_MPC52xx=y
+CONFIG_SERIAL_MPC52xx_CONSOLE=y
+CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD=9600
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+#
+# Misc devices
+#
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+#
+# Digital Video Broadcasting Devices
+#
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+#
+# Console display driver support
+#
+CONFIG_VGA_CONSOLE=y
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+#
+# USB support
+#
+# CONFIG_USB is not set
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+# CONFIG_NLS_CODEPAGE_932 is not set
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ASCII is not set
+CONFIG_NLS_ISO8859_1=m
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+#
+# Library routines
+#
+# CONFIG_CRC16 is not set
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+#
+# Kernel hacking
+#
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SPINLOCK is not set
+CONFIG_DEBUG_SPINLOCK_SLEEP=y
+# CONFIG_KGDB is not set
+# CONFIG_XMON is not set
+# CONFIG_BDI_SWITCH is not set
+CONFIG_DEBUG_INFO=y
+CONFIG_SERIAL_TEXT_DEBUG=y
+CONFIG_PPC_OCP=y
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_MMU=y
+CONFIG_RWSEM_XCHGADD_ALGORITHM=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_PPC=y
+CONFIG_PPC32=y
+CONFIG_GENERIC_NVRAM=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+# CONFIG_SWAP is not set
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+# CONFIG_KALLSYMS is not set
+CONFIG_FUTEX=y
+# CONFIG_EPOLL is not set
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# Processor
+#
+CONFIG_6xx=y
+# CONFIG_40x is not set
+# CONFIG_44x is not set
+# CONFIG_POWER3 is not set
+# CONFIG_POWER4 is not set
+# CONFIG_8xx is not set
+# CONFIG_E500 is not set
+# CONFIG_CPU_FREQ is not set
+CONFIG_EMBEDDEDBOOT=y
+CONFIG_PPC_STD_MMU=y
+
+#
+# Platform options
+#
+# CONFIG_PPC_MULTIPLATFORM is not set
+# CONFIG_APUS is not set
+# CONFIG_WILLOW is not set
+# CONFIG_PCORE is not set
+# CONFIG_POWERPMC250 is not set
+# CONFIG_EV64260 is not set
+# CONFIG_SPRUCE is not set
+# CONFIG_LOPEC is not set
+# CONFIG_MCPN765 is not set
+# CONFIG_MVME5100 is not set
+# CONFIG_PPLUS is not set
+# CONFIG_PRPMC750 is not set
+# CONFIG_PRPMC800 is not set
+# CONFIG_SANDPOINT is not set
+# CONFIG_ADIR is not set
+# CONFIG_K2 is not set
+# CONFIG_PAL4 is not set
+# CONFIG_GEMINI is not set
+# CONFIG_EST8260 is not set
+# CONFIG_SBC82xx is not set
+# CONFIG_SBS8260 is not set
+CONFIG_RPX8260=y
+# CONFIG_TQM8260 is not set
+# CONFIG_ADS8272 is not set
+CONFIG_8260=y
+CONFIG_CPM2=y
+# CONFIG_PC_KEYBOARD is not set
+# CONFIG_SMP is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_HIGHMEM is not set
+CONFIG_KERNEL_ELF=y
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+# CONFIG_CMDLINE_BOOL is not set
+
+#
+# Bus options
+#
+# CONFIG_PCI is not set
+# CONFIG_PCI_DOMAINS is not set
+
+#
+# Advanced setup
+#
+# CONFIG_ADVANCED_OPTIONS is not set
+
+#
+# Default settings for advanced configuration options are used
+#
+CONFIG_HIGHMEM_START=0xfe000000
+CONFIG_LOWMEM_SIZE=0x30000000
+CONFIG_KERNEL_START=0xc0000000
+CONFIG_TASK_SIZE=0x80000000
+CONFIG_BOOT_LOAD=0x00400000
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+
+#
+# Macintosh device drivers
+#
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_IP_MROUTE is not set
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
+# CONFIG_OAKNET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+
+#
+# Ethernet (10000 Mbit)
+#
+
+#
+# Token Ring devices
+#
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+# CONFIG_INPUT is not set
+
+#
+# Userland interfaces
+#
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_SERIAL_CPM=y
+CONFIG_SERIAL_CPM_CONSOLE=y
+# CONFIG_SERIAL_CPM_SCC1 is not set
+# CONFIG_SERIAL_CPM_SCC2 is not set
+# CONFIG_SERIAL_CPM_SCC3 is not set
+# CONFIG_SERIAL_CPM_SCC4 is not set
+CONFIG_SERIAL_CPM_SMC1=y
+# CONFIG_SERIAL_CPM_SMC2 is not set
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+# CONFIG_MSDOS_PARTITION is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+# CONFIG_SCC_ENET is not set
+CONFIG_FEC_ENET=y
+# CONFIG_USE_MDIO is not set
+
+#
+# CPM2 Options
+#
+# CONFIG_FCC1_ENET is not set
+# CONFIG_FCC2_ENET is not set
+CONFIG_FCC3_ENET=y
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_DEBUG_KERNEL is not set
+# CONFIG_KGDB_CONSOLE is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
--- /dev/null
+/*
+ * PowerPC version derived from arch/arm/mm/consistent.c
+ * Copyright (C) 2001 Dan Malek (dmalek@jlc.net)
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * Consistent memory allocators. Used for DMA devices that want to
+ * share uncached memory with the processor core. The functions return
+ * the virtual address and set 'dma_handle' to the physical address.
+ * Mostly stolen from the ARM port, with some changes for PowerPC.
+ * -- Dan
+ *
+ * Reorganized to get rid of the arch-specific consistent_* functions
+ * and provide non-coherent implementations for the DMA API. -Matt
+ *
+ * Added in_interrupt() safe dma_alloc_coherent()/dma_free_coherent()
+ * implementation. This is pulled straight from ARM and barely
+ * modified. -Matt
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/bootmem.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <asm/io.h>
+#include <asm/hardirq.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/uaccess.h>
+#include <asm/smp.h>
+#include <asm/machdep.h>
+
+int map_page(unsigned long va, phys_addr_t pa, int flags);
+
+#include <asm/tlbflush.h>
+
+/*
+ * This address range defaults to a value that is safe for all
+ * platforms which currently set CONFIG_NOT_COHERENT_CACHE. It
+ * can be further configured for specific applications under
+ * the "Advanced Setup" menu. -Matt
+ */
+#define CONSISTENT_BASE (CONFIG_CONSISTENT_START)
+#define CONSISTENT_END (CONFIG_CONSISTENT_START + CONFIG_CONSISTENT_SIZE)
+#define CONSISTENT_OFFSET(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT)
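+
+/*
+ * CONSISTENT_OFFSET() turns a virtual address in the consistent region
+ * into a PTE index: e.g. CONSISTENT_OFFSET(CONSISTENT_BASE + 2 * PAGE_SIZE)
+ * is 2, the third entry of consistent_pte below.
+ */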
+
+/*
+ * This is the page table (2MB) covering uncached, DMA consistent allocations
+ */
+static pte_t *consistent_pte;
+static spinlock_t consistent_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * VM region handling support.
+ *
+ * This should become something generic, handling VM region allocations for
+ * vmalloc and similar (ioremap, module space, etc).
+ *
+ * I envisage vmalloc()'s supporting vm_struct becoming:
+ *
+ * struct vm_struct {
+ * struct vm_region region;
+ * unsigned long flags;
+ * struct page **pages;
+ * unsigned int nr_pages;
+ * unsigned long phys_addr;
+ * };
+ *
+ * get_vm_area() would then call vm_region_alloc with an appropriate
+ * struct vm_region head (eg):
+ *
+ * struct vm_region vmalloc_head = {
+ * .vm_list = LIST_HEAD_INIT(vmalloc_head.vm_list),
+ * .vm_start = VMALLOC_START,
+ * .vm_end = VMALLOC_END,
+ * };
+ *
+ * However, vmalloc_head.vm_start is variable (typically, it is dependent on
+ * the amount of RAM found at boot time.) I would imagine that get_vm_area()
+ * would have to initialise this each time prior to calling vm_region_alloc().
+ */
+struct vm_region {
+ struct list_head vm_list;
+ unsigned long vm_start;
+ unsigned long vm_end;
+};
+
+static struct vm_region consistent_head = {
+ .vm_list = LIST_HEAD_INIT(consistent_head.vm_list),
+ .vm_start = CONSISTENT_BASE,
+ .vm_end = CONSISTENT_END,
+};
+
+static struct vm_region *
+vm_region_alloc(struct vm_region *head, size_t size, int gfp)
+{
+ unsigned long addr = head->vm_start, end = head->vm_end - size;
+ unsigned long flags;
+ struct vm_region *c, *new;
+
+ new = kmalloc(sizeof(struct vm_region), gfp);
+ if (!new)
+ goto out;
+
+ spin_lock_irqsave(&consistent_lock, flags);
+
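+ /*
+ * First-fit search: the region list is kept sorted by address, so
+ * take the first gap (before an existing region, or after the last
+ * one) that is large enough to hold 'size' bytes.
+ */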
+ list_for_each_entry(c, &head->vm_list, vm_list) {
+ if ((addr + size) < addr)
+ goto nospc;
+ if ((addr + size) <= c->vm_start)
+ goto found;
+ addr = c->vm_end;
+ if (addr > end)
+ goto nospc;
+ }
+
+ found:
+ /*
+ * Insert this entry _before_ the one we found.
+ */
+ list_add_tail(&new->vm_list, &c->vm_list);
+ new->vm_start = addr;
+ new->vm_end = addr + size;
+
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ return new;
+
+ nospc:
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ kfree(new);
+ out:
+ return NULL;
+}
+
+static struct vm_region *vm_region_find(struct vm_region *head, unsigned long addr)
+{
+ struct vm_region *c;
+
+ list_for_each_entry(c, &head->vm_list, vm_list) {
+ if (c->vm_start == addr)
+ goto out;
+ }
+ c = NULL;
+ out:
+ return c;
+}
+
+/*
+ * Allocate DMA-coherent memory space and return both the kernel remapped
+ * virtual and bus address for that space.
+ */
+void *
+__dma_alloc_coherent(size_t size, dma_addr_t *handle, int gfp)
+{
+ struct page *page;
+ struct vm_region *c;
+ unsigned long order;
+ u64 mask = 0x00ffffff, limit; /* ISA default */
+
+ if (!consistent_pte) {
+ printk(KERN_ERR "%s: not initialised\n", __func__);
+ dump_stack();
+ return NULL;
+ }
+
+ size = PAGE_ALIGN(size);
+ limit = (mask + 1) & ~mask;
+ if ((limit && size >= limit) || size >= (CONSISTENT_END - CONSISTENT_BASE)) {
+ printk(KERN_WARNING "coherent allocation too big (requested %#x mask %#Lx)\n",
+ size, mask);
+ return NULL;
+ }
+
+ order = get_order(size);
+
+ if (mask != 0xffffffff)
+ gfp |= GFP_DMA;
+
+ page = alloc_pages(gfp, order);
+ if (!page)
+ goto no_page;
+
+ /*
+ * Invalidate any data that might be lurking in the
+ * kernel direct-mapped region for device DMA.
+ */
+ {
+ unsigned long kaddr = (unsigned long)page_address(page);
+ memset(page_address(page), 0, size);
+ flush_dcache_range(kaddr, kaddr + size);
+ }
+
+ /*
+ * Allocate a virtual address in the consistent mapping region.
+ */
+ c = vm_region_alloc(&consistent_head, size,
+ gfp & ~(__GFP_DMA | __GFP_HIGHMEM));
+ if (c) {
+ pte_t *pte = consistent_pte + CONSISTENT_OFFSET(c->vm_start);
+ struct page *end = page + (1 << order);
+
+ /*
+ * Set the "dma handle"
+ */
+ *handle = page_to_bus(page);
+
+ do {
+ BUG_ON(!pte_none(*pte));
+
+ set_page_count(page, 1);
+ SetPageReserved(page);
+ set_pte(pte, mk_pte(page, pgprot_noncached(PAGE_KERNEL)));
+ page++;
+ pte++;
+ } while (size -= PAGE_SIZE);
+
+ /*
+ * Free the otherwise unused pages.
+ */
+ while (page < end) {
+ set_page_count(page, 1);
+ __free_page(page);
+ page++;
+ }
+
+ return (void *)c->vm_start;
+ }
+
+ if (page)
+ __free_pages(page, order);
+ no_page:
+ return NULL;
+}
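+
+/*
+ * Usage sketch (illustrative only, not part of this file): on a
+ * CONFIG_NOT_COHERENT_CACHE platform a driver ends up here via the
+ * generic DMA API, roughly:
+ *
+ *	dma_addr_t bus;
+ *	u32 *desc = __dma_alloc_coherent(PAGE_SIZE, &bus, GFP_KERNEL);
+ *	if (desc) {
+ *		desc[0] = cpu_to_le32(bus);	// device reads 'bus'
+ *		__dma_free_coherent(PAGE_SIZE, desc);
+ *	}
+ */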
+
+/*
+ * free a page as defined by the above mapping.
+ */
+void __dma_free_coherent(size_t size, void *vaddr)
+{
+ struct vm_region *c;
+ unsigned long flags;
+ pte_t *ptep;
+
+ size = PAGE_ALIGN(size);
+
+ spin_lock_irqsave(&consistent_lock, flags);
+
+ c = vm_region_find(&consistent_head, (unsigned long)vaddr);
+ if (!c)
+ goto no_area;
+
+ if ((c->vm_end - c->vm_start) != size) {
+ printk(KERN_ERR "%s: freeing wrong coherent size (%ld != %d)\n",
+ __func__, c->vm_end - c->vm_start, size);
+ dump_stack();
+ size = c->vm_end - c->vm_start;
+ }
+
+ ptep = consistent_pte + CONSISTENT_OFFSET(c->vm_start);
+ do {
+ pte_t pte = ptep_get_and_clear(ptep);
+ unsigned long pfn;
+
+ ptep++;
+
+ if (!pte_none(pte) && pte_present(pte)) {
+ pfn = pte_pfn(pte);
+
+ if (pfn_valid(pfn)) {
+ struct page *page = pfn_to_page(pfn);
+ ClearPageReserved(page);
+
+ __free_page(page);
+ continue;
+ }
+ }
+
+ printk(KERN_CRIT "%s: bad page in kernel page table\n",
+ __func__);
+ } while (size -= PAGE_SIZE);
+
+ flush_tlb_kernel_range(c->vm_start, c->vm_end);
+
+ list_del(&c->vm_list);
+
+ spin_unlock_irqrestore(&consistent_lock, flags);
+
+ kfree(c);
+ return;
+
+ no_area:
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ printk(KERN_ERR "%s: trying to free invalid coherent area: %p\n",
+ __func__, vaddr);
+ dump_stack();
+}
+EXPORT_SYMBOL(__dma_free_coherent);
+
+/*
+ * Initialise the consistent memory allocation.
+ */
+static int __init dma_alloc_init(void)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+ int ret = 0;
+
+ spin_lock(&init_mm.page_table_lock);
+
+ do {
+ pgd = pgd_offset(&init_mm, CONSISTENT_BASE);
+ pmd = pmd_alloc(&init_mm, pgd, CONSISTENT_BASE);
+ if (!pmd) {
+ printk(KERN_ERR "%s: no pmd tables\n", __func__);
+ ret = -ENOMEM;
+ break;
+ }
+ WARN_ON(!pmd_none(*pmd));
+
+ pte = pte_alloc_kernel(&init_mm, pmd, CONSISTENT_BASE);
+ if (!pte) {
+ printk(KERN_ERR "%s: no pte tables\n", __func__);
+ ret = -ENOMEM;
+ break;
+ }
+
+ consistent_pte = pte;
+ } while (0);
+
+ spin_unlock(&init_mm.page_table_lock);
+
+ return ret;
+}
+
+core_initcall(dma_alloc_init);
+
+/*
+ * make an area consistent.
+ */
+void __dma_sync(void *vaddr, size_t size, int direction)
+{
+ unsigned long start = (unsigned long)vaddr;
+ unsigned long end = start + size;
+
+ switch (direction) {
+ case DMA_NONE:
+ BUG();
+ case DMA_FROM_DEVICE: /* invalidate only */
+ invalidate_dcache_range(start, end);
+ break;
+ case DMA_TO_DEVICE: /* writeback only */
+ clean_dcache_range(start, end);
+ break;
+ case DMA_BIDIRECTIONAL: /* writeback and invalidate */
+ flush_dcache_range(start, end);
+ break;
+ }
+}
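+
+/*
+ * For example, a driver doing a streaming map would call
+ * __dma_sync(buf, len, DMA_TO_DEVICE) before starting a transfer to
+ * push dirty cache lines out to memory, and use DMA_FROM_DEVICE to
+ * discard stale lines so the CPU rereads what the device wrote.
+ */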
+
+#ifdef CONFIG_HIGHMEM
+/*
+ * __dma_sync_page() implementation for systems using highmem.
+ * In this case, each page of a buffer must be kmapped/kunmapped
+ * in order to have a virtual address for __dma_sync(). This must
+ * not sleep, so kmap_atomic()/kunmap_atomic() are used.
+ *
+ * Note: yes, it is possible and correct to have a buffer extend
+ * beyond the first page.
+ */
+static inline void __dma_sync_page_highmem(struct page *page,
+ unsigned long offset, size_t size, int direction)
+{
+ /* The first segment runs from 'offset' to the end of the page */
+ size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
+ size_t cur_size = seg_size;
+ unsigned long flags, start, seg_offset = offset;
+ int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE;
+ int seg_nr = 0;
+
+ local_irq_save(flags);
+
+ do {
+ start = (unsigned long)kmap_atomic(page + seg_nr,
+ KM_PPC_SYNC_PAGE) + seg_offset;
+
+ /* Sync this buffer segment */
+ __dma_sync((void *)start, seg_size, direction);
+ kunmap_atomic((void *)start, KM_PPC_SYNC_PAGE);
+ seg_nr++;
+
+ /* Calculate next buffer segment size */
+ seg_size = min((size_t)PAGE_SIZE, size - cur_size);
+
+ /* Add the segment size to our running total */
+ cur_size += seg_size;
+ seg_offset = 0;
+ } while (seg_nr < nr_segs);
+
+ local_irq_restore(flags);
+}
+#endif /* CONFIG_HIGHMEM */
+
+/*
+ * __dma_sync_page() makes memory consistent; identical to __dma_sync(),
+ * but takes a struct page and offset instead of a virtual address.
+ */
+void __dma_sync_page(struct page *page, unsigned long offset,
+ size_t size, int direction)
+{
+#ifdef CONFIG_HIGHMEM
+ __dma_sync_page_highmem(page, offset, size, direction);
+#else
+ unsigned long start = (unsigned long)page_address(page) + offset;
+ __dma_sync((void *)start, size, direction);
+#endif
+}
--- /dev/null
+/*
+ * arch/ppc/kernel/head_e500.S
+ *
+ * Kernel execution entry point code.
+ *
+ * Copyright (c) 1995-1996 Gary Thomas <gdt@linuxppc.org>
+ * Initial PowerPC version.
+ * Copyright (c) 1996 Cort Dougan <cort@cs.nmt.edu>
+ * Rewritten for PReP
+ * Copyright (c) 1996 Paul Mackerras <paulus@cs.anu.edu.au>
+ * Low-level exception handlers, MMU support, and rewrite.
+ * Copyright (c) 1997 Dan Malek <dmalek@jlc.net>
+ * PowerPC 8xx modifications.
+ * Copyright (c) 1998-1999 TiVo, Inc.
+ * PowerPC 403GCX modifications.
+ * Copyright (c) 1999 Grant Erickson <grant@lcse.umn.edu>
+ * PowerPC 403GCX/405GP modifications.
+ * Copyright 2000 MontaVista Software Inc.
+ * PPC405 modifications
+ * Author: MontaVista Software, Inc.
+ * frank_rowand@mvista.com or source@mvista.com
+ * debbie_chu@mvista.com
+ * Copyright 2002-2004 MontaVista Software, Inc.
+ * PowerPC 44x support, Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004 Freescale Semiconductor, Inc
+ * PowerPC e500 modifications, Kumar Gala <kumar.gala@freescale.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/mmu.h>
+#include <asm/pgtable.h>
+#include <asm/cputable.h>
+#include <asm/thread_info.h>
+#include <asm/ppc_asm.h>
+#include <asm/offsets.h>
+
+/*
+ * Macros
+ */
+
+#define SET_IVOR(vector_number, vector_label) \
+ li r26,vector_label@l; \
+ mtspr SPRN_IVOR##vector_number,r26; \
+ sync
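+
+/* For example, SET_IVOR(10, Decrementer) expands to:
+ *
+ *	li	r26,Decrementer@l
+ *	mtspr	SPRN_IVOR10,r26
+ *	sync
+ */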
+
+/* As with the other PowerPC ports, it is expected that when code
+ * execution begins here, the following registers contain valid, yet
+ * optional, information:
+ *
+ * r3 - Board info structure pointer (DRAM, frequency, MAC address, etc.)
+ * r4 - Starting address of the init RAM disk
+ * r5 - Ending address of the init RAM disk
+ * r6 - Start of kernel command line string (e.g. "mem=128")
+ * r7 - End of kernel command line string
+ *
+ */
+ .text
+_GLOBAL(_stext)
+_GLOBAL(_start)
+ /*
+ * Reserve a word at a fixed location to store the address
+ * of abatron_pteptrs
+ */
+ nop
+/*
+ * Save parameters we are passed
+ */
+ mr r31,r3
+ mr r30,r4
+ mr r29,r5
+ mr r28,r6
+ mr r27,r7
+ li r24,0 /* CPU number */
+
+/* We try not to make any assumptions about how the boot loader
+ * set up or used the TLBs. We invalidate all mappings from the
+ * boot loader and load a single entry in TLB1[0] to map the
+ * first 16M of kernel memory. Any boot info passed from the
+ * bootloader needs to live in this first 16M.
+ *
+ * Requirement on bootloader:
+ * - The page we're executing in needs to reside in TLB1 and
+ * have IPROT=1. If not, an invalidate broadcast could
+ * evict the entry we're currently executing in.
+ *
+ * r3 = Index of TLB1 entry we're executing in
+ * r4 = Current MSR[IS]
+ * r5 = Index of TLB1 temp mapping
+ *
+ * Later in mapin_ram we will correctly map lowmem, and resize TLB1[0]
+ * if needed
+ */
+
+/* 1. Find the index of the entry we're executing in */
+ bl invstr /* Find our address */
+invstr: mflr r6 /* Make it accessible */
+ mfmsr r7
+ rlwinm r4,r7,27,31,31 /* extract MSR[IS] */
+ mfspr r7, SPRN_PID0
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* search MSR[IS], SPID=PID0 */
+ mfspr r7,SPRN_MAS1
+ andis. r7,r7,MAS1_VALID@h
+ bne match_TLB
+ mfspr r7,SPRN_PID1
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* search MSR[IS], SPID=PID1 */
+ mfspr r7,SPRN_MAS1
+ andis. r7,r7,MAS1_VALID@h
+ bne match_TLB
+ mfspr r7, SPRN_PID2
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* Fall through, we had to match */
+match_TLB:
+ mfspr r7,SPRN_MAS0
+ rlwinm r3,r7,16,28,31 /* Extract MAS0(Entry) */
+
+ mfspr r7,SPRN_MAS1 /* Ensure IPROT is set */
+ oris r7,r7,MAS1_IPROT@h
+ mtspr SPRN_MAS1,r7
+ tlbwe
+
+/* 2. Invalidate all entries except the entry we're executing in */
+ mfspr r9,SPRN_TLB1CFG
+ andi. r9,r9,0xfff
+ li r6,0 /* Set Entry counter to 0 */
+1: lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r6,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ mfspr r7,SPRN_MAS1
+ rlwinm r7,r7,0,2,31 /* Clear MAS1 Valid and IPROT */
+ cmpw r3,r6
+ beq skpinv /* Don't update the current execution TLB */
+ mtspr SPRN_MAS1,r7
+ tlbwe
+ isync
+skpinv: addi r6,r6,1 /* Increment */
+ cmpw r6,r9 /* Are we done? */
+ bne 1b /* If not, repeat */
+
+ /* Invalidate TLB0 */
+ li r6,0x04
+ tlbivax 0,r6
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ /* Invalidate TLB1 */
+ li r6,0x0c
+ tlbivax 0,r6
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+/* 3. Setup a temp mapping and jump to it */
+ andi. r5, r3, 0x1 /* Pick a temp entry: (r3 & 1) + 1 is non-zero and != r3 */
+ addi r5, r5, 0x1
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+
+ /* Just modify the entry ID and EPN for the temp mapping */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ mtspr SPRN_MAS0,r7
+ xori r6,r4,1 /* Set up temp mapping in the other address space */
+ slwi r6,r6,12
+ oris r6,r6,(MAS1_VALID|MAS1_IPROT)@h
+ ori r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_4K))@l
+ mtspr SPRN_MAS1,r6
+ mfspr r6,SPRN_MAS2
+ li r7,0 /* temp EPN = 0 */
+ rlwimi r7,r6,0,20,31
+ mtspr SPRN_MAS2,r7
+ tlbwe
+
+ xori r6,r4,1
+ slwi r6,r6,5 /* setup new context with other address space */
+ bl 1f /* Find our address */
+1: mflr r9
+ rlwimi r7,r9,0,20,31
+ addi r7,r7,24
+ mtspr SRR0,r7
+ mtspr SRR1,r6
+ rfi
+
+/* 4. Clear out PIDs & Search info */
+ li r6,0
+ mtspr SPRN_PID0,r6
+ mtspr SPRN_PID1,r6
+ mtspr SPRN_PID2,r6
+ mtspr SPRN_MAS6,r6
+
+/* 5. Invalidate mapping we started in */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ li r6,0
+ mtspr SPRN_MAS1,r6
+ tlbwe
+ /* Invalidate TLB1 */
+ li r9,0x0c
+ tlbivax 0,r9
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+/* 6. Setup KERNELBASE mapping in TLB1[0] */
+ lis r6,0x1000 /* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
+ mtspr SPRN_MAS0,r6
+ lis r6,(MAS1_VALID|MAS1_IPROT)@h
+ ori r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_16M))@l
+ mtspr SPRN_MAS1,r6
+ li r7,0
+ lis r6,KERNELBASE@h
+ ori r6,r6,KERNELBASE@l
+ rlwimi r6,r7,0,20,31
+ mtspr SPRN_MAS2,r6
+ li r7,(MAS3_SX|MAS3_SW|MAS3_SR)
+ mtspr SPRN_MAS3,r7
+ tlbwe
+
+/* 7. Jump to KERNELBASE mapping */
+ li r7,0
+ bl 1f /* Find our address */
+1: mflr r9
+ rlwimi r6,r9,0,20,31
+ addi r6,r6,24
+ mtspr SRR0,r6
+ mtspr SRR1,r7
+ rfi /* start execution out of TLB1[0] entry */
+
+/* 8. Clear out the temp mapping */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ li r8,0 /* Clear MAS1 (valid bit) to invalidate the temp entry */
+ mtspr SPRN_MAS1,r8
+ tlbwe
+ /* Invalidate TLB1 */
+ li r9,0x0c
+ tlbivax 0,r9
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+ /* Establish the interrupt vector offsets */
+ SET_IVOR(0, CriticalInput);
+ SET_IVOR(1, MachineCheck);
+ SET_IVOR(2, DataStorage);
+ SET_IVOR(3, InstructionStorage);
+ SET_IVOR(4, ExternalInput);
+ SET_IVOR(5, Alignment);
+ SET_IVOR(6, Program);
+ SET_IVOR(7, FloatingPointUnavailable);
+ SET_IVOR(8, SystemCall);
+ SET_IVOR(9, AuxillaryProcessorUnavailable);
+ SET_IVOR(10, Decrementer);
+ SET_IVOR(11, FixedIntervalTimer);
+ SET_IVOR(12, WatchdogTimer);
+ SET_IVOR(13, DataTLBError);
+ SET_IVOR(14, InstructionTLBError);
+ SET_IVOR(15, Debug);
+ SET_IVOR(32, SPEUnavailable);
+ SET_IVOR(33, SPEFloatingPointData);
+ SET_IVOR(34, SPEFloatingPointRound);
+ SET_IVOR(35, PerformanceMonitor);
+
+ /* Establish the interrupt vector base */
+ lis r4,interrupt_base@h /* IVPR only uses the high 16-bits */
+ mtspr SPRN_IVPR,r4
+
+ /* Setup the defaults for TLB entries */
+ li r2,MAS4_TSIZED(BOOKE_PAGESZ_4K)
+ mtspr SPRN_MAS4, r2
+
+#if 0
+ /* Enable DOZE */
+ mfspr r2,SPRN_HID0
+ oris r2,r2,HID0_DOZE@h
+ mtspr SPRN_HID0, r2
+#endif
+
+ /*
+ * This is where the main kernel code starts.
+ */
+
+ /* ptr to current */
+ lis r2,init_task@h
+ ori r2,r2,init_task@l
+
+ /* ptr to current thread */
+ addi r4,r2,THREAD /* init task's THREAD */
+ mtspr SPRG3,r4
+
+ /* stack */
+ lis r1,init_thread_union@h
+ ori r1,r1,init_thread_union@l
+ li r0,0
+ stwu r0,THREAD_SIZE-STACK_FRAME_OVERHEAD(r1)
+
+ bl early_init
+
+ mfspr r3,SPRN_TLB1CFG
+ andi. r3,r3,0xfff
+ lis r4,num_tlbcam_entries@ha
+ stw r3,num_tlbcam_entries@l(r4)
+/*
+ * Decide what sort of machine this is and initialize the MMU.
+ */
+ mr r3,r31
+ mr r4,r30
+ mr r5,r29
+ mr r6,r28
+ mr r7,r27
+ bl machine_init
+ bl MMU_init
+
+ /* Setup PTE pointers for the Abatron bdiGDB */
+ lis r6, swapper_pg_dir@h
+ ori r6, r6, swapper_pg_dir@l
+ lis r5, abatron_pteptrs@h
+ ori r5, r5, abatron_pteptrs@l
+ lis r4, KERNELBASE@h
+ ori r4, r4, KERNELBASE@l
+ stw r5, 0(r4) /* Save abatron_pteptrs at a fixed location */
+ stw r6, 0(r5)
+
+ /* Let's move on */
+ lis r4,start_kernel@h
+ ori r4,r4,start_kernel@l
+ lis r3,MSR_KERNEL@h
+ ori r3,r3,MSR_KERNEL@l
+ mtspr SRR0,r4
+ mtspr SRR1,r3
+ rfi /* change context and jump to start_kernel */
+
+/*
+ * Interrupt vector entry code
+ *
+ * The Book E MMUs are always on so we don't need to handle
+ * interrupts in real mode as with previous PPC processors. In
+ * this case we handle interrupts in the kernel virtual address
+ * space.
+ *
+ * Interrupt vectors are dynamically placed relative to the
+ * interrupt prefix as determined by the address of interrupt_base.
+ * The interrupt vectors offsets are programmed using the labels
+ * for each interrupt vector entry.
+ *
+ * Interrupt vectors must be aligned on a 16 byte boundary.
+ * We align on a 32 byte cache line boundary for good measure.
+ */
+
+#define NORMAL_EXCEPTION_PROLOG \
+ mtspr SPRN_SPRG0,r10; /* save two registers to work with */\
+ mtspr SPRN_SPRG1,r11; \
+ mtspr SPRN_SPRG4W,r1; \
+ mfcr r10; /* save CR in r10 for now */\
+ mfspr r11,SPRN_SRR1; /* check whether user or kernel */\
+ andi. r11,r11,MSR_PR; \
+ beq 1f; \
+ mfspr r1,SPRG3; /* if from user, start at top of */\
+ lwz r1,THREAD_INFO-THREAD(r1); /* this thread's kernel stack */\
+ addi r1,r1,THREAD_SIZE; \
+1: subi r1,r1,INT_FRAME_SIZE; /* Allocate an exception frame */\
+ tophys(r11,r1); \
+ stw r10,_CCR(r11); /* save various registers */\
+ stw r12,GPR12(r11); \
+ stw r9,GPR9(r11); \
+ mfspr r10,SPRG0; \
+ stw r10,GPR10(r11); \
+ mfspr r12,SPRG1; \
+ stw r12,GPR11(r11); \
+ mflr r10; \
+ stw r10,_LINK(r11); \
+ mfspr r10,SPRG4R; \
+ mfspr r12,SRR0; \
+ stw r10,GPR1(r11); \
+ mfspr r9,SRR1; \
+ stw r10,0(r11); \
+ rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
+ stw r0,GPR0(r11); \
+ SAVE_4GPRS(3, r11); \
+ SAVE_2GPRS(7, r11)
+
+/*
+ * Exception prolog for critical exceptions. This is a little different
+ * from the normal exception prolog above since a critical exception
+ * can potentially occur at any point during normal exception processing.
+ * Thus we cannot use the same SPRG registers as the normal prolog above.
+ * Instead we use a couple of words of memory at low physical addresses.
+ * This is OK since we don't support SMP on these processors. For Book E
+ * processors, we also have a reserved register (SPRG2) that is only used
+ * in critical exceptions so we can free up a GPR to use as the base for
+ * indirect access to the critical exception save area. This is necessary
+ * since the MMU is always on and the save area is offset from KERNELBASE.
+ */
+#define CRITICAL_EXCEPTION_PROLOG \
+ mtspr SPRG2,r8; /* SPRG2 only used in criticals */ \
+ lis r8,crit_save@ha; \
+ stw r10,crit_r10@l(r8); \
+ stw r11,crit_r11@l(r8); \
+ mfspr r10,SPRG0; \
+ stw r10,crit_sprg0@l(r8); \
+ mfspr r10,SPRG1; \
+ stw r10,crit_sprg1@l(r8); \
+ mfspr r10,SPRG4R; \
+ stw r10,crit_sprg4@l(r8); \
+ mfspr r10,SPRG5R; \
+ stw r10,crit_sprg5@l(r8); \
+ mfspr r10,SPRG7R; \
+ stw r10,crit_sprg7@l(r8); \
+ mfspr r10,SPRN_PID; \
+ stw r10,crit_pid@l(r8); \
+ mfspr r10,SRR0; \
+ stw r10,crit_srr0@l(r8); \
+ mfspr r10,SRR1; \
+ stw r10,crit_srr1@l(r8); \
+ mfspr r8,SPRG2; /* SPRG2 only used in criticals */ \
+ mfcr r10; /* save CR in r10 for now */\
+ mfspr r11,SPRN_CSRR1; /* check whether user or kernel */\
+ andi. r11,r11,MSR_PR; \
+ lis r11,critical_stack_top@h; \
+ ori r11,r11,critical_stack_top@l; \
+ beq 1f; \
+ /* COMING FROM USER MODE */ \
+ mfspr r11,SPRG3; /* if from user, start at top of */\
+ lwz r11,THREAD_INFO-THREAD(r11); /* this thread's kernel stack */\
+ addi r11,r11,THREAD_SIZE; \
+1: subi r11,r11,INT_FRAME_SIZE; /* Allocate an exception frame */\
+ stw r10,_CCR(r11); /* save various registers */\
+ stw r12,GPR12(r11); \
+ stw r9,GPR9(r11); \
+ mflr r10; \
+ stw r10,_LINK(r11); \
+ mfspr r12,SPRN_DEAR; /* save DEAR and ESR in the frame */\
+ stw r12,_DEAR(r11); /* since they may have had stuff */\
+ mfspr r9,SPRN_ESR; /* in them at the point where the */\
+ stw r9,_ESR(r11); /* exception was taken */\
+ mfspr r12,CSRR0; \
+ stw r1,GPR1(r11); \
+ mfspr r9,CSRR1; \
+ stw r1,0(r11); \
+ tovirt(r1,r11); \
+ rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
+ stw r0,GPR0(r11); \
+ SAVE_4GPRS(3, r11); \
+ SAVE_2GPRS(7, r11)
+
+/*
+ * Exception prolog for machine check exceptions. This is similar to
+ * the critical exception prolog, except that machine check exceptions
+ * have their own save area. For Book E processors, we also have a
+ * reserved register (SPRG6) that is only used in machine check exceptions
+ * so we can free up a GPR to use as the base for indirect access to the
+ * machine check exception save area. This is necessary since the MMU
+ * is always on and the save area is offset from KERNELBASE.
+ */
+#define MCHECK_EXCEPTION_PROLOG \
+ mtspr SPRG6W,r8; /* SPRG6 used in machine checks */ \
+ lis r8,mcheck_save@ha; \
+ stw r10,mcheck_r10@l(r8); \
+ stw r11,mcheck_r11@l(r8); \
+ mfspr r10,SPRG0; \
+ stw r10,mcheck_sprg0@l(r8); \
+ mfspr r10,SPRG1; \
+ stw r10,mcheck_sprg1@l(r8); \
+ mfspr r10,SPRG4R; \
+ stw r10,mcheck_sprg4@l(r8); \
+ mfspr r10,SPRG5R; \
+ stw r10,mcheck_sprg5@l(r8); \
+ mfspr r10,SPRG7R; \
+ stw r10,mcheck_sprg7@l(r8); \
+ mfspr r10,SPRN_PID; \
+ stw r10,mcheck_pid@l(r8); \
+ mfspr r10,SRR0; \
+ stw r10,mcheck_srr0@l(r8); \
+ mfspr r10,SRR1; \
+ stw r10,mcheck_srr1@l(r8); \
+ mfspr r10,CSRR0; \
+ stw r10,mcheck_csrr0@l(r8); \
+ mfspr r10,CSRR1; \
+ stw r10,mcheck_csrr1@l(r8); \
+ mfspr r8,SPRG6R; /* SPRG6 used in machine checks */ \
+ mfcr r10; /* save CR in r10 for now */\
+ mfspr r11,SPRN_MCSRR1; /* check whether user or kernel */\
+ andi. r11,r11,MSR_PR; \
+ lis r11,mcheck_stack_top@h; \
+ ori r11,r11,mcheck_stack_top@l; \
+ beq 1f; \
+ /* COMING FROM USER MODE */ \
+ mfspr r11,SPRG3; /* if from user, start at top of */\
+ lwz r11,THREAD_INFO-THREAD(r11); /* this thread's kernel stack */\
+ addi r11,r11,THREAD_SIZE; \
+1: subi r11,r11,INT_FRAME_SIZE; /* Allocate an exception frame */\
+ stw r10,_CCR(r11); /* save various registers */\
+ stw r12,GPR12(r11); \
+ stw r9,GPR9(r11); \
+ mflr r10; \
+ stw r10,_LINK(r11); \
+ mfspr r12,SPRN_DEAR; /* save DEAR and ESR in the frame */\
+ stw r12,_DEAR(r11); /* since they may have had stuff */\
+ mfspr r9,SPRN_ESR; /* in them at the point where the */\
+ stw r9,_ESR(r11); /* exception was taken */\
+ mfspr r12,MCSRR0; \
+ stw r1,GPR1(r11); \
+ mfspr r9,MCSRR1; \
+ stw r1,0(r11); \
+ tovirt(r1,r11); \
+ rlwinm r9,r9,0,14,12; /* clear MSR_WE (necessary?) */\
+ stw r0,GPR0(r11); \
+ SAVE_4GPRS(3, r11); \
+ SAVE_2GPRS(7, r11)
+
+/*
+ * Exception vectors.
+ */
+#define START_EXCEPTION(label) \
+ .align 5; \
+label:
+
+#define FINISH_EXCEPTION(func) \
+ bl transfer_to_handler_full; \
+ .long func; \
+ .long ret_from_except_full
+
+#define EXCEPTION(n, label, hdlr, xfer) \
+ START_EXCEPTION(label); \
+ NORMAL_EXCEPTION_PROLOG; \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ xfer(n, hdlr)
+
+#define CRITICAL_EXCEPTION(n, label, hdlr) \
+ START_EXCEPTION(label); \
+ CRITICAL_EXCEPTION_PROLOG; \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
+ NOCOPY, transfer_to_handler_full, \
+ ret_from_except_full)
+
+#define MCHECK_EXCEPTION(n, label, hdlr) \
+ START_EXCEPTION(label); \
+ MCHECK_EXCEPTION_PROLOG; \
+ mfspr r5,SPRN_ESR; \
+ stw r5,_ESR(r11); \
+ addi r3,r1,STACK_FRAME_OVERHEAD; \
+ EXC_XFER_TEMPLATE(hdlr, n+2, (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
+ NOCOPY, mcheck_transfer_to_handler, \
+ ret_from_mcheck_exc)
+
+#define EXC_XFER_TEMPLATE(hdlr, trap, msr, copyee, tfer, ret) \
+ li r10,trap; \
+ stw r10,TRAP(r11); \
+ lis r10,msr@h; \
+ ori r10,r10,msr@l; \
+ copyee(r10, r9); \
+ bl tfer; \
+ .long hdlr; \
+ .long ret
+
+#define COPY_EE(d, s) rlwimi d,s,0,16,16
+#define NOCOPY(d, s)
+
+#define EXC_XFER_STD(n, hdlr) \
+ EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, NOCOPY, transfer_to_handler_full, \
+ ret_from_except_full)
+
+#define EXC_XFER_LITE(n, hdlr) \
+ EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, NOCOPY, transfer_to_handler, \
+ ret_from_except)
+
+#define EXC_XFER_EE(n, hdlr) \
+ EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, COPY_EE, transfer_to_handler_full, \
+ ret_from_except_full)
+
+#define EXC_XFER_EE_LITE(n, hdlr) \
+ EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, COPY_EE, transfer_to_handler, \
+ ret_from_except)
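+
+/*
+ * Note on the trap-number arithmetic above: the LITE variants pass n+1,
+ * setting bit 0 of the trap word to flag a frame with only a partial
+ * register save (entered via transfer_to_handler rather than
+ * transfer_to_handler_full), while the critical and machine check
+ * macros pass n+2 to mark a critical-class frame.
+ */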
+
+interrupt_base:
+ /* Critical Input Interrupt */
+ CRITICAL_EXCEPTION(0x0100, CriticalInput, UnknownException)
+
+ /* Machine Check Interrupt */
+ MCHECK_EXCEPTION(0x0200, MachineCheck, MachineCheckException)
+
+ /* Data Storage Interrupt */
+ START_EXCEPTION(DataStorage)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+
+ /*
+ * Check if it was a store fault, if not then bail
+ * because a user tried to access a kernel or
+ * read-protected page. Otherwise, get the
+ * offending address and handle it.
+ */
+ mfspr r10, SPRN_ESR
+ andis. r10, r10, ESR_ST@h
+ beq 2f
+
+ mfspr r10, SPRN_DEAR /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 0, r10, r11
+ bge 2f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+
+ /* Are _PAGE_USER & _PAGE_RW set & _PAGE_HWWRITE not? */
+ andi. r13, r11, _PAGE_RW|_PAGE_USER|_PAGE_HWWRITE
+ cmpwi 0, r13, _PAGE_RW|_PAGE_USER
+ bne 2f /* Bail if not */
+
+ /* Update 'changed'. */
+ ori r11, r11, _PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_HWWRITE
+ stw r11, 0(r12) /* Update Linux page table */
+
+ /* MAS2 is not updated since the entry already exists in the TLB;
+ this fault is taken only to detect a state transition
+ (eg: COW -> DIRTY)
+ */
+ lis r12, MAS3_RPN@h
+ ori r12, r12, _PAGE_HWEXEC | MAS3_RPN@l
+ and r11, r11, r12
+ rlwimi r11, r11, 31, 27, 27 /* SX <- _PAGE_HWEXEC */
+ ori r11, r11, (MAS3_UW|MAS3_SW|MAS3_UR|MAS3_SR)@l /* set static perms */
+
+ /* update search PID in MAS6, AS = 0 */
+ mfspr r12, SPRN_PID0
+ slwi r12, r12, 16
+ mtspr SPRN_MAS6, r12
+
+ /* find the TLB index that caused the fault. It has to be here. */
+ tlbsx 0, r10
+
+ mtspr SPRN_MAS3,r11
+ tlbwe
+
+ /* Done...restore registers and get out of here. */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ rfi /* Force context change */
+
+2:
+ /*
+ * The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b data_access
+
+ /* Instruction Storage Interrupt */
+ START_EXCEPTION(InstructionStorage)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r5,SPRN_ESR /* Grab the ESR and save it */
+ stw r5,_ESR(r11)
+ mr r4,r12 /* Pass SRR0 as arg2 */
+ li r5,0 /* Pass zero as arg3 */
+ EXC_XFER_EE_LITE(0x0400, handle_page_fault)
+
+ /* External Input Interrupt */
+ EXCEPTION(0x0500, ExternalInput, do_IRQ, EXC_XFER_LITE)
+
+ /* Alignment Interrupt */
+ START_EXCEPTION(Alignment)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r4,SPRN_DEAR /* Grab the DEAR and save it */
+ stw r4,_DEAR(r11)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE(0x0600, AlignmentException)
+
+ /* Program Interrupt */
+ START_EXCEPTION(Program)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r4,SPRN_ESR /* Grab the ESR and save it */
+ stw r4,_ESR(r11)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_STD(0x0700, ProgramCheckException)
+
+ /* Floating Point Unavailable Interrupt */
+ EXCEPTION(0x0800, FloatingPointUnavailable, UnknownException, EXC_XFER_EE)
+
+ /* System Call Interrupt */
+ START_EXCEPTION(SystemCall)
+ NORMAL_EXCEPTION_PROLOG
+ EXC_XFER_EE_LITE(0x0c00, DoSyscall)
+
+ /* Auxiliary Processor Unavailable Interrupt */
+ EXCEPTION(0x2900, AuxillaryProcessorUnavailable, UnknownException, EXC_XFER_EE)
+
+ /* Decrementer Interrupt */
+ START_EXCEPTION(Decrementer)
+ NORMAL_EXCEPTION_PROLOG
+ lis r0,TSR_DIS@h /* Setup the DEC interrupt mask */
+ mtspr SPRN_TSR,r0 /* Clear the DEC interrupt */
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_LITE(0x0900, timer_interrupt)
+
+ /* Fixed Interval Timer Interrupt */
+ /* TODO: Add FIT support */
+ EXCEPTION(0x3100, FixedIntervalTimer, UnknownException, EXC_XFER_EE)
+
+ /* Watchdog Timer Interrupt */
+ /* TODO: Add watchdog support */
+ CRITICAL_EXCEPTION(0x3200, WatchdogTimer, UnknownException)
+
+ /* Data TLB Error Interrupt */
+ START_EXCEPTION(DataTLBError)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+ mfspr r10, SPRN_DEAR /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 5, r10, r11
+ blt 5, 3f
+ lis r11, swapper_pg_dir@h
+ ori r11, r11, swapper_pg_dir@l
+
+ mfspr r12,SPRN_MAS1 /* Set TID to 0 */
+ li r13,MAS1_TID@l
+ andc r12,r12,r13
+ mtspr SPRN_MAS1,r12
+
+ b 4f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+ andi. r13, r11, _PAGE_PRESENT
+ beq 2f
+
+ ori r11, r11, _PAGE_ACCESSED
+ stw r11, 0(r12)
+
+ /* Jump to common tlb load */
+ b finish_tlb_load
+2:
+ /* The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b data_access
+
+ /* Instruction TLB Error Interrupt */
+ /*
+ * Nearly the same as above, except we get our
+ * information from different registers and bailout
+ * to a different point.
+ */
+ START_EXCEPTION(InstructionTLBError)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+ mfspr r10, SRR0 /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 5, r10, r11
+ blt 5, 3f
+ lis r11, swapper_pg_dir@h
+ ori r11, r11, swapper_pg_dir@l
+
+ mfspr r12,SPRN_MAS1 /* Set TID to 0 */
+ li r13,MAS1_TID@l
+ andc r12,r12,r13
+ mtspr SPRN_MAS1,r12
+
+ b 4f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+ andi. r13, r11, _PAGE_PRESENT
+ beq 2f
+
+ ori r11, r11, _PAGE_ACCESSED
+ stw r11, 0(r12)
+
+ /* Jump to common TLB load point */
+ b finish_tlb_load
+
+2:
+ /* The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b InstructionStorage
+
+#ifdef CONFIG_SPE
+ /* SPE Unavailable */
+ START_EXCEPTION(SPEUnavailable)
+ NORMAL_EXCEPTION_PROLOG
+ bne load_up_spe
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE_LITE(0x2010, KernelSPE)
+#else
+ EXCEPTION(0x2020, SPEUnavailable, UnknownException, EXC_XFER_EE)
+#endif /* CONFIG_SPE */
+
+ /* SPE Floating Point Data */
+#ifdef CONFIG_SPE
+ EXCEPTION(0x2030, SPEFloatingPointData, SPEFloatingPointException, EXC_XFER_EE);
+#else
+ EXCEPTION(0x2040, SPEFloatingPointData, UnknownException, EXC_XFER_EE)
+#endif /* CONFIG_SPE */
+
+ /* SPE Floating Point Round */
+ EXCEPTION(0x2050, SPEFloatingPointRound, UnknownException, EXC_XFER_EE)
+
+ /* Performance Monitor */
+ EXCEPTION(0x2060, PerformanceMonitor, UnknownException, EXC_XFER_EE)
+
+/* Check for a single step debug exception while in an exception
+ * handler before state has been saved. This is to catch the case
+ * where an instruction that we are trying to single step causes
+ * an exception (eg ITLB/DTLB miss) and thus the first instruction of
+ * the exception handler generates a single step debug exception.
+ *
+ * If we get a debug trap on the first instruction of an exception handler,
+ * we reset the MSR_DE in the _exception handler's_ MSR (the debug trap is
+ * a critical exception, so we are using SPRN_CSRR1 to manipulate the MSR).
+ * The exception handler was handling a non-critical interrupt, so it will
+ * save (and later restore) the MSR via SPRN_SRR1, which will still have
+ * the MSR_DE bit set.
+ */
+ /* Debug Interrupt */
+ START_EXCEPTION(Debug)
+ CRITICAL_EXCEPTION_PROLOG
+
+ /*
+ * If this is a single step or branch-taken exception in an
+ * exception entry sequence, it was probably meant to apply to
+ * the code where the exception occurred (since exception entry
+ * doesn't turn off DE automatically). We simulate the effect
+ * of turning off DE on entry to an exception handler by turning
+ * off DE in the CSRR1 value and clearing the debug status.
+ */
+ mfspr r10,SPRN_DBSR /* check single-step/branch taken */
+ andis. r10,r10,(DBSR_IC|DBSR_BT)@h
+ beq+ 1f
+ andi. r0,r9,MSR_PR /* check supervisor */
+ beq 2f /* branch if we need to fix it up... */
+
+ /* continue normal handling for a critical exception... */
+1: mfspr r4,SPRN_DBSR
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_TEMPLATE(DebugException, 0x2002, \
+ (MSR_KERNEL & ~(MSR_ME|MSR_DE|MSR_CE)), \
+ NOCOPY, crit_transfer_to_handler, ret_from_crit_exc)
+
+ /* here it looks like we got an inappropriate debug exception. */
+2: rlwinm r9,r9,0,~MSR_DE /* clear DE in the CSRR1 value */
+ mtspr SPRN_DBSR,r10 /* clear the IC/BT debug intr status */
+ /* restore state and get out */
+ lwz r10,_CCR(r11)
+ lwz r0,GPR0(r11)
+ lwz r1,GPR1(r11)
+ mtcrf 0x80,r10
+ mtspr CSRR0,r12
+ mtspr CSRR1,r9
+ lwz r9,GPR9(r11)
+
+ mtspr SPRG2,r8; /* SPRG2 only used in criticals */
+ lis r8,crit_save@ha;
+ lwz r10,crit_r10@l(r8)
+ lwz r11,crit_r11@l(r8)
+ mfspr r8,SPRG2
+
+ rfci
+ b .
+
+/*
+ * Local functions
+ */
+ /*
+ * Data TLB exceptions will bail out to this point
+ * if they can't resolve the lightweight TLB fault.
+ */
+data_access:
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
+ stw r5,_ESR(r11)
+ mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
+ andis. r10,r5,(ESR_ILK|ESR_DLK)@h
+ bne 1f
+ EXC_XFER_EE_LITE(0x0300, handle_page_fault)
+1:
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE_LITE(0x0300, CacheLockingException)
+
+/*
+ * Both the instruction and data TLB miss handlers branch to this
+ * point to load the TLB.
+ * r10 - EA of fault
+ * r11 - TLB (info from Linux PTE)
+ * r12, r13 - available to use
+ * CR5 - results of addr < TASK_SIZE
+ * MAS0, MAS1 - loaded with proper value when we get here
+ * MAS2, MAS3 - will need additional info from Linux PTE
+ * Upon exit, we reload everything and RFI.
+ */
+finish_tlb_load:
+ /*
+ * We set execute, because we don't have the granularity to
+ * properly set this at the page level (Linux problem).
+ * Many of these bits are software only.  Bits we don't set
+ * here are assumed to already hold the appropriate value.
+ */
+
+ mfspr r12, SPRN_MAS2
+ rlwimi r12, r11, 26, 27, 31 /* extract WIMGE from pte */
+ mtspr SPRN_MAS2, r12
+
+ bge 5, 1f
+
+ /* addr < TASK_SIZE: user page, set user permissions */
+ li r10, (MAS3_UX | MAS3_UW | MAS3_UR)
+ andi. r13, r11, (_PAGE_USER | _PAGE_HWWRITE | _PAGE_HWEXEC)
+ andi. r12, r11, _PAGE_USER /* Test for _PAGE_USER */
+ iseleq r12, 0, r10
+ and r10, r12, r13
+ srwi r12, r10, 1
+ or r12, r12, r10 /* Copy user perms into supervisor */
+ b 2f
+
+ /* addr >= TASK_SIZE: kernel page, set supervisor permissions */
+1: rlwinm r12, r11, 31, 29, 29 /* Extract _PAGE_HWWRITE into SW */
+ ori r12, r12, (MAS3_SX | MAS3_SR)
+
+2: rlwimi r11, r12, 0, 20, 31 /* Extract RPN from PTE and merge with perms */
+ mtspr SPRN_MAS3, r11
+ tlbwe
+
+ /* Done...restore registers and get out of here. */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ rfi /* Force context change */
+
+#ifdef CONFIG_SPE
+/* Note that the SPE support is closely modeled after the AltiVec
+ * support. Changes to one are likely to be applicable to the
+ * other! */
+load_up_spe:
+/*
+ * Disable SPE for the task which had SPE previously,
+ * and save its SPE registers in its thread_struct.
+ * Enables SPE for use in the kernel on return.
+ * On SMP we know the SPE units are free, since we give them up on
+ * every context switch. -- Kumar
+ */
+ mfmsr r5
+ oris r5,r5,MSR_SPE@h
+ mtmsr r5 /* enable use of SPE now */
+ isync
+/*
+ * For SMP, we don't do lazy SPE switching because it just gets too
+ * horrendously complex, especially when a task switches from one CPU
+ * to another. Instead we call giveup_spe in switch_to.
+ */
+#ifndef CONFIG_SMP
+ lis r3,last_task_used_spe@ha
+ lwz r4,last_task_used_spe@l(r3)
+ cmpi 0,r4,0
+ beq 1f
+ addi r4,r4,THREAD /* want THREAD of last_task_used_spe */
+ SAVE_32EVR(0,r10,r4)
+ evxor evr10, evr10, evr10 /* clear out evr10 */
+ evmwumiaa evr10, evr10, evr10 /* evr10 <- ACC = 0 * 0 + ACC */
+ li r5,THREAD_ACC
+ evstddx evr10, r4, r5 /* save off accumulator */
+ lwz r5,PT_REGS(r4)
+ lwz r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+ lis r10,MSR_SPE@h
+ andc r4,r4,r10 /* disable SPE for previous task */
+ stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+1:
+#endif /* CONFIG_SMP */
+ /* enable use of SPE after return */
+ oris r9,r9,MSR_SPE@h
+ mfspr r5,SPRG3 /* current task's THREAD (phys) */
+ li r4,1
+ li r10,THREAD_ACC
+ stw r4,THREAD_USED_SPE(r5)
+ evlddx evr4,r10,r5
+ evmra evr4,evr4
+ REST_32EVR(0,r10,r5)
+#ifndef CONFIG_SMP
+ subi r4,r5,THREAD
+ stw r4,last_task_used_spe@l(r3)
+#endif /* CONFIG_SMP */
+ /* restore registers and return */
+2: REST_4GPRS(3, r11)
+ lwz r10,_CCR(r11)
+ REST_GPR(1, r11)
+ mtcr r10
+ lwz r10,_LINK(r11)
+ mtlr r10
+ REST_GPR(10, r11)
+ mtspr SRR1,r9
+ mtspr SRR0,r12
+ REST_GPR(9, r11)
+ REST_GPR(12, r11)
+ lwz r11,GPR11(r11)
+ SYNC
+ rfi
+
+
+
+/*
+ * SPE unavailable trap from kernel - print a message, but let
+ * the task use SPE in the kernel until it returns to user mode.
+ */
+KernelSPE:
+ lwz r3,_MSR(r1)
+ oris r3,r3,MSR_SPE@h
+ stw r3,_MSR(r1) /* enable use of SPE after return */
+ lis r3,87f@h
+ ori r3,r3,87f@l
+ mr r4,r2 /* current */
+ lwz r5,_NIP(r1)
+ bl printk
+ b ret_from_except
+87: .string "SPE used in kernel (task=%p, pc=%x) \n"
+ .align 4,0
+
+#endif /* CONFIG_SPE */
+
+/*
+ * Global functions
+ */
+
+/*
+ * extern void loadcam_entry(unsigned int index)
+ *
+ * Load TLBCAM[index] entry in to the L2 CAM MMU
+ */
+_GLOBAL(loadcam_entry)
+ lis r4,TLBCAM@ha
+ addi r4,r4,TLBCAM@l
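+ /* each TLBCAM[] entry is five 32-bit words (MAS0-MAS3, MAS7) = 20 bytes */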
+ mulli r5,r3,20
+ add r3,r5,r4
+ lwz r4,0(r3)
+ mtspr SPRN_MAS0,r4
+ lwz r4,4(r3)
+ mtspr SPRN_MAS1,r4
+ lwz r4,8(r3)
+ mtspr SPRN_MAS2,r4
+ lwz r4,12(r3)
+ mtspr SPRN_MAS3,r4
+ tlbwe
+ isync
+ blr
+
+/*
+ * extern void giveup_altivec(struct task_struct *prev)
+ *
+ * The e500 core does not have an AltiVec unit.
+ */
+_GLOBAL(giveup_altivec)
+ blr
+
+#ifdef CONFIG_SPE
+/*
+ * extern void giveup_spe(struct task_struct *prev)
+ *
+ */
+_GLOBAL(giveup_spe)
+ mfmsr r5
+ oris r5,r5,MSR_SPE@h
+ SYNC
+ mtmsr r5 /* enable use of SPE now */
+ isync
+ cmpi 0,r3,0
+ beqlr- /* if no previous owner, done */
+ addi r3,r3,THREAD /* want THREAD of task */
+ lwz r5,PT_REGS(r3)
+ cmpi 0,r5,0
+ SAVE_32EVR(0, r4, r3)
+ evxor evr6, evr6, evr6 /* clear out evr6 */
+ evmwumiaa evr6, evr6, evr6 /* evr6 <- ACC = 0 * 0 + ACC */
+ li r4,THREAD_ACC
+ evstddx evr6, r4, r3 /* save off accumulator */
+ beq 1f
+ lwz r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+ lis r3,MSR_SPE@h
+ andc r4,r4,r3 /* disable SPE for previous task */
+ stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+1:
+#ifndef CONFIG_SMP
+ li r5,0
+ lis r4,last_task_used_spe@ha
+ stw r5,last_task_used_spe@l(r4)
+#endif /* CONFIG_SMP */
+ blr
+#endif /* CONFIG_SPE */
+
+/*
+ * extern void giveup_fpu(struct task_struct *prev)
+ *
+ * The e500 core does not have an FPU.
+ */
+_GLOBAL(giveup_fpu)
+ blr
+
+/*
+ * extern void abort(void)
+ *
+ * At present, this routine just applies a system reset.
+ */
+_GLOBAL(abort)
+ li r13,0
+ mtspr SPRN_DBCR0,r13 /* disable all debug events */
+ mfmsr r13
+ ori r13,r13,MSR_DE@l /* Enable Debug Events */
+ mtmsr r13
+ mfspr r13,SPRN_DBCR0
+ lis r13,(DBCR0_IDM|DBCR0_RST_CHIP)@h
+ mtspr SPRN_DBCR0,r13
+
+_GLOBAL(set_context)
+
+#ifdef CONFIG_BDI_SWITCH
+ /* Context switch the PTE pointer for the Abatron BDI2000.
+ * The PGDIR is the second parameter.
+ */
+ lis r5, abatron_pteptrs@h
+ ori r5, r5, abatron_pteptrs@l
+ stw r4, 0x4(r5)
+#endif
+ mtspr SPRN_PID,r3
+ isync /* Force context change */
+ blr
+
+/*
+ * We put a few things here that have to be page-aligned. This stuff
+ * goes at the beginning of the data segment, which is page-aligned.
+ */
+ .data
+_GLOBAL(sdata)
+_GLOBAL(empty_zero_page)
+ .space 4096
+_GLOBAL(swapper_pg_dir)
+ .space 4096
+
+ .section .bss
+/* Stack for handling critical exceptions from kernel mode */
+critical_stack_bottom:
+ .space 4096
+critical_stack_top:
+ .previous
+
+/* Stack for handling machine check exceptions from kernel mode */
+mcheck_stack_bottom:
+ .space 4096
+mcheck_stack_top:
+ .previous
+
+/*
+ * This area is used for temporarily saving registers during the
+ * critical and machine check exception prologs. It must always
+ * follow the page aligned allocations, so it starts on a page
+ * boundary, ensuring that all crit_save areas are in a single
+ * page.
+ */
+
+/* crit_save */
+_GLOBAL(crit_save)
+ .space 4
+_GLOBAL(crit_r10)
+ .space 4
+_GLOBAL(crit_r11)
+ .space 4
+_GLOBAL(crit_sprg0)
+ .space 4
+_GLOBAL(crit_sprg1)
+ .space 4
+_GLOBAL(crit_sprg4)
+ .space 4
+_GLOBAL(crit_sprg5)
+ .space 4
+_GLOBAL(crit_sprg7)
+ .space 4
+_GLOBAL(crit_pid)
+ .space 4
+_GLOBAL(crit_srr0)
+ .space 4
+_GLOBAL(crit_srr1)
+ .space 4
+
+/* mcheck_save */
+_GLOBAL(mcheck_save)
+ .space 4
+_GLOBAL(mcheck_r10)
+ .space 4
+_GLOBAL(mcheck_r11)
+ .space 4
+_GLOBAL(mcheck_sprg0)
+ .space 4
+_GLOBAL(mcheck_sprg1)
+ .space 4
+_GLOBAL(mcheck_sprg4)
+ .space 4
+_GLOBAL(mcheck_sprg5)
+ .space 4
+_GLOBAL(mcheck_sprg7)
+ .space 4
+_GLOBAL(mcheck_pid)
+ .space 4
+_GLOBAL(mcheck_srr0)
+ .space 4
+_GLOBAL(mcheck_srr1)
+ .space 4
+_GLOBAL(mcheck_csrr0)
+ .space 4
+_GLOBAL(mcheck_csrr1)
+ .space 4
+
+/*
+ * This space gets a copy of optional info passed to us by the bootstrap
+ * which is used to pass parameters into the kernel like root=/dev/sda1, etc.
+ */
+_GLOBAL(cmd_line)
+ .space 512
+
+/*
+ * Room for two PTE pointers, usually the kernel and current user pointers
+ * to their respective root page table.
+ */
+abatron_pteptrs:
+ .space 8
+
+
--- /dev/null
+/*
+ * Routines to emulate some Altivec/VMX instructions, specifically
+ * those that can trap when given denormalized operands in Java mode.
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <asm/ptrace.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+
+/* Functions in vector.S */
+extern void vaddfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vsubfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vmaddfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vnmsubfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vrefp(vector128 *dst, vector128 *src);
+extern void vrsqrtefp(vector128 *dst, vector128 *src);
+extern void vexptep(vector128 *dst, vector128 *src);
+
+static unsigned int exp2s[8] = {
+ 0x800000,
+ 0x8b95c2,
+ 0x9837f0,
+ 0xa5fed7,
+ 0xb504f3,
+ 0xc5672a,
+ 0xd744fd,
+ 0xeac0c7
+};
+
+/*
+ * Computes an estimate of 2^x. The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int eexp2(unsigned int s)
+{
+ int exp, pwr;
+ unsigned int mant, frac;
+
+ /* extract exponent field from input */
+ exp = ((s >> 23) & 0xff) - 127;
+ if (exp > 7) {
+ /* check for NaN input */
+ if (exp == 128 && (s & 0x7fffff) != 0)
+ return s | 0x400000; /* return QNaN */
+ /* 2^-big = 0, 2^+big = +Inf */
+ return (s & 0x80000000)? 0: 0x7f800000; /* 0 or +Inf */
+ }
+ if (exp < -23)
+ return 0x3f800000; /* 1.0 */
+
+ /* convert to fixed point integer in 9.23 representation */
+ pwr = (s & 0x7fffff) | 0x800000;
+ if (exp > 0)
+ pwr <<= exp;
+ else
+ pwr >>= -exp;
+ if (s & 0x80000000)
+ pwr = -pwr;
+
+ /* extract integer part, which becomes exponent part of result */
+ exp = (pwr >> 23) + 126;
+ if (exp >= 254)
+ return 0x7f800000;
+ if (exp < -23)
+ return 0;
+
+ /* table lookup on top 3 bits of fraction to get mantissa */
+ mant = exp2s[(pwr >> 20) & 7];
+
+ /* linear interpolation using remaining 20 bits of fraction */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" (pwr << 12), "r" (0x172b83ff));
+ asm("mulhwu %0,%1,%2" : "=r" (frac) : "r" (frac), "r" (mant));
+ mant += frac;
+
+ if (exp >= 0)
+ return mant + (exp << 23);
+
+ /* denormalized result */
+ exp = -exp;
+ mant += 1 << (exp - 1);
+ return mant >> exp;
+}
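+
+/*
+ * Example (illustrative only, not used by the emulation itself):
+ * feeding eexp2() the bit pattern of 2.0f should produce the bit
+ * pattern of 4.0f, since 2^2.0 = 4.0:
+ *
+ *	unsigned int r = eexp2(0x40000000);	// 2.0f in, r == 0x40800000 (4.0f out)
+ */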
+
+/*
+ * Computes an estimate of log_2(x). The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int elog2(unsigned int s)
+{
+ int exp, mant, lz, frac;
+
+ exp = s & 0x7f800000;
+ mant = s & 0x7fffff;
+ if (exp == 0x7f800000) { /* Inf or NaN */
+ if (mant != 0)
+ s |= 0x400000; /* turn NaN into QNaN */
+ return s;
+ }
+ if ((exp | mant) == 0) /* +0 or -0 */
+ return 0xff800000; /* return -Inf */
+
+ if (exp == 0) {
+ /* denormalized */
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (mant));
+ mant <<= lz - 8;
+ exp = (-118 - lz) << 23;
+ } else {
+ mant |= 0x800000;
+ exp -= 127 << 23;
+ }
+
+ if (mant >= 0xb504f3) { /* 2^0.5 * 2^23 */
+ exp |= 0x400000; /* 0.5 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xb504f334)); /* 2^-0.5 * 2^32 */
+ }
+ if (mant >= 0x9837f0) { /* 2^0.25 * 2^23 */
+ exp |= 0x200000; /* 0.25 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xd744fccb)); /* 2^-0.25 * 2^32 */
+ }
+ if (mant >= 0x8b95c2) { /* 2^0.125 * 2^23 */
+ exp |= 0x100000; /* 0.125 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xeac0c6e8)); /* 2^-0.125 * 2^32 */
+ }
+ if (mant > 0x800000) { /* 1.0 * 2^23 */
+ /* calculate (mant - 1) * 1.381097463 */
+ /* 1.381097463 == 0.125 / (2^0.125 - 1) */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" ((mant - 0x800000) << 1), "r" (0xb0c7cd3a));
+ exp += frac;
+ }
+ s = exp & 0x80000000;
+ if (exp != 0) {
+ if (s)
+ exp = -exp;
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (exp));
+ lz = 8 - lz;
+ if (lz > 0)
+ exp >>= lz;
+ else if (lz < 0)
+ exp <<= -lz;
+ s += ((lz + 126) << 23) + exp;
+ }
+ return s;
+}
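+
+/*
+ * Example (illustrative only): the bit pattern of 4.0f in gives the
+ * bit pattern of 2.0f back, since log_2(4.0) = 2.0:
+ *
+ *	unsigned int r = elog2(0x40800000);	// 4.0f in, r == 0x40000000 (2.0f out)
+ */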
+
+#define VSCR_SAT 1
+
+static int ctsxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp, mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (exp >= 31) {
+ /* saturate, unless the result would be -2^31 */
+ if (x + (scale << 23) != 0xcf000000)
+ *vscrp |= VSCR_SAT;
+ return (x & 0x80000000)? 0x80000000: 0x7fffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 7) >> (30 - exp);
+ return (x & 0x80000000)? -mant: mant;
+}
+
+static unsigned int ctuxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp;
+ unsigned int mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (x & 0x80000000) {
+ /* negative => saturate to 0 */
+ *vscrp |= VSCR_SAT;
+ return 0;
+ }
+ if (exp >= 32) {
+ /* saturate */
+ *vscrp |= VSCR_SAT;
+ return 0xffffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 8) >> (31 - exp);
+ return mant;
+}
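+
+/*
+ * Worked example for the conversions (illustrative only): converting
+ * 1.5f (0x3fc00000) with a scale of 1 computes 1.5 * 2^1 and truncates:
+ *
+ *	unsigned int vscr = 0;
+ *	int r = ctsxs(0x3fc00000, 1, &vscr);	// r == 3, vscr unchanged
+ *
+ * A negative input to ctuxs() saturates to 0 and sets VSCR_SAT instead.
+ */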
+
+/* Round to floating integer, towards 0 */
+static unsigned int rfiz(unsigned int x)
+{
+ int exp;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < 0)
+ return x & 0x80000000; /* |x| < 1.0 rounds to 0 */
+ return x & ~(0x7fffff >> exp);
+}
+
+/* Round to floating integer, towards +/- Inf */
+static unsigned int rfii(unsigned int x)
+{
+ int exp, mask;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if ((x & 0x7fffffff) == 0)
+ return x; /* +/-0 -> +/-0 */
+ if (exp < 0)
+ /* 0 < |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ mask = 0x7fffff >> exp;
+ /* mantissa overflows into exponent - that's OK,
+ it can't overflow into the sign bit */
+ return (x + mask) & ~mask;
+}
+
+/* Round to floating integer, to nearest */
+static unsigned int rfin(unsigned int x)
+{
+ int exp, half;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < -1)
+ return x & 0x80000000; /* |x| < 0.5 -> +/-0 */
+ if (exp == -1)
+ /* 0.5 <= |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ half = 0x400000 >> exp;
+ /* add 0.5 to the magnitude and chop off the fraction bits */
+ return (x + half) & ~(0x7fffff >> exp);
+}
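+
+/*
+ * The three rounding helpers differ only on non-integral values; for
+ * 1.5f (0x3fc00000), for instance (illustrative only):
+ *
+ *	rfiz(0x3fc00000);	// 0x3f800000 (1.0f, towards zero)
+ *	rfii(0x3fc00000);	// 0x40000000 (2.0f, away from zero)
+ *	rfin(0x3fc00000);	// 0x40000000 (2.0f, nearest, ties away)
+ */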
+
+int
+emulate_altivec(struct pt_regs *regs)
+{
+ unsigned int instr, i;
+ unsigned int va, vb, vc, vd;
+ vector128 *vrs;
+
+ if (get_user(instr, (unsigned int *) regs->nip))
+ return -EFAULT;
+ if ((instr >> 26) != 4)
+ return -EINVAL; /* not an altivec instruction */
+ vd = (instr >> 21) & 0x1f;
+ va = (instr >> 16) & 0x1f;
+ vb = (instr >> 11) & 0x1f;
+ vc = (instr >> 6) & 0x1f;
+
+ vrs = current->thread.vr;
+ switch (instr & 0x3f) {
+ case 10:
+ switch (vc) {
+ case 0: /* vaddfp */
+ vaddfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 1: /* vsubfp */
+ vsubfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 4: /* vrefp */
+ vrefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 5: /* vrsqrtefp */
+ vrsqrtefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 6: /* vexptefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = eexp2(vrs[vb].u[i]);
+ break;
+ case 7: /* vlogefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = elog2(vrs[vb].u[i]);
+ break;
+ case 8: /* vrfin */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfin(vrs[vb].u[i]);
+ break;
+ case 9: /* vrfiz */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfiz(vrs[vb].u[i]);
+ break;
+ case 10: /* vrfip */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfiz(x): rfii(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 11: /* vrfim */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfii(x): rfiz(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 14: /* vctuxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctuxs(vrs[vb].u[i], va,
+ &current->thread.vscr.u[3]);
+ break;
+ case 15: /* vctsxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctsxs(vrs[vb].u[i], va,
+ &current->thread.vscr.u[3]);
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case 46: /* vmaddfp */
+ vmaddfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ case 47: /* vnmsubfp */
+ vnmsubfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
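+
+/*
+ * Decoding example (illustrative only): the word 0x1042180a is
+ * "vaddfp v2,v2,v3" -- primary opcode 4, vd=2, va=2, vb=3, and the
+ * low 6 bits equal 10 with vc=0, so the switch above dispatches to
+ * vaddfp(&vrs[2], &vrs[2], &vrs[3]).
+ */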
--- /dev/null
+#include <asm/ppc_asm.h>
+#include <asm/processor.h>
+
+/*
+ * The routines below are in assembler so we can closely control the
+ * usage of floating-point registers. These routines must be called
+ * with preempt disabled.
+ */
+ .data
+fpzero:
+ .long 0
+fpone:
+ .long 0x3f800000 /* 1.0 in single-precision FP */
+fphalf:
+ .long 0x3f000000 /* 0.5 in single-precision FP */
+
+ .text
+/*
+ * Internal routine to enable floating point and set FPSCR to 0.
+ * Don't call it from C; it doesn't use the normal calling convention.
+ */
+fpenable:
+ mfmsr r10
+ ori r11,r10,MSR_FP
+ mtmsr r11
+ isync
+ stfd fr0,24(r1)
+ stfd fr1,16(r1)
+ stfd fr31,8(r1)
+ lis r11,fpzero@ha
+ mffs fr31
+ lfs fr1,fpzero@l(r11)
+ mtfsf 0xff,fr1
+ blr
+
+fpdisable:
+ mtfsf 0xff,fr31
+ lfd fr31,8(r1)
+ lfd fr1,16(r1)
+ lfd fr0,24(r1)
+ mtmsr r10
+ isync
+ blr
+
+/*
+ * Vector add, floating point.
+ */
+ .globl vaddfp
+vaddfp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fadds fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector subtract, floating point.
+ */
+ .globl vsubfp
+vsubfp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fsubs fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector multiply and add, floating point.
+ */
+ .globl vmaddfp
+vmaddfp:
+ stwu r1,-48(r1)
+ mflr r0
+ stw r0,52(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fmadds fr0,fr0,fr1,fr2
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,52(r1)
+ mtlr r0
+ addi r1,r1,48
+ blr
+
+/*
+ * Vector negative multiply and subtract, floating point.
+ */
+ .globl vnmsubfp
+vnmsubfp:
+ stwu r1,-48(r1)
+ mflr r0
+ stw r0,52(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fnmsubs fr0,fr0,fr1,fr2
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,52(r1)
+ mtlr r0
+ addi r1,r1,48
+ blr
+
+/*
+ * Vector reciprocal estimate. We just compute 1.0/x.
+ * r3 -> destination, r4 -> source.
+ */
+ .globl vrefp
+vrefp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ lis r9,fpone@ha
+ li r0,4
+ lfs fr1,fpone@l(r9)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ fdivs fr0,fr1,fr0
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector reciprocal square-root estimate, floating point.
+ * We use the frsqrte instruction for the initial estimate followed
+ * by 2 iterations of Newton-Raphson to get sufficient accuracy.
+ * r3 -> destination, r4 -> source.
+ */
+ .globl vrsqrtefp
+vrsqrtefp:
+ stwu r1,-64(r1) /* frame must hold fr2-fr5 saved at 32..56 */
+ mflr r0
+ stw r0,68(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ stfd fr3,40(r1)
+ stfd fr4,48(r1)
+ stfd fr5,56(r1)
+ lis r9,fpone@ha
+ lis r8,fphalf@ha
+ li r0,4
+ lfs fr4,fpone@l(r9)
+ lfs fr5,fphalf@l(r8)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ frsqrte fr1,fr0 /* r = frsqrte(s) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ stfsx fr1,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ lfd fr5,56(r1)
+ lfd fr4,48(r1)
+ lfd fr3,40(r1)
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,68(r1)
+ mtlr r0
+ addi r1,r1,64
+ blr
--- /dev/null
+/*
+ * arch/ppc/syslib/rheap.c
+ *
+ * A Remote Heap. Remote means that we don't touch the memory that the
+ * heap points to. Normal heap implementations use the memory they manage
+ * to place their list. We cannot do that because the memory we manage may
+ * have special properties, for example it is uncachable or of different
+ * endianess.
+ *
+ * Author: Pantelis Antoniou <panto@intracom.gr>
+ *
+ * 2004 (c) INTRACOM S.A. Greece. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+
+#include <asm/rheap.h>
+
+/*
+ * Fixup a list_head, needed when copying lists. If the pointers fall
+ * between s and e, apply the delta. This assumes that
+ * sizeof(struct list_head *) == sizeof(unsigned long).
+ */
+static inline void fixup(unsigned long s, unsigned long e, int d,
+ struct list_head *l)
+{
+ unsigned long *pp;
+
+ pp = (unsigned long *)&l->next;
+ if (*pp >= s && *pp < e)
+ *pp += d;
+
+ pp = (unsigned long *)&l->prev;
+ if (*pp >= s && *pp < e)
+ *pp += d;
+}
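+
+/*
+ * For instance (illustrative only), if the block array moves from
+ * 0x1000 to 0x3000 (d = 0x2000), a list pointer 0x1040 that pointed
+ * into the old array [s, e) is rewritten to 0x3040; pointers that
+ * target the list heads inside rh_info_t itself fall outside [s, e)
+ * and are left alone.
+ */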
+
+/* Grow the allocated blocks */
+static int grow(rh_info_t * info, int max_blocks)
+{
+ rh_block_t *block, *blk;
+ int i, new_blocks;
+ int delta;
+ unsigned long blks, blke;
+
+ if (max_blocks <= info->max_blocks)
+ return -EINVAL;
+
+ new_blocks = max_blocks - info->max_blocks;
+
+ block = kmalloc(sizeof(rh_block_t) * max_blocks, GFP_KERNEL);
+ if (block == NULL)
+ return -ENOMEM;
+
+ if (info->max_blocks > 0) {
+
+ /* copy old block area */
+ memcpy(block, info->block,
+ sizeof(rh_block_t) * info->max_blocks);
+
+ delta = (char *)block - (char *)info->block;
+
+ /* and fixup list pointers */
+ blks = (unsigned long)info->block;
+ blke = (unsigned long)(info->block + info->max_blocks);
+
+ for (i = 0, blk = block; i < info->max_blocks; i++, blk++)
+ fixup(blks, blke, delta, &blk->list);
+
+ fixup(blks, blke, delta, &info->empty_list);
+ fixup(blks, blke, delta, &info->free_list);
+ fixup(blks, blke, delta, &info->taken_list);
+
+ /* free the old allocated memory */
+ if ((info->flags & RHIF_STATIC_BLOCK) == 0)
+ kfree(info->block);
+ }
+
+ info->block = block;
+ info->empty_slots += new_blocks;
+ info->max_blocks = max_blocks;
+ info->flags &= ~RHIF_STATIC_BLOCK;
+
+ /* add all new blocks to the free list */
+ /* info->max_blocks was already updated above, so step back over the new slots */
+ for (i = 0, blk = block + info->max_blocks - new_blocks; i < new_blocks; i++, blk++)
+ list_add(&blk->list, &info->empty_list);
+
+ return 0;
+}
+
+/*
+ * Assure at least the required amount of empty slots. If this function
+ * causes a grow in the block area then all pointers kept to the block
+ * area are invalid!
+ */
+static int assure_empty(rh_info_t * info, int slots)
+{
+ int max_blocks;
+
+ /* This function is not meant to be used to grow uncontrollably */
+ if (slots >= 4)
+ return -EINVAL;
+
+ /* Enough space */
+ if (info->empty_slots >= slots)
+ return 0;
+
+ /* Round up to the next multiple of 16 blocks */
+ max_blocks = ((info->max_blocks + slots) + 15) & ~15;
+
+ return grow(info, max_blocks);
+}
+
+static rh_block_t *get_slot(rh_info_t * info)
+{
+ rh_block_t *blk;
+
+ /* If no more free slots, and failure to extend. */
+ /* XXX: You should have called assure_empty before */
+ if (info->empty_slots == 0) {
+ printk(KERN_ERR "rh: out of slots; crash is imminent.\n");
+ return NULL;
+ }
+
+ /* Get empty slot to use */
+ blk = list_entry(info->empty_list.next, rh_block_t, list);
+ list_del_init(&blk->list);
+ info->empty_slots--;
+
+ /* Initialize */
+ blk->start = NULL;
+ blk->size = 0;
+ blk->owner = NULL;
+
+ return blk;
+}
+
+static inline void release_slot(rh_info_t * info, rh_block_t * blk)
+{
+ list_add(&blk->list, &info->empty_list);
+ info->empty_slots++;
+}
+
+static void attach_free_block(rh_info_t * info, rh_block_t * blkn)
+{
+ rh_block_t *blk;
+ rh_block_t *before;
+ rh_block_t *after;
+ rh_block_t *next;
+ int size;
+ unsigned long s, e, bs, be;
+ struct list_head *l;
+
+ /* We assume that they are aligned properly */
+ size = blkn->size;
+ s = (unsigned long)blkn->start;
+ e = s + size;
+
+ /* Find the blocks immediately before and after the given one
+ * (if any) */
+ before = NULL;
+ after = NULL;
+ next = NULL;
+
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+
+ bs = (unsigned long)blk->start;
+ be = bs + blk->size;
+
+ if (next == NULL && s >= bs)
+ next = blk;
+
+ if (be == s)
+ before = blk;
+
+ if (e == bs)
+ after = blk;
+
+ /* If both are not null, break now */
+ if (before != NULL && after != NULL)
+ break;
+ }
+
+ /* Now check if they are really adjacent */
+ if (before != NULL && s != (unsigned long)before->start + before->size)
+ before = NULL;
+
+ if (after != NULL && e != (unsigned long)after->start)
+ after = NULL;
+
+ /* No coalescing; list insert and return */
+ if (before == NULL && after == NULL) {
+
+ if (next != NULL)
+ list_add(&blkn->list, &next->list);
+ else
+ list_add(&blkn->list, &info->free_list);
+
+ return;
+ }
+
+ /* We don't need it anymore */
+ release_slot(info, blkn);
+
+ /* Grow the before block */
+ if (before != NULL && after == NULL) {
+ before->size += size;
+ return;
+ }
+
+ /* Grow the after block backwards */
+ if (before == NULL && after != NULL) {
+ after->start = (int8_t *) after->start - size;
+ after->size += size;
+ return;
+ }
+
+ /* Grow the before block, and release the after block */
+ before->size += size + after->size;
+ list_del(&after->list);
+ release_slot(info, after);
+}
+
+static void attach_taken_block(rh_info_t * info, rh_block_t * blkn)
+{
+ rh_block_t *blk;
+ struct list_head *l;
+
+ /* Find the block immediately before the given one (if any) */
+ list_for_each(l, &info->taken_list) {
+ blk = list_entry(l, rh_block_t, list);
+ if (blk->start > blkn->start) {
+ list_add_tail(&blkn->list, &blk->list);
+ return;
+ }
+ }
+
+ list_add_tail(&blkn->list, &info->taken_list);
+}
+
+/*
+ * Create a remote heap dynamically.  Note that no memory for the blocks
+ * is allocated here; it is allocated upon the first allocation request.
+ */
+rh_info_t *rh_create(unsigned int alignment)
+{
+ rh_info_t *info;
+
+ /* Alignment must be a power of two */
+ if ((alignment & (alignment - 1)) != 0)
+ return NULL;
+
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (info == NULL)
+ return NULL;
+
+ info->alignment = alignment;
+
+ /* Initially everything as empty */
+ info->block = NULL;
+ info->max_blocks = 0;
+ info->empty_slots = 0;
+ info->flags = 0;
+
+ INIT_LIST_HEAD(&info->empty_list);
+ INIT_LIST_HEAD(&info->free_list);
+ INIT_LIST_HEAD(&info->taken_list);
+
+ return info;
+}
+
+/*
+ * Destroy a dynamically created remote heap. Deallocate only if the areas
+ * are not static
+ */
+void rh_destroy(rh_info_t * info)
+{
+ if ((info->flags & RHIF_STATIC_BLOCK) == 0 && info->block != NULL)
+ kfree(info->block);
+
+ if ((info->flags & RHIF_STATIC_INFO) == 0)
+ kfree(info);
+}
+
+/*
+ * Initialize in place a remote heap info block. This is needed to support
+ * operation very early in the startup of the kernel, when it is not yet safe
+ * to call kmalloc.
+ */
+void rh_init(rh_info_t * info, unsigned int alignment, int max_blocks,
+ rh_block_t * block)
+{
+ int i;
+ rh_block_t *blk;
+
+ /* Alignment must be a power of two */
+ if ((alignment & (alignment - 1)) != 0)
+ return;
+
+ info->alignment = alignment;
+
+ /* Initially everything as empty */
+ info->block = block;
+ info->max_blocks = max_blocks;
+ info->empty_slots = max_blocks;
+ info->flags = RHIF_STATIC_INFO | RHIF_STATIC_BLOCK;
+
+ INIT_LIST_HEAD(&info->empty_list);
+ INIT_LIST_HEAD(&info->free_list);
+ INIT_LIST_HEAD(&info->taken_list);
+
+ /* Add all new blocks to the free list */
+ for (i = 0, blk = block; i < max_blocks; i++, blk++)
+ list_add(&blk->list, &info->empty_list);
+}
+
+/* Attach a free memory region, coalescing adjacent regions */
+int rh_attach_region(rh_info_t * info, void *start, int size)
+{
+ rh_block_t *blk;
+ unsigned long s, e, m;
+ int r;
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ /* Take final values */
+ start = (void *)s;
+ size = (int)(e - s);
+
+ /* Grow the blocks, if needed */
+ r = assure_empty(info, 1);
+ if (r < 0)
+ return r;
+
+ blk = get_slot(info);
+ blk->start = start;
+ blk->size = size;
+ blk->owner = NULL;
+
+ attach_free_block(info, blk);
+
+ return 0;
+}
+
+/* Detach the given address range, splitting a free block if needed. */
+void *rh_detach_region(rh_info_t * info, void *start, int size)
+{
+ struct list_head *l;
+ rh_block_t *blk, *newblk;
+ unsigned long s, e, m, bs, be;
+
+ /* Validate size */
+ if (size <= 0)
+ return NULL;
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ if (assure_empty(info, 1) < 0)
+ return NULL;
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ /* The range must lie entirely inside one free block */
+ bs = (unsigned long)blk->start;
+ be = (unsigned long)blk->start + blk->size;
+ if (s >= bs && e <= be)
+ break;
+ blk = NULL;
+ }
+
+ if (blk == NULL)
+ return NULL;
+
+ /* Perfect fit */
+ if (bs == s && be == e) {
+ /* Delete from free list, release slot */
+ list_del(&blk->list);
+ release_slot(info, blk);
+ return (void *)s;
+ }
+
+ /* blk still in free list, with updated start and/or size */
+ if (bs == s || be == e) {
+ if (bs == s)
+ blk->start = (int8_t *) blk->start + size;
+ blk->size -= size;
+
+ } else {
+ /* The front free fragment */
+ blk->size = s - bs;
+
+ /* the back free fragment */
+ newblk = get_slot(info);
+ newblk->start = (void *)e;
+ newblk->size = be - e;
+
+ list_add(&newblk->list, &blk->list);
+ }
+
+ return (void *)s;
+}
+
+void *rh_alloc(rh_info_t * info, int size, const char *owner)
+{
+ struct list_head *l;
+ rh_block_t *blk;
+ rh_block_t *newblk;
+ void *start;
+
+ /* Validate size */
+ if (size <= 0)
+ return NULL;
+
+ /* Align to configured alignment */
+ size = (size + (info->alignment - 1)) & ~(info->alignment - 1);
+
+ if (assure_empty(info, 1) < 0)
+ return NULL;
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ if (size <= blk->size)
+ break;
+ blk = NULL;
+ }
+
+ if (blk == NULL)
+ return NULL;
+
+ /* Just fits */
+ if (blk->size == size) {
+ /* Move from free list to taken list */
+ list_del(&blk->list);
+ blk->owner = owner;
+ start = blk->start;
+
+ attach_taken_block(info, blk);
+
+ return start;
+ }
+
+ newblk = get_slot(info);
+ newblk->start = blk->start;
+ newblk->size = size;
+ newblk->owner = owner;
+
+ /* blk still in free list, with updated start, size */
+ blk->start = (int8_t *) blk->start + size;
+ blk->size -= size;
+
+ start = newblk->start;
+
+ attach_taken_block(info, newblk);
+
+ return start;
+}
+
+/* allocate at precisely the given address */
+void *rh_alloc_fixed(rh_info_t * info, void *start, int size, const char *owner)
+{
+ struct list_head *l;
+ rh_block_t *blk, *newblk1, *newblk2;
+ unsigned long s, e, m, bs, be;
+
+ /* Validate size */
+ if (size <= 0)
+ return NULL;
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ if (assure_empty(info, 2) < 0)
+ return NULL;
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ /* The range must lie entirely inside one free block */
+ bs = (unsigned long)blk->start;
+ be = (unsigned long)blk->start + blk->size;
+ if (s >= bs && e <= be)
+ break;
+ blk = NULL;
+ }
+
+ if (blk == NULL)
+ return NULL;
+
+ /* Perfect fit */
+ if (bs == s && be == e) {
+ /* Move from free list to taken list */
+ list_del(&blk->list);
+ blk->owner = owner;
+
+ start = blk->start;
+ attach_taken_block(info, blk);
+
+ return start;
+
+ }
+
+ /* blk still in free list, with updated start and/or size */
+ if (bs == s || be == e) {
+ if (bs == s)
+ blk->start = (int8_t *) blk->start + size;
+ blk->size -= size;
+
+ } else {
+ /* The front free fragment */
+ blk->size = s - bs;
+
+ /* The back free fragment */
+ newblk2 = get_slot(info);
+ newblk2->start = (void *)e;
+ newblk2->size = be - e;
+
+ list_add(&newblk2->list, &blk->list);
+ }
+
+ newblk1 = get_slot(info);
+ newblk1->start = (void *)s;
+ newblk1->size = e - s;
+ newblk1->owner = owner;
+
+ start = newblk1->start;
+ attach_taken_block(info, newblk1);
+
+ return start;
+}
+
+int rh_free(rh_info_t * info, void *start)
+{
+ rh_block_t *blk, *blk2;
+ struct list_head *l;
+ int size;
+
+ /* Linear search for block */
+ blk = NULL;
+ list_for_each(l, &info->taken_list) {
+ blk2 = list_entry(l, rh_block_t, list);
+ if (start < blk2->start)
+ break;
+ blk = blk2;
+ }
+
+ if (blk == NULL || start > (blk->start + blk->size))
+ return -EINVAL;
+
+ /* Remove from taken list */
+ list_del(&blk->list);
+
+ /* Get size of freed block */
+ size = blk->size;
+ attach_free_block(info, blk);
+
+ return size;
+}
+
+int rh_get_stats(rh_info_t * info, int what, int max_stats, rh_stats_t * stats)
+{
+ rh_block_t *blk;
+ struct list_head *l;
+ struct list_head *h;
+ int nr;
+
+ switch (what) {
+
+ case RHGS_FREE:
+ h = &info->free_list;
+ break;
+
+ case RHGS_TAKEN:
+ h = &info->taken_list;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ /* Linear search for block */
+ nr = 0;
+ list_for_each(l, h) {
+ blk = list_entry(l, rh_block_t, list);
+ if (stats != NULL && nr < max_stats) {
+ stats->start = blk->start;
+ stats->size = blk->size;
+ stats->owner = blk->owner;
+ stats++;
+ }
+ nr++;
+ }
+
+ return nr;
+}
+
+int rh_set_owner(rh_info_t * info, void *start, const char *owner)
+{
+ rh_block_t *blk, *blk2;
+ struct list_head *l;
+
+ /* Linear search for block */
+ blk = NULL;
+ list_for_each(l, &info->taken_list) {
+ blk2 = list_entry(l, rh_block_t, list);
+ if (start < blk2->start)
+ break;
+ blk = blk2;
+ }
+
+ if (blk == NULL || start > (blk->start + blk->size))
+ return -EINVAL;
+
+ blk->owner = owner;
+
+ return blk->size;
+}
+
+void rh_dump(rh_info_t * info)
+{
+ static rh_stats_t st[32]; /* XXX maximum 32 blocks */
+ int maxnr;
+ int i, nr;
+
+ maxnr = sizeof(st) / sizeof(st[0]);
+
+ printk(KERN_INFO
+ "info @0x%p (%d slots empty / %d max)\n",
+ info, info->empty_slots, info->max_blocks);
+
+ printk(KERN_INFO " Free:\n");
+ nr = rh_get_stats(info, RHGS_FREE, maxnr, st);
+ if (nr > maxnr)
+ nr = maxnr;
+ for (i = 0; i < nr; i++)
+ printk(KERN_INFO
+ " 0x%p-0x%p (%u)\n",
+ st[i].start, (int8_t *) st[i].start + st[i].size,
+ st[i].size);
+ printk(KERN_INFO "\n");
+
+ printk(KERN_INFO " Taken:\n");
+ nr = rh_get_stats(info, RHGS_TAKEN, maxnr, st);
+ if (nr > maxnr)
+ nr = maxnr;
+ for (i = 0; i < nr; i++)
+ printk(KERN_INFO
+ " 0x%p-0x%p (%u) %s\n",
+ st[i].start, (int8_t *) st[i].start + st[i].size,
+ st[i].size, st[i].owner != NULL ? st[i].owner : "");
+ printk(KERN_INFO "\n");
+}
+
+void rh_dump_blk(rh_info_t * info, rh_block_t * blk)
+{
+ printk(KERN_INFO
+ "blk @0x%p: 0x%p-0x%p (%u)\n",
+ blk, blk->start, (int8_t *) blk->start + blk->size, blk->size);
+}
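+
+/*
+ * Typical use (sketch only; the region pointer and sizes are invented
+ * for illustration):
+ *
+ *	rh_info_t *rh = rh_create(4);			// 4-byte alignment
+ *	rh_attach_region(rh, muram_base, 8192);		// donate the managed region
+ *	void *p = rh_alloc(rh, 256, "my_driver");	// carve out 256 bytes
+ *	...
+ *	rh_free(rh, p);
+ *	rh_destroy(rh);
+ */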
--- /dev/null
+/*
+ * Modifications by Kumar Gala (kumar.gala@freescale.com) to support
+ * E500 Book E processors.
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This file contains the routines for initializing the MMU
+ * on the 4xx series of chips.
+ * -- paulus
+ *
+ * Derived from arch/ppc/mm/init.c:
+ * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
+ *
+ * Modifications by Paul Mackerras (PowerMac) (paulus@cs.anu.edu.au)
+ * and Cort Dougan (PReP) (cort@cs.nmt.edu)
+ * Copyright (C) 1996 Paul Mackerras
+ * Amiga/APUS changes by Jesper Skov (jskov@cygnus.co.uk).
+ *
+ * Derived from "arch/i386/mm/init.c"
+ * Copyright (C) 1991, 1992, 1993, 1994 Linus Torvalds
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/bootmem.h>
+#include <linux/highmem.h>
+
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <asm/io.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/uaccess.h>
+#include <asm/smp.h>
+#include <asm/bootx.h>
+#include <asm/machdep.h>
+#include <asm/setup.h>
+
+extern void loadcam_entry(unsigned int index);
+unsigned int tlbcam_index;
+unsigned int num_tlbcam_entries;
+static unsigned long __cam0, __cam1, __cam2;
+extern unsigned long total_lowmem;
+extern unsigned long __max_low_memory;
+#define MAX_LOW_MEM CONFIG_LOWMEM_SIZE
+
+struct tlbcam {
+ u32 MAS0;
+ u32 MAS1;
+ u32 MAS2;
+ u32 MAS3;
+ u32 MAS7;
+} TLBCAM[NUM_TLBCAMS];
+
+struct tlbcamrange {
+ unsigned long start;
+ unsigned long limit;
+ phys_addr_t phys;
+} tlbcam_addrs[NUM_TLBCAMS];
+
+extern unsigned int tlbcam_index;
+
+/*
+ * Return PA for this VA if it is mapped by a CAM, or 0
+ */
+unsigned long v_mapped_by_tlbcam(unsigned long va)
+{
+ int b;
+ for (b = 0; b < tlbcam_index; ++b)
+ if (va >= tlbcam_addrs[b].start && va < tlbcam_addrs[b].limit)
+ return tlbcam_addrs[b].phys + (va - tlbcam_addrs[b].start);
+ return 0;
+}
+
+/*
+ * Return VA for a given PA or 0 if not mapped
+ */
+unsigned long p_mapped_by_tlbcam(unsigned long pa)
+{
+ int b;
+ for (b = 0; b < tlbcam_index; ++b)
+ if (pa >= tlbcam_addrs[b].phys
+ && pa < (tlbcam_addrs[b].limit-tlbcam_addrs[b].start)
+ +tlbcam_addrs[b].phys)
+ return tlbcam_addrs[b].start+(pa-tlbcam_addrs[b].phys);
+ return 0;
+}
+
+/*
+ * Set up one of the I/D BAT (block address translation) register pairs.
+ * The parameters are not checked; in particular size must be a power
+ * of 4 between 4k and 256M.
+ */
+void settlbcam(int index, unsigned long virt, phys_addr_t phys,
+ unsigned int size, int flags, unsigned int pid)
+{
+ unsigned int tsize, lz;
+
+ asm ("cntlzw %0,%1" : "=r" (lz) : "r" (size));
+ tsize = (21 - lz) / 2;
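+
+ /*
+ * MAS1 TSIZE encodes a page of 2^(2*TSIZE) KB, i.e. 2^(2*TSIZE+10)
+ * bytes.  cntlzw yields 31 - log2(size), so (21 - lz) / 2 recovers
+ * TSIZE; e.g. a 16MB (2^24 byte) CAM has lz = 7 and tsize = 7.
+ */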
+
+#ifdef CONFIG_SMP
+ if ((flags & _PAGE_NO_CACHE) == 0)
+ flags |= _PAGE_COHERENT;
+#endif
+
+ TLBCAM[index].MAS0 = MAS0_TLBSEL | (index << 16);
+ TLBCAM[index].MAS1 = MAS1_VALID | MAS1_IPROT | MAS1_TSIZE(tsize) | ((pid << 16) & MAS1_TID);
+ TLBCAM[index].MAS2 = virt & PAGE_MASK;
+
+ TLBCAM[index].MAS2 |= (flags & _PAGE_WRITETHRU) ? MAS2_W : 0;
+ TLBCAM[index].MAS2 |= (flags & _PAGE_NO_CACHE) ? MAS2_I : 0;
+ TLBCAM[index].MAS2 |= (flags & _PAGE_COHERENT) ? MAS2_M : 0;
+ TLBCAM[index].MAS2 |= (flags & _PAGE_GUARDED) ? MAS2_G : 0;
+ TLBCAM[index].MAS2 |= (flags & _PAGE_ENDIAN) ? MAS2_E : 0;
+
+ TLBCAM[index].MAS3 = (phys & PAGE_MASK) | MAS3_SX | MAS3_SR;
+ TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_SW : 0);
+
+#ifndef CONFIG_KGDB /* want user access for breakpoints */
+ if (flags & _PAGE_USER) {
+ TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
+ TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
+ }
+#else
+ TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
+ TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
+#endif
+
+ tlbcam_addrs[index].start = virt;
+ tlbcam_addrs[index].limit = virt + size - 1;
+ tlbcam_addrs[index].phys = phys;
+
+ loadcam_entry(index);
+}
+
+void invalidate_tlbcam_entry(int index)
+{
+ TLBCAM[index].MAS0 = MAS0_TLBSEL | (index << 16);
+ TLBCAM[index].MAS1 = ~MAS1_VALID;
+
+ loadcam_entry(index);
+}
+
+void __init cam_mapin_ram(unsigned long cam0, unsigned long cam1,
+ unsigned long cam2)
+{
+ settlbcam(0, KERNELBASE, PPC_MEMSTART, cam0, _PAGE_KERNEL, 0);
+ tlbcam_index++;
+ if (cam1) {
+ tlbcam_index++;
+ settlbcam(1, KERNELBASE+cam0, PPC_MEMSTART+cam0, cam1, _PAGE_KERNEL, 0);
+ }
+ if (cam2) {
+ tlbcam_index++;
+ settlbcam(2, KERNELBASE+cam0+cam1, PPC_MEMSTART+cam0+cam1, cam2, _PAGE_KERNEL, 0);
+ }
+}
+
+/*
+ * MMU_init_hw does the chip-specific initialization of the MMU hardware.
+ */
+void __init MMU_init_hw(void)
+{
+ flush_instruction_cache();
+}
+
+unsigned long __init mmu_mapin_ram(void)
+{
+ cam_mapin_ram(__cam0, __cam1, __cam2);
+
+ return __cam0 + __cam1 + __cam2;
+}
+
+
+void __init
+adjust_total_lowmem(void)
+{
+ unsigned long max_low_mem = MAX_LOW_MEM;
+ unsigned long cam_max = 0x10000000;
+ unsigned long ram;
+
+ /* adjust CAM size to max_low_mem */
+ if (max_low_mem < cam_max)
+ cam_max = max_low_mem;
+
+ /* adjust lowmem size to max_low_mem */
+ if (max_low_mem < total_lowmem)
+ ram = max_low_mem;
+ else
+ ram = total_lowmem;
+
+ /* Calculate CAM values */
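+ /*
+ * Each CAM covers the largest power-of-4 chunk that still fits
+ * (capped at cam_max).  For example, with 192MB of lowmem this
+ * yields CAM0 = 64MB, CAM1 = 64MB and CAM2 = 64MB, no residual.
+ */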
+ __cam0 = 1UL << 2 * (__ilog2(ram) / 2);
+ if (__cam0 > cam_max)
+ __cam0 = cam_max;
+ ram -= __cam0;
+ if (ram) {
+ __cam1 = 1UL << 2 * (__ilog2(ram) / 2);
+ if (__cam1 > cam_max)
+ __cam1 = cam_max;
+ ram -= __cam1;
+ }
+ if (ram) {
+ __cam2 = 1UL << 2 * (__ilog2(ram) / 2);
+ if (__cam2 > cam_max)
+ __cam2 = cam_max;
+ ram -= __cam2;
+ }
+
+ printk(KERN_INFO "Memory CAM mapping: CAM0=%ldMb, CAM1=%ldMb,"
+ " CAM2=%ldMb residual: %ldMb\n",
+ __cam0 >> 20, __cam1 >> 20, __cam2 >> 20,
+ (total_lowmem - __cam0 - __cam1 - __cam2) >> 20);
+ __max_low_memory = max_low_mem = __cam0 + __cam1 + __cam2;
+}
--- /dev/null
+
+menu "Profiling support"
+ depends on EXPERIMENTAL
+
+config PROFILING
+ bool "Profiling support (EXPERIMENTAL)"
+ help
+ Say Y here to enable the extended profiling support mechanisms used
+ by profilers such as OProfile.
+
+
+config OPROFILE
+ tristate "OProfile system profiling (EXPERIMENTAL)"
+ depends on PROFILING
+ help
+ OProfile is a profiling system capable of profiling the
+ whole system, including the kernel, kernel modules, libraries,
+ and applications.
+
+ If unsure, say N.
+
+endmenu
+
--- /dev/null
+obj-$(CONFIG_OPROFILE) += oprofile.o
+
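+# Note: the oprofile core lives in drivers/oprofile; the addprefix
+# trick below pulls those shared objects into this arch's build and
+# links them with the arch-specific init.o into one oprofile.o module.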
+DRIVER_OBJS := $(addprefix ../../../drivers/oprofile/, \
+ oprof.o cpu_buffer.o buffer_sync.o \
+ event_buffer.o oprofile_files.o \
+ oprofilefs.o oprofile_stats.o \
+ timer_int.o )
+
+oprofile-y := $(DRIVER_OBJS) init.o
--- /dev/null
+/**
+ * @file init.c
+ *
+ * @remark Copyright 2002 OProfile authors
+ * @remark Read the file COPYING
+ *
+ * @author John Levon <levon@movementarian.org>
+ */
+
+#include <linux/kernel.h>
+#include <linux/oprofile.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+
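+/* No CPU-specific profiling support is hooked up here yet; returning
+ * -ENODEV lets the oprofile core fall back to its generic timer
+ * interrupt mode (timer_int.o, linked in by the Makefile above).
+ */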
+int __init oprofile_arch_init(struct oprofile_operations ** ops)
+{
+ return -ENODEV;
+}
+
+
+void oprofile_arch_exit(void)
+{
+}
--- /dev/null
+/*
+ * Support for IBM PPC 405EP evaluation board (Bubinga).
+ *
+ * Author: SAW (IBM), derived from walnut.c.
+ * Maintained by MontaVista Software <source@mvista.com>
+ *
+ * 2003 (c) MontaVista Software Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/threads.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/blkdev.h>
+#include <linux/pci.h>
+#include <linux/rtc.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+
+#include <asm/system.h>
+#include <asm/pci-bridge.h>
+#include <asm/processor.h>
+#include <asm/machdep.h>
+#include <asm/page.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/todc.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <asm/ibm_ocp_pci.h>
+
+#include <platforms/4xx/ibm405ep.h>
+
+#undef DEBUG
+
+#ifdef DEBUG
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+extern bd_t __res;
+
+void *bubinga_rtc_base;
+
+/* Some IRQs unique to the board
+ * Used by the generic 405 PCI setup functions in ppc4xx_pci.c
+ */
+int __init
+ppc405_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {28, 28, 28, 28}, /* IDSEL 1 - PCI slot 1 */
+ {29, 29, 29, 29}, /* IDSEL 2 - PCI slot 2 */
+ {30, 30, 30, 30}, /* IDSEL 3 - PCI slot 3 */
+ {31, 31, 31, 31}, /* IDSEL 4 - PCI slot 4 */
+ };
+
+ const long min_idsel = 1, max_idsel = 4, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
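+/* PCI_IRQ_TABLE_LOOKUP (from the ppc PCI bridge header) indexes the
+ * table above roughly as pci_irq_table[idsel - min_idsel][pin - 1],
+ * returning -1 when idsel or pin falls outside the declared bounds.
+ */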
+
+/* The serial clock for the chip is an internal clock determined by
+ * different clock speeds/dividers.
+ * Calculate the proper input baud rate and setup the serial driver.
+ */
+static void __init
+bubinga_early_serial_map(void)
+{
+ u32 uart_div;
+ int uart_clock;
+ struct uart_port port;
+
+ /* Calculate the serial clock input frequency
+ *
+ * The base baud is the PLL OUTA (provided in the board info
+ * structure) divided by the external UART Divisor, divided
+ * by 16.
+ */
+ uart_div = (mfdcr(DCRN_CPC0_UCR_BASE) & DCRN_CPC0_UCR_U0DIV);
+ uart_clock = __res.bi_pllouta_freq / uart_div;
+
+ /* Setup serial port access */
+ memset(&port, 0, sizeof(port));
+ port.membase = (void*)ACTING_UART0_IO_BASE;
+ port.irq = ACTING_UART0_INT;
+ port.uartclk = uart_clock;
+ port.regshift = 0;
+ port.iotype = SERIAL_IO_MEM;
+ port.flags = ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST;
+ port.line = 0;
+
+ if (early_serial_setup(&port) != 0) {
+ printk("Early serial init of port 0 failed\n");
+ }
+
+ port.membase = (void*)ACTING_UART1_IO_BASE;
+ port.irq = ACTING_UART1_INT;
+ port.line = 1;
+
+ if (early_serial_setup(&port) != 0) {
+ printk("Early serial init of port 1 failed\n");
+ }
+}
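+/* Worked example with assumed (hypothetical) numbers: if the boot ROM
+ * reported bi_pllouta_freq = 176,947,200 Hz and the masked U0DIV field
+ * read 12, uart_clock would be 14,745,600 Hz -- the classic 14.7456 MHz
+ * UART clock. The serial core's base baud is then uart_clock / 16 =
+ * 921,600, and a divisor of 8 gives exactly 115,200 baud.
+ */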
+
+void __init
+bios_fixup(struct pci_controller *hose, struct pcil0_regs *pcip)
+{
+
+ unsigned int bar_response, bar;
+ /*
+ * Expected PCI mapping:
+ *
+ * PLB addr PCI memory addr
+ * --------------------- ---------------------
+ * 0000'0000 - 7fff'ffff <--- 0000'0000 - 7fff'ffff
+ * 8000'0000 - Bfff'ffff ---> 8000'0000 - Bfff'ffff
+ *
+ * PLB addr PCI io addr
+ * --------------------- ---------------------
+ * e800'0000 - e800'ffff ---> 0000'0000 - 0001'0000
+ *
+ * The following code is simplified by assuming that the bootrom
+ * has been well behaved in following this mapping.
+ */
+
+#ifdef DEBUG
+ int i;
+
+ printk("ioremap PCLIO_BASE = 0x%x\n", pcip);
+ printk("PCI bridge regs before fixup \n");
+ for (i = 0; i <= 3; i++) {
+ printk(" pmm%dma\t0x%x\n", i, in_le32(&(pcip->pmm[i].ma)));
+ printk(" pmm%dla\t0x%x\n", i, in_le32(&(pcip->pmm[i].la)));
+ printk(" pmm%dpcila\t0x%x\n", i, in_le32(&(pcip->pmm[i].pcila)));
+ printk(" pmm%dpciha\t0x%x\n", i, in_le32(&(pcip->pmm[i].pciha)));
+ }
+ printk(" ptm1ms\t0x%x\n", in_le32(&(pcip->ptm1ms)));
+ printk(" ptm1la\t0x%x\n", in_le32(&(pcip->ptm1la)));
+ printk(" ptm2ms\t0x%x\n", in_le32(&(pcip->ptm2ms)));
+ printk(" ptm2la\t0x%x\n", in_le32(&(pcip->ptm2la)));
+
+#endif
+
+ /* added for IBM boot rom version 1.15 bios bar changes -AK */
+
+ /* Disable region first */
+ out_le32((void *) &(pcip->pmm[0].ma), 0x00000000);
+ /* PLB starting addr, PCI: 0x80000000 */
+ out_le32((void *) &(pcip->pmm[0].la), 0x80000000);
+ /* PCI start addr, 0x80000000 */
+ out_le32((void *) &(pcip->pmm[0].pcila), PPC405_PCI_MEM_BASE);
+ /* PCI high address bits (pciha): 0 */
+ out_le32((void *) &(pcip->pmm[0].pciha), 0x00000000);
+ /* Enable no pre-fetch, enable region */
+ out_le32((void *) &(pcip->pmm[0].ma), ((0xffffffff -
+ (PPC405_PCI_UPPER_MEM -
+ PPC405_PCI_MEM_BASE)) | 0x01));
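+ /* The value written above encodes the window size as an address
+ * mask: PPC405_PCI_UPPER_MEM - PPC405_PCI_MEM_BASE = 0x3fffffff
+ * (a 1GB window), so 0xffffffff - 0x3fffffff = 0xc0000000 =
+ * ~(size - 1); the low bit set enables the region. */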
+
+ /* Disable region one */
+ out_le32((void *) &(pcip->pmm[1].ma), 0x00000000);
+ out_le32((void *) &(pcip->pmm[1].la), 0x00000000);
+ out_le32((void *) &(pcip->pmm[1].pcila), 0x00000000);
+ out_le32((void *) &(pcip->pmm[1].pciha), 0x00000000);
+ out_le32((void *) &(pcip->pmm[1].ma), 0x00000000);
+ out_le32((void *) &(pcip->ptm1ms), 0x00000001);
+
+ /* Disable region two */
+ out_le32((void *) &(pcip->pmm[2].ma), 0x00000000);
+ out_le32((void *) &(pcip->pmm[2].la), 0x00000000);
+ out_le32((void *) &(pcip->pmm[2].pcila), 0x00000000);
+ out_le32((void *) &(pcip->pmm[2].pciha), 0x00000000);
+ out_le32((void *) &(pcip->pmm[2].ma), 0x00000000);
+ out_le32((void *) &(pcip->ptm2ms), 0x00000000);
+ out_le32((void *) &(pcip->ptm2la), 0x00000000);
+
+ /* Zero config bars */
+ for (bar = PCI_BASE_ADDRESS_1; bar <= PCI_BASE_ADDRESS_2; bar += 4) {
+ early_write_config_dword(hose, hose->first_busno,
+ PCI_FUNC(hose->first_busno), bar,
+ 0x00000000);
+ early_read_config_dword(hose, hose->first_busno,
+ PCI_FUNC(hose->first_busno), bar,
+ &bar_response);
+ DBG("BUS %d, device %d, Function %d bar 0x%8.8x is 0x%8.8x\n",
+ hose->first_busno, PCI_SLOT(hose->first_busno),
+ PCI_FUNC(hose->first_busno), bar, bar_response);
+ }
+ /* end workaround */
+
+#ifdef DEBUG
+ printk("PCI bridge regs after fixup \n");
+ for (i = 0; i <= 3; i++) {
+ printk(" pmm%dma\t0x%x\n", i, in_le32(&(pcip->pmm[i].ma)));
+ printk(" pmm%dma\t0x%x\n", i, in_le32(&(pcip->pmm[i].la)));
+ printk(" pmm%dma\t0x%x\n", i, in_le32(&(pcip->pmm[i].pcila)));
+ printk(" pmm%dma\t0x%x\n", i, in_le32(&(pcip->pmm[i].pciha)));
+ }
+ printk(" ptm1ms\t0x%x\n", in_le32(&(pcip->ptm1ms)));
+ printk(" ptm1la\t0x%x\n", in_le32(&(pcip->ptm1la)));
+ printk(" ptm2ms\t0x%x\n", in_le32(&(pcip->ptm2ms)));
+ printk(" ptm2la\t0x%x\n", in_le32(&(pcip->ptm2la)));
+
+#endif
+}
+
+void __init
+bubinga_setup_arch(void)
+{
+ ppc4xx_setup_arch();
+
+ ibm_ocp_set_emac(0, 1);
+
+ bubinga_early_serial_map();
+
+ /* RTC step for the evb405ep */
+ bubinga_rtc_base = (void *) BUBINGA_RTC_VADDR;
+ TODC_INIT(TODC_TYPE_DS1743, bubinga_rtc_base, bubinga_rtc_base,
+ bubinga_rtc_base, 8);
+ /* Identify the system */
+ printk("IBM Bubinga port (MontaVista Software, Inc. <source@mvista.com>)\n");
+}
+
+void __init
+bubinga_map_io(void)
+{
+ ppc4xx_map_io();
+ io_block_mapping(BUBINGA_RTC_VADDR,
+ BUBINGA_RTC_PADDR, BUBINGA_RTC_SIZE, _PAGE_IO);
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ ppc4xx_init(r3, r4, r5, r6, r7);
+
+ ppc_md.setup_arch = bubinga_setup_arch;
+ ppc_md.setup_io_mappings = bubinga_map_io;
+
+#ifdef CONFIG_GEN_RTC
+ ppc_md.time_init = todc_time_init;
+ ppc_md.set_rtc_time = todc_set_rtc_time;
+ ppc_md.get_rtc_time = todc_get_rtc_time;
+ ppc_md.nvram_read_val = todc_direct_read_val;
+ ppc_md.nvram_write_val = todc_direct_write_val;
+#endif
+#ifdef CONFIG_KGDB
+ ppc_md.early_serial_map = bubinga_early_serial_map;
+#endif
+}
+
--- /dev/null
+/*
+ * Support for IBM PPC 405EP evaluation board (Bubinga).
+ *
+ * Author: SAW (IBM), derived from walnut.h.
+ * Maintained by MontaVista Software <source@mvista.com>
+ *
+ * 2003 (c) MontaVista Software Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifdef __KERNEL__
+#ifndef __BUBINGA_H__
+#define __BUBINGA_H__
+
+/* 405EP */
+#include <platforms/4xx/ibm405ep.h>
+
+#ifndef __ASSEMBLY__
+/*
+ * Data structure defining board information maintained by the boot
+ * ROM on IBM's evaluation board. An effort has been made to
+ * keep the field names consistent with the 8xx 'bd_t' board info
+ * structures.
+ */
+
+typedef struct board_info {
+ unsigned char bi_s_version[4]; /* Version of this structure */
+ unsigned char bi_r_version[30]; /* Version of the IBM ROM */
+ unsigned int bi_memsize; /* DRAM installed, in bytes */
+ unsigned char bi_enetaddr[2][6]; /* Local Ethernet MAC addresses */
+ unsigned char bi_pci_enetaddr[6]; /* PCI Ethernet MAC address */
+ unsigned int bi_intfreq; /* Processor speed, in Hz */
+ unsigned int bi_busfreq; /* PLB Bus speed, in Hz */
+ unsigned int bi_pci_busfreq; /* PCI Bus speed, in Hz */
+ unsigned int bi_opb_busfreq; /* OPB Bus speed, in Hz */
+ unsigned int bi_pllouta_freq; /* PLL OUTA speed, in Hz */
+} bd_t;
+
+/* Some 4xx parts use a different timebase frequency from the internal clock.
+*/
+#define bi_tbfreq bi_intfreq
+
+
+/* Memory map for the Bubinga board.
+ * Generic 4xx plus RTC.
+ */
+
+extern void *bubinga_rtc_base;
+#define BUBINGA_RTC_PADDR ((uint)0xf0000000)
+#define BUBINGA_RTC_VADDR BUBINGA_RTC_PADDR
+#define BUBINGA_RTC_SIZE ((uint)8*1024)
+
+/* The UART clock is based off an internal clock -
+ * define BASE_BAUD based on the internal clock and divider(s).
+ * Since BASE_BAUD must be a constant, we will initialize it
+ * using clock/divider values which OpenBIOS initializes
+ * for typical configurations at various CPU speeds.
+ * The base baud is calculated as (FWDA / EXT UART DIV / 16)
+ */
+#define BASE_BAUD 0
+
+#define BUBINGA_FPGA_BASE 0xF0300000
+
+#define PPC4xx_MACHINE_NAME "IBM Bubinga"
+
+#endif /* !__ASSEMBLY__ */
+#endif /* __BUBINGA_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/ibm405ep.c
+ *
+ * Support for IBM PPC 405EP processors.
+ *
+ * Author: SAW (IBM), derived from ibmnp405l.c.
+ * Maintained by MontaVista Software <source@mvista.com>
+ *
+ * 2003 (c) MontaVista Software Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/threads.h>
+#include <linux/param.h>
+#include <linux/string.h>
+
+#include <asm/ibm4xx.h>
+#include <asm/ocp.h>
+
+#include <platforms/4xx/ibm405ep.h>
+
+static struct ocp_func_mal_data ibm405ep_mal0_def = {
+ .num_tx_chans = 4, /* Number of TX channels */
+ .num_rx_chans = 2, /* Number of RX channels */
+ .txeob_irq = 11, /* TX End Of Buffer IRQ */
+ .rxeob_irq = 12, /* RX End Of Buffer IRQ */
+ .txde_irq = 13, /* TX Descriptor Error IRQ */
+ .rxde_irq = 14, /* RX Descriptor Error IRQ */
+ .serr_irq = 10, /* MAL System Error IRQ */
+};
+OCP_SYSFS_MAL_DATA()
+
+static struct ocp_func_emac_data ibm405ep_emac0_def = {
+ .rgmii_idx = -1, /* No RGMII */
+ .rgmii_mux = -1, /* No RGMII */
+ .zmii_idx = -1, /* ZMII device index */
+ .zmii_mux = 0, /* ZMII input of this EMAC */
+ .mal_idx = 0, /* MAL device index */
+ .mal_rx_chan = 0, /* MAL rx channel number */
+ .mal_tx_chan = 0, /* MAL tx channel number */
+ .wol_irq = 9, /* WOL interrupt number */
+ .mdio_idx = 0, /* MDIO via EMAC0 */
+ .tah_idx = -1, /* No TAH */
+};
+
+static struct ocp_func_emac_data ibm405ep_emac1_def = {
+ .rgmii_idx = -1, /* No RGMII */
+ .rgmii_mux = -1, /* No RGMII */
+ .zmii_idx = -1, /* ZMII device index */
+ .zmii_mux = 0, /* ZMII input of this EMAC */
+ .mal_idx = 0, /* MAL device index */
+ .mal_rx_chan = 1, /* MAL rx channel number */
+ .mal_tx_chan = 2, /* MAL tx channel number */
+ .wol_irq = 9, /* WOL interrupt number */
+ .mdio_idx = 0, /* MDIO via EMAC0 */
+ .tah_idx = -1, /* No TAH */
+};
+OCP_SYSFS_EMAC_DATA()
+
+static struct ocp_func_iic_data ibm405ep_iic0_def = {
+ .fast_mode = 0, /* Use standard mode (100 kHz) */
+};
+OCP_SYSFS_IIC_DATA()
+
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_OPB,
+ .index = 0,
+ .paddr = 0xEF600000,
+ .irq = OCP_IRQ_NA,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_16550,
+ .index = 0,
+ .paddr = UART0_IO_BASE,
+ .irq = UART0_INT,
+ .pm = IBM_CPM_UART0
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_16550,
+ .index = 1,
+ .paddr = UART1_IO_BASE,
+ .irq = UART1_INT,
+ .pm = IBM_CPM_UART1
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_IIC,
+ .paddr = 0xEF600500,
+ .irq = 2,
+ .pm = IBM_CPM_IIC0,
+ .additions = &ibm405ep_iic0_def,
+ .show = &ocp_show_iic_data
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_GPIO,
+ .paddr = 0xEF600700,
+ .irq = OCP_IRQ_NA,
+ .pm = IBM_CPM_GPIO0
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_MAL,
+ .paddr = OCP_PADDR_NA,
+ .irq = OCP_IRQ_NA,
+ .pm = OCP_CPM_NA,
+ .additions = &ibm405ep_mal0_def,
+ .show = &ocp_show_mal_data
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_EMAC,
+ .index = 0,
+ .paddr = EMAC0_BASE,
+ .irq = 15,
+ .pm = OCP_CPM_NA,
+ .additions = &ibm405ep_emac0_def,
+ .show = &ocp_show_emac_data
+ },
+ { .vendor = OCP_VENDOR_IBM,
+ .function = OCP_FUNC_EMAC,
+ .index = 1,
+ .paddr = 0xEF600900,
+ .irq = 17,
+ .pm = OCP_CPM_NA,
+ .additions = &ibm405ep_emac1_def,
+ .show = &ocp_show_emac_data
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/4xx/ibm405ep.h
+ *
+ * IBM PPC 405EP processor defines.
+ *
+ * Author: SAW (IBM), derived from ibm405gp.h.
+ * Maintained by MontaVista Software <source@mvista.com>
+ *
+ * 2003 (c) MontaVista Software Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_IBM405EP_H__
+#define __ASM_IBM405EP_H__
+
+#include <linux/config.h>
+
+/* ibm405.h at bottom of this file */
+
+/* PCI
+ * PCI Bridge config reg definitions
+ * see 17-19 of manual
+ */
+
+#define PPC405_PCI_CONFIG_ADDR 0xeec00000
+#define PPC405_PCI_CONFIG_DATA 0xeec00004
+
+#define PPC405_PCI_PHY_MEM_BASE 0x80000000 /* hose_a->pci_mem_offset */
+ /* setbat */
+#define PPC405_PCI_MEM_BASE PPC405_PCI_PHY_MEM_BASE /* setbat */
+#define PPC405_PCI_PHY_IO_BASE 0xe8000000 /* setbat */
+#define PPC405_PCI_IO_BASE PPC405_PCI_PHY_IO_BASE /* setbat */
+
+#define PPC405_PCI_LOWER_MEM 0x80000000 /* hose_a->mem_space.start */
+#define PPC405_PCI_UPPER_MEM 0xBfffffff /* hose_a->mem_space.end */
+#define PPC405_PCI_LOWER_IO 0x00000000 /* hose_a->io_space.start */
+#define PPC405_PCI_UPPER_IO 0x0000ffff /* hose_a->io_space.end */
+
+#define PPC405_ISA_IO_BASE PPC405_PCI_IO_BASE
+
+#define PPC4xx_PCI_IO_PADDR ((uint)PPC405_PCI_PHY_IO_BASE)
+#define PPC4xx_PCI_IO_VADDR PPC4xx_PCI_IO_PADDR
+#define PPC4xx_PCI_IO_SIZE ((uint)64*1024)
+#define PPC4xx_PCI_CFG_PADDR ((uint)PPC405_PCI_CONFIG_ADDR)
+#define PPC4xx_PCI_CFG_VADDR PPC4xx_PCI_CFG_PADDR
+#define PPC4xx_PCI_CFG_SIZE ((uint)4*1024)
+#define PPC4xx_PCI_LCFG_PADDR ((uint)0xef400000)
+#define PPC4xx_PCI_LCFG_VADDR PPC4xx_PCI_LCFG_PADDR
+#define PPC4xx_PCI_LCFG_SIZE ((uint)4*1024)
+#define PPC4xx_ONB_IO_PADDR ((uint)0xef600000)
+#define PPC4xx_ONB_IO_VADDR PPC4xx_ONB_IO_PADDR
+#define PPC4xx_ONB_IO_SIZE ((uint)4*1024)
+
+/* serial port defines */
+#define RS_TABLE_SIZE 2
+
+#define UART0_INT 0
+#define UART1_INT 1
+
+#define PCIL0_BASE 0xEF400000
+#define UART0_IO_BASE 0xEF600300
+#define UART1_IO_BASE 0xEF600400
+#define EMAC0_BASE 0xEF600800
+
+#define BD_EMAC_ADDR(e,i) bi_enetaddr[e][i]
+
+#if defined(CONFIG_UART0_TTYS0)
+#define ACTING_UART0_IO_BASE UART0_IO_BASE
+#define ACTING_UART1_IO_BASE UART1_IO_BASE
+#define ACTING_UART0_INT UART0_INT
+#define ACTING_UART1_INT UART1_INT
+#else
+#define ACTING_UART0_IO_BASE UART1_IO_BASE
+#define ACTING_UART1_IO_BASE UART0_IO_BASE
+#define ACTING_UART0_INT UART1_INT
+#define ACTING_UART1_INT UART0_INT
+#endif
+
+#define STD_UART_OP(num) \
+ { 0, BASE_BAUD, 0, ACTING_UART##num##_INT, \
+ (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST), \
+ iomem_base: (u8 *)ACTING_UART##num##_IO_BASE, \
+ io_type: SERIAL_IO_MEM},
+
+#define SERIAL_DEBUG_IO_BASE ACTING_UART0_IO_BASE
+#define SERIAL_PORT_DFNS \
+ STD_UART_OP(0) \
+ STD_UART_OP(1)
+
+/* DCR defines */
+#define DCRN_CPMSR_BASE 0x0BA
+#define DCRN_CPMFR_BASE 0x0B9
+
+#define DCRN_CPC0_PLLMR0_BASE 0x0F0
+#define DCRN_CPC0_BOOT_BASE 0x0F1
+#define DCRN_CPC0_CR1_BASE 0x0F2
+#define DCRN_CPC0_EPRCSR_BASE 0x0F3
+#define DCRN_CPC0_PLLMR1_BASE 0x0F4
+#define DCRN_CPC0_UCR_BASE 0x0F5
+#define DCRN_CPC0_UCR_U0DIV 0x07F
+#define DCRN_CPC0_SRR_BASE 0x0F6
+#define DCRN_CPC0_JTAGID_BASE 0x0F7
+#define DCRN_CPC0_SPARE_BASE 0x0F8
+#define DCRN_CPC0_PCI_BASE 0x0F9
+
+
+#define IBM_CPM_GPT 0x80000000 /* GPT interface */
+#define IBM_CPM_PCI 0x40000000 /* PCI bridge */
+#define IBM_CPM_UIC 0x00010000 /* Universal Int Controller */
+#define IBM_CPM_CPU 0x00008000 /* processor core */
+#define IBM_CPM_EBC 0x00002000 /* EBC controller */
+#define IBM_CPM_SDRAM0 0x00004000 /* SDRAM memory controller */
+#define IBM_CPM_GPIO0 0x00001000 /* General Purpose IO */
+#define IBM_CPM_TMRCLK 0x00000400 /* CPU timers */
+#define IBM_CPM_PLB 0x00000100 /* PLB bus arbiter */
+#define IBM_CPM_OPB 0x00000080 /* PLB to OPB bridge */
+#define IBM_CPM_DMA 0x00000040 /* DMA controller */
+#define IBM_CPM_IIC0 0x00000010 /* IIC interface */
+#define IBM_CPM_UART1 0x00000002 /* serial port 0 */
+#define IBM_CPM_UART0 0x00000001 /* serial port 1 */
+#define DFLT_IBM4xx_PM ~(IBM_CPM_PCI | IBM_CPM_CPU | IBM_CPM_DMA \
+ | IBM_CPM_OPB | IBM_CPM_EBC \
+ | IBM_CPM_SDRAM0 | IBM_CPM_PLB \
+ | IBM_CPM_UIC | IBM_CPM_TMRCLK)
+#define DCRN_DMA0_BASE 0x100
+#define DCRN_DMA1_BASE 0x108
+#define DCRN_DMA2_BASE 0x110
+#define DCRN_DMA3_BASE 0x118
+#define DCRNCAP_DMA_SG 1 /* have DMA scatter/gather capability */
+#define DCRN_DMASR_BASE 0x120
+#define DCRN_EBC_BASE 0x012
+#define DCRN_DCP0_BASE 0x014
+#define DCRN_MAL_BASE 0x180
+#define DCRN_OCM0_BASE 0x018
+#define DCRN_PLB0_BASE 0x084
+#define DCRN_PLLMR_BASE 0x0B0
+#define DCRN_POB0_BASE 0x0A0
+#define DCRN_SDRAM0_BASE 0x010
+#define DCRN_UIC0_BASE 0x0C0
+#define UIC0 DCRN_UIC0_BASE
+
+#include <asm/ibm405.h>
+
+#endif /* __ASM_IBM405EP_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+config 85xx
+ bool
+ depends on E500
+ default y
+
+config PPC_INDIRECT_PCI_BE
+ bool
+ depends on 85xx
+ default y
+
+menu "Freescale 85xx options"
+ depends on E500
+
+choice
+ prompt "Machine Type"
+ depends on 85xx
+ default MPC8540_ADS
+
+config MPC8540_ADS
+ bool "MPC8540ADS"
+ help
+ This option enables support for the MPC 8540 ADS evaluation board.
+
+endchoice
+
+# It's often necessary to know the specific 85xx processor type.
+# Fortunately, it is implied (so far) from the board type, so we
+# don't need to ask more redundant questions.
+config MPC8540
+ bool
+ depends on MPC8540_ADS
+ default y
+
+config FSL_OCP
+ bool
+ depends on 85xx
+ default y
+
+config PPC_GEN550
+ bool
+ depends on MPC8540
+ default y
+
+endmenu
--- /dev/null
+#
+# Makefile for the PowerPC 85xx linux kernel.
+#
+
+obj-$(CONFIG_MPC8540_ADS) += mpc85xx_ads_common.o mpc8540_ads.o
+
+obj-$(CONFIG_MPC8540) += mpc8540.o
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8540.c
+ *
+ * MPC8540 I/O descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/mpc85xx.h>
+#include <asm/ocp.h>
+
+/* These should be defined in platform code */
+extern struct ocp_gfar_data mpc85xx_tsec1_def;
+extern struct ocp_gfar_data mpc85xx_tsec2_def;
+extern struct ocp_gfar_data mpc85xx_fec_def;
+extern struct ocp_mpc_i2c_data mpc85xx_i2c1_def;
+
+/* We use offsets for paddr since we do not know at compile time
+ * what CCSRBAR is; platform code should fix this up in
+ * setup_arch().
+ *
+ * Only the first IRQ is given even if a device has
+ * multiple interrupt lines associated with it.
+ */
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = MPC85xx_IIC1_OFFSET,
+ .irq = MPC85xx_IRQ_IIC1,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_i2c1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 0,
+ .paddr = MPC85xx_UART0_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 1,
+ .paddr = MPC85xx_UART1_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 0,
+ .paddr = MPC85xx_ENET1_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC1_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 1,
+ .paddr = MPC85xx_ENET2_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC2_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec2_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 2,
+ .paddr = MPC85xx_ENET3_OFFSET,
+ .irq = MPC85xx_IRQ_FEC,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_fec_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_DMA,
+ .index = 0,
+ .paddr = MPC85xx_DMA_OFFSET,
+ .irq = MPC85xx_IRQ_DMA0,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PERFMON,
+ .index = 0,
+ .paddr = MPC85xx_PERFMON_OFFSET,
+ .irq = MPC85xx_IRQ_PERFMON,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
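+/*
+ * Sketch (hypothetical helper, not part of this patch) of the kind of
+ * fixup callback that setup_arch() is expected to run over this table
+ * via ocp_for_each_device() once the firmware has reported CCSRBAR:
+ *
+ *	static void __init fixup_ocp_paddr(struct ocp_device *dev, void *arg)
+ *	{
+ *		phys_addr_t immr_base = *(phys_addr_t *)arg;
+ *
+ *		if (dev->def->paddr != OCP_PADDR_NA)
+ *			dev->def->paddr += immr_base;
+ *	}
+ *
+ * The board files pass &binfo->bi_immr_base as the argument, turning
+ * each offset above into an absolute physical address.
+ */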
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8540_ads.c
+ *
+ * MPC8540ADS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON
+ | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON
+ | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_fec_def = {
+ .interruptTransmit = MPC85xx_IRQ_FEC,
+ .interruptError = MPC85xx_IRQ_FEC,
+ .interruptReceive = MPC85xx_IRQ_FEC,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = 0,
+ .phyid = 3,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+mpc8540ads_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8540ads_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+
+#ifdef CONFIG_SERIAL_8250
+ mpc85xx_early_serial_map();
+#endif
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the entry we stole earlier; the serial ports
+ * should be properly mapped by now */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 2);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet2addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed in a board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ {
+ bd_t *binfo = (bd_t *) __res;
+
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
+ binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+ }
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc8540ads_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc85xx_ads_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8540ads_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8540_ads.h
+ *
+ * MPC8540ADS board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC8540ADS_H__
+#define __MACH_MPC8540ADS_H__
+
+#include <linux/config.h>
+#include <linux/serial.h>
+#include <linux/initrd.h>
+#include <syslib/ppc85xx_setup.h>
+#include <platforms/85xx/mpc85xx_ads_common.h>
+
+#define SERIAL_PORT_DFNS \
+ STD_UART_OP(0) \
+ STD_UART_OP(1)
+
+#endif /* __MACH_MPC8540ADS_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8555.c
+ *
+ * MPC8555 I/O descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/mpc85xx.h>
+#include <asm/ocp.h>
+
+/* These should be defined in platform code */
+extern struct ocp_gfar_data mpc85xx_tsec1_def;
+extern struct ocp_gfar_data mpc85xx_tsec2_def;
+extern struct ocp_mpc_i2c_data mpc85xx_i2c1_def;
+
+/* We use offsets for paddr since we do not know at compile time
+ * what CCSRBAR is; platform code should fix this up in
+ * setup_arch().
+ *
+ * Only the first IRQ is given even if a device has
+ * multiple interrupt lines associated with it.
+ */
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = MPC85xx_IIC1_OFFSET,
+ .irq = MPC85xx_IRQ_IIC1,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_i2c1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 0,
+ .paddr = MPC85xx_UART0_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 1,
+ .paddr = MPC85xx_UART1_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 0,
+ .paddr = MPC85xx_ENET1_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC1_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 1,
+ .paddr = MPC85xx_ENET2_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC2_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec2_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_DMA,
+ .index = 0,
+ .paddr = MPC85xx_DMA_OFFSET,
+ .irq = MPC85xx_IRQ_DMA0,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PERFMON,
+ .index = 0,
+ .paddr = MPC85xx_PERFMON_OFFSET,
+ .irq = MPC85xx_IRQ_PERFMON,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8555_cds.h
+ *
+ * MPC8555CDS board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC8555CDS_H__
+#define __MACH_MPC8555CDS_H__
+
+#include <linux/config.h>
+#include <linux/serial.h>
+#include <platforms/85xx/mpc85xx_cds_common.h>
+
+#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
+
+#endif /* __MACH_MPC8555CDS_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8560.c
+ *
+ * MPC8560 I/O descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/mpc85xx.h>
+#include <asm/ocp.h>
+
+/* These should be defined in platform code */
+extern struct ocp_gfar_data mpc85xx_tsec1_def;
+extern struct ocp_gfar_data mpc85xx_tsec2_def;
+extern struct ocp_mpc_i2c_data mpc85xx_i2c1_def;
+
+/* We use offsets for paddr since we do not know at compile time
+ * what CCSRBAR is; platform code should fix this up in
+ * setup_arch().
+ *
+ * Only the first IRQ is given even if a device has
+ * multiple interrupt lines associated with it.
+ */
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = MPC85xx_IIC1_OFFSET,
+ .irq = MPC85xx_IRQ_IIC1,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_i2c1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 0,
+ .paddr = MPC85xx_ENET1_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC1_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 1,
+ .paddr = MPC85xx_ENET2_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC2_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec2_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_DMA,
+ .index = 0,
+ .paddr = MPC85xx_DMA_OFFSET,
+ .irq = MPC85xx_IRQ_DMA0,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PERFMON,
+ .index = 0,
+ .paddr = MPC85xx_PERFMON_OFFSET,
+ .irq = MPC85xx_IRQ_PERFMON,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8560_ads.c
+ *
+ * MPC8560ADS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/initrd.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <asm/cpm2.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/cpm2_pic.h>
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+extern void cpm2_reset(void);
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON | GFAR_HAS_COALESCE
+ | GFAR_HAS_PHY_INTR),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON | GFAR_HAS_COALESCE
+ | GFAR_HAS_PHY_INTR),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+
+static void __init
+mpc8560ads_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ cpm2_reset();
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8560ads_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
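+/* Demultiplex the single cascaded CPM line: drain every interrupt the
+ * CPM2 PIC has pending and dispatch each through the normal handler. */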
+static irqreturn_t cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
+{
+ while ((irq = cpm2_get_irq(regs)) >= 0) {
+ ppc_irq_dispatch_handler(regs, irq);
+ }
+ return IRQ_HANDLED;
+}
+
+static void __init
+mpc8560_ads_init_IRQ(void)
+{
+ int i;
+ volatile cpm2_map_t *immap = cpm2_immr;
+
+ /* Setup OpenPIC */
+ mpc85xx_ads_init_IRQ();
+
+ /* disable all CPM interrupts */
+ immap->im_intctl.ic_simrh = 0x0;
+ immap->im_intctl.ic_simrl = 0x0;
+
+ for (i = CPM_IRQ_OFFSET; i < (NR_CPM_INTS + CPM_IRQ_OFFSET); i++)
+ irq_desc[i].handler = &cpm2_pic;
+
+ /* Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ immap->im_intctl.ic_sicr = 0;
+ immap->im_intctl.ic_scprrh = 0x05309770;
+ immap->im_intctl.ic_scprrl = 0x05309770;
+
+ request_irq(MPC85xx_IRQ_CPM, cpm2_cascade, SA_INTERRUPT, "cpm2_cascade", NULL);
+
+ return;
+}
+
+
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed in a board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+
+ }
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc8560ads_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc8560_ads_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8560ads_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8560_ads.h
+ *
+ * MPC8560ADS board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC8560ADS_H
+#define __MACH_MPC8560ADS_H
+
+#include <linux/config.h>
+#include <syslib/ppc85xx_setup.h>
+#include <platforms/85xx/mpc85xx_ads_common.h>
+
+#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
+#define PHY_INTERRUPT MPC85xx_IRQ_EXT7
+
+#endif /* __MACH_MPC8560ADS_H */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_ads_common.c
+ *
+ * MPC85xx ADS board common routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/ocp.h>
+
+#include <mm/mmu_decl.h>
+
+#include <platforms/85xx/mpc85xx_ads_common.h>
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+unsigned char __res[sizeof (bd_t)];
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char mpc85xx_ads_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+ 0x0, /* External 0: */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 1: PCI slot 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI slot 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI slot 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 4: PCI slot 3 */
+#else
+ 0x0, /* External 1: */
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+ 0x0, /* External 4: */
+#endif
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PHY */
+ 0x0, /* External 6: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 7: PHY */
+ 0x0, /* External 8: */
+ 0x0, /* External 9: */
+ 0x0, /* External 10: */
+ 0x0, /* External 11: */
+};
+
+/* ************************************************************************ */
+int
+mpc85xx_ads_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Freescale Semiconductor\n");
+
+ switch (svid & 0xffff0000) {
+ case SVR_8540:
+ seq_printf(m, "Machine\t\t: mpc8540ads\n");
+ break;
+ case SVR_8560:
+ seq_printf(m, "Machine\t\t: mpc8560ads\n");
+ break;
+ default:
+ seq_printf(m, "Machine\t\t: unknown\n");
+ break;
+ }
+ seq_printf(m, "bus freq\t: %u.%.6u MHz\n", freq / 1000000,
+ freq % 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+void __init
+mpc85xx_ads_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr =
+ binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = mpc85xx_ads_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (mpc85xx_ads_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /* We let OpenPIC interrupts start at an offset, to leave
+ * room for cascaded interrupt controllers underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+ return;
+}
+
+#ifdef CONFIG_PCI
+/*
+ * interrupt routing
+ */
+
+int
+mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * This is a little evil, but it works around the fact
+ * that rev A boards have IDSEL starting at 18
+ * while older boards start at 12
+ *
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 2 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 5 */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 12 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 15 */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 18 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 21 */
+ };
+
+ const long min_idsel = 2, max_idsel = 21, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
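+/* Refuse PCI config cycles to bus 0, device 0: that is the 85xx host
+ * bridge itself, and probing it during the bus scan is not wanted. */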
+int
+mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+
+#endif /* CONFIG_PCI */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_ads_common.h
+ *
+ * MPC85XX ADS common board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC85XX_ADS_H__
+#define __MACH_MPC85XX_ADS_H__
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <asm/ppcboot.h>
+
+#define BOARD_CCSRBAR ((uint)0xe0000000)
+#define BCSR_ADDR ((uint)0xf8000000)
+#define BCSR_SIZE ((uint)(32 * 1024))
+
+extern int mpc85xx_ads_show_cpuinfo(struct seq_file *m);
+extern void mpc85xx_ads_init_IRQ(void) __init;
+extern void mpc85xx_ads_map_io(void) __init;
+
+/* PCI interrupt controller */
+#define PIRQA MPC85xx_IRQ_EXT1
+#define PIRQB MPC85xx_IRQ_EXT2
+#define PIRQC MPC85xx_IRQ_EXT3
+#define PIRQD MPC85xx_IRQ_EXT4
+
+#define MPC85XX_PCI1_LOWER_IO 0x00000000
+#define MPC85XX_PCI1_UPPER_IO 0x00ffffff
+
+#define MPC85XX_PCI1_LOWER_MEM 0x80000000
+#define MPC85XX_PCI1_UPPER_MEM 0x9fffffff
+
+#define MPC85XX_PCI1_IO_BASE 0xe2000000
+#define MPC85XX_PCI1_MEM_OFFSET 0x00000000
+
+#define MPC85XX_PCI1_IO_SIZE 0x01000000
+
+#endif /* __MACH_MPC85XX_ADS_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_cds_common.c
+ *
+ * MPC85xx CDS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+#include <linux/root_dev.h>
+#include <linux/initrd.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/immap_cpm2.h>
+#include <asm/ocp.h>
+#include <asm/kgdb.h>
+
+#include <mm/mmu_decl.h>
+#include <syslib/cpm2_pic.h>
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+unsigned char __res[sizeof (bd_t)];
+
+static int cds_pci_slot = 2;
+static volatile u8 * cadmus;
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char mpc85xx_cds_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 0: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 1: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI1 slot */
+#else
+ 0x0, /* External 0: */
+ 0x0, /* External 1: */
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+#endif
+ 0x0, /* External 4: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PHY */
+ 0x0, /* External 6: */
+ 0x0, /* External 7: */
+ 0x0, /* External 8: */
+ 0x0, /* External 9: */
+ 0x0, /* External 10: */
+#if defined(CONFIG_85xx_PCI2) && defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 11: PCI2 slot 0 */
+#else
+ 0x0, /* External 11: */
+#endif
+};
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
+ GFAR_HAS_PHY_INTR),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
+ GFAR_HAS_PHY_INTR),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************ */
+int
+mpc85xx_cds_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Freescale Semiconductor\n");
+ seq_printf(m, "Machine\t\t: CDS (%x)\n", cadmus[CM_VER]);
+ seq_printf(m, "bus freq\t: %u.%.6u MHz\n", freq / 1000000,
+ freq % 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+#ifdef CONFIG_CPM2
+static void cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
+{
+ while((irq = cpm2_get_irq(regs)) >= 0)
+ {
+ ppc_irq_dispatch_handler(regs,irq);
+ }
+}
+#endif /* CONFIG_CPM2 */
+
+void __init
+mpc85xx_cds_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+#ifdef CONFIG_CPM2
+ volatile cpm2_map_t *immap = cpm2_immr;
+ int i;
+#endif
+
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = mpc85xx_cds_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (mpc85xx_cds_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
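+ /* Note on the register layout (an assumption based on the MPC85xx
+ * PIC): the 12 external sources have their 0x20-byte vector/priority
+ * register blocks at OpenPIC base + 0x10000, and the 32 internal
+ * sources start at base + 0x10200. Linux numbers the internal
+ * sources 0-31 first, hence the two calls above.
+ */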
+
+ /* We let OpenPIC interrupts start at an offset, to leave
+ * space for cascading interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+#ifdef CONFIG_CPM2
+ /* disable all CPM interrupts */
+ immap->im_intctl.ic_simrh = 0x0;
+ immap->im_intctl.ic_simrl = 0x0;
+
+ for (i = CPM_IRQ_OFFSET; i < (NR_CPM_INTS + CPM_IRQ_OFFSET); i++)
+ irq_desc[i].handler = &cpm2_pic;
+
+ /* Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ immap->im_intctl.ic_sicr = 0;
+ immap->im_intctl.ic_scprrh = 0x05309770;
+ immap->im_intctl.ic_scprrl = 0x05309770;
+
+ request_irq(MPC85xx_IRQ_CPM, cpm2_cascade, SA_INTERRUPT, "cpm2_cascade", NULL);
+#endif
+
+ return;
+}
+
+#ifdef CONFIG_PCI
+/*
+ * interrupt routing
+ */
+int
+mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ struct pci_controller *hose = pci_bus_to_hose(dev->bus->number);
+
+ if (!hose->index)
+ {
+ /* Handle PCI1 interrupts */
+ char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+
+ /* Note: IRQ assignment for the slots depends on which slot the
+ * Elysium is in -- in this setup the Elysium is in slot #2, so
+ * PIRQA is the first interrupt on that slot. */
+ {
+ { 0, 1, 2, 3 }, /* 16 - PMC */
+ { 3, 0, 0, 0 }, /* 17 P2P (Tsi320) */
+ { 0, 1, 2, 3 }, /* 18 - Slot 1 */
+ { 1, 2, 3, 0 }, /* 19 - Slot 2 */
+ { 2, 3, 0, 1 }, /* 20 - Slot 3 */
+ { 3, 0, 1, 2 }, /* 21 - Slot 4 */
+ };
+
+ const long min_idsel = 16, max_idsel = 21, irqs_per_slot = 4;
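+ /* PCI_IRQ_TABLE_LOOKUP (from asm-ppc/pci-bridge.h) expands to
+ * roughly the following, using the min_idsel/max_idsel/
+ * irqs_per_slot locals declared above (a sketch, not the exact
+ * macro text):
+ *
+ * (idsel >= min_idsel && idsel <= max_idsel &&
+ * pin <= irqs_per_slot) ?
+ * pci_irq_table[idsel - min_idsel][pin - 1] : -1;
+ */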
+ int i, j;
+
+ for (i = 0; i < 6; i++)
+ for (j = 0; j < 4; j++)
+ pci_irq_table[i][j] =
+ ((pci_irq_table[i][j] + 5 -
+ cds_pci_slot) & 0x3) + PIRQ0A;
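+ /* Worked example: with the carrier in PCI slot 2 (cds_pci_slot
+ * == 2), a table entry of 1 becomes ((1 + 5 - 2) & 0x3) + PIRQ0A
+ * = PIRQ0A, i.e. the table is rotated so the routing stays
+ * correct for whichever slot the board occupies.
+ */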
+
+ return PCI_IRQ_TABLE_LOOKUP;
+ } else {
+ /* Handle PCI2 interrupts (if we have one) */
+ char pci_irq_table[][4] =
+ {
+ /*
+ * We only have one slot and one interrupt
+ * going to PIRQA - PIRQD */
+ { PIRQ1A, PIRQ1A, PIRQ1A, PIRQ1A }, /* 21 - slot 0 */
+ };
+
+ const long min_idsel = 21, max_idsel = 21, irqs_per_slot = 4;
+
+ return PCI_IRQ_TABLE_LOOKUP;
+ }
+}
+
+#define ARCADIA_HOST_BRIDGE_IDSEL 17
+#define ARCADIA_2ND_BRIDGE_IDSEL 3
+
+int
+mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+#ifdef CONFIG_85xx_PCI2
+ /* With the current code we know PCI2 will be bus 2; however, this
+ * may not be guaranteed. */
+ if (bus == 2 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+#endif
+ /* We explicitly do not go past the Tundra 320 Bridge */
+ if (bus == 1)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+#endif /* CONFIG_PCI */
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+mpc85xx_cds_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ printk("mpc85xx_cds_setup_arch\n");
+
+#ifdef CONFIG_CPM2
+ cpm2_reset();
+#endif
+
+ cadmus = ioremap(CADMUS_BASE, CADMUS_SIZE);
+ cds_pci_slot = ((cadmus[CM_CSR] >> 6) & 0x3) + 1;
+ printk("CDS Version = %x in PCI slot %d\n", cadmus[CM_VER], cds_pci_slot);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+
+#ifdef CONFIG_SERIAL_8250
+ mpc85xx_early_serial_map();
+#endif
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the entry we stole earlier; the serial ports
+ * should be properly mapped by now. */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ {
+ bd_t *binfo = (bd_t *) __res;
+
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
+ binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+
+ }
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc85xx_cds_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_cds_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc85xx_cds_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc85xx_cds_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_cds_common.h
+ *
+ * MPC85xx CDS board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_MPC85XX_CDS_H__
+#define __MACH_MPC85XX_CDS_H__
+
+#include <linux/config.h>
+#include <linux/serial.h>
+#include <asm/ppcboot.h>
+#include <linux/initrd.h>
+#include <syslib/ppc85xx_setup.h>
+
+#define BOARD_CCSRBAR ((uint)0xe0000000)
+#define CCSRBAR_SIZE ((uint)1024*1024)
+
+/* CADMUS info */
+#define CADMUS_BASE (0xf8004000)
+#define CADMUS_SIZE (256)
+#define CM_VER (0)
+#define CM_CSR (1)
+#define CM_RST (2)
+
+/* PCI config */
+#define PCI1_CFG_ADDR_OFFSET (0x8000)
+#define PCI1_CFG_DATA_OFFSET (0x8004)
+
+#define PCI2_CFG_ADDR_OFFSET (0x9000)
+#define PCI2_CFG_DATA_OFFSET (0x9004)
+
+/* PCI interrupt controller */
+#define PIRQ0A MPC85xx_IRQ_EXT0
+#define PIRQ0B MPC85xx_IRQ_EXT1
+#define PIRQ0C MPC85xx_IRQ_EXT2
+#define PIRQ0D MPC85xx_IRQ_EXT3
+#define PIRQ1A MPC85xx_IRQ_EXT11
+
+/* PCI 1 memory map */
+#define MPC85XX_PCI1_LOWER_IO 0x00000000
+#define MPC85XX_PCI1_UPPER_IO 0x00ffffff
+
+#define MPC85XX_PCI1_LOWER_MEM 0x80000000
+#define MPC85XX_PCI1_UPPER_MEM 0x9fffffff
+
+#define MPC85XX_PCI1_IO_BASE 0xe2000000
+#define MPC85XX_PCI1_MEM_OFFSET 0x00000000
+
+#define MPC85XX_PCI1_IO_SIZE 0x01000000
+
+/* PCI 2 memory map */
+#define MPC85XX_PCI2_LOWER_IO 0x01000000
+#define MPC85XX_PCI2_UPPER_IO 0x01ffffff
+
+#define MPC85XX_PCI2_LOWER_MEM 0xa0000000
+#define MPC85XX_PCI2_UPPER_MEM 0xbfffffff
+
+#define MPC85XX_PCI2_IO_BASE 0xe3000000
+#define MPC85XX_PCI2_MEM_OFFSET 0x00000000
+
+#define MPC85XX_PCI2_IO_SIZE 0x01000000
+
+#define SERIAL_PORT_DFNS \
+ STD_UART_OP(0) \
+ STD_UART_OP(1)
+
+#endif /* __MACH_MPC85XX_CDS_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/sbc8560.c
+ *
+ * Wind River SBC8560 board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT6,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
+ .phyid = 25,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT7,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
+ .phyid = 26,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+
+#ifdef CONFIG_SERIAL_8250
+static void __init
+sbc8560_early_serial_map(void)
+{
+ struct uart_port uart_req;
+
+ /* Setup serial port access */
+ memset(&uart_req, 0, sizeof (uart_req));
+ uart_req.irq = MPC85xx_IRQ_EXT9;
+ uart_req.flags = STD_COM_FLAGS;
+ uart_req.uartclk = BASE_BAUD * 16;
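+ /* BASE_BAUD is defined in sbc8560.h as the 1.8432 MHz UART input
+ * clock pre-divided by 16, so multiplying by 16 here recovers the
+ * raw input clock that the 8250 core expects in uartclk. */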
+ uart_req.iotype = SERIAL_IO_MEM;
+ uart_req.mapbase = UARTA_ADDR;
+ uart_req.membase = ioremap(uart_req.mapbase, MPC85xx_UART0_SIZE);
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(0, &uart_req);
+#endif
+
+ if (early_serial_setup(&uart_req) != 0)
+ printk("Early serial init of port 0 failed\n");
+
+ /* Assume early_serial_setup() doesn't modify uart_req */
+ uart_req.line = 1;
+ uart_req.mapbase = UARTB_ADDR;
+ uart_req.membase = ioremap(uart_req.mapbase, MPC85xx_UART1_SIZE);
+ uart_req.irq = MPC85xx_IRQ_EXT10;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(1, &uart_req);
+#endif
+
+ if (early_serial_setup(&uart_req) != 0)
+ printk("Early serial init of port 1 failed\n");
+}
+#endif
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+sbc8560_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("sbc8560_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+#ifdef CONFIG_SERIAL_8250
+ sbc8560_early_serial_map();
+#endif
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the entry we stole earlier; the serial ports
+ * should be properly mapped by now. */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ /* Set up MAC addresses for the Ethernet devices */
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, UARTA_ADDR,
+ UARTA_ADDR, 0x1000, _PAGE_IO, 0);
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = sbc8560_setup_arch;
+ ppc_md.show_cpuinfo = sbc8560_show_cpuinfo;
+
+ ppc_md.init_IRQ = sbc8560_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("sbc8560_init(): exit", 0);
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/sbc8560.h
+ *
+ * Wind River SBC8560 board definitions
+ *
+ * Copyright 2003 Motorola Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __MACH_SBC8560_H__
+#define __MACH_SBC8560_H__
+
+#include <linux/config.h>
+#include <linux/serial.h>
+#include <platforms/85xx/sbc85xx.h>
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 2
+#endif
+
+/* Rate for the 1.8432 MHz clock for the onboard serial chip */
+#define BASE_BAUD ( 1843200 / 16 )
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+#define STD_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, UARTA_ADDR, MPC85xx_IRQ_EXT9, STD_COM_FLAGS, /* ttyS0 */ \
+ iomem_base: (u8 *)UARTA_ADDR, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, UARTB_ADDR, MPC85xx_IRQ_EXT10, STD_COM_FLAGS, /* ttyS1 */ \
+ iomem_base: (u8 *)UARTB_ADDR, \
+ io_type: SERIAL_IO_MEM },
+
+#define SERIAL_PORT_DFNS \
+ STD_SERIAL_PORT_DFNS
+
+#endif /* __MACH_SBC8560_H__ */
--- /dev/null
+/*
+ * arch/ppc/platform/85xx/sbc85xx.c
+ *
+ * WindRiver PowerQUICC III SBC85xx board common routines
+ *
+ * Copyright 2002, 2003 Motorola Inc.
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/ocp.h>
+
+#include <mm/mmu_decl.h>
+
+#include <platforms/85xx/sbc85xx.h>
+
+unsigned char __res[sizeof (bd_t)];
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+unsigned long pci_dram_offset = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char sbc8560_openpic_initsenses[] __initdata = {
+ (IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+ 0x0, /* External 0: */
+ 0x0, /* External 1: */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI slot 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI slot 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 4: PCI slot 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PCI slot 3 */
+#else
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+ 0x0, /* External 4: */
+ 0x0, /* External 5: */
+#endif
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 6: PHY */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 7: PHY */
+ 0x0, /* External 8: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* External 9: PHY */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* External 10: PHY */
+ 0x0, /* External 11: */
+};
+
+/* ************************************************************************ */
+int
+sbc8560_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Wind River\n");
+
+ switch (svid & 0xffff0000) {
+ case SVR_8540:
+ seq_printf(m, "Machine\t\t: hhmmm, this board isn't made yet!\n");
+ break;
+ case SVR_8560:
+ seq_printf(m, "Machine\t\t: SBC8560\n");
+ break;
+ default:
+ seq_printf(m, "Machine\t\t: unknown\n");
+ break;
+ }
+ seq_printf(m, "bus freq\t: %u.%.6u MHz\n", freq / 1000000,
+ freq % 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+void __init
+sbc8560_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr =
+ binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = sbc8560_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (sbc8560_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /* We let OpenPIC interrupts start at an offset, to leave
+ * space for cascading interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+ return;
+}
+
+/*
+ * interrupt routing
+ */
+
+#ifdef CONFIG_PCI
+int mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel,
+ unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQA, PIRQB, PIRQC, PIRQD},
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA},
+ };
+
+ const long min_idsel = 12, max_idsel = 15, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
+int mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+#endif /* CONFIG_PCI */
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/sbc85xx.h
+ *
+ * WindRiver PowerQUICC III SBC85xx common board definitions
+ *
+ * Copyright 2003 Motorola Inc.
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __PLATFORMS_85XX_SBC85XX_H__
+#define __PLATFORMS_85XX_SBC85XX_H__
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <asm/ppcboot.h>
+
+#define BOARD_CCSRBAR ((uint)0xff700000)
+#define CCSRBAR_SIZE ((uint)1024*1024)
+
+#define BCSR_ADDR ((uint)0xfc000000)
+#define BCSR_SIZE ((uint)(16 * 1024 * 1024))
+
+#define UARTA_ADDR (BCSR_ADDR + 0x00700000)
+#define UARTB_ADDR (BCSR_ADDR + 0x00800000)
+#define RTC_DEVICE_ADDR (BCSR_ADDR + 0x00900000)
+#define EEPROM_ADDR (BCSR_ADDR + 0x00b00000)
+
+extern int sbc8560_show_cpuinfo(struct seq_file *m);
+extern void sbc8560_init_IRQ(void) __init;
+
+/* PCI interrupt controller */
+#define PIRQA MPC85xx_IRQ_EXT1
+#define PIRQB MPC85xx_IRQ_EXT2
+#define PIRQC MPC85xx_IRQ_EXT3
+#define PIRQD MPC85xx_IRQ_EXT4
+
+#define MPC85XX_PCI1_LOWER_IO 0x00000000
+#define MPC85XX_PCI1_UPPER_IO 0x00ffffff
+
+#define MPC85XX_PCI1_LOWER_MEM 0x80000000
+#define MPC85XX_PCI1_UPPER_MEM 0x9fffffff
+
+#define MPC85XX_PCI1_IO_BASE 0xe2000000
+#define MPC85XX_PCI1_MEM_OFFSET 0x00000000
+
+#define MPC85XX_PCI1_IO_SIZE 0x01000000
+
+#endif /* __PLATFORMS_85XX_SBC85XX_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/lite5200.c
+ *
+ * Platform support file for the Freescale LITE5200, based on the MPC52xx.
+ * As much of this file as possible should be moved to syslib/mpc52xx_?????
+ * so that new MPC52xx-based platforms need only a minimal platform file
+ * (to avoid code duplication).
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based on the 2.4 code written by Kent Borg,
+ * Dale Farnsworth <dale.farnsworth@mvista.com> and
+ * Wolfgang Denk <wd@denx.de>
+ *
+ * Copyright 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright 2003 Motorola Inc.
+ * Copyright 2003 MontaVista Software Inc.
+ * Copyright 2003 DENX Software Engineering (wd@denx.de)
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/initrd.h>
+#include <linux/seq_file.h>
+#include <linux/kdev_t.h>
+#include <linux/root_dev.h>
+#include <linux/console.h>
+
+#include <asm/bootinfo.h>
+#include <asm/io.h>
+#include <asm/ocp.h>
+#include <asm/mpc52xx.h>
+
+
+/* Board data given by U-Boot */
+bd_t __res;
+EXPORT_SYMBOL(__res); /* For modules */
+
+
+/* ======================================================================== */
+/* OCP device definition */
+/* For board/shared resources like PSCs */
+/* ======================================================================== */
+/* Be sure not to load conflicting devices: e.g. loading the UART driver for
+ * PSC1 and then also loading an AC97 driver for that same PSC.
+ * For details about how to create an entry, see the documentation of the
+ * driver concerned (e.g. drivers/serial/mpc52xx_uart.c for the PSC in UART
+ * mode).
+ */
+
+struct ocp_def board_ocp[] = {
+ {
+ .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PSC_UART,
+ .index = 0,
+ .paddr = MPC52xx_PSC1,
+ .irq = MPC52xx_PSC1_IRQ,
+ .pm = OCP_CPM_NA,
+ },
+ { /* Terminating entry */
+ .vendor = OCP_VENDOR_INVALID
+ }
+};
+
+
+/* ======================================================================== */
+/* Platform specific code */
+/* ======================================================================== */
+
+static int
+icecube_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "machine\t\t: Freescale LITE5200\n");
+ return 0;
+}
+
+static void __init
+icecube_setup_arch(void)
+{
+
+ /* Add board OCP definitions */
+ mpc52xx_add_board_devices(board_ocp);
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic MPC52xx platform initialization */
+ /* TODO: create a generic init, move as much as possible
+ into it, and put it in syslib. */
+
+ struct bi_record *bootinfo = find_bootinfo();
+
+ if (bootinfo)
+ parse_bootinfo(bootinfo);
+ else {
+ /* Load the bd_t board info structure */
+ if (r3)
+ memcpy((void*)&__res,(void*)(r3+KERNELBASE),
+ sizeof(bd_t));
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ /* Load the initrd */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif
+
+ /* Load the command line */
+ if (r6) {
+ *(char *)(r7+KERNELBASE) = 0;
+ strcpy(cmd_line, (char *)(r6+KERNELBASE));
+ }
+ }
+
+ /* BAT setup */
+ mpc52xx_set_bat();
+
+ /* No ISA bus AFAIK */
+ isa_io_base = 0;
+ isa_mem_base = 0;
+
+ /* Setup the ppc_md struct */
+ ppc_md.setup_arch = icecube_setup_arch;
+ ppc_md.show_cpuinfo = icecube_show_cpuinfo;
+ ppc_md.show_percpuinfo = NULL;
+ ppc_md.init_IRQ = mpc52xx_init_irq;
+ ppc_md.get_irq = mpc52xx_get_irq;
+
+ ppc_md.find_end_of_memory = mpc52xx_find_end_of_memory;
+ ppc_md.setup_io_mappings = mpc52xx_map_io;
+
+ ppc_md.restart = mpc52xx_restart;
+ ppc_md.power_off = mpc52xx_power_off;
+ ppc_md.halt = mpc52xx_halt;
+
+ /* No time keeper on the IceCube */
+ ppc_md.time_init = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.set_rtc_time = NULL;
+
+ ppc_md.calibrate_decr = mpc52xx_calibrate_decr;
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ ppc_md.progress = mpc52xx_progress;
+#endif
+}
+
--- /dev/null
+/*
+ * arch/ppc/platforms/lite5200.h
+ *
+ * Definitions for Freescale LITE5200 : MPC52xx Standard Development
+ * Platform board support
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#ifndef __PLATFORMS_LITE5200_H__
+#define __PLATFORMS_LITE5200_H__
+
+/* Serial port used for low-level debug */
+#define MPC52xx_PF_CONSOLE_PORT 0 /* PSC1 */
+
+
+#endif /* __PLATFORMS_LITE5200_H__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/mpc5200.c
+ *
+ * OCP definitions for boards based on the MPC5200 processor. Contains
+ * definitions for all the common peripherals (mostly everything but the
+ * PSCs).
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Copyright 2004 Sylvain Munaut <tnt@246tNt.com>
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <asm/ocp.h>
+#include <asm/mpc52xx.h>
+
+/* Here is the core_ocp struct, with all the devices common to every
+ * board, even if port multiplexing is not set up for them (if the user
+ * doesn't want them, simply don't select the config option). Potentially
+ * conflicting devices (like the PSCs) go in the board-specific file.
+ */
+struct ocp_def core_ocp[] = {
+ { /* Terminating entry */
+ .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * A collection of structures, addresses, and values associated with
+ * the Motorola MPC8260ADS/MPC8266ADS-PCI boards.
+ * Copied from the RPX-Classic and SBS8260 stuff.
+ *
+ * Copyright (c) 2001 Dan Malek (dan@mvista.com)
+ */
+#ifdef __KERNEL__
+#ifndef __MACH_ADS8260_DEFS
+#define __MACH_ADS8260_DEFS
+
+#include <linux/config.h>
+
+#include <asm/ppcboot.h>
+
+/* Memory map is configured by the PROM startup.
+ * We just map a few things we need. The CSR is actually 4 byte-wide
+ * registers that can be accessed as 8-, 16-, or 32-bit values.
+ */
+#define CPM_MAP_ADDR ((uint)0xf0000000)
+#define BCSR_ADDR ((uint)0xf4500000)
+#define BCSR_SIZE ((uint)(32 * 1024))
+
+#define BOOTROM_RESTART_ADDR ((uint)0xff000104)
+
+/* The ADS8260 has 16 32-bit-wide control/status registers, accessed
+ * only on word boundaries.
+ * Not all of them are used (yet), or are interesting to us (yet).
+ */
+
+/* Things of interest in the CSR.
+*/
+#define BCSR0_LED0 ((uint)0x02000000) /* 0 == on */
+#define BCSR0_LED1 ((uint)0x01000000) /* 0 == on */
+#define BCSR1_FETHIEN ((uint)0x08000000) /* 0 == enable */
+#define BCSR1_FETH_RST ((uint)0x04000000) /* 0 == reset */
+#define BCSR1_RS232_EN1 ((uint)0x02000000) /* 0 == enable */
+#define BCSR1_RS232_EN2 ((uint)0x01000000) /* 0 == enable */
+
+#define PHY_INTERRUPT SIU_INT_IRQ7
+
+#ifdef CONFIG_PCI
+/* PCI interrupt controller */
+#define PCI_INT_STAT_REG 0xF8200000
+#define PCI_INT_MASK_REG 0xF8200004
+#define PIRQA (NR_SIU_INTS + 0)
+#define PIRQB (NR_SIU_INTS + 1)
+#define PIRQC (NR_SIU_INTS + 2)
+#define PIRQD (NR_SIU_INTS + 3)
+
+/*
+ * PCI memory map definitions for MPC8266ADS-PCI.
+ *
+ * processor view
+ * local address PCI address target
+ * 0x80000000-0x9FFFFFFF 0x80000000-0x9FFFFFFF PCI mem with prefetch
+ * 0xA0000000-0xBFFFFFFF 0xA0000000-0xBFFFFFFF PCI mem w/o prefetch
+ * 0xF4000000-0xF7FFFFFF 0x00000000-0x03FFFFFF PCI IO
+ *
+ * PCI master view
+ * local address PCI address target
+ * 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory
+ */
+
+/* window for a PCI master to access MPC8266 memory */
+#define PCI_SLV_MEM_LOCAL 0x00000000 /* Local base */
+#define PCI_SLV_MEM_BUS 0x00000000 /* PCI base */
+
+/* window for the processor to access PCI memory with prefetching */
+#define PCI_MSTR_MEM_LOCAL 0x80000000 /* Local base */
+#define PCI_MSTR_MEM_BUS 0x80000000 /* PCI base */
+#define PCI_MSTR_MEM_SIZE 0x20000000 /* 512MB */
+
+/* window for the processor to access PCI memory without prefetching */
+#define PCI_MSTR_MEMIO_LOCAL 0xA0000000 /* Local base */
+#define PCI_MSTR_MEMIO_BUS 0xA0000000 /* PCI base */
+#define PCI_MSTR_MEMIO_SIZE 0x20000000 /* 512MB */
+
+/* window for the processor to access PCI I/O */
+#define PCI_MSTR_IO_LOCAL 0xF4000000 /* Local base */
+#define PCI_MSTR_IO_BUS 0x00000000 /* PCI base */
+#define PCI_MSTR_IO_SIZE 0x04000000 /* 64MB */
+
+#define _IO_BASE PCI_MSTR_IO_LOCAL
+#define _ISA_MEM_BASE PCI_MSTR_MEMIO_LOCAL
+#define PCI_DRAM_OFFSET PCI_SLV_MEM_BUS
+#endif /* CONFIG_PCI */
+
+#endif /* __MACH_ADS8260_DEFS */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/pq2ads_setup.c
+ *
+ * PQ2ADS platform support
+ *
+ * Author: Kumar Gala <kumar.gala@freescale.com>
+ * Derived from: est8260_setup.c by Allen Curtis
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+static int
+pq2ads_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: Motorola\n"
+ "machine\t\t: PQ2 ADS PowerPC\n"
+ "\n"
+ "mem size\t\t: 0x%08lx\n"
+ "console baud\t\t: %ld\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+pq2ads_setup_arch(void)
+{
+ printk("PQ2 ADS Port\n");
+ callback_setup_arch();
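+ /* BCSR1 is the second of the 4-byte-wide BCSR registers, hence
+ * BCSR_ADDR + 4; its enable bits are active low, so clearing
+ * BCSR1_RS232_EN2 turns on the second RS-232 transceiver. */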
+ *(volatile uint *)(BCSR_ADDR + 4) &= ~BCSR1_RS232_EN2;
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = pq2ads_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = pq2ads_setup_arch;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/rpx8260.c
+ *
+ * RPC EP8260 platform support
+ *
+ * Author: Dan Malek <dan@embeddededge.com>
+ * Derived from: pq2ads_setup.c by Kumar
+ *
+ * Copyright 2004 Embedded Edge, LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+static int
+ep8260_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: RPC\n"
+ "machine\t\t: EP8260 PPC\n"
+ "\n"
+ "mem size\t\t: 0x%08x\n"
+ "console baud\t\t: %d\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+ep8260_setup_arch(void)
+{
+ printk("RPC EP8260 Port\n");
+ callback_setup_arch();
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = ep8260_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = ep8260_setup_arch;
+}
--- /dev/null
+/*
+ * A collection of structures, addresses, and values associated with
+ * the Embedded Planet RPX6 (or RPX Super) MPC8260 board.
+ * Copied from the RPX-Classic and SBS8260 stuff.
+ *
+ * Copyright (c) 2001 Dan Malek <dan@embeddededge.com>
+ */
+#ifdef __KERNEL__
+#ifndef __ASM_PLATFORMS_RPX8260_H__
+#define __ASM_PLATFORMS_RPX8260_H__
+
+/* A board information structure that is given to a program when
+ * the PROM starts it up.
+ */
+typedef struct bd_info {
+ unsigned int bi_memstart; /* Memory start address */
+ unsigned int bi_memsize; /* Memory (end) size in bytes */
+ unsigned int bi_nvsize; /* NVRAM size in bytes (can be 0) */
+ unsigned int bi_intfreq; /* Internal Freq, in Hz */
+ unsigned int bi_busfreq; /* Bus Freq, in MHz */
+ unsigned int bi_cpmfreq; /* CPM Freq, in MHz */
+ unsigned int bi_brgfreq; /* BRG Freq, in MHz */
+ unsigned int bi_vco; /* VCO Out from PLL */
+ unsigned int bi_baudrate; /* Default console baud rate */
+ unsigned int bi_immr; /* IMMR when called from boot rom */
+ unsigned char bi_enetaddr[6];
+} bd_t;
+
+extern bd_t m8xx_board_info;
+
+/* Memory map is configured by the PROM startup.
+ * We just map a few things we need. The CSR is actually 4 byte-wide
+ * registers that can be accessed as 8-, 16-, or 32-bit values.
+ */
+#define CPM_MAP_ADDR ((uint)0xf0000000)
+#define RPX_CSR_ADDR ((uint)0xfa000000)
+#define RPX_CSR_SIZE ((uint)(512 * 1024))
+#define RPX_NVRTC_ADDR ((uint)0xfa080000)
+#define RPX_NVRTC_SIZE ((uint)(512 * 1024))
+
+/* The RPX6 has 16 byte-wide control/status registers.
+ * Not all of them are used (yet).
+ */
+extern volatile u_char *rpx6_csr_addr;
+
+/* Things of interest in the CSR.
+*/
+#define BCSR0_ID_MASK ((u_char)0xf0) /* Read only */
+#define BCSR0_SWITCH_MASK ((u_char)0x0f) /* Read only */
+#define BCSR1_XCVR_SMC1 ((u_char)0x80)
+#define BCSR1_XCVR_SMC2 ((u_char)0x40)
+#define BCSR2_FLASH_WENABLE ((u_char)0x20)
+#define BCSR2_NVRAM_ENABLE ((u_char)0x10)
+#define BCSR2_ALT_IRQ2 ((u_char)0x08)
+#define BCSR2_ALT_IRQ3 ((u_char)0x04)
+#define BCSR2_PRST ((u_char)0x02) /* Force reset */
+#define BCSR2_ENPRST ((u_char)0x01) /* Enable POR */
+#define BCSR3_MODCLK_MASK ((u_char)0xe0)
+#define BCSR3_ENCLKHDR ((u_char)0x10)
+#define BCSR3_LED5 ((u_char)0x04) /* 0 == on */
+#define BCSR3_LED6 ((u_char)0x02) /* 0 == on */
+#define BCSR3_LED7 ((u_char)0x01) /* 0 == on */
+#define BCSR4_EN_PHY ((u_char)0x80) /* Enable PHY */
+#define BCSR4_EN_MII ((u_char)0x40) /* Enable PHY */
+#define BCSR4_MII_READ ((u_char)0x04)
+#define BCSR4_MII_MDC ((u_char)0x02)
+#define BCSR4_MII_MDIO ((u_char)0x01)
+#define BCSR13_FETH_IRQMASK ((u_char)0xf0)
+#define BCSR15_FETH_IRQ ((u_char)0x20)
+
+#define PHY_INTERRUPT SIU_INT_IRQ7
+
+#endif /* __ASM_PLATFORMS_RPX8260_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/sbc82xx.c
+ *
+ * SBC82XX platform support
+ *
+ * Author: Guy Streeter <streeter@redhat.com>
+ *
+ * Derived from: est8260_setup.c by Allen Curtis, ONZ
+ *
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+#include <linux/stddef.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+#include <asm/io.h>
+#include <asm/todc.h>
+#include <asm/immap_8260.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+extern void (*late_time_init)(void);
+
+static int
+sbc82xx_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: Wind River\n"
+ "machine\t\t: SBC PowerQUICC II\n"
+ "\n"
+ "mem size\t\t: 0x%08lx\n"
+ "console baud\t\t: %ld\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+sbc82xx_setup_arch(void)
+{
+ printk("SBC PowerQUICC II Port\n");
+ callback_setup_arch();
+}
+
+TODC_ALLOC();
+
+/*
+ * Timer init happens before mem_init() but after paging init, so we
+ * cannot use ioremap() directly at that time.
+ * late_time_init() is called after mem_init(), so it can.
+ */
+#ifdef CONFIG_GEN_RTC
+static void sbc82xx_time_init(void)
+{
+ volatile memctl8260_t *mc = &immr->im_memctl;
+ TODC_INIT(TODC_TYPE_MK48T59, 0, 0, SBC82xx_TODC_NVRAM_ADDR, 0);
+
+ /* Set up CS11 for RTC chip */
+ mc->memc_br11=0;
+ mc->memc_or11=0xffff0836;
+ mc->memc_br11=0x80000801;
+
+ todc_info->nvram_data =
+ (unsigned int)ioremap(todc_info->nvram_data, 0x2000);
+ BUG_ON(!todc_info->nvram_data);
+ ppc_md.get_rtc_time = todc_get_rtc_time;
+ ppc_md.set_rtc_time = todc_set_rtc_time;
+ ppc_md.nvram_read_val = todc_direct_read_val;
+ ppc_md.nvram_write_val = todc_direct_write_val;
+ todc_time_init();
+}
+#endif /* CONFIG_GEN_RTC */
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* U-Boot may be using one of the FCC Ethernet devices, so
+ derive the MAC address for the SCC by clearing the low
+ two bits of the last byte. */
+ __res[offsetof(bd_t, bi_enetaddr[5])] &= ~3;
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = sbc82xx_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = sbc82xx_setup_arch;
+#ifdef CONFIG_GEN_RTC
+ ppc_md.time_init = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.nvram_read_val = NULL;
+ ppc_md.nvram_write_val = NULL;
+ late_time_init = sbc82xx_time_init;
+#endif /* CONFIG_GEN_RTC */
+}
--- /dev/null
+/* Board information for the SBCPowerQUICCII, which should be generic for
+ * all 8260 boards. The IMMR is now given to us so the hard define
+ * will soon be removed. All of the clock values are computed from
+ * the configuration SCMR and the Power-On-Reset word.
+ */
+
+#ifndef __PPC_SBC82xx_H__
+#define __PPC_SBC82xx_H__
+
+#include <asm/ppcboot.h>
+
+#define IMAP_ADDR 0xf0000000
+#define CPM_MAP_ADDR 0xf0000000
+
+#define SBC82xx_TODC_NVRAM_ADDR 0x80000000
+
+#define SBC82xx_MACADDR_NVRAM_FCC1 0x220000c9 /* JP6B */
+#define SBC82xx_MACADDR_NVRAM_SCC1 0x220000cf /* JP6A */
+#define SBC82xx_MACADDR_NVRAM_FCC2 0x220000d5 /* JP7A */
+#define SBC82xx_MACADDR_NVRAM_FCC3 0x220000db /* JP7B */
+
+#define BOOTROM_RESTART_ADDR ((uint)0x40000104)
+
+#endif /* __PPC_SBC82xx_H__ */
--- /dev/null
+/*
+ * General Purpose functions for the global management of the
+ * 8260 Communication Processor Module.
+ * Copyright (c) 1999 Dan Malek (dmalek@jlc.net)
+ * Copyright (c) 2000 MontaVista Software, Inc (source@mvista.com)
+ * 2.3.99 Updates
+ *
+ * In addition to the individual control of the communication
+ * channels, there are a few functions that globally affect the
+ * communication processor.
+ *
+ * Buffer descriptors must be allocated from the dual-ported memory
+ * space. The allocator for that is here. When the communication
+ * processor is reset, we reclaim the memory available. There is
+ * currently no deallocator for this memory.
+ */
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/bootmem.h>
+#include <linux/module.h>
+#include <asm/irq.h>
+#include <asm/mpc8260.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/immap_cpm2.h>
+#include <asm/cpm2.h>
+#include <asm/rheap.h>
+
+static void cpm2_dpinit(void);
+cpm_cpm2_t *cpmp; /* Pointer to comm processor space */
+
+/* We allocate this here because it is used almost exclusively for
+ * the communication processor devices.
+ */
+cpm2_map_t *cpm2_immr;
+
+void
+cpm2_reset(void)
+{
+ cpm2_immr = (cpm2_map_t *)CPM_MAP_ADDR;
+
+ /* Reclaim the DP memory for our use.
+ */
+ cpm2_dpinit();
+
+ /* Tell everyone where the comm processor resides.
+ */
+ cpmp = &cpm2_immr->im_cpm;
+}
+
+/* Set a baud rate generator. This needs lots of work. There are
+ * eight BRGs, which can be connected to the CPM channels or output
+ * as clocks. The BRGs are in two different blocks of internal
+ * memory-mapped space.
+ * The baud rate clock is the system clock divided by something.
+ * It was set up long ago during the initial boot phase and is
+ * given to us.
+ * Baud rate clocks are zero-based in the driver code (as that maps
+ * to port numbers). Documentation uses 1-based numbering.
+ */
+#define BRG_INT_CLK (((bd_t *)__res)->bi_brgfreq)
+#define BRG_UART_CLK (BRG_INT_CLK/16)
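+
+/* A worked example, assuming a 64 MHz BRG input clock: BRG_UART_CLK is
+ * then 4 MHz, so for a 9600 baud UART cpm2_setbrg() below programs the
+ * generator with ((4000000 / 9600) << 1) | CPM_BRG_EN, i.e. a divisor
+ * of 416 in the count field with the BRG enabled.
+ */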
+
+/* This function is used by UARTS, or anything else that uses a 16x
+ * oversampled clock.
+ */
+void
+cpm2_setbrg(uint brg, uint rate)
+{
+ volatile uint *bp;
+
+ /* This is good enough to get SMCs running.....
+ */
+ if (brg < 4) {
+ bp = (uint *)&cpm2_immr->im_brgc1;
+ }
+ else {
+ bp = (uint *)&cpm2_immr->im_brgc5;
+ brg -= 4;
+ }
+ bp += brg;
+ *bp = ((BRG_UART_CLK / rate) << 1) | CPM_BRG_EN;
+}
+
+/* This function is used to set high speed synchronous baud rate
+ * clocks.
+ */
+void
+cpm2_fastbrg(uint brg, uint rate, int div16)
+{
+ volatile uint *bp;
+
+ if (brg < 4) {
+ bp = (uint *)&cpm2_immr->im_brgc1;
+ }
+ else {
+ bp = (uint *)&cpm2_immr->im_brgc5;
+ brg -= 4;
+ }
+ bp += brg;
+ *bp = ((BRG_INT_CLK / rate) << 1) | CPM_BRG_EN;
+ if (div16)
+ *bp |= CPM_BRG_DIV16;
+}
+
+/*
+ * dpalloc / dpfree bits.
+ */
+static spinlock_t cpm_dpmem_lock;
+/* 16 blocks should be enough to satisfy all requests
+ * until the memory subsystem goes up... */
+static rh_block_t cpm_boot_dpmem_rh_block[16];
+static rh_info_t cpm_dpmem_info;
+
+static void cpm2_dpinit(void)
+{
+ void *dprambase = &((cpm2_map_t *)CPM_MAP_ADDR)->im_dprambase;
+
+ spin_lock_init(&cpm_dpmem_lock);
+
+ /* initialize the info header */
+ rh_init(&cpm_dpmem_info, 1,
+ sizeof(cpm_boot_dpmem_rh_block) /
+ sizeof(cpm_boot_dpmem_rh_block[0]),
+ cpm_boot_dpmem_rh_block);
+
+ /* Attach the usable dpmem area */
+ /* XXX: This is actually crap. CPM_DATAONLY_BASE and
+ * CPM_DATAONLY_SIZE are only a subset of the available dpram. It
+ * varies with the processor and the microcode patches activated.
+ * But the following should be at least safe.
+ */
+ rh_attach_region(&cpm_dpmem_info, dprambase + CPM_DATAONLY_BASE,
+ CPM_DATAONLY_SIZE);
+}
+
+/* This function used to return an index into the DPRAM area.
+ * Now it returns the actual physical address of that area.
+ * Use cpm2_dpram_offset() to get the index.
+ */
+void *cpm2_dpalloc(uint size, uint align)
+{
+ void *start;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cpm_dpmem_lock, flags);
+ cpm_dpmem_info.alignment = align;
+ start = rh_alloc(&cpm_dpmem_info, size, "commproc");
+ spin_unlock_irqrestore(&cpm_dpmem_lock, flags);
+
+ return start;
+}
+EXPORT_SYMBOL(cpm2_dpalloc);
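+
+/* A hypothetical usage sketch: a driver allocating an 8-entry buffer
+ * descriptor ring, 8-byte aligned, in dual-ported RAM, then converting
+ * the address into the index that goes into the parameter RAM:
+ *
+ * cbd_t *ring = (cbd_t *)cpm2_dpalloc(8 * sizeof(cbd_t), 8);
+ * uint pram_off = cpm2_dpram_offset(ring);
+ */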
+
+int cpm2_dpfree(void *addr)
+{
+ int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cpm_dpmem_lock, flags);
+ ret = rh_free(&cpm_dpmem_info, addr);
+ spin_unlock_irqrestore(&cpm_dpmem_lock, flags);
+
+ return ret;
+}
+EXPORT_SYMBOL(cpm2_dpfree);
+
+/* not sure if this is ever needed */
+void *cpm2_dpalloc_fixed(void *addr, uint size, uint align)
+{
+ void *start;
+ unsigned long flags;
+
+ spin_lock_irqsave(&cpm_dpmem_lock, flags);
+ cpm_dpmem_info.alignment = align;
+ start = rh_alloc_fixed(&cpm_dpmem_info, addr, size, "commproc");
+ spin_unlock_irqrestore(&cpm_dpmem_lock, flags);
+
+ return start;
+}
+EXPORT_SYMBOL(cpm2_dpalloc_fixed);
+
+void cpm2_dpdump(void)
+{
+ rh_dump(&cpm_dpmem_info);
+}
+EXPORT_SYMBOL(cpm2_dpdump);
+
+uint cpm2_dpram_offset(void *addr)
+{
+ return (uint)((u_char *)addr -
+ ((uint)((cpm2_map_t *)CPM_MAP_ADDR)->im_dprambase));
+}
+EXPORT_SYMBOL(cpm2_dpram_offset);
+
+void *cpm2_dpram_addr(int offset)
+{
+ return (void *)&((cpm2_map_t *)CPM_MAP_ADDR)->im_dprambase[offset];
+}
+EXPORT_SYMBOL(cpm2_dpram_addr);
--- /dev/null
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <asm/irq.h>
+#include <asm/immap_cpm2.h>
+#include <asm/mpc8260.h>
+#include "cpm2_pic.h"
+
+/* The CPM2 internal interrupt controller. It is usually
+ * the only interrupt controller.
+ * There are two 32-bit registers (high/low) for up to 64
+ * possible interrupts.
+ *
+ * Now, the fun starts.....Interrupt Numbers DO NOT MAP
+ * in a simple arithmetic fashion to mask or pending registers.
+ * That is, interrupt 4 does not map to bit position 4.
+ * We create two tables, indexed by vector number, to indicate
+ * which register to use and which bit in the register to use.
+ */
+static u_char irq_to_siureg[] = {
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0
+};
+
+static u_char irq_to_siubit[] = {
+ 31, 16, 17, 18, 19, 20, 21, 22,
+ 23, 24, 25, 26, 27, 28, 29, 30,
+ 29, 30, 16, 17, 18, 19, 20, 21,
+ 22, 23, 24, 25, 26, 27, 28, 31,
+ 0, 1, 2, 3, 4, 5, 6, 7,
+ 8, 9, 10, 11, 12, 13, 14, 15,
+ 15, 14, 13, 12, 11, 10, 9, 8,
+ 7, 6, 5, 4, 3, 2, 1, 0
+};
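+
+/* Example: vector 2 has irq_to_siureg[2] == 1 and irq_to_siubit[2] == 17,
+ * so masking IRQ 2 clears bit (1 << (31 - 17)) in simr[1], i.e. in the
+ * low mask register SIMR_L, which follows ic_simrh in the register map.
+ */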
+
+static void cpm2_mask_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+}
+
+static void cpm2_unmask_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] |= (1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+}
+
+static void cpm2_mask_and_ack(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr, *sipnr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ sipnr = &(cpm2_immr->im_intctl.ic_sipnrh);
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+ sipnr[word] = 1 << (31 - bit);
+}
+
+static void cpm2_end_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
+ && irq_desc[irq_nr].action) {
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] |= (1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+ }
+}
+
+struct hw_interrupt_type cpm2_pic = {
+ " CPM2 SIU ",
+ NULL,
+ NULL,
+ cpm2_unmask_irq,
+ cpm2_mask_irq,
+ cpm2_mask_and_ack,
+ cpm2_end_irq,
+ 0
+};
+
+
+int
+cpm2_get_irq(struct pt_regs *regs)
+{
+ int irq;
+ unsigned long bits;
+
+ /* For CPM2, read the SIVEC register and shift the bits down
+ * to get the irq number. */
+ bits = cpm2_immr->im_intctl.ic_sivec;
+ irq = bits >> 26;
+
+ if (irq == 0)
+ return(-1);
+#if 0
+ irq += ppc8260_pic.irq_offset;
+#endif
+ return irq;
+}
+
--- /dev/null
+#ifndef _PPC_KERNEL_CPM2_H
+#define _PPC_KERNEL_CPM2_H
+
+#include <linux/irq.h>
+
+extern struct hw_interrupt_type cpm2_pic;
+
+void cpm2_pic_init(void);
+void cpm2_do_IRQ(struct pt_regs *regs,
+ int cpu);
+int cpm2_get_irq(struct pt_regs *regs);
+
+#endif /* _PPC_KERNEL_CPM2_H */
--- /dev/null
+/*
+ * arch/ppc/syslib/dcr.S
+ *
+ * "Indirect" DCR access
+ *
+ * Copyright (c) 2004 Eugene Surovegin <ebs@ebshome.net>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <asm/ppc_asm.h>
+#include <asm/processor.h>
+
+#define DCR_ACCESS_PROLOG(table) \
+ rlwinm r3,r3,4,18,27; \
+ lis r5,table@h; \
+ ori r5,r5,table@l; \
+ add r3,r3,r5; \
+ mtctr r3; \
+ bctr
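+
+/* Each DCR number owns a 16-byte slot in the table below: the mfdcr/blr
+ * pair for DCR n sits at __mfdcr_table + n * 16, and the mtdcr/blr pair
+ * 8 bytes later, i.e. at __mtdcr_table + n * 16. The rlwinm above
+ * rotates the DCR number left by 4 (multiplying it by 16) and masks it
+ * to the 10-bit range, so add/mtctr/bctr land directly on the
+ * hard-coded instruction for that DCR.
+ */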
+
+_GLOBAL(__mfdcr)
+ DCR_ACCESS_PROLOG(__mfdcr_table)
+
+_GLOBAL(__mtdcr)
+ DCR_ACCESS_PROLOG(__mtdcr_table)
+
+__mfdcr_table:
+ mfdcr r3,0; blr
+__mtdcr_table:
+ mtdcr 0,r4; blr
+
+dcr = 1
+ .rept 1023
+ mfdcr r3,dcr; blr
+ mtdcr dcr,r4; blr
+ dcr = dcr + 1
+ .endr
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm440gx_common.c
+ *
+ * PPC440GX system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <asm/ibm44x.h>
+#include <asm/mmu.h>
+#include <asm/processor.h>
+#include <syslib/ibm440gx_common.h>
+
+/*
+ * Calculate 440GX clocks
+ */
+static inline u32 __fix_zero(u32 v, u32 def){
+ return v ? v : def;
+}
+
+void __init ibm440gx_get_clocks(struct ibm44x_clocks* p, unsigned int sys_clk,
+ unsigned int ser_clk)
+{
+ u32 pllc = CPR_READ(DCRN_CPR_PLLC);
+ u32 plld = CPR_READ(DCRN_CPR_PLLD);
+ u32 uart0 = SDR_READ(DCRN_SDR_UART0);
+ u32 uart1 = SDR_READ(DCRN_SDR_UART1);
+
+ /* Dividers */
+ u32 fbdv = __fix_zero((plld >> 24) & 0x1f, 32);
+ u32 fwdva = __fix_zero((plld >> 16) & 0xf, 16);
+ u32 fwdvb = __fix_zero((plld >> 8) & 7, 8);
+ u32 lfbdv = __fix_zero(plld & 0x3f, 64);
+ u32 pradv0 = __fix_zero((CPR_READ(DCRN_CPR_PRIMAD) >> 24) & 7, 8);
+ u32 prbdv0 = __fix_zero((CPR_READ(DCRN_CPR_PRIMBD) >> 24) & 7, 8);
+ u32 opbdv0 = __fix_zero((CPR_READ(DCRN_CPR_OPBD) >> 24) & 3, 4);
+ u32 perdv0 = __fix_zero((CPR_READ(DCRN_CPR_PERD) >> 24) & 3, 4);
+
+ /* Input clocks for primary dividers */
+ u32 clk_a, clk_b;
+
+ if (pllc & 0x40000000){
+ u32 m;
+
+ /* Feedback path */
+ switch ((pllc >> 24) & 7){
+ case 0:
+ /* PLLOUTx */
+ m = ((pllc & 0x20000000) ? fwdvb : fwdva) * lfbdv;
+ break;
+ case 1:
+ /* CPU */
+ m = fwdva * pradv0;
+ break;
+ case 5:
+ /* PERClk */
+ m = fwdvb * prbdv0 * opbdv0 * perdv0;
+ break;
+ default:
+ printk(KERN_EMERG "invalid PLL feedback source\n");
+ goto bypass;
+ }
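+ /* VCO = sys_clk * FBDV * (dividers along the selected feedback
+ * path), per the 440GX clocking diagram */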
+ m *= fbdv;
+ p->vco = sys_clk * m;
+ clk_a = p->vco / fwdva;
+ clk_b = p->vco / fwdvb;
+ }
+ else {
+bypass:
+ /* Bypass system PLL */
+ p->vco = 0;
+ clk_a = clk_b = sys_clk;
+ }
+
+ p->cpu = clk_a / pradv0;
+ p->plb = clk_b / prbdv0;
+ p->opb = p->plb / opbdv0;
+ p->ebc = p->opb / perdv0;
+
+ /* UARTs clock */
+ if (uart0 & 0x00800000)
+ p->uart0 = ser_clk;
+ else
+ p->uart0 = p->plb / __fix_zero(uart0 & 0xff, 256);
+
+ if (uart1 & 0x00800000)
+ p->uart1 = ser_clk;
+ else
+ p->uart1 = p->plb / __fix_zero(uart1 & 0xff, 256);
+}
+
+/* Enable L2 cache (call with IRQs disabled) */
+void __init ibm440gx_l2c_enable(void){
+ u32 r;
+
+ asm volatile ("sync" ::: "memory");
+
+ /* Disable SRAM */
+ mtdcr(DCRN_SRAM0_DPC, mfdcr(DCRN_SRAM0_DPC) & ~SRAM_DPC_ENABLE);
+ mtdcr(DCRN_SRAM0_SB0CR, mfdcr(DCRN_SRAM0_SB0CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB1CR, mfdcr(DCRN_SRAM0_SB1CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB2CR, mfdcr(DCRN_SRAM0_SB2CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB3CR, mfdcr(DCRN_SRAM0_SB3CR) & ~SRAM_SBCR_BU_MASK);
+
+ /* Enable L2_MODE without ICU/DCU */
+ r = mfdcr(DCRN_L2C0_CFG) & ~(L2C_CFG_ICU | L2C_CFG_DCU | L2C_CFG_SS_MASK);
+ r |= L2C_CFG_L2M | L2C_CFG_SS_256;
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ mtdcr(DCRN_L2C0_ADDR, 0);
+
+ /* Hardware Clear Command */
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_HCC);
+ while (!(mfdcr(DCRN_L2C0_SR) & L2C_SR_CC)) ;
+
+ /* Clear Cache Parity and Tag Errors */
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_CCP | L2C_CMD_CTE);
+
+ /* Enable 64G snoop region starting at 0 */
+ r = mfdcr(DCRN_L2C0_SNP0) & ~(L2C_SNP_BA_MASK | L2C_SNP_SSR_MASK);
+ r |= L2C_SNP_SSR_32G | L2C_SNP_ESR;
+ mtdcr(DCRN_L2C0_SNP0, r);
+
+ r = mfdcr(DCRN_L2C0_SNP1) & ~(L2C_SNP_BA_MASK | L2C_SNP_SSR_MASK);
+ r |= 0x80000000 | L2C_SNP_SSR_32G | L2C_SNP_ESR;
+ mtdcr(DCRN_L2C0_SNP1, r);
+
+ asm volatile ("sync" ::: "memory");
+
+ /* Enable ICU/DCU ports */
+ r = mfdcr(DCRN_L2C0_CFG);
+ r &= ~(L2C_CFG_DCW_MASK | L2C_CFG_CPIM | L2C_CFG_TPIM | L2C_CFG_LIM
+ | L2C_CFG_PMUX_MASK | L2C_CFG_PMIM | L2C_CFG_TPEI | L2C_CFG_CPEI
+ | L2C_CFG_NAM | L2C_CFG_NBRM);
+ r |= L2C_CFG_ICU | L2C_CFG_DCU | L2C_CFG_TPC | L2C_CFG_CPC | L2C_CFG_FRAN
+ | L2C_CFG_SMCM;
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ asm volatile ("sync; isync" ::: "memory");
+}
+
+/* Disable L2 cache (call with IRQs disabled) */
+void __init ibm440gx_l2c_disable(void){
+ u32 r;
+
+ asm volatile ("sync" ::: "memory");
+
+ /* Disable L2C mode */
+ r = mfdcr(DCRN_L2C0_CFG) & ~(L2C_CFG_L2M | L2C_CFG_ICU | L2C_CFG_DCU);
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ /* Enable SRAM */
+ mtdcr(DCRN_SRAM0_DPC, mfdcr(DCRN_SRAM0_DPC) | SRAM_DPC_ENABLE);
+ mtdcr(DCRN_SRAM0_SB0CR,
+ SRAM_SBCR_BAS0 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB1CR,
+ SRAM_SBCR_BAS1 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB2CR,
+ SRAM_SBCR_BAS2 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB3CR,
+ SRAM_SBCR_BAS3 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+
+ asm volatile ("sync; isync" ::: "memory");
+}
+
+int __init ibm440gx_get_eth_grp(void)
+{
+ return (SDR_READ(DCRN_SDR_PFC1) & DCRN_SDR_PFC1_EPS) >> DCRN_SDR_PFC1_EPS_SHIFT;
+}
+
+void __init ibm440gx_set_eth_grp(int group)
+{
+ SDR_WRITE(DCRN_SDR_PFC1, (SDR_READ(DCRN_SDR_PFC1) & ~DCRN_SDR_PFC1_EPS) | (group << DCRN_SDR_PFC1_EPS_SHIFT));
+}
+
+void __init ibm440gx_tah_enable(void)
+{
+ /* Enable TAH0 and TAH1 */
+ SDR_WRITE(DCRN_SDR_MFR, SDR_READ(DCRN_SDR_MFR) & ~DCRN_SDR_MFR_TAH0);
+ SDR_WRITE(DCRN_SDR_MFR, SDR_READ(DCRN_SDR_MFR) & ~DCRN_SDR_MFR_TAH1);
+}
+
+int ibm440gx_show_cpuinfo(struct seq_file *m){
+
+ u32 l2c_cfg = mfdcr(DCRN_L2C0_CFG);
+ const char* s;
+ if (l2c_cfg & L2C_CFG_L2M){
+ switch (l2c_cfg & (L2C_CFG_ICU | L2C_CFG_DCU)){
+ case L2C_CFG_ICU: s = "I-Cache only"; break;
+ case L2C_CFG_DCU: s = "D-Cache only"; break;
+ default: s = "I-Cache/D-Cache"; break;
+ }
+ }
+ else
+ s = "disabled";
+
+ seq_printf(m, "L2-Cache\t: %s (0x%08x 0x%08x)\n", s,
+ l2c_cfg, mfdcr(DCRN_L2C0_SR));
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm440gx_common.h
+ *
+ * PPC440GX system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#ifdef __KERNEL__
+#ifndef __PPC_SYSLIB_IBM440GX_COMMON_H
+#define __PPC_SYSLIB_IBM440GX_COMMON_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <syslib/ibm44x_common.h>
+
+/*
+ * Please refer to Figure 14.1 in the 440GX user manual
+ *
+ * if internal UART clock is used, ser_clk is ignored
+ */
+void ibm440gx_get_clocks(struct ibm44x_clocks*, unsigned int sys_clk,
+ unsigned int ser_clk) __init;
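+
+/*
+ * Minimal usage sketch for board setup code (the 33.33MHz system and
+ * 11.059MHz serial input clocks are illustrative values only):
+ *
+ *	struct ibm44x_clocks clocks;
+ *	ibm440gx_get_clocks(&clocks, 33333333, 11059200);
+ *	... clocks.cpu/.plb/.opb/.ebc/.uart0/.uart1 now hold Hz values ...
+ */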
+
+/* Enable L2 cache */
+void ibm440gx_l2c_enable(void) __init;
+
+/* Disable L2 cache */
+void ibm440gx_l2c_disable(void) __init;
+
+/* Get Ethernet Group */
+int ibm440gx_get_eth_grp(void) __init;
+
+/* Set Ethernet Group */
+void ibm440gx_set_eth_grp(int group) __init;
+
+/* Enable TAH devices */
+void ibm440gx_tah_enable(void) __init;
+
+/* Add L2C info to /proc/cpuinfo */
+int ibm440gx_show_cpuinfo(struct seq_file*);
+
+#endif /* __ASSEMBLY__ */
+#endif /* __PPC_SYSLIB_IBM440GX_COMMON_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm44x_common.h
+ *
+ * PPC44x system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#ifdef __KERNEL__
+#ifndef __PPC_SYSLIB_IBM44x_COMMON_H
+#define __PPC_SYSLIB_IBM44x_COMMON_H
+
+#ifndef __ASSEMBLY__
+
+/*
+ * All clocks are in Hz
+ */
+struct ibm44x_clocks {
+ unsigned int vco; /* VCO, 0 if system PLL is bypassed */
+ unsigned int cpu; /* CPUCoreClk */
+ unsigned int plb; /* PLBClk */
+ unsigned int opb; /* OPBClk */
+ unsigned int ebc; /* PerClk */
+ unsigned int uart0;
+ unsigned int uart1;
+};
+
+#endif /* __ASSEMBLY__ */
+#endif /* __PPC_SYSLIB_IBM44x_COMMON_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * (C) Copyright 2003
+ * Wolfgang Denk, DENX Software Engineering, wd@denx.de.
+ *
+ * (C) Copyright 2004 Red Hat, Inc.
+ *
+ * See file CREDITS for list of people who contributed to this
+ * project.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+
+#include <asm/byteorder.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/machdep.h>
+#include <asm/pci-bridge.h>
+#include <asm/immap_cpm2.h>
+#include <asm/mpc8260.h>
+
+#include "m8260_pci.h"
+
+
+/* PCI bus configuration registers.
+ */
+
+static void __init m8260_setup_pci(struct pci_controller *hose)
+{
+ volatile cpm2_map_t *immap = cpm2_immr;
+ unsigned long pocmr;
+ u16 tempShort;
+
+#ifndef CONFIG_ATC /* already done in U-Boot */
+ /*
+ * Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]),
+ * and local bus for PCI (SIUMCR [LBPC]).
+ */
+ immap->im_siu_conf.siu_82xx.sc_siumcr = 0x00640000;
+#endif
+
+ /* Make PCI lowest priority */
+ /* Each 4-bit field is one bus-request source; the most significant
+ 4 bits hold the highest-priority source */
+ /* Bus 4bit value
+ --- ----------
+ CPM high 0b0000
+ CPM middle 0b0001
+ CPM low 0b0010
+ PCI request 0b0011
+ Reserved 0b0100
+ Reserved 0b0101
+ Internal Core 0b0110
+ External Master 1 0b0111
+ External Master 2 0b1000
+ External Master 3 0b1001
+ The rest are reserved */
+ immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893;
+
+ /* Park bus on core while modifying PCI Bus accesses */
+ immap->im_siu_conf.siu_82xx.sc_ppc_acr = 0x6;
+
+ /*
+ * Set up the master window that allows the CPU to access PCI space. This
+ * window is set up using the first pair of SIU PCI registers (PCIMSK0/PCIBR0).
+ */
+ immap->im_memctl.memc_pcimsk0 = MPC826x_PCI_MASK;
+ immap->im_memctl.memc_pcibr0 = MPC826x_PCI_BASE | PCIBR_ENABLE;
+
+ /* Disable machine check on no response or target abort */
+ immap->im_pci.pci_emr = cpu_to_le32(0x1fe7);
+ /* Release PCI RST (by default the PCI RST signal is held low) */
+ immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN);
+
+ /* give it some time */
+ mdelay(1);
+
+ /*
+ * Set up master window that allows the CPU to access PCI Memory (prefetch)
+ * space. This window is set up using the first set of Outbound ATU registers.
+ */
+ immap->im_pci.pci_potar0 = cpu_to_le32(MPC826x_PCI_LOWER_MEM >> 12);
+ immap->im_pci.pci_pobar0 = cpu_to_le32((MPC826x_PCI_LOWER_MEM - MPC826x_PCI_MEM_OFFSET) >> 12);
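+ /*
+ * POCMR takes the inverted window-size mask in 4KB units: with the
+ * default 512MB window, (0x9fffffff - 0x80000000) >> 12 = 0x1ffff,
+ * and 0x1ffff ^ 0xfffff = 0xe0000.
+ */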
+ pocmr = ((MPC826x_PCI_UPPER_MEM - MPC826x_PCI_LOWER_MEM) >> 12) ^ 0xfffff;
+ immap->im_pci.pci_pocmr0 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PREFETCH_EN);
+
+ /*
+ * Set up master window that allows the CPU to access PCI Memory (non-prefetch)
+ * space. This window is set up using the second set of Outbound ATU registers.
+ */
+ immap->im_pci.pci_potar1 = cpu_to_le32(MPC826x_PCI_LOWER_MMIO >> 12);
+ immap->im_pci.pci_pobar1 = cpu_to_le32((MPC826x_PCI_LOWER_MMIO - MPC826x_PCI_MMIO_OFFSET) >> 12);
+ pocmr = ((MPC826x_PCI_UPPER_MMIO - MPC826x_PCI_LOWER_MMIO) >> 12) ^ 0xfffff;
+ immap->im_pci.pci_pocmr1 = cpu_to_le32(pocmr | POCMR_ENABLE);
+
+ /*
+ * Set up master window that allows the CPU to access PCI IO space. This window
+ * is set up using the third set of Outbound ATU registers.
+ */
+ immap->im_pci.pci_potar2 = cpu_to_le32(MPC826x_PCI_IO_BASE >> 12);
+ immap->im_pci.pci_pobar2 = cpu_to_le32(MPC826x_PCI_LOWER_IO >> 12);
+ pocmr = ((MPC826x_PCI_UPPER_IO - MPC826x_PCI_LOWER_IO) >> 12) ^ 0xfffff;
+ immap->im_pci.pci_pocmr2 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PCI_IO);
+
+ /*
+ * Set up slave window that allows PCI masters to access MPC826x local memory.
+ * This window is set up using the first set of Inbound ATU registers
+ */
+
+ immap->im_pci.pci_pitar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_LOCAL >> 12);
+ immap->im_pci.pci_pibar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_BUS >> 12);
+ pocmr = ((MPC826x_PCI_SLAVE_MEM_SIZE-1) >> 12) ^ 0xfffff;
+ immap->im_pci.pci_picmr0 = cpu_to_le32(pocmr | PICMR_ENABLE | PICMR_PREFETCH_EN);
+
+ /* See above for description - puts PCI request as highest priority */
+ immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567;
+
+ /* Park the bus on the PCI */
+ immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI;
+
+ /* Host mode - specify the bridge as a host-PCI bridge */
+ early_write_config_word(hose, 0, 0, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_HOST);
+
+ /* Enable the host bridge to be a master on the PCI bus, and to act as a PCI memory target */
+ early_read_config_word(hose, 0, 0, PCI_COMMAND, &tempShort);
+ early_write_config_word(hose, 0, 0, PCI_COMMAND,
+ tempShort | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY);
+}
+
+void __init m8260_find_bridges(void)
+{
+ extern int pci_assign_all_busses;
+ struct pci_controller * hose;
+
+ pci_assign_all_busses = 1;
+
+ hose = pcibios_alloc_controller();
+
+ if (!hose)
+ return;
+
+ ppc_md.pci_swizzle = common_swizzle;
+
+ hose->first_busno = 0;
+ hose->bus_offset = 0;
+ hose->last_busno = 0xff;
+
+ setup_m8260_indirect_pci(hose,
+ (unsigned long)&cpm2_immr->im_pci.pci_cfg_addr,
+ (unsigned long)&cpm2_immr->im_pci.pci_cfg_data);
+
+ m8260_setup_pci(hose);
+ hose->pci_mem_offset = MPC826x_PCI_MEM_OFFSET;
+
+ isa_io_base =
+ (unsigned long) ioremap(MPC826x_PCI_IO_BASE,
+ MPC826x_PCI_IO_SIZE);
+ hose->io_base_virt = (void *) isa_io_base;
+
+ /* setup resources */
+ pci_init_resource(&hose->mem_resources[0],
+ MPC826x_PCI_LOWER_MEM,
+ MPC826x_PCI_UPPER_MEM,
+ IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory");
+
+ pci_init_resource(&hose->mem_resources[1],
+ MPC826x_PCI_LOWER_MMIO,
+ MPC826x_PCI_UPPER_MMIO,
+ IORESOURCE_MEM, "PCI memory");
+
+ pci_init_resource(&hose->io_resource,
+ MPC826x_PCI_LOWER_IO,
+ MPC826x_PCI_UPPER_IO,
+ IORESOURCE_IO, "PCI I/O");
+}
--- /dev/null
+
+#ifndef _PPC_KERNEL_M8260_PCI_H
+#define _PPC_KERNEL_M8260_PCI_H
+
+#include <asm/m8260_pci.h>
+
+/*
+ * Local->PCI map (from CPU) controlled by
+ * MPC826x master window
+ *
+ * 0x80000000 - 0xBFFFFFFF Total CPU2PCI space PCIBR0
+ *
+ * 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1)
+ * 0xA0000000 - 0xAFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2)
+ * 0xB0000000 - 0xB0FFFFFF 32-bit PCI IO (Outbound ATU #3)
+ *
+ * PCI->Local map (from PCI), controlled by the
+ * MPC826x slave window
+ *
+ * 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1)
+ */
+
+/*
+ * Slave window that allows PCI masters to access MPC826x local memory.
+ * This window is set up using the first set of Inbound ATU registers
+ */
+
+#ifndef MPC826x_PCI_SLAVE_MEM_LOCAL
+#define MPC826x_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart)
+#define MPC826x_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart)
+#define MPC826x_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize)
+#endif
+
+/*
+ * This is the window that allows the CPU to access PCI address space.
+ * It will be setup with the SIU PCIBR0 register. All three PCI master
+ * windows, which allow the CPU to access PCI prefetch, non prefetch,
+ * and IO space (see below), must all fit within this window.
+ */
+#ifndef MPC826x_PCI_BASE
+#define MPC826x_PCI_BASE 0x80000000
+#define MPC826x_PCI_MASK 0xc0000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_MEM
+#define MPC826x_PCI_LOWER_MEM 0x80000000
+#define MPC826x_PCI_UPPER_MEM 0x9fffffff
+#define MPC826x_PCI_MEM_OFFSET 0x00000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_MMIO
+#define MPC826x_PCI_LOWER_MMIO 0xa0000000
+#define MPC826x_PCI_UPPER_MMIO 0xafffffff
+#define MPC826x_PCI_MMIO_OFFSET 0x00000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_IO
+#define MPC826x_PCI_LOWER_IO 0x00000000
+#define MPC826x_PCI_UPPER_IO 0x00ffffff
+#define MPC826x_PCI_IO_BASE 0xb0000000
+#define MPC826x_PCI_IO_SIZE 0x01000000
+#endif
+
+#ifndef _IO_BASE
+#define _IO_BASE isa_io_base
+#endif
+
+#ifdef CONFIG_8260_PCI9
+extern void setup_m8260_indirect_pci(struct pci_controller* hose,
+ u32 cfg_addr, u32 cfg_data);
+#else
+#define setup_m8260_indirect_pci setup_indirect_pci
+#endif
+
+#endif /* _PPC_KERNEL_M8260_PCI_H */
--- /dev/null
+/*
+ * arch/ppc/platforms/mpc8260_pci9.c
+ *
+ * Workaround for device erratum PCI 9.
+ * See Motorola's "XPC826xA Family Device Errata Reference."
+ * The erratum applies to all 8260 family HiP4 processors. It is scheduled
+ * to be fixed in HiP4 Rev C. Erratum PCI 9 states that a simultaneous PCI
+ * inbound write transaction and PCI outbound read transaction can result in a
+ * bus deadlock. The suggested workaround is to use the IDMA controller to
+ * perform all reads from PCI configuration, memory, and I/O space.
+ *
+ * Author: andy_lowe@mvista.com
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/kernel.h>
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include <asm/io.h>
+#include <asm/pci-bridge.h>
+#include <asm/machdep.h>
+#include <asm/byteorder.h>
+#include <asm/mpc8260.h>
+#include <asm/immap_cpm2.h>
+#include <asm/cpm2.h>
+
+#include "m8260_pci.h"
+
+#ifdef CONFIG_8260_PCI9
+/*#include <asm/mpc8260_pci9.h>*/ /* included in asm/io.h */
+
+#define IDMA_XFER_BUF_SIZE 64 /* size of the IDMA transfer buffer */
+
+/* define a structure for the IDMA dpram usage */
+typedef struct idma_dpram_s {
+ idma_t pram; /* IDMA parameter RAM */
+ u_char xfer_buf[IDMA_XFER_BUF_SIZE]; /* IDMA transfer buffer */
+ idma_bd_t bd; /* buffer descriptor */
+} idma_dpram_t;
+
+/* define offsets relative to start of IDMA dpram */
+#define IDMA_XFER_BUF_OFFSET (sizeof(idma_t))
+#define IDMA_BD_OFFSET (sizeof(idma_t) + IDMA_XFER_BUF_SIZE)
+
+/* define globals */
+static volatile idma_dpram_t *idma_dpram;
+
+/* Exactly one of CONFIG_8260_PCI9_IDMAn must be defined,
+ * where n is 1, 2, 3, or 4. This selects the IDMA channel used for
+ * the PCI9 workaround.
+ */
+#ifdef CONFIG_8260_PCI9_IDMA1
+#define IDMA_CHAN 0
+#define PROFF_IDMA PROFF_IDMA1_BASE
+#define IDMA_PAGE CPM_CR_IDMA1_PAGE
+#define IDMA_SBLOCK CPM_CR_IDMA1_SBLOCK
+#endif
+#ifdef CONFIG_8260_PCI9_IDMA2
+#define IDMA_CHAN 1
+#define PROFF_IDMA PROFF_IDMA2_BASE
+#define IDMA_PAGE CPM_CR_IDMA2_PAGE
+#define IDMA_SBLOCK CPM_CR_IDMA2_SBLOCK
+#endif
+#ifdef CONFIG_8260_PCI9_IDMA3
+#define IDMA_CHAN 2
+#define PROFF_IDMA PROFF_IDMA3_BASE
+#define IDMA_PAGE CPM_CR_IDMA3_PAGE
+#define IDMA_SBLOCK CPM_CR_IDMA3_SBLOCK
+#endif
+#ifdef CONFIG_8260_PCI9_IDMA4
+#define IDMA_CHAN 3
+#define PROFF_IDMA PROFF_IDMA4_BASE
+#define IDMA_PAGE CPM_CR_IDMA4_PAGE
+#define IDMA_SBLOCK CPM_CR_IDMA4_SBLOCK
+#endif
+
+void idma_pci9_init(void)
+{
+ uint dpram_offset;
+ volatile idma_t *pram;
+ volatile im_idma_t *idma_reg;
+ volatile cpm2_map_t *immap = cpm2_immr;
+
+ /* allocate IDMA dpram */
+ dpram_offset = cpm2_dpalloc(sizeof(idma_dpram_t), 64);
+ idma_dpram =
+ (volatile idma_dpram_t *)&immap->im_dprambase[dpram_offset];
+
+ /* initialize the IDMA parameter RAM */
+ memset((void *)idma_dpram, 0, sizeof(idma_dpram_t));
+ pram = &idma_dpram->pram;
+ pram->ibase = dpram_offset + IDMA_BD_OFFSET;
+ pram->dpr_buf = dpram_offset + IDMA_XFER_BUF_OFFSET;
+ pram->ss_max = 32;
+ pram->dts = 32;
+
+ /* initialize the IDMA_BASE pointer to the IDMA parameter RAM */
+ *((ushort *) &immap->im_dprambase[PROFF_IDMA]) = dpram_offset;
+
+ /* initialize the IDMA registers */
+ idma_reg = (volatile im_idma_t *) &immap->im_sdma.sdma_idsr1;
+ idma_reg[IDMA_CHAN].idmr = 0; /* mask all IDMA interrupts */
+ idma_reg[IDMA_CHAN].idsr = 0xff; /* clear all event flags */
+
+ printk("<4>Using IDMA%d for MPC8260 device erratum PCI 9 workaround\n",
+ IDMA_CHAN + 1);
+
+ return;
+}
+
+/* Use the IDMA controller to transfer data from I/O memory to local RAM.
+ * The src address must be a physical address suitable for use by the DMA
+ * controller with no translation. The dst address must be a kernel virtual
+ * address. The dst address is translated to a physical address via
+ * virt_to_phys().
+ * The sinc argument specifies whether or not the source address is incremented
+ * by the DMA controller. The source address is incremented if and only if sinc
+ * is non-zero. The destination address is always incremented since the
+ * destination is always host RAM.
+ */
+static void
+idma_pci9_read(u8 *dst, u8 *src, int bytes, int unit_size, int sinc)
+{
+ unsigned long flags;
+ volatile idma_t *pram = &idma_dpram->pram;
+ volatile idma_bd_t *bd = &idma_dpram->bd;
+ volatile cpm2_map_t *immap = cpm2_immr;
+
+ local_irq_save(flags);
+
+ /* initialize IDMA parameter RAM for this transfer */
+ if (sinc)
+ pram->dcm = IDMA_DCM_DMA_WRAP_64 | IDMA_DCM_SINC
+ | IDMA_DCM_DINC | IDMA_DCM_SD_MEM2MEM;
+ else
+ pram->dcm = IDMA_DCM_DMA_WRAP_64 | IDMA_DCM_DINC
+ | IDMA_DCM_SD_MEM2MEM;
+ pram->ibdptr = pram->ibase;
+ pram->sts = unit_size;
+ pram->istate = 0;
+
+ /* initialize the buffer descriptor */
+ bd->dst = virt_to_phys(dst);
+ bd->src = (uint) src;
+ bd->len = bytes;
+ bd->flags = IDMA_BD_V | IDMA_BD_W | IDMA_BD_I | IDMA_BD_L | IDMA_BD_DGBL
+ | IDMA_BD_DBO_BE | IDMA_BD_SBO_BE | IDMA_BD_SDTB;
+
+ /* issue the START_IDMA command to the CP */
+ while (immap->im_cpm.cp_cpcr & CPM_CR_FLG);
+ immap->im_cpm.cp_cpcr = mk_cr_cmd(IDMA_PAGE, IDMA_SBLOCK, 0,
+ CPM_CR_START_IDMA) | CPM_CR_FLG;
+ while (immap->im_cpm.cp_cpcr & CPM_CR_FLG);
+
+ /* wait for transfer to complete */
+ while(bd->flags & IDMA_BD_V);
+
+ local_irq_restore(flags);
+
+ return;
+}
+
+/* Use the IDMA controller to transfer data from local RAM to I/O memory.
+ * The dst address must be a physical address suitable for use by the DMA
+ * controller with no translation. The src address must be a kernel virtual
+ * address. The src address is translated to a physical address via
+ * virt_to_phys().
+ * The dinc argument specifies whether or not the destination address is
+ * incremented by the DMA controller. The destination address is incremented
+ * if and only if dinc is non-zero. The source address is always incremented
+ * since the source is always host RAM.
+ */
+static void
+idma_pci9_write(u8 *dst, u8 *src, int bytes, int unit_size, int dinc)
+{
+ unsigned long flags;
+ volatile idma_t *pram = &idma_dpram->pram;
+ volatile idma_bd_t *bd = &idma_dpram->bd;
+ volatile cpm2_map_t *immap = cpm2_immr;
+
+ local_irq_save(flags);
+
+ /* initialize IDMA parameter RAM for this transfer */
+ if (dinc)
+ pram->dcm = IDMA_DCM_DMA_WRAP_64 | IDMA_DCM_SINC
+ | IDMA_DCM_DINC | IDMA_DCM_SD_MEM2MEM;
+ else
+ pram->dcm = IDMA_DCM_DMA_WRAP_64 | IDMA_DCM_SINC
+ | IDMA_DCM_SD_MEM2MEM;
+ pram->ibdptr = pram->ibase;
+ pram->sts = unit_size;
+ pram->istate = 0;
+
+ /* initialize the buffer descriptor */
+ bd->dst = (uint) dst;
+ bd->src = virt_to_phys(src);
+ bd->len = bytes;
+ bd->flags = IDMA_BD_V | IDMA_BD_W | IDMA_BD_I | IDMA_BD_L | IDMA_BD_DGBL
+ | IDMA_BD_DBO_BE | IDMA_BD_SBO_BE | IDMA_BD_SDTB;
+
+ /* issue the START_IDMA command to the CP */
+ while (immap->im_cpm.cp_cpcr & CPM_CR_FLG);
+ immap->im_cpm.cp_cpcr = mk_cr_cmd(IDMA_PAGE, IDMA_SBLOCK, 0,
+ CPM_CR_START_IDMA) | CPM_CR_FLG;
+ while (immap->im_cpm.cp_cpcr & CPM_CR_FLG);
+
+ /* wait for transfer to complete */
+ while(bd->flags & IDMA_BD_V);
+
+ local_irq_restore(flags);
+
+ return;
+}
+
+/* Same as idma_pci9_read, but 16-bit little-endian byte swapping is performed
+ * if the unit_size is 2, and 32-bit little-endian byte swapping is performed if
+ * the unit_size is 4.
+ */
+static void
+idma_pci9_read_le(u8 *dst, u8 *src, int bytes, int unit_size, int sinc)
+{
+ int i;
+ u8 *p;
+
+ idma_pci9_read(dst, src, bytes, unit_size, sinc);
+ switch(unit_size) {
+ case 2:
+ for (i = 0, p = dst; i < bytes; i += 2, p += 2)
+ swab16s((u16 *) p);
+ break;
+ case 4:
+ for (i = 0, p = dst; i < bytes; i += 4, p += 4)
+ swab32s((u32 *) p);
+ break;
+ default:
+ break;
+ }
+}
+EXPORT_SYMBOL(idma_pci9_init);
+EXPORT_SYMBOL(idma_pci9_read);
+EXPORT_SYMBOL(idma_pci9_read_le);
+
+static inline int is_pci_mem(unsigned long addr)
+{
+ if (addr >= MPC826x_PCI_LOWER_MMIO &&
+ addr <= MPC826x_PCI_UPPER_MMIO)
+ return 1;
+ if (addr >= MPC826x_PCI_LOWER_MEM &&
+ addr <= MPC826x_PCI_UPPER_MEM)
+ return 1;
+ return 0;
+}
+
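+/* The macro below shadows the inline above with a faster open-coded
+ * check covering both PCI memory windows; note that, unlike the
+ * inline, it excludes 0x80000000 itself.
+ */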
+#define is_pci_mem(pa) ( (pa > 0x80000000) && (pa < 0xc0000000))
+int readb(volatile unsigned char *addr)
+{
+ u8 val;
+ unsigned long pa = iopa((unsigned long) addr);
+
+ if (!is_pci_mem(pa))
+ return in_8(addr);
+
+ idma_pci9_read((u8 *)&val, (u8 *)pa, sizeof(val), sizeof(val), 0);
+ return val;
+}
+
+int readw(volatile unsigned short *addr)
+{
+ u16 val;
+ unsigned long pa = iopa((unsigned long) addr);
+
+ if (!is_pci_mem(pa))
+ return in_le16(addr);
+
+ idma_pci9_read((u8 *)&val, (u8 *)pa, sizeof(val), sizeof(val), 0);
+ return swab16(val);
+}
+
+unsigned readl(volatile unsigned *addr)
+{
+ u32 val;
+ unsigned long pa = iopa((unsigned long) addr);
+
+ if (!is_pci_mem(pa))
+ return in_le32(addr);
+
+ idma_pci9_read((u8 *)&val, (u8 *)pa, sizeof(val), sizeof(val), 0);
+ return swab32(val);
+}
+
+int inb(unsigned port)
+{
+ u8 val;
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)&val, (u8 *)addr, sizeof(val), sizeof(val), 0);
+ return val;
+}
+
+int inw(unsigned port)
+{
+ u16 val;
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)&val, (u8 *)addr, sizeof(val), sizeof(val), 0);
+ return swab16(val);
+}
+
+unsigned inl(unsigned port)
+{
+ u32 val;
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)&val, (u8 *)addr, sizeof(val), sizeof(val), 0);
+ return swab32(val);
+}
+
+void insb(unsigned port, void *buf, int ns)
+{
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)buf, (u8 *)addr, ns*sizeof(u8), sizeof(u8), 0);
+}
+
+void insw(unsigned port, void *buf, int ns)
+{
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)buf, (u8 *)addr, ns*sizeof(u16), sizeof(u16), 0);
+}
+
+void insl(unsigned port, void *buf, int nl)
+{
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)buf, (u8 *)addr, nl*sizeof(u32), sizeof(u32), 0);
+}
+
+void insw_ns(unsigned port, void *buf, int ns)
+{
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)buf, (u8 *)addr, ns*sizeof(u16), sizeof(u16), 0);
+}
+
+void insl_ns(unsigned port, void *buf, int nl)
+{
+ u8 *addr = (u8 *)(port + _IO_BASE);
+
+ idma_pci9_read((u8 *)buf, (u8 *)addr, nl*sizeof(u32), sizeof(u32), 0);
+}
+
+void *memcpy_fromio(void *dest, unsigned long src, size_t count)
+{
+ unsigned long pa = iopa((unsigned long) src);
+
+ if (is_pci_mem(pa))
+ idma_pci9_read((u8 *)dest, (u8 *)pa, count, 32, 1);
+ else
+ memcpy(dest, (void *)src, count);
+ return dest;
+}
+
+EXPORT_SYMBOL(readb);
+EXPORT_SYMBOL(readw);
+EXPORT_SYMBOL(readl);
+EXPORT_SYMBOL(inb);
+EXPORT_SYMBOL(inw);
+EXPORT_SYMBOL(inl);
+EXPORT_SYMBOL(insb);
+EXPORT_SYMBOL(insw);
+EXPORT_SYMBOL(insl);
+EXPORT_SYMBOL(insw_ns);
+EXPORT_SYMBOL(insl_ns);
+EXPORT_SYMBOL(memcpy_fromio);
+
+#endif /* ifdef CONFIG_8260_PCI9 */
+
+/* Indirect PCI routines adapted from arch/ppc/kernel/indirect_pci.c.
+ * Copyright (C) 1998 Gabriel Paubert.
+ */
+#ifndef CONFIG_8260_PCI9
+#define cfg_read(val, addr, type, op) *val = op((type)(addr))
+#else
+#define cfg_read(val, addr, type, op) \
+ idma_pci9_read_le((u8*)(val),(u8*)(addr),sizeof(*(val)),sizeof(*(val)),0)
+#endif
+
+#define cfg_write(val, addr, type, op) op((type *)(addr), (val))
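+
+/* Example expansions for the non-PCI9 case:
+ *	cfg_read(val, addr, u8 *, in_8)  ->  *val = in_8((u8 *)(addr));
+ *	cfg_write(val, addr, u8, out_8)  ->  out_8((u8 *)(addr), (val));
+ * With CONFIG_8260_PCI9 set, reads go through idma_pci9_read_le()
+ * instead, per the erratum workaround above.
+ */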
+
+static int indirect_write_config(struct pci_bus *pbus, unsigned int devfn, int where,
+ int size, u32 value)
+{
+ struct pci_controller *hose = pbus->sysdata;
+ u8 cfg_type = 0;
+ if (ppc_md.pci_exclude_device)
+ if (ppc_md.pci_exclude_device(pbus->number, devfn))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ if (hose->set_cfg_type)
+ if (pbus->number != hose->first_busno)
+ cfg_type = 1;
+
+ out_be32(hose->cfg_addr,
+ (((where & 0xfc) | cfg_type) << 24) | (devfn << 16)
+ | ((pbus->number - hose->bus_offset) << 8) | 0x80);
+
+ switch (size)
+ {
+ case 1:
+ cfg_write(value, hose->cfg_data + (where & 3), u8, out_8);
+ break;
+ case 2:
+ cfg_write(value, hose->cfg_data + (where & 2), u16, out_le16);
+ break;
+ case 4:
+ cfg_write(value, hose->cfg_data + (where & 0), u32, out_le32);
+ break;
+ }
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int indirect_read_config(struct pci_bus *pbus, unsigned int devfn, int where,
+ int size, u32 *value)
+{
+ struct pci_controller *hose = pbus->sysdata;
+ u8 cfg_type = 0;
+ if (ppc_md.pci_exclude_device)
+ if (ppc_md.pci_exclude_device(pbus->number, devfn))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ if (hose->set_cfg_type)
+ if (pbus->number != hose->first_busno)
+ cfg_type = 1;
+
+ out_be32(hose->cfg_addr,
+ (((where & 0xfc) | cfg_type) << 24) | (devfn << 16)
+ | ((pbus->number - hose->bus_offset) << 8) | 0x80);
+
+ switch (size)
+ {
+ case 1:
+ cfg_read(value, hose->cfg_data + (where & 3), u8 *, in_8);
+ break;
+ case 2:
+ cfg_read(value, hose->cfg_data + (where & 2), u16 *, in_le16);
+ break;
+ case 4:
+ cfg_read(value, hose->cfg_data + (where & 0), u32 *, in_le32);
+ break;
+ }
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static struct pci_ops indirect_pci_ops =
+{
+ .read = indirect_read_config,
+ .write = indirect_write_config,
+};
+
+void
+setup_m8260_indirect_pci(struct pci_controller* hose, u32 cfg_addr, u32 cfg_data)
+{
+ hose->ops = &indirect_pci_ops;
+ hose->cfg_addr = (unsigned int *) ioremap(cfg_addr, 4);
+ hose->cfg_data = (unsigned char *) ioremap(cfg_data, 4);
+}
--- /dev/null
+/*
+ * arch/ppc/syslib/mpc52xx_pic.c
+ *
+ * Programmable Interrupt Controller functions for the Freescale MPC52xx
+ * embedded CPU.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based on (well, mostly copied from) the code from the 2.4 kernel by
+ * Dale Farnsworth <dfarnsworth@mvista.com> and Kent Borg.
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 Montavista Software, Inc
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+#include <asm/irq.h>
+#include <asm/mpc52xx.h>
+
+
+static struct mpc52xx_intr *intr;
+static struct mpc52xx_sdma *sdma;
+
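+/*
+ * IRQ number layout used by the helpers below (bases come from
+ * asm/mpc52xx.h):
+ *
+ *	MPC52xx_IRQ0..MPC52xx_IRQ3	external IRQ pins
+ *	MPC52xx_CRIT_IRQ_BASE + n	critical interrupts
+ *	MPC52xx_MAIN_IRQ_BASE + n	"main" interrupts
+ *	MPC52xx_SDMA_IRQ_BASE + n	BestComm/SDMA interrupts
+ *	MPC52xx_PERP_IRQ_BASE + n	peripheral interrupts
+ */
+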
+static void
+mpc52xx_ic_disable(unsigned int irq)
+{
+ u32 val;
+
+ if (irq == MPC52xx_IRQ0) {
+ val = in_be32(&intr->ctrl);
+ val &= ~(1 << 11);
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_IRQ1) {
+ BUG();
+ }
+ else if (irq <= MPC52xx_IRQ3) {
+ val = in_be32(&intr->ctrl);
+ val &= ~(1 << (10 - (irq - MPC52xx_IRQ1)));
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_SDMA_IRQ_BASE) {
+ val = in_be32(&intr->main_mask);
+ val |= 1 << (16 - (irq - MPC52xx_MAIN_IRQ_BASE));
+ out_be32(&intr->main_mask, val);
+ }
+ else if (irq < MPC52xx_PERP_IRQ_BASE) {
+ val = in_be32(&sdma->IntMask);
+ val |= 1 << (irq - MPC52xx_SDMA_IRQ_BASE);
+ out_be32(&sdma->IntMask, val);
+ }
+ else {
+ val = in_be32(&intr->per_mask);
+ val |= 1 << (31 - (irq - MPC52xx_PERP_IRQ_BASE));
+ out_be32(&intr->per_mask, val);
+ }
+}
+
+static void
+mpc52xx_ic_enable(unsigned int irq)
+{
+ u32 val;
+
+ if (irq == MPC52xx_IRQ0) {
+ val = in_be32(&intr->ctrl);
+ val |= 1 << 11;
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_IRQ1) {
+ BUG();
+ }
+ else if (irq <= MPC52xx_IRQ3) {
+ val = in_be32(&intr->ctrl);
+ val |= 1 << (10 - (irq - MPC52xx_IRQ1));
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_SDMA_IRQ_BASE) {
+ val = in_be32(&intr->main_mask);
+ val &= ~(1 << (16 - (irq - MPC52xx_MAIN_IRQ_BASE)));
+ out_be32(&intr->main_mask, val);
+ }
+ else if (irq < MPC52xx_PERP_IRQ_BASE) {
+ val = in_be32(&sdma->IntMask);
+ val &= ~(1 << (irq - MPC52xx_SDMA_IRQ_BASE));
+ out_be32(&sdma->IntMask, val);
+ }
+ else {
+ val = in_be32(&intr->per_mask);
+ val &= ~(1 << (31 - (irq - MPC52xx_PERP_IRQ_BASE)));
+ out_be32(&intr->per_mask, val);
+ }
+}
+
+static void
+mpc52xx_ic_ack(unsigned int irq)
+{
+ u32 val;
+
+ /*
+ * Only some IRQs are acknowledged here; the rest are cleared at the
+ * interrupting peripheral itself.
+ */
+
+ switch (irq) {
+ case MPC52xx_IRQ0:
+ val = in_be32(&intr->ctrl);
+ val |= 0x08000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_CCS_IRQ:
+ val = in_be32(&intr->enc_status);
+ val |= 0x00000400;
+ out_be32(&intr->enc_status, val);
+ break;
+ case MPC52xx_IRQ1:
+ val = in_be32(&intr->ctrl);
+ val |= 0x04000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_IRQ2:
+ val = in_be32(&intr->ctrl);
+ val |= 0x02000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_IRQ3:
+ val = in_be32(&intr->ctrl);
+ val |= 0x01000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ default:
+ if (irq >= MPC52xx_SDMA_IRQ_BASE
+ && irq < (MPC52xx_SDMA_IRQ_BASE + MPC52xx_SDMA_IRQ_NUM)) {
+ out_be32(&sdma->IntPend,
+ 1 << (irq - MPC52xx_SDMA_IRQ_BASE));
+ }
+ break;
+ }
+}
+
+static void
+mpc52xx_ic_disable_and_ack(unsigned int irq)
+{
+ mpc52xx_ic_disable(irq);
+ mpc52xx_ic_ack(irq);
+}
+
+static void
+mpc52xx_ic_end(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
+ mpc52xx_ic_enable(irq);
+}
+
+static struct hw_interrupt_type mpc52xx_ic = {
+ "MPC52xx",
+ NULL, /* startup(irq) */
+ NULL, /* shutdown(irq) */
+ mpc52xx_ic_enable, /* enable(irq) */
+ mpc52xx_ic_disable, /* disable(irq) */
+ mpc52xx_ic_disable_and_ack, /* disable_and_ack(irq) */
+ mpc52xx_ic_end, /* end(irq) */
+ 0 /* set_affinity(irq, cpumask) SMP. */
+};
+
+void __init
+mpc52xx_init_irq(void)
+{
+ int i;
+
+ /* Remap the necessary zones */
+ intr = (struct mpc52xx_intr *)
+ ioremap(MPC52xx_INTR, sizeof(struct mpc52xx_intr));
+ sdma = (struct mpc52xx_sdma *)
+ ioremap(MPC52xx_SDMA, sizeof(struct mpc52xx_sdma));
+
+ if ((intr == NULL) || (sdma == NULL))
+ panic("Can't ioremap PIC/SDMA registers for init_irq!");
+
+ /* Disable all interrupt sources. */
+ out_be32(&sdma->IntPend, 0xffffffff); /* 1 means clear pending */
+ out_be32(&sdma->IntMask, 0xffffffff); /* 1 means disabled */
+ out_be32(&intr->per_mask, 0x7ffffc00); /* 1 means disabled */
+ out_be32(&intr->main_mask, 0x00010fff); /* 1 means disabled */
+ out_be32(&intr->ctrl,
+ 0x0f000000 | /* clear IRQ 0-3 */
+ 0x00c00000 | /* IRQ0: level-sensitive, active low */
+ 0x00001000 | /* MEE master external enable */
+ 0x00000000 | /* 0 means disable IRQ 0-3 */
+ 0x00000001); /* CEb route critical normally */
+
+ /* Zero a bunch of the priority settings. */
+ out_be32(&intr->per_pri1, 0);
+ out_be32(&intr->per_pri2, 0);
+ out_be32(&intr->per_pri3, 0);
+ out_be32(&intr->main_pri1, 0);
+ out_be32(&intr->main_pri2, 0);
+
+ /* Initialize irq_desc[i].handler's with mpc52xx_ic. */
+ for (i = 0; i < NR_IRQS; i++) {
+ irq_desc[i].handler = &mpc52xx_ic;
+ irq_desc[i].status = IRQ_LEVEL;
+ }
+}
+
+int
+mpc52xx_get_irq(struct pt_regs *regs)
+{
+ u32 status;
+ int irq = -1;
+
+ status = in_be32(&intr->enc_status);
+
+ if (status & 0x00000400) { /* critical */
+ irq = (status >> 8) & 0x3;
+ if (irq == 2) /* high priority peripheral */
+ goto peripheral;
+ irq += MPC52xx_CRIT_IRQ_BASE;
+ }
+ else if (status & 0x00200000) { /* main */
+ irq = (status >> 16) & 0x1f;
+ if (irq == 4) /* low priority peripheral */
+ goto peripheral;
+ irq += MPC52xx_MAIN_IRQ_BASE;
+ }
+ else if (status & 0x20000000) { /* peripheral */
+peripheral:
+ irq = (status >> 24) & 0x1f;
+ if (irq == 0) { /* bestcomm */
+ status = in_be32(&sdma->IntPend);
+ irq = ffs(status) + MPC52xx_SDMA_IRQ_BASE-1;
+ }
+ else
+ irq += MPC52xx_PERP_IRQ_BASE;
+ }
+
+ return irq;
+}
+
--- /dev/null
+/*
+ * arch/ppc/syslib/mpc52xx_common.c
+ *
+ * Common code for the boards based on Freescale MPC52xx embedded CPU.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Support for other bootloaders than UBoot by Dale Farnsworth
+ * <dfarnsworth@mvista.com>
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 Montavista Software, Inc
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+
+#include <asm/time.h>
+#include <asm/mpc52xx.h>
+#include <asm/mpc52xx_psc.h>
+#include <asm/ocp.h>
+#include <asm/ppcboot.h>
+
+extern bd_t __res;
+
+static int core_mult[] = { /* CPU Frequency multiplier, taken */
+ 0, 0, 0, 10, 20, 20, 25, 45, /* from the datasheet used to compute */
+ 30, 55, 40, 50, 0, 60, 35, 0, /* CPU frequency from XLB freq and */
+ 30, 25, 65, 10, 70, 20, 75, 45, /* external jumper config */
+ 0, 55, 40, 50, 80, 60, 35, 0
+};
+
+void
+mpc52xx_restart(char *cmd)
+{
+ struct mpc52xx_gpt* gpt0 = (struct mpc52xx_gpt*) MPC52xx_GPTx(0);
+
+ local_irq_disable();
+
+ /* Turn on the watchdog and wait for it to expire. It effectively
+ does a reset */
+ if (gpt0 != NULL) {
+ out_be32(&gpt0->count, 0x000000ff);
+ out_be32(&gpt0->mode, 0x00009004);
+ } else
+ printk(KERN_ERR "mpc52xx_restart: Unable to ioremap GPT0 registers, -> looping ...");
+
+ while (1);
+}
+
+void
+mpc52xx_halt(void)
+{
+ local_irq_disable();
+
+ while (1);
+}
+
+void
+mpc52xx_power_off(void)
+{
+ /* By default we have no way to power the board down. A board
+ that does can hook its hardware-specific power-off code in
+ here */
+ mpc52xx_halt();
+}
+
+
+void __init
+mpc52xx_set_bat(void)
+{
+ /* Set BAT 2 to map the 0xf0000000 area */
+ /* This mapping is used during mpc52xx_progress,
+ * mpc52xx_find_end_of_memory, and UARTs/GPIO access for debug
+ */
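+ /* 0xf0001ffe/0xf000002a decode (standard 60x BAT format) to:
+ * 256MB at 0xf0000000, supervisor-only, read/write,
+ * cache-inhibited and guarded.
+ */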
+ mb();
+ mtspr(DBAT2U, 0xf0001ffe);
+ mtspr(DBAT2L, 0xf000002a);
+ mb();
+}
+
+void __init
+mpc52xx_map_io(void)
+{
+ /* Here we only map the MBAR */
+ io_block_mapping(
+ MPC52xx_MBAR_VIRT, MPC52xx_MBAR, MPC52xx_MBAR_SIZE, _PAGE_IO);
+}
+
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+#ifdef MPC52xx_PF_CONSOLE_PORT
+#define MPC52xx_CONSOLE MPC52xx_PSCx(MPC52xx_PF_CONSOLE_PORT)
+#else
+#error "mpc52xx PSC for console not selected"
+#endif
+
+void
+mpc52xx_progress(char *s, unsigned short hex)
+{
+ struct mpc52xx_psc *psc = (struct mpc52xx_psc *)MPC52xx_CONSOLE;
+ char c;
+
+ /* Don't we need to disable serial interrupts ? */
+
+ while ((c = *s++) != 0) {
+ if (c == '\n') {
+ while (!(in_be16(&psc->mpc52xx_psc_status) &
+ MPC52xx_PSC_SR_TXRDY)) ;
+ out_8(&psc->mpc52xx_psc_buffer_8, '\r');
+ }
+ while (!(in_be16(&psc->mpc52xx_psc_status) &
+ MPC52xx_PSC_SR_TXRDY)) ;
+ out_8(&psc->mpc52xx_psc_buffer_8, c);
+ }
+}
+
+#endif /* CONFIG_SERIAL_TEXT_DEBUG */
+
+
+unsigned long __init
+mpc52xx_find_end_of_memory(void)
+{
+ u32 ramsize = __res.bi_memsize;
+
+ /*
+ * if bootloader passed a memsize, just use it
+ * else get size from sdram config registers
+ */
+ if (ramsize == 0) {
+ struct mpc52xx_mmap_ctl *mmap_ctl;
+ u32 sdram_config_0, sdram_config_1;
+
+ /* Temp BAT2 mapping active when this is called ! */
+ mmap_ctl = (struct mpc52xx_mmap_ctl*) MPC52xx_MMAP_CTL;
+
+ sdram_config_0 = in_be32(&mmap_ctl->sdram0);
+ sdram_config_1 = in_be32(&mmap_ctl->sdram1);
+
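+ /* The size field encodes a power of two, e.g. a setting of
+ 0x19 gives 1 << (9 + 17) = 64MB; values below 0x13 are
+ treated as no usable bank */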
+ if ((sdram_config_0 & 0x1f) >= 0x13)
+ ramsize = 1 << ((sdram_config_0 & 0xf) + 17);
+
+ if (((sdram_config_1 & 0x1f) >= 0x13) &&
+ ((sdram_config_1 & 0xfff00000) == ramsize))
+ ramsize += 1 << ((sdram_config_1 & 0xf) + 17);
+
+ /* no iounmap() needed: the registers were reached through
+ the temporary BAT2 mapping, not ioremap() */
+ }
+
+ return ramsize;
+}
+
+void __init
+mpc52xx_calibrate_decr(void)
+{
+ int current_time, previous_time;
+ int tbl_start, tbl_end;
+ unsigned int xlbfreq, cpufreq, ipbfreq, pcifreq, divisor;
+
+ xlbfreq = __res.bi_busfreq;
+ /* if bootloader didn't pass bus frequencies, calculate them */
+ if (xlbfreq == 0) {
+ /* Get RTC & Clock manager modules */
+ struct mpc52xx_rtc *rtc;
+ struct mpc52xx_cdm *cdm;
+
+ rtc = (struct mpc52xx_rtc*)
+ ioremap(MPC52xx_RTC, sizeof(struct mpc52xx_rtc));
+ cdm = (struct mpc52xx_cdm*)
+ ioremap(MPC52xx_CDM, sizeof(struct mpc52xx_cdm));
+
+ if ((rtc==NULL) || (cdm==NULL))
+ panic("Can't ioremap RTC/CDM while computing bus freq");
+
+ /* Count bus clock during 1/64 sec */
+ out_be32(&rtc->dividers, 0x8f1f0000); /* Set RTC 64x faster */
+ previous_time = in_be32(&rtc->time);
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_start = get_tbl();
+ previous_time = current_time;
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_end = get_tbl();
+ out_be32(&rtc->dividers, 0xffff0000); /* Restore RTC */
+
+ /* Compute all the frequencies from the measurement & CDM settings */
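+ /* The timebase advances once per 4 XLB clocks and we gated for
+ 1/64s, hence xlbfreq = ticks * 4 * 64 = ticks << 8 */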
+ xlbfreq = (tbl_end - tbl_start) << 8;
+ cpufreq = (xlbfreq * core_mult[in_be32(&cdm->rstcfg)&0x1f])/10;
+ ipbfreq = (in_8(&cdm->ipb_clk_sel) & 1) ?
+ xlbfreq / 2 : xlbfreq;
+ switch (in_8(&cdm->pci_clk_sel) & 3) {
+ case 0:
+ pcifreq = ipbfreq;
+ break;
+ case 1:
+ pcifreq = ipbfreq / 2;
+ break;
+ default:
+ pcifreq = xlbfreq / 4;
+ break;
+ }
+ __res.bi_busfreq = xlbfreq;
+ __res.bi_intfreq = cpufreq;
+ __res.bi_ipbfreq = ipbfreq;
+ __res.bi_pcifreq = pcifreq;
+
+ /* Release mapping */
+ iounmap((void*)rtc);
+ iounmap((void*)cdm);
+ }
+
+ divisor = 4;
+
+ tb_ticks_per_jiffy = xlbfreq / HZ / divisor;
+ tb_to_us = mulhwu_scale_factor(xlbfreq / divisor, 1000000);
+}
+
+
+void __init
+mpc52xx_add_board_devices(struct ocp_def board_ocp[])
+{
+ while (board_ocp->vendor != OCP_VENDOR_INVALID)
+ if (ocp_add_one_device(board_ocp++))
+ printk("mpc5200-ocp: Failed to add board device!\n");
+}
+
--- /dev/null
+/*
+ * ocp.c
+ *
+ * (c) Benjamin Herrenschmidt (benh@kernel.crashing.org)
+ * Mipsys - France
+ *
+ * Derived from work (c) Armin Kuster akuster@pacbell.net
+ *
+ * Additional support and port to 2.6 LDM/sysfs by
+ * Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * OCP (On Chip Peripheral) is a software emulated "bus" with a
+ * pseudo discovery method for dumb peripherals. Usually these types
+ * of peripherals are found on embedded SoC (System On a Chip)
+ * processors or highly integrated system controllers that have
+ * a host bridge and many peripherals. Common examples where
+ * this is already used include the PPC4xx, PPC85xx, MPC52xx,
+ * and MV64xxx parts.
+ *
+ * This subsystem creates a standard OCP bus type within the
+ * device model. The devices on the OCP bus are seeded by an
+ * initial OCP device array created by the arch-specific code.
+ * Device entries can be added/removed/modified through OCP
+ * helper functions to accommodate system and board-specific
+ * parameters commonly found in embedded systems. OCP also
+ * provides a standard method for devices to describe extended
+ * attributes about themselves to the system. A standard access
+ * method allows OCP drivers to obtain the information, both
+ * SoC-specific and system/board-specific, needed for operation.
+ */
+
+#include <linux/module.h>
+#include <linux/config.h>
+#include <linux/list.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/pm.h>
+#include <linux/bootmem.h>
+#include <linux/device.h>
+
+#include <asm/io.h>
+#include <asm/ocp.h>
+#include <asm/errno.h>
+#include <asm/rwsem.h>
+#include <asm/semaphore.h>
+
+//#define DBG(x) printk x
+#define DBG(x)
+
+extern int mem_init_done;
+
+extern struct ocp_def core_ocp[]; /* Static list of devices, provided by
+ CPU core */
+
+LIST_HEAD(ocp_devices); /* List of all OCP devices */
+DECLARE_RWSEM(ocp_devices_sem); /* Global semaphore protecting that list */
+
+static int ocp_inited;
+
+/* Sysfs support */
+#define OCP_DEF_ATTR(field, format_string) \
+static ssize_t \
+show_##field(struct device *dev, char *buf) \
+{ \
+ struct ocp_device *odev = to_ocp_dev(dev); \
+ \
+ return sprintf(buf, format_string, odev->def->field); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
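+
+/* For instance, OCP_DEF_ATTR(irq, "%d\n") expands to a show_irq()
+ * that prints odev->def->irq plus a read-only sysfs attribute
+ * dev_attr_irq.
+ */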
+
+OCP_DEF_ATTR(vendor, "0x%04x\n");
+OCP_DEF_ATTR(function, "0x%04x\n");
+OCP_DEF_ATTR(index, "0x%04x\n");
+#ifdef CONFIG_PTE_64BIT
+OCP_DEF_ATTR(paddr, "0x%016Lx\n");
+#else
+OCP_DEF_ATTR(paddr, "0x%08lx\n");
+#endif
+OCP_DEF_ATTR(irq, "%d\n");
+OCP_DEF_ATTR(pm, "%lu\n");
+
+void ocp_create_sysfs_dev_files(struct ocp_device *odev)
+{
+ struct device *dev = &odev->dev;
+
+ /* Current OCP device def attributes */
+ device_create_file(dev, &dev_attr_vendor);
+ device_create_file(dev, &dev_attr_function);
+ device_create_file(dev, &dev_attr_index);
+ device_create_file(dev, &dev_attr_paddr);
+ device_create_file(dev, &dev_attr_irq);
+ device_create_file(dev, &dev_attr_pm);
+ /* Current OCP device additions attributes */
+ if (odev->def->additions && odev->def->show)
+ odev->def->show(dev);
+}
+
+/**
+ * ocp_device_match - Match one driver to one device
+ * @drv: driver to match
+ * @dev: device to match
+ *
+ * Returns 1 if the driver and device match, 0 otherwise.
+ */
+static int
+ocp_device_match(struct device *dev, struct device_driver *drv)
+{
+ struct ocp_device *ocp_dev = to_ocp_dev(dev);
+ struct ocp_driver *ocp_drv = to_ocp_drv(drv);
+ const struct ocp_device_id *ids = ocp_drv->id_table;
+
+ if (!ids)
+ return 0;
+
+ while (ids->vendor || ids->function) {
+ if ((ids->vendor == OCP_ANY_ID
+ || ids->vendor == ocp_dev->def->vendor)
+ && (ids->function == OCP_ANY_ID
+ || ids->function == ocp_dev->def->function))
+ return 1;
+ ids++;
+ }
+ return 0;
+}
+
+static int
+ocp_device_probe(struct device *dev)
+{
+ int error = 0;
+ struct ocp_driver *drv;
+ struct ocp_device *ocp_dev;
+
+ drv = to_ocp_drv(dev->driver);
+ ocp_dev = to_ocp_dev(dev);
+
+ if (drv->probe) {
+ error = drv->probe(ocp_dev);
+ if (error >= 0) {
+ ocp_dev->driver = drv;
+ error = 0;
+ }
+ }
+ return error;
+}
+
+static int
+ocp_device_remove(struct device *dev)
+{
+ struct ocp_device *ocp_dev = to_ocp_dev(dev);
+
+ if (ocp_dev->driver) {
+ if (ocp_dev->driver->remove)
+ ocp_dev->driver->remove(ocp_dev);
+ ocp_dev->driver = NULL;
+ }
+ return 0;
+}
+
+static int
+ocp_device_suspend(struct device *dev, u32 state)
+{
+ struct ocp_device *ocp_dev = to_ocp_dev(dev);
+ struct ocp_driver *ocp_drv = to_ocp_drv(dev->driver);
+
+ if (dev->driver && ocp_drv->suspend)
+ return ocp_drv->suspend(ocp_dev, state);
+ return 0;
+}
+
+static int
+ocp_device_resume(struct device *dev)
+{
+ struct ocp_device *ocp_dev = to_ocp_dev(dev);
+ struct ocp_driver *ocp_drv = to_ocp_drv(dev->driver);
+
+ if (dev->driver && ocp_drv->resume)
+ return ocp_drv->resume(ocp_dev);
+ return 0;
+}
+
+struct bus_type ocp_bus_type = {
+ .name = "ocp",
+ .match = ocp_device_match,
+ .suspend = ocp_device_suspend,
+ .resume = ocp_device_resume,
+};
+
+/**
+ * ocp_register_driver - Register an OCP driver
+ * @drv: pointer to statically defined ocp_driver structure
+ *
+ * The driver's probe() callback is called by the driver core for
+ * each matching device, either during this call or when
+ * ocp_driver_init() runs later.
+ *
+ * NOTE: Detection of devices is a 2 pass step on this implementation,
+ * hotswap isn't supported. First, all OCP devices are put in the device
+ * list, _then_ all drivers are probed on each match.
+ */
+int
+ocp_register_driver(struct ocp_driver *drv)
+{
+ /* initialize common driver fields */
+ drv->driver.name = drv->name;
+ drv->driver.bus = &ocp_bus_type;
+ drv->driver.probe = ocp_device_probe;
+ drv->driver.remove = ocp_device_remove;
+
+ /* register with core */
+ return driver_register(&drv->driver);
+}
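+
+/*
+ * Minimal client sketch (all "foo" names are illustrative only):
+ *
+ *	static struct ocp_device_id foo_ids[] = {
+ *		{ .vendor = OCP_ANY_ID, .function = OCP_FUNC_EMAC },
+ *		{ .vendor = OCP_VENDOR_INVALID }
+ *	};
+ *
+ *	static struct ocp_driver foo_driver = {
+ *		.name		= "foo",
+ *		.id_table	= foo_ids,
+ *		.probe		= foo_probe,
+ *		.remove		= foo_remove,
+ *	};
+ *
+ *	static int __init foo_init(void)
+ *	{
+ *		return ocp_register_driver(&foo_driver);
+ *	}
+ */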
+
+/**
+ * ocp_unregister_driver - Unregister an OCP driver
+ * @drv: pointer to statically defined ocp_driver structure
+ *
+ * The driver's remove() callback is called by the driver core
+ * for every device currently bound to this driver.
+ */
+void
+ocp_unregister_driver(struct ocp_driver *drv)
+{
+ DBG(("ocp: ocp_unregister_driver(%s)...\n", drv->name));
+
+ driver_unregister(&drv->driver);
+
+ DBG(("ocp: ocp_unregister_driver(%s)... done.\n", drv->name));
+}
+
+/* Core of ocp_find_device(). Caller must hold ocp_devices_sem */
+static struct ocp_device *
+__ocp_find_device(unsigned int vendor, unsigned int function, int index)
+{
+ struct list_head *entry;
+ struct ocp_device *dev, *found = NULL;
+
+ DBG(("ocp: __ocp_find_device(vendor: %x, function: %x, index: %d)...\n", vendor, function, index));
+
+ list_for_each(entry, &ocp_devices) {
+ dev = list_entry(entry, struct ocp_device, link);
+ if (vendor != OCP_ANY_ID && vendor != dev->def->vendor)
+ continue;
+ if (function != OCP_ANY_ID && function != dev->def->function)
+ continue;
+ if (index != OCP_ANY_INDEX && index != dev->def->index)
+ continue;
+ found = dev;
+ break;
+ }
+
+ DBG(("ocp: __ocp_find_device(vendor: %x, function: %x, index: %d)... done\n", vendor, function, index));
+
+ return found;
+}
+
+/**
+ * ocp_find_device - Find a device by function & index
+ * @vendor: vendor ID of the device (or OCP_ANY_ID)
+ * @function: function code of the device (or OCP_ANY_ID)
+ * @idx: index of the device (or OCP_ANY_INDEX)
+ *
+ * This function allows a lookup of a given function by its
+ * index; it's typically used to find the MAL or ZMII associated
+ * with an EMAC or similar horrors.
+ * You can pass a vendor, though you usually want OCP_ANY_ID there.
+ */
+struct ocp_device *
+ocp_find_device(unsigned int vendor, unsigned int function, int index)
+{
+ struct ocp_device *dev;
+
+ down_read(&ocp_devices_sem);
+ dev = __ocp_find_device(vendor, function, index);
+ up_read(&ocp_devices_sem);
+
+ return dev;
+}
+
+/**
+ * ocp_get_one_device - Find a def by function & index
+ * @vendor: vendor ID of the device (or OCP_ANY_ID)
+ * @function: function code of the device (or OCP_ANY_ID)
+ * @idx: index of the device (or OCP_ANY_INDEX)
+ *
+ * This function allows a lookup of a given ocp_def by its
+ * vendor, function, and index. The main purpose is to
+ * allow modification of the def before binding to the driver
+ */
+struct ocp_def *
+ocp_get_one_device(unsigned int vendor, unsigned int function, int index)
+{
+ struct ocp_device *dev;
+ struct ocp_def *found = NULL;
+
+ DBG(("ocp: ocp_get_one_device(vendor: %x, function: %x, index: %d)...\n",
+ vendor, function, index));
+
+ dev = ocp_find_device(vendor, function, index);
+
+ if (dev)
+ found = dev->def;
+
+ DBG(("ocp: ocp_get_one_device(vendor: %x, function: %x, index: %d)... done.\n",
+ vendor, function, index));
+
+ return found;
+}
+
+/**
+ * ocp_add_one_device - Add a device
+ * @def: static device definition structure
+ *
+ * This function adds a device definition to the
+ * device list. It may only be called before
+ * ocp_driver_init() and will return an error
+ * otherwise.
+ */
+int
+ocp_add_one_device(struct ocp_def *def)
+{
+ struct ocp_device *dev;
+
+ DBG(("ocp: ocp_add_one_device()...\n"));
+
+ /* Can't be called after ocp driver init */
+ if (ocp_inited)
+ return 1;
+
+ if (mem_init_done)
+ dev = kmalloc(sizeof(*dev), GFP_KERNEL);
+ else
+ dev = alloc_bootmem(sizeof(*dev));
+
+ if (dev == NULL)
+ return 1;
+ memset(dev, 0, sizeof(*dev));
+ dev->def = def;
+ dev->current_state = 4;
+ sprintf(dev->name, "OCP device %04x:%04x:%04x",
+ dev->def->vendor, dev->def->function, dev->def->index);
+ down_write(&ocp_devices_sem);
+ list_add_tail(&dev->link, &ocp_devices);
+ up_write(&ocp_devices_sem);
+
+ DBG(("ocp: ocp_add_one_device()...done\n"));
+
+ return 0;
+}
+
+/**
+ * ocp_remove_one_device - Remove a device by function & index
+ * @vendor: vendor ID of the device (or OCP_ANY_ID)
+ * @function: function code of the device (or OCP_ANY_ID)
+ * @idx: index of the device (or OCP_ANY_INDEX)
+ *
+ * This function allows removal of a given function by its
+ * index. It may only be called before ocp_driver_init()
+ * and will return an error otherwise.
+ */
+int
+ocp_remove_one_device(unsigned int vendor, unsigned int function, int index)
+{
+ struct ocp_device *dev;
+
+ DBG(("ocp: ocp_remove_one_device(vendor: %x, function: %x, index: %d)...\n", vendor, function, index));
+
+ /* Can't be called after ocp driver init */
+ if (ocp_inited)
+ return 1;
+
+ down_write(&ocp_devices_sem);
+ dev = __ocp_find_device(vendor, function, index);
+ if (dev != NULL)
+ list_del(&dev->link);
+ up_write(&ocp_devices_sem);
+
+ DBG(("ocp: ocp_remove_one_device(vendor: %x, function: %x, index: %d)... done.\n", vendor, function, index));
+
+ return 0;
+}
+
+/**
+ * ocp_for_each_device - Iterate over OCP devices
+ * @callback: routine to execute for each ocp device.
+ * @arg: user data to be passed to callback routine.
+ *
+ * This routine holds the ocp_device semaphore, so the
+ * callback routine cannot modify the ocp_device list.
+ */
+void
+ocp_for_each_device(void(*callback)(struct ocp_device *, void *arg), void *arg)
+{
+ struct list_head *entry;
+
+ if (callback) {
+ down_read(&ocp_devices_sem);
+ list_for_each(entry, &ocp_devices)
+ callback(list_entry(entry, struct ocp_device, link),
+ arg);
+ up_read(&ocp_devices_sem);
+ }
+}
+
+/**
+ * ocp_early_init - Init OCP device management
+ *
+ * This function builds the list of devices before setup_arch.
+ * This allows platform code to modify the device lists before
+ * they are bound to drivers (changes to paddr, removing devices
+ * etc)
+ */
+int __init
+ocp_early_init(void)
+{
+ struct ocp_def *def;
+
+ DBG(("ocp: ocp_early_init()...\n"));
+
+ /* Fill the devices list */
+ for (def = core_ocp; def->vendor != OCP_VENDOR_INVALID; def++)
+ ocp_add_one_device(def);
+
+ DBG(("ocp: ocp_early_init()... done.\n"));
+
+ return 0;
+}
+
+/**
+ * ocp_driver_init - Init OCP device management
+ *
+ * This function is meant to be called via OCP bus registration.
+ */
+static int __init
+ocp_driver_init(void)
+{
+ int ret = 0, index = 0;
+ struct device *ocp_bus;
+ struct list_head *entry;
+ struct ocp_device *dev;
+
+ if (ocp_inited)
+ return ret;
+ ocp_inited = 1;
+
+ DBG(("ocp: ocp_driver_init()...\n"));
+
+ /* Allocate/register primary OCP bus */
+ ocp_bus = kmalloc(sizeof(struct device), GFP_KERNEL);
+ if (ocp_bus == NULL)
+ return 1;
+ memset(ocp_bus, 0, sizeof(struct device));
+ strcpy(ocp_bus->bus_id, "ocp");
+
+ bus_register(&ocp_bus_type);
+
+ device_register(ocp_bus);
+
+ /* Put each OCP device into global device list */
+ list_for_each(entry, &ocp_devices) {
+ dev = list_entry(entry, struct ocp_device, link);
+ sprintf(dev->dev.bus_id, "%2.2x", index);
+ dev->dev.parent = ocp_bus;
+ dev->dev.bus = &ocp_bus_type;
+ device_register(&dev->dev);
+ ocp_create_sysfs_dev_files(dev);
+ index++;
+ }
+
+ DBG(("ocp: ocp_driver_init()... done.\n"));
+
+ return 0;
+}
+
+postcore_initcall(ocp_driver_init);
+
+EXPORT_SYMBOL(ocp_bus_type);
+EXPORT_SYMBOL(ocp_find_device);
+EXPORT_SYMBOL(ocp_register_driver);
+EXPORT_SYMBOL(ocp_unregister_driver);
--- /dev/null
+/*
+ * arch/ppc/kernel/ppc4xx_sgdma.c
+ *
+ * IBM PPC4xx DMA engine scatter/gather library
+ *
+ * Copyright 2002-2003 MontaVista Software Inc.
+ *
+ * Cleaned up and converted to new DCR access
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * Original code by Armin Kuster <akuster@mvista.com>
+ * and Pete Popov <ppopov@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/ppc4xx_dma.h>
+
+void
+ppc4xx_set_sg_addr(int dmanr, phys_addr_t sg_addr)
+{
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk("ppc4xx_set_sg_addr: bad channel: %d\n", dmanr);
+ return;
+ }
+
+#ifdef PPC4xx_DMA_64BIT
+ mtdcr(DCRN_ASGH0 + (dmanr * 0x8), (u32)(sg_addr >> 32));
+#endif
+ mtdcr(DCRN_ASG0 + (dmanr * 0x8), (u32)sg_addr);
+}
+
+/*
+ * Add a new sgl descriptor to the end of a scatter/gather list
+ * which was created by alloc_dma_handle().
+ *
+ * For a memory to memory transfer, both dma addresses must be
+ * valid. For a peripheral to memory transfer, one of the addresses
+ * must be set to NULL, depending on the direction of the transfer:
+ * memory to peripheral: set dst_addr to NULL,
+ * peripheral to memory: set src_addr to NULL.
+ */
+int
+ppc4xx_add_dma_sgl(sgl_handle_t handle, phys_addr_t src_addr, phys_addr_t dst_addr,
+ unsigned int count)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+ ppc_dma_ch_t *p_dma_ch;
+
+ if (!handle) {
+ printk("ppc4xx_add_dma_sgl: null handle\n");
+ return DMA_STATUS_BAD_HANDLE;
+ }
+
+ if (psgl->dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk("ppc4xx_add_dma_sgl: bad channel: %d\n", psgl->dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+
+ p_dma_ch = &dma_channels[psgl->dmanr];
+
+#ifdef DEBUG_4xxDMA
+ {
+ int error = 0;
+ unsigned int aligned =
+ (unsigned) src_addr | (unsigned) dst_addr | count;
+ switch (p_dma_ch->pwidth) {
+ case PW_8:
+ break;
+ case PW_16:
+ if (aligned & 0x1)
+ error = 1;
+ break;
+ case PW_32:
+ if (aligned & 0x3)
+ error = 1;
+ break;
+ case PW_64:
+ if (aligned & 0x7)
+ error = 1;
+ break;
+ default:
+ printk("ppc4xx_add_dma_sgl: invalid bus width: 0x%x\n",
+ p_dma_ch->pwidth);
+ return DMA_STATUS_GENERAL_ERROR;
+ }
+ if (error)
+ printk
+ ("Alignment warning: ppc4xx_add_dma_sgl src 0x%x dst 0x%x count 0x%x bus width var %d\n",
+ src_addr, dst_addr, count, p_dma_ch->pwidth);
+
+ }
+#endif
+
+ if ((unsigned) (psgl->ptail + 1) >= ((unsigned) psgl + SGL_LIST_SIZE)) {
+ printk("sgl handle out of memory \n");
+ return DMA_STATUS_OUT_OF_MEMORY;
+ }
+
+ if (!psgl->ptail) {
+ psgl->phead = (ppc_sgl_t *)
+ ((unsigned) psgl + sizeof (sgl_list_info_t));
+ psgl->phead_dma = psgl->dma_addr + sizeof(sgl_list_info_t);
+ psgl->ptail = psgl->phead;
+ psgl->ptail_dma = psgl->phead_dma;
+ } else {
+ psgl->ptail->next = psgl->ptail_dma + sizeof(ppc_sgl_t);
+ psgl->ptail++;
+ psgl->ptail_dma += sizeof(ppc_sgl_t);
+ }
+
+ psgl->ptail->control = psgl->control;
+ psgl->ptail->src_addr = src_addr;
+ psgl->ptail->dst_addr = dst_addr;
+ psgl->ptail->control_count = (count >> p_dma_ch->shift) |
+ psgl->sgl_control;
+ psgl->ptail->next = (uint32_t) NULL;
+
+ return DMA_STATUS_GOOD;
+}
+
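+/*
+ * Illustrative usage sketch (not a guaranteed API contract): a driver
+ * doing a two-segment memory-to-memory transfer might use this API as
+ * follows, where "chan", "src*_phys", "dst*_phys" and "len*" are
+ * placeholders and error handling is elided. ppc4xx_alloc_dma_handle()
+ * and ppc4xx_enable_dma_sgl() are defined later in this file.
+ *
+ *   sgl_handle_t sgl;
+ *
+ *   if (ppc4xx_alloc_dma_handle(&sgl, DMA_MODE_MM, chan) ==
+ *       DMA_STATUS_GOOD) {
+ *           ppc4xx_add_dma_sgl(sgl, src1_phys, dst1_phys, len1);
+ *           ppc4xx_add_dma_sgl(sgl, src2_phys, dst2_phys, len2);
+ *           ppc4xx_enable_dma_sgl(sgl);
+ *   }
+ */
+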
+/*
+ * Enable (start) the DMA described by the sgl handle.
+ */
+void
+ppc4xx_enable_dma_sgl(sgl_handle_t handle)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+ ppc_dma_ch_t *p_dma_ch;
+ uint32_t sg_command;
+
+ if (!handle) {
+ printk("ppc4xx_enable_dma_sgl: null handle\n");
+ return;
+ } else if (psgl->dmanr > (MAX_PPC4xx_DMA_CHANNELS - 1)) {
+ printk("ppc4xx_enable_dma_sgl: bad channel in handle %d\n",
+ psgl->dmanr);
+ return;
+ } else if (!psgl->phead) {
+ printk("ppc4xx_enable_dma_sgl: sg list empty\n");
+ return;
+ }
+
+ p_dma_ch = &dma_channels[psgl->dmanr];
+ psgl->ptail->control_count &= ~SG_LINK; /* make this the last descriptor */
+ sg_command = mfdcr(DCRN_ASGC);
+
+ ppc4xx_set_sg_addr(psgl->dmanr, psgl->phead_dma);
+
+ sg_command |= SSG_ENABLE(psgl->dmanr);
+
+ mtdcr(DCRN_ASGC, sg_command); /* start transfer */
+}
+
+/*
+ * Halt an active scatter/gather DMA operation.
+ */
+void
+ppc4xx_disable_dma_sgl(sgl_handle_t handle)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+ uint32_t sg_command;
+
+ if (!handle) {
+ printk("ppc4xx_enable_dma_sgl: null handle\n");
+ return;
+ } else if (psgl->dmanr > (MAX_PPC4xx_DMA_CHANNELS - 1)) {
+ printk("ppc4xx_enable_dma_sgl: bad channel in handle %d\n",
+ psgl->dmanr);
+ return;
+ }
+
+ sg_command = mfdcr(DCRN_ASGC);
+ sg_command &= ~SSG_ENABLE(psgl->dmanr);
+ mtdcr(DCRN_ASGC, sg_command); /* stop transfer */
+}
+
+/*
+ * Returns number of bytes left to be transferred from the entire sgl list.
+ * *src_addr and *dst_addr get set to the source/destination address of
+ * the sgl descriptor where the DMA stopped.
+ *
+ * An sgl transfer must NOT be active when this function is called.
+ */
+int
+ppc4xx_get_dma_sgl_residue(sgl_handle_t handle, phys_addr_t * src_addr,
+ phys_addr_t * dst_addr)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+ ppc_dma_ch_t *p_dma_ch;
+ ppc_sgl_t *pnext, *sgl_addr;
+ uint32_t count_left;
+
+ if (!handle) {
+ printk("ppc4xx_get_dma_sgl_residue: null handle\n");
+ return DMA_STATUS_BAD_HANDLE;
+ } else if (psgl->dmanr > (MAX_PPC4xx_DMA_CHANNELS - 1)) {
+ printk("ppc4xx_get_dma_sgl_residue: bad channel in handle %d\n",
+ psgl->dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+
+ sgl_addr = (ppc_sgl_t *) __va(mfdcr(DCRN_ASG0 + (psgl->dmanr * 0x8)));
+ count_left = mfdcr(DCRN_DMACT0 + (psgl->dmanr * 0x8));
+
+ if (!sgl_addr) {
+ printk("ppc4xx_get_dma_sgl_residue: sgl addr register is null\n");
+ goto error;
+ }
+
+ pnext = psgl->phead;
+ while (pnext &&
+ ((unsigned) pnext < ((unsigned) psgl + SGL_LIST_SIZE) &&
+ (pnext != sgl_addr))
+ ) {
+ pnext++;
+ }
+
+ if (pnext == sgl_addr) { /* found the sgl descriptor */
+
+ *src_addr = pnext->src_addr;
+ *dst_addr = pnext->dst_addr;
+
+ /*
+ * Now search the remaining descriptors and add their count.
+ * We already have the remaining count from this descriptor in
+ * count_left.
+ */
+ pnext++;
+
+ while ((pnext != psgl->ptail) &&
+ ((unsigned) pnext < ((unsigned) psgl + SGL_LIST_SIZE))
+ ) {
+ count_left += pnext->control_count & SG_COUNT_MASK;
+ pnext++; /* advance to the next descriptor */
+ }
+
+ if (pnext != psgl->ptail) { /* should never happen */
+ printk
+ ("ppc4xx_get_dma_sgl_residue error (1) psgl->ptail 0x%x handle 0x%x\n",
+ (unsigned int) psgl->ptail, (unsigned int) handle);
+ goto error;
+ }
+
+ /* success */
+ p_dma_ch = &dma_channels[psgl->dmanr];
+ return (count_left << p_dma_ch->shift); /* count in bytes */
+
+ } else {
+ /* this shouldn't happen */
+ printk
+ ("get_dma_sgl_residue, unable to match current address 0x%x, handle 0x%x\n",
+ (unsigned int) sgl_addr, (unsigned int) handle);
+
+ }
+
+ error:
+ *src_addr = (phys_addr_t) NULL;
+ *dst_addr = (phys_addr_t) NULL;
+ return 0;
+}
+
+/*
+ * Returns the address(es) of the buffer(s) contained in the head element of
+ * the scatter/gather list. The element is removed from the scatter/gather
+ * list and the next element becomes the head.
+ *
+ * This function should only be called when the DMA is not active.
+ */
+int
+ppc4xx_delete_dma_sgl_element(sgl_handle_t handle, phys_addr_t * src_dma_addr,
+ phys_addr_t * dst_dma_addr)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+
+ if (!handle) {
+ printk("ppc4xx_delete_sgl_element: null handle\n");
+ return DMA_STATUS_BAD_HANDLE;
+ } else if (psgl->dmanr > (MAX_PPC4xx_DMA_CHANNELS - 1)) {
+ printk("ppc4xx_delete_sgl_element: bad channel in handle %d\n",
+ psgl->dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+
+ if (!psgl->phead) {
+ printk("ppc4xx_delete_sgl_element: sgl list empty\n");
+ *src_dma_addr = (phys_addr_t) NULL;
+ *dst_dma_addr = (phys_addr_t) NULL;
+ return DMA_STATUS_SGL_LIST_EMPTY;
+ }
+
+ *src_dma_addr = (phys_addr_t) psgl->phead->src_addr;
+ *dst_dma_addr = (phys_addr_t) psgl->phead->dst_addr;
+
+ if (psgl->phead == psgl->ptail) {
+ /* last descriptor on the list */
+ psgl->phead = NULL;
+ psgl->ptail = NULL;
+ } else {
+ psgl->phead++;
+ psgl->phead_dma += sizeof(ppc_sgl_t);
+ }
+
+ return DMA_STATUS_GOOD;
+}
+
+
+/*
+ * Create a scatter/gather list handle. This is simply a structure which
+ * describes a scatter/gather list.
+ *
+ * A handle is returned in "handle" which the driver should save in order to
+ * be able to access this list later. A chunk of memory will be allocated
+ * to be used by the API for internal management purposes, including managing
+ * the sg list and allocating memory for the sgl descriptors. One page should
+ * be more than enough for that purpose. Perhaps it's a bit wasteful to use
+ * a whole page for a single sg list, but most likely there will be only one
+ * sg list per channel.
+ *
+ * Interrupt notes:
+ * Each sgl descriptor has a copy of the DMA control word which the DMA engine
+ * loads in the control register. The control word has a "global" interrupt
+ * enable bit for that channel. Interrupts are further qualified by a few bits
+ * in the sgl descriptor count register. In order to setup an sgl, we have to
+ * know ahead of time whether or not interrupts will be enabled at the completion
+ * of the transfers. Thus, enable_dma_interrupt()/disable_dma_interrupt() MUST
+ * be called before calling alloc_dma_handle(). If the interrupt mode will never
+ * change after powerup, then enable_dma_interrupt()/disable_dma_interrupt()
+ * do not have to be called -- interrupts will be enabled or disabled based
+ * on how the channel was configured after powerup by the hw_init_dma_channel()
+ * function. Each sgl descriptor will be setup to interrupt if an error occurs;
+ * however, only the last descriptor will be setup to interrupt. Thus, an
+ * interrupt will occur (if interrupts are enabled) only after the complete
+ * sgl transfer is done.
+ */
+int
+ppc4xx_alloc_dma_handle(sgl_handle_t * phandle, unsigned int mode, unsigned int dmanr)
+{
+ sgl_list_info_t *psgl;
+ dma_addr_t dma_addr;
+ ppc_dma_ch_t *p_dma_ch = &dma_channels[dmanr];
+ uint32_t sg_command;
+ void *ret;
+
+ if (dmanr >= MAX_PPC4xx_DMA_CHANNELS) {
+ printk("ppc4xx_alloc_dma_handle: invalid channel 0x%x\n", dmanr);
+ return DMA_STATUS_BAD_CHANNEL;
+ }
+
+ if (!phandle) {
+ printk("ppc4xx_alloc_dma_handle: null handle pointer\n");
+ return DMA_STATUS_NULL_POINTER;
+ }
+
+ /* Get a page of memory for the sgl descriptors and zero it */
+ ret = dma_alloc_coherent(NULL, DMA_PPC4xx_SIZE, &dma_addr, GFP_KERNEL);
+ if (ret == NULL) {
+ *phandle = (sgl_handle_t) NULL;
+ return DMA_STATUS_OUT_OF_MEMORY;
+ }
+
+ memset(ret, 0, DMA_PPC4xx_SIZE);
+ psgl = (sgl_list_info_t *) ret;
+
+ psgl->dma_addr = dma_addr;
+ psgl->dmanr = dmanr;
+
+ /*
+ * Modify and save the control word. These words will be
+ * written to each sgl descriptor. The DMA engine then
+ * loads this control word into the control register
+ * every time it reads a new descriptor.
+ */
+ psgl->control = p_dma_ch->control;
+ /* Clear all mode bits */
+ psgl->control &= ~(DMA_TM_MASK | DMA_TD);
+ /* Save control word and mode */
+ psgl->control |= (mode | DMA_CE_ENABLE);
+
+ /* In MM mode, we must set ETD/TCE */
+ if (mode == DMA_MODE_MM)
+ psgl->control |= DMA_ETD_OUTPUT | DMA_TCE_ENABLE;
+
+ if (p_dma_ch->int_enable) {
+ /* Enable channel interrupt */
+ psgl->control |= DMA_CIE_ENABLE;
+ } else {
+ psgl->control &= ~DMA_CIE_ENABLE;
+ }
+
+ sg_command = mfdcr(DCRN_ASGC);
+ sg_command |= SSG_MASK_ENABLE(dmanr);
+
+ /* Enable SGL control access */
+ mtdcr(DCRN_ASGC, sg_command);
+ psgl->sgl_control = SG_ERI_ENABLE | SG_LINK;
+
+ if (p_dma_ch->int_enable) {
+ if (p_dma_ch->tce_enable)
+ psgl->sgl_control |= SG_TCI_ENABLE;
+ else
+ psgl->sgl_control |= SG_ETI_ENABLE;
+ }
+
+ *phandle = (sgl_handle_t) psgl;
+ return DMA_STATUS_GOOD;
+}
+
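+/*
+ * Ordering sketch (illustrative; assuming the ppc4xx_-prefixed spelling
+ * of the enable_dma_interrupt() helper referenced in the comment above):
+ *
+ *   ppc4xx_enable_dma_interrupt(chan);
+ *   ppc4xx_alloc_dma_handle(&sgl, DMA_MODE_MM, chan);
+ *
+ * Changing the interrupt mode after the handle exists would not be
+ * reflected in the already-built descriptors.
+ */
+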
+/*
+ * Destroy a scatter/gather list handle that was created by alloc_dma_handle().
+ * The list must be empty (contain no elements).
+ */
+void
+ppc4xx_free_dma_handle(sgl_handle_t handle)
+{
+ sgl_list_info_t *psgl = (sgl_list_info_t *) handle;
+
+ if (!handle) {
+ printk("ppc4xx_free_dma_handle: got NULL\n");
+ return;
+ } else if (psgl->phead) {
+ printk("ppc4xx_free_dma_handle: list not empty\n");
+ return;
+ } else if (!psgl->dma_addr) { /* should never happen */
+ printk("ppc4xx_free_dma_handle: no dma address\n");
+ return;
+ }
+
+ dma_free_coherent(NULL, DMA_PPC4xx_SIZE, (void *) psgl, psgl->dma_addr);
+}
+
+EXPORT_SYMBOL(ppc4xx_alloc_dma_handle);
+EXPORT_SYMBOL(ppc4xx_free_dma_handle);
+EXPORT_SYMBOL(ppc4xx_add_dma_sgl);
+EXPORT_SYMBOL(ppc4xx_delete_dma_sgl_element);
+EXPORT_SYMBOL(ppc4xx_enable_dma_sgl);
+EXPORT_SYMBOL(ppc4xx_disable_dma_sgl);
+EXPORT_SYMBOL(ppc4xx_get_dma_sgl_residue);
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc85xx_common.c
+ *
+ * MPC85xx support routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/init.h>
+
+#include <asm/mpc85xx.h>
+#include <asm/mmu.h>
+#include <asm/ocp.h>
+
+/* ************************************************************************ */
+/* Return the value of CCSRBAR for the current board */
+
+phys_addr_t
+get_ccsrbar(void)
+{
+ return BOARD_CCSRBAR;
+}
+
+/* ************************************************************************ */
+/* Update the 85xx OCP tables paddr field */
+void
+mpc85xx_update_paddr_ocp(struct ocp_device *dev, void *arg)
+{
+ phys_addr_t ccsrbar;
+ if (arg) {
+ ccsrbar = *(phys_addr_t *)arg;
+ dev->def->paddr += ccsrbar;
+ }
+}
+
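+/*
+ * Illustrative sketch: board setup code can apply this callback to every
+ * registered OCP device once CCSRBAR is known, e.g. via the OCP core's
+ * ocp_for_each_device() iterator (assuming it is available):
+ *
+ *   phys_addr_t ccsrbar = get_ccsrbar();
+ *   ocp_for_each_device(mpc85xx_update_paddr_ocp, &ccsrbar);
+ */
+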
+EXPORT_SYMBOL(get_ccsrbar);
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc85xx_common.h
+ *
+ * MPC85xx support routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __PPC_SYSLIB_PPC85XX_COMMON_H
+#define __PPC_SYSLIB_PPC85XX_COMMON_H
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <asm/ocp.h>
+
+/* Provide access to ccsrbar for any modules, etc */
+phys_addr_t get_ccsrbar(void);
+
+/* Update the 85xx OCP tables paddr field */
+void mpc85xx_update_paddr_ocp(struct ocp_device *dev, void *ccsrbar);
+
+#endif /* __PPC_SYSLIB_PPC85XX_COMMON_H */
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc85xx_setup.c
+ *
+ * MPC85XX common board code
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+
+#include <asm/prom.h>
+#include <asm/time.h>
+#include <asm/mpc85xx.h>
+#include <asm/immap_85xx.h>
+#include <asm/mmu.h>
+#include <asm/ocp.h>
+#include <asm/kgdb.h>
+
+/* Return the amount of memory */
+unsigned long __init
+mpc85xx_find_end_of_memory(void)
+{
+ bd_t *binfo;
+
+ binfo = (bd_t *) __res;
+
+ return binfo->bi_memsize;
+}
+
+/* The decrementer counts at the system (internal) clock freq divided by 8 */
+void __init
+mpc85xx_calibrate_decr(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq, divisor;
+
+ /* get the bus frequency */
+ freq = binfo->bi_busfreq;
+
+ /* The timebase is updated every 8 bus clocks, HID0[SEL_TBCLK] = 0 */
+ divisor = 8;
+ tb_ticks_per_jiffy = freq / divisor / HZ;
+ tb_to_us = mulhwu_scale_factor(freq / divisor, 1000000);
+
+ /* Set the time base to zero */
+ mtspr(SPRN_TBWL, 0);
+ mtspr(SPRN_TBWU, 0);
+
+ /* Clear any pending timer interrupts */
+ mtspr(SPRN_TSR, TSR_ENW | TSR_WIS | TSR_DIS | TSR_FIS);
+
+ /* Enable decrementer interrupt */
+ mtspr(SPRN_TCR, TCR_DIE);
+}
+
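+/*
+ * Worked example (hypothetical numbers): with a 400 MHz bus clock and
+ * HZ=1000, the timebase ticks at 400 MHz / 8 = 50 MHz, so
+ * tb_ticks_per_jiffy = 50000.
+ */
+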
+#ifdef CONFIG_SERIAL_8250
+void __init
+mpc85xx_early_serial_map(void)
+{
+ struct uart_port serial_req;
+ bd_t *binfo = (bd_t *) __res;
+ phys_addr_t duart_paddr = binfo->bi_immr_base + MPC85xx_UART0_OFFSET;
+
+ /* Setup serial port access */
+ memset(&serial_req, 0, sizeof (serial_req));
+ serial_req.uartclk = binfo->bi_busfreq;
+ serial_req.line = 0;
+ serial_req.irq = MPC85xx_IRQ_DUART;
+ serial_req.flags = ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST;
+ serial_req.iotype = SERIAL_IO_MEM;
+ serial_req.membase = ioremap(duart_paddr, MPC85xx_UART0_SIZE);
+ serial_req.mapbase = duart_paddr;
+ serial_req.regshift = 0;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(0, &serial_req);
+#endif
+
+ if (early_serial_setup(&serial_req) != 0)
+ printk("Early serial init of port 0 failed\n");
+
+ /* Assume early_serial_setup() doesn't modify serial_req */
+ duart_paddr = binfo->bi_immr_base + MPC85xx_UART1_OFFSET;
+ serial_req.line = 1;
+ serial_req.mapbase = duart_paddr;
+ serial_req.membase = ioremap(duart_paddr, MPC85xx_UART1_SIZE);
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(1, &serial_req);
+#endif
+
+ if (early_serial_setup(&serial_req) != 0)
+ printk("Early serial init of port 1 failed\n");
+}
+#endif
+
+void
+mpc85xx_restart(char *cmd)
+{
+ local_irq_disable();
+ abort();
+}
+
+void
+mpc85xx_power_off(void)
+{
+ local_irq_disable();
+ for(;;);
+}
+
+void
+mpc85xx_halt(void)
+{
+ local_irq_disable();
+ for(;;);
+}
+
+#ifdef CONFIG_PCI
+static void __init
+mpc85xx_setup_pci1(struct pci_controller *hose)
+{
+ volatile struct ccsr_pci *pci;
+ volatile struct ccsr_guts *guts;
+ unsigned short temps;
+ bd_t *binfo = (bd_t *) __res;
+
+ pci = ioremap(binfo->bi_immr_base + MPC85xx_PCI1_OFFSET,
+ MPC85xx_PCI1_SIZE);
+
+ guts = ioremap(binfo->bi_immr_base + MPC85xx_GUTS_OFFSET,
+ MPC85xx_GUTS_SIZE);
+
+ early_read_config_word(hose, 0, 0, PCI_COMMAND, &temps);
+ temps |= PCI_COMMAND_SERR | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
+ early_write_config_word(hose, 0, 0, PCI_COMMAND, temps);
+
+#define PORDEVSR_PCI (0x00800000) /* PCI Mode */
+ if (guts->pordevsr & PORDEVSR_PCI) {
+ early_write_config_byte(hose, 0, 0, PCI_LATENCY_TIMER, 0x80);
+ } else {
+ /* PCI-X init */
+ temps = PCI_X_CMD_MAX_SPLIT | PCI_X_CMD_MAX_READ
+ | PCI_X_CMD_ERO | PCI_X_CMD_DPERR_E;
+ early_write_config_word(hose, 0, 0, PCIX_COMMAND, temps);
+ }
+
+ /* Disable all windows (except powar0 since it's ignored) */
+ pci->powar1 = 0;
+ pci->powar2 = 0;
+ pci->powar3 = 0;
+ pci->powar4 = 0;
+ pci->piwar1 = 0;
+ pci->piwar2 = 0;
+ pci->piwar3 = 0;
+
+ /* Setup 512M Phys:PCI 1:1 outbound mem window @ 0x80000000 */
+ pci->potar1 = (MPC85XX_PCI1_LOWER_MEM >> 12) & 0x000fffff;
+ pci->potear1 = 0x00000000;
+ pci->powbar1 = (MPC85XX_PCI1_LOWER_MEM >> 12) & 0x000fffff;
+ pci->powar1 = 0x8004401c; /* Enable, Mem R/W, 512M */
+
+ /* Setup 16M outbound IO windows @ 0xe2000000 */
+ pci->potar2 = 0x00000000;
+ pci->potear2 = 0x00000000;
+ pci->powbar2 = (MPC85XX_PCI1_IO_BASE >> 12) & 0x000fffff;
+ pci->powar2 = 0x80088017; /* Enable, IO R/W, 16M */
+
+ /* Setup 2G inbound Memory Window @ 0 */
+ pci->pitar1 = 0x00000000;
+ pci->piwbar1 = 0x00000000;
+ pci->piwar1 = 0xa0f5501e; /* Enable, Prefetch, Local
+ Mem, Snoop R/W, 2G */
+}
+
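+/*
+ * Note on the window attribute values above (an illustrative decoding,
+ * consistent with the size comments): the low bits of powar/piwar encode
+ * the window size as 2^(n+1) bytes, so 0x1c -> 512M, 0x17 -> 16M and
+ * 0x1e -> 2G.
+ */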
+
+extern int mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin);
+extern int mpc85xx_exclude_device(u_char bus, u_char devfn);
+
+#ifdef CONFIG_85xx_PCI2
+static void __init
+mpc85xx_setup_pci2(struct pci_controller *hose)
+{
+ volatile struct ccsr_pci *pci;
+ unsigned short temps;
+ bd_t *binfo = (bd_t *) __res;
+
+ pci = ioremap(binfo->bi_immr_base + MPC85xx_PCI2_OFFSET,
+ MPC85xx_PCI2_SIZE);
+
+ early_read_config_word(hose, 0, 0, PCI_COMMAND, &temps);
+ temps |= PCI_COMMAND_SERR | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
+ early_write_config_word(hose, 0, 0, PCI_COMMAND, temps);
+ early_write_config_byte(hose, 0, 0, PCI_LATENCY_TIMER, 0x80);
+
+ /* Disable all windows (except powar0 since it's ignored) */
+ pci->powar1 = 0;
+ pci->powar2 = 0;
+ pci->powar3 = 0;
+ pci->powar4 = 0;
+ pci->piwar1 = 0;
+ pci->piwar2 = 0;
+ pci->piwar3 = 0;
+
+ /* Setup 512M Phys:PCI 1:1 outbound mem window @ 0xa0000000 */
+ pci->potar1 = (MPC85XX_PCI2_LOWER_MEM >> 12) & 0x000fffff;
+ pci->potear1 = 0x00000000;
+ pci->powbar1 = (MPC85XX_PCI2_LOWER_MEM >> 12) & 0x000fffff;
+ pci->powar1 = 0x8004401c; /* Enable, Mem R/W, 512M */
+
+ /* Setup 16M outbound IO windows @ 0xe3000000 */
+ pci->potar2 = 0x00000000;
+ pci->potear2 = 0x00000000;
+ pci->powbar2 = (MPC85XX_PCI2_IO_BASE >> 12) & 0x000fffff;
+ pci->powar2 = 0x80088017; /* Enable, IO R/W, 16M */
+
+ /* Setup 2G inbound Memory Window @ 0 */
+ pci->pitar1 = 0x00000000;
+ pci->piwbar1 = 0x00000000;
+ pci->piwar1 = 0xa0f5501e; /* Enable, Prefetch, Local
+ Mem, Snoop R/W, 2G */
+}
+#endif /* CONFIG_85xx_PCI2 */
+
+void __init
+mpc85xx_setup_hose(void)
+{
+ struct pci_controller *hose_a;
+#ifdef CONFIG_85xx_PCI2
+ struct pci_controller *hose_b;
+#endif
+ bd_t *binfo = (bd_t *) __res;
+
+ hose_a = pcibios_alloc_controller();
+
+ if (!hose_a)
+ return;
+
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = mpc85xx_map_irq;
+
+ hose_a->first_busno = 0;
+ hose_a->bus_offset = 0;
+ hose_a->last_busno = 0xff;
+
+ setup_indirect_pci(hose_a, binfo->bi_immr_base + PCI1_CFG_ADDR_OFFSET,
+ binfo->bi_immr_base + PCI1_CFG_DATA_OFFSET);
+ hose_a->set_cfg_type = 1;
+
+ mpc85xx_setup_pci1(hose_a);
+
+ hose_a->pci_mem_offset = MPC85XX_PCI1_MEM_OFFSET;
+ hose_a->mem_space.start = MPC85XX_PCI1_LOWER_MEM;
+ hose_a->mem_space.end = MPC85XX_PCI1_UPPER_MEM;
+
+ hose_a->io_space.start = MPC85XX_PCI1_LOWER_IO;
+ hose_a->io_space.end = MPC85XX_PCI1_UPPER_IO;
+ hose_a->io_base_phys = MPC85XX_PCI1_IO_BASE;
+#ifdef CONFIG_85xx_PCI2
+ isa_io_base =
+ (unsigned long) ioremap(MPC85XX_PCI1_IO_BASE,
+ MPC85XX_PCI1_IO_SIZE +
+ MPC85XX_PCI2_IO_SIZE);
+#else
+ isa_io_base =
+ (unsigned long) ioremap(MPC85XX_PCI1_IO_BASE,
+ MPC85XX_PCI1_IO_SIZE);
+#endif
+ hose_a->io_base_virt = (void *) isa_io_base;
+
+ /* setup resources */
+ pci_init_resource(&hose_a->mem_resources[0],
+ MPC85XX_PCI1_LOWER_MEM,
+ MPC85XX_PCI1_UPPER_MEM,
+ IORESOURCE_MEM, "PCI1 host bridge");
+
+ pci_init_resource(&hose_a->io_resource,
+ MPC85XX_PCI1_LOWER_IO,
+ MPC85XX_PCI1_UPPER_IO,
+ IORESOURCE_IO, "PCI1 host bridge");
+
+ ppc_md.pci_exclude_device = mpc85xx_exclude_device;
+
+ hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno);
+
+#ifdef CONFIG_85xx_PCI2
+ hose_b = pcibios_alloc_controller();
+
+ if (!hose_b)
+ return;
+
+ hose_b->bus_offset = hose_a->last_busno + 1;
+ hose_b->first_busno = hose_a->last_busno + 1;
+ hose_b->last_busno = 0xff;
+
+ setup_indirect_pci(hose_b, binfo->bi_immr_base + PCI2_CFG_ADDR_OFFSET,
+ binfo->bi_immr_base + PCI2_CFG_DATA_OFFSET);
+ hose_b->set_cfg_type = 1;
+
+ mpc85xx_setup_pci2(hose_b);
+
+ hose_b->pci_mem_offset = MPC85XX_PCI2_MEM_OFFSET;
+ hose_b->mem_space.start = MPC85XX_PCI2_LOWER_MEM;
+ hose_b->mem_space.end = MPC85XX_PCI2_UPPER_MEM;
+
+ hose_b->io_space.start = MPC85XX_PCI2_LOWER_IO;
+ hose_b->io_space.end = MPC85XX_PCI2_UPPER_IO;
+ hose_b->io_base_phys = MPC85XX_PCI2_IO_BASE;
+ hose_b->io_base_virt = (void *) isa_io_base + MPC85XX_PCI1_IO_SIZE;
+
+ /* setup resources */
+ pci_init_resource(&hose_b->mem_resources[0],
+ MPC85XX_PCI2_LOWER_MEM,
+ MPC85XX_PCI2_UPPER_MEM,
+ IORESOURCE_MEM, "PCI2 host bridge");
+
+ pci_init_resource(&hose_b->io_resource,
+ MPC85XX_PCI2_LOWER_IO,
+ MPC85XX_PCI2_UPPER_IO,
+ IORESOURCE_IO, "PCI2 host bridge");
+
+ hose_b->last_busno = pciauto_bus_scan(hose_b, hose_b->first_busno);
+#endif
+ return;
+}
+#endif /* CONFIG_PCI */
+
+
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc85xx_setup.h
+ *
+ * MPC85XX common board definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __PPC_SYSLIB_PPC85XX_SETUP_H
+#define __PPC_SYSLIB_PPC85XX_SETUP_H
+
+#include <linux/config.h>
+#include <linux/serial.h>
+#include <linux/init.h>
+#include <asm/ppcboot.h>
+
+extern unsigned long mpc85xx_find_end_of_memory(void) __init;
+extern void mpc85xx_calibrate_decr(void) __init;
+extern void mpc85xx_early_serial_map(void) __init;
+extern void mpc85xx_restart(char *cmd);
+extern void mpc85xx_power_off(void);
+extern void mpc85xx_halt(void);
+extern void mpc85xx_setup_hose(void) __init;
+
+/* PCI config */
+#define PCI1_CFG_ADDR_OFFSET (0x8000)
+#define PCI1_CFG_DATA_OFFSET (0x8004)
+
+#define PCI2_CFG_ADDR_OFFSET (0x9000)
+#define PCI2_CFG_DATA_OFFSET (0x9004)
+
+/* Additional register for PCI-X configuration */
+#define PCIX_NEXT_CAP 0x60
+#define PCIX_CAP_ID 0x61
+#define PCIX_COMMAND 0x62
+#define PCIX_STATUS 0x64
+
+/* Serial Config */
+#define MPC85XX_0_SERIAL (CCSRBAR + 0x4500)
+#define MPC85XX_1_SERIAL (CCSRBAR + 0x4600)
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 2
+#endif
+
+#define BASE_BAUD 0
+
+#define STD_UART_OP(num) \
+ { 0, BASE_BAUD, num, MPC85xx_IRQ_DUART, \
+ (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST), \
+ iomem_base: (u8 *)MPC85XX_##num##_SERIAL, \
+ io_type: SERIAL_IO_MEM},
+
+/* Offset of CPM register space */
+#define CPM_MAP_ADDR (CCSRBAR + MPC85xx_CPM_OFFSET)
+
+#endif /* __PPC_SYSLIB_PPC85XX_SETUP_H */
--- /dev/null
+/*
+ * hvcserver.c
+ * Copyright (C) 2004 Ryan S Arnold, IBM Corporation
+ *
+ * PPC64 virtual I/O console server support.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <asm/hvcall.h>
+#include <asm/hvcserver.h>
+#include <asm/io.h>
+
+#define HVCS_ARCH_VERSION "1.0.0"
+
+MODULE_AUTHOR("Ryan S. Arnold <rsa@us.ibm.com>");
+MODULE_DESCRIPTION("IBM hvcs ppc64 API");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HVCS_ARCH_VERSION);
+
+/*
+ * Convert arch specific return codes into relevant errnos. The hvcs
+ * functions aren't performance sensitive, so this conversion isn't an
+ * issue.
+ */
+int hvcs_convert(long to_convert)
+{
+ switch (to_convert) {
+ case H_Success:
+ return 0;
+ case H_Parameter:
+ return -EINVAL;
+ case H_Hardware:
+ return -EIO;
+ case H_Busy:
+ case H_LongBusyOrder1msec:
+ case H_LongBusyOrder10msec:
+ case H_LongBusyOrder100msec:
+ case H_LongBusyOrder1sec:
+ case H_LongBusyOrder10sec:
+ case H_LongBusyOrder100sec:
+ return -EBUSY;
+ case H_Function: /* fall through */
+ default:
+ return -EPERM;
+ }
+}
+
+int hvcs_free_partner_info(struct list_head *head)
+{
+ struct hvcs_partner_info *pi;
+ struct list_head *element;
+
+ if (!head) {
+ return -EINVAL;
+ }
+
+ while (!list_empty(head)) {
+ element = head->next;
+ pi = list_entry(element, struct hvcs_partner_info, node);
+ list_del(element);
+ kfree(pi);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hvcs_free_partner_info);
+
+/* Helper function for hvcs_get_partner_info */
+int hvcs_next_partner(unsigned int unit_address,
+ unsigned long last_p_partition_ID,
+ unsigned long last_p_unit_address, unsigned long *pi_buff)
+
+{
+ long retval;
+ retval = plpar_hcall_norets(H_VTERM_PARTNER_INFO, unit_address,
+ last_p_partition_ID,
+ last_p_unit_address, virt_to_phys(pi_buff));
+ return hvcs_convert(retval);
+}
+
+/*
+ * The unit_address parameter is the unit address of the vty-server vdevice
+ * whose partner information the caller is interested in. This function
+ * uses a pointer to a list_head instance in which to store the partner info.
+ * It returns zero on success, or a negative errno if no partner info could
+ * be retrieved.
+ *
+ * Invocation of this function should always be followed by an invocation of
+ * hvcs_free_partner_info() using a pointer to the SAME list head instance
+ * that was used to store the partner_info list.
+ */
+int hvcs_get_partner_info(unsigned int unit_address, struct list_head *head,
+ unsigned long *pi_buff)
+{
+ /*
+ * pi_buff is a page-sized buffer passed to the hcall on each
+ * invocation. NOTE: the first long returned is the unit_address and
+ * the second long is the partition ID; starting at pi_buff[2] there
+ * are HVCS_CLC_LENGTH characters, which are a different size than an
+ * unsigned long, hence the casts below.
+ */
+ unsigned long last_p_partition_ID;
+ unsigned long last_p_unit_address;
+ struct hvcs_partner_info *next_partner_info = NULL;
+ int more = 1;
+ int retval;
+
+ /* invalid parameters */
+ if (!head)
+ return -EINVAL;
+ if (!pi_buff)
+ return -ENOMEM;
+
+ memset(pi_buff, 0x00, PAGE_SIZE);
+ last_p_partition_ID = last_p_unit_address = ~0UL;
+ INIT_LIST_HEAD(head);
+
+ do {
+ retval = hvcs_next_partner(unit_address, last_p_partition_ID,
+ last_p_unit_address, pi_buff);
+ if (retval) {
+ /*
+ * Don't indicate that we've failed if we have
+ * any list elements.
+ */
+ if (!list_empty(head))
+ return 0;
+ return retval;
+ }
+
+ last_p_partition_ID = pi_buff[0];
+ last_p_unit_address = pi_buff[1];
+
+ /* This indicates that there are no further partners */
+ if (last_p_partition_ID == ~0UL
+ && last_p_unit_address == ~0UL)
+ break;
+
+ /* This is a very small struct and will be freed soon in
+ * hvcs_free_partner_info(). */
+ next_partner_info = kmalloc(sizeof(struct hvcs_partner_info),
+ GFP_ATOMIC);
+
+ if (!next_partner_info) {
+ printk(KERN_WARNING "HVCONSOLE: kmalloc() failed to"
+ " allocate partner info struct.\n");
+ hvcs_free_partner_info(head);
+ return -ENOMEM;
+ }
+
+ next_partner_info->unit_address
+ = (unsigned int)last_p_unit_address;
+ next_partner_info->partition_ID
+ = (unsigned int)last_p_partition_ID;
+
+ /* copy the Null-term char too */
+ strncpy(&next_partner_info->location_code[0],
+ (char *)&pi_buff[2],
+ strlen((char *)&pi_buff[2]) + 1);
+
+ list_add_tail(&(next_partner_info->node), head);
+ next_partner_info = NULL;
+
+ } while (more);
+
+ return 0;
+}
+EXPORT_SYMBOL(hvcs_get_partner_info);
+
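+/*
+ * Illustrative caller sketch ("unit_addr" is a vty-server unit address
+ * and "buf" is a page-sized buffer supplied by the caller), paired with
+ * hvcs_free_partner_info() as required above:
+ *
+ *   LIST_HEAD(head);
+ *   struct hvcs_partner_info *pi;
+ *
+ *   if (!hvcs_get_partner_info(unit_addr, &head, buf)) {
+ *           list_for_each_entry(pi, &head, node)
+ *                   printk("partner %x:%x\n", pi->partition_ID,
+ *                           pi->unit_address);
+ *           hvcs_free_partner_info(&head);
+ *   }
+ */
+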
+/*
+ * If this function is called once and -EINVAL is returned it may
+ * indicate that the partner info needs to be refreshed for the
+ * target unit address at which point the caller must invoke
+ * hvcs_get_partner_info() and then call this function again. If,
+ * for a second time, -EINVAL is returned then it indicates that
+ * there is probably already a partner connection registered to a
+ * different vty-server@ vdevice. It is also possible that a second
+ * -EINVAL may indicate that one of the parms is not valid, for
+ * instance if the link was removed between the vty-server@ vdevice
+ * and the vty@ vdevice that you are trying to open. Don't shoot the
+ * messenger. Firmware implemented it this way.
+ */
+int hvcs_register_connection( unsigned int unit_address,
+ unsigned int p_partition_ID, unsigned int p_unit_address)
+{
+ long retval;
+ retval = plpar_hcall_norets(H_REGISTER_VTERM, unit_address,
+ p_partition_ID, p_unit_address);
+ return hvcs_convert(retval);
+}
+EXPORT_SYMBOL(hvcs_register_connection);
+
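+/*
+ * Retry sketch for the -EINVAL case described above (illustrative only;
+ * "unit", "p_id", "p_unit", "head" and "buf" are placeholders):
+ *
+ *   if (hvcs_register_connection(unit, p_id, p_unit) == -EINVAL) {
+ *           hvcs_get_partner_info(unit, &head, buf);
+ *           (pick fresh p_id/p_unit from the refreshed list)
+ *           hvcs_register_connection(unit, p_id, p_unit);
+ *   }
+ */
+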
+/*
+ * If -EBUSY is returned continue to call this function
+ * until 0 is returned.
+ */
+int hvcs_free_connection(unsigned int unit_address)
+{
+ long retval;
+ retval = plpar_hcall_norets(H_FREE_VTERM, unit_address);
+ return hvcs_convert(retval);
+}
+EXPORT_SYMBOL(hvcs_free_connection);
--- /dev/null
+/*
+ * Routines to emulate some Altivec/VMX instructions, specifically
+ * those that can trap when given denormalized operands in Java mode.
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <asm/ptrace.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+
+/* Functions in vector.S */
+extern void vaddfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vsubfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vmaddfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vnmsubfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vrefp(vector128 *dst, vector128 *src);
+extern void vrsqrtefp(vector128 *dst, vector128 *src);
+extern void vexptep(vector128 *dst, vector128 *src);
+
+static unsigned int exp2s[8] = {
+ 0x800000,
+ 0x8b95c2,
+ 0x9837f0,
+ 0xa5fed7,
+ 0xb504f3,
+ 0xc5672a,
+ 0xd744fd,
+ 0xeac0c7
+};
+
+/*
+ * Computes an estimate of 2^x. The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int eexp2(unsigned int s)
+{
+ int exp, pwr;
+ unsigned int mant, frac;
+
+ /* extract exponent field from input */
+ exp = ((s >> 23) & 0xff) - 127;
+ if (exp > 7) {
+ /* check for NaN input */
+ if (exp == 128 && (s & 0x7fffff) != 0)
+ return s | 0x400000; /* return QNaN */
+ /* 2^-big = 0, 2^+big = +Inf */
+ return (s & 0x80000000)? 0: 0x7f800000; /* 0 or +Inf */
+ }
+ if (exp < -23)
+ return 0x3f800000; /* 1.0 */
+
+ /* convert to fixed point integer in 9.23 representation */
+ pwr = (s & 0x7fffff) | 0x800000;
+ if (exp > 0)
+ pwr <<= exp;
+ else
+ pwr >>= -exp;
+ if (s & 0x80000000)
+ pwr = -pwr;
+
+ /* extract integer part, which becomes exponent part of result */
+ exp = (pwr >> 23) + 126;
+ if (exp >= 254)
+ return 0x7f800000;
+ if (exp < -23)
+ return 0;
+
+ /* table lookup on top 3 bits of fraction to get mantissa */
+ mant = exp2s[(pwr >> 20) & 7];
+
+ /* linear interpolation using remaining 20 bits of fraction */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" (pwr << 12), "r" (0x172b83ff));
+ asm("mulhwu %0,%1,%2" : "=r" (frac) : "r" (frac), "r" (mant));
+ mant += frac;
+
+ if (exp >= 0)
+ return mant + (exp << 23);
+
+ /* denormalized result */
+ exp = -exp;
+ mant += 1 << (exp - 1);
+ return mant >> exp;
+}
+
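+/*
+ * Worked example: eexp2(0x40000000), i.e. x = 2.0, becomes the 9.23
+ * fixed-point value 0x1000000, indexes exp2s[0] with a zero
+ * interpolation fraction, and returns 0x40800000, i.e. 4.0.
+ */
+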
+/*
+ * Computes an estimate of log_2(x). The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int elog2(unsigned int s)
+{
+ int exp, mant, lz, frac;
+
+ exp = s & 0x7f800000;
+ mant = s & 0x7fffff;
+ if (exp == 0x7f800000) { /* Inf or NaN */
+ if (mant != 0)
+ s |= 0x400000; /* turn NaN into QNaN */
+ return s;
+ }
+ if ((exp | mant) == 0) /* +0 or -0 */
+ return 0xff800000; /* return -Inf */
+
+ if (exp == 0) {
+ /* denormalized */
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (mant));
+ mant <<= lz - 8;
+ exp = (-118 - lz) << 23;
+ } else {
+ mant |= 0x800000;
+ exp -= 127 << 23;
+ }
+
+ if (mant >= 0xb504f3) { /* 2^0.5 * 2^23 */
+ exp |= 0x400000; /* 0.5 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xb504f334)); /* 2^-0.5 * 2^32 */
+ }
+ if (mant >= 0x9837f0) { /* 2^0.25 * 2^23 */
+ exp |= 0x200000; /* 0.25 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xd744fccb)); /* 2^-0.25 * 2^32 */
+ }
+ if (mant >= 0x8b95c2) { /* 2^0.125 * 2^23 */
+ exp |= 0x100000; /* 0.125 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xeac0c6e8)); /* 2^-0.125 * 2^32 */
+ }
+ if (mant > 0x800000) { /* 1.0 * 2^23 */
+ /* calculate (mant - 1) * 1.381097463 */
+ /* 1.381097463 == 0.125 / (2^0.125 - 1) */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" ((mant - 0x800000) << 1), "r" (0xb0c7cd3a));
+ exp += frac;
+ }
+ s = exp & 0x80000000;
+ if (exp != 0) {
+ if (s)
+ exp = -exp;
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (exp));
+ lz = 8 - lz;
+ if (lz > 0)
+ exp >>= lz;
+ else if (lz < 0)
+ exp <<= -lz;
+ s += ((lz + 126) << 23) + exp;
+ }
+ return s;
+}
+
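+/*
+ * Worked example: elog2(0x40800000), i.e. x = 4.0, normalizes to
+ * mant = 0x800000 with exp = 2 << 23, skips every interpolation step,
+ * and returns 0x40000000, i.e. 2.0.
+ */
+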
+#define VSCR_SAT 1
+
+static int ctsxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp, mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (exp >= 31) {
+ /* saturate, unless the result would be -2^31 */
+ if (x + (scale << 23) != 0xcf000000)
+ *vscrp |= VSCR_SAT;
+ return (x & 0x80000000)? 0x80000000: 0x7fffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 7) >> (30 - exp);
+ return (x & 0x80000000)? -mant: mant;
+}
+
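+/*
+ * Worked example: ctsxs(0x3f800000, 0, &vscr), i.e. converting 1.0 with
+ * scale 0, ends with exp = 0, shifts the implicit-bit mantissa 0x800000
+ * left 7 then right 30, and returns the integer 1.
+ */
+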
+static unsigned int ctuxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp;
+ unsigned int mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (x & 0x80000000) {
+ /* negative => saturate to 0 */
+ *vscrp |= VSCR_SAT;
+ return 0;
+ }
+ if (exp >= 32) {
+ /* saturate */
+ *vscrp |= VSCR_SAT;
+ return 0xffffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 8) >> (31 - exp);
+ return mant;
+}
+
+/* Round to floating integer, towards 0 */
+static unsigned int rfiz(unsigned int x)
+{
+ int exp;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < 0)
+ return x & 0x80000000; /* |x| < 1.0 rounds to 0 */
+ return x & ~(0x7fffff >> exp);
+}
+
+/* Round to floating integer, towards +/- Inf */
+static unsigned int rfii(unsigned int x)
+{
+ int exp, mask;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if ((x & 0x7fffffff) == 0)
+ return x; /* +/-0 -> +/-0 */
+ if (exp < 0)
+ /* 0 < |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ mask = 0x7fffff >> exp;
+ /* mantissa overflows into exponent - that's OK,
+ it can't overflow into the sign bit */
+ return (x + mask) & ~mask;
+}
+
+/* Round to floating integer, to nearest */
+static unsigned int rfin(unsigned int x)
+{
+ int exp, half;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < -1)
+ return x & 0x80000000; /* |x| < 0.5 -> +/-0 */
+ if (exp == -1)
+ /* 0.5 <= |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ half = 0x400000 >> exp;
+ /* add 0.5 to the magnitude and chop off the fraction bits */
+ return (x + half) & ~(0x7fffff >> exp);
+}
+
+int
+emulate_altivec(struct pt_regs *regs)
+{
+ unsigned int instr, i;
+ unsigned int va, vb, vc, vd;
+ vector128 *vrs;
+
+ if (get_user(instr, (unsigned int *) regs->nip))
+ return -EFAULT;
+ if ((instr >> 26) != 4)
+ return -EINVAL; /* not an altivec instruction */
+ vd = (instr >> 21) & 0x1f;
+ va = (instr >> 16) & 0x1f;
+ vb = (instr >> 11) & 0x1f;
+ vc = (instr >> 6) & 0x1f;
+
+ vrs = current->thread.vr;
+ switch (instr & 0x3f) {
+ case 10:
+ switch (vc) {
+ case 0: /* vaddfp */
+ vaddfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 1: /* vsubfp */
+ vsubfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 4: /* vrefp */
+ vrefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 5: /* vrsqrtefp */
+ vrsqrtefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 6: /* vexptefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = eexp2(vrs[vb].u[i]);
+ break;
+ case 7: /* vlogefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = elog2(vrs[vb].u[i]);
+ break;
+ case 8: /* vrfin */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfin(vrs[vb].u[i]);
+ break;
+ case 9: /* vrfiz */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfiz(vrs[vb].u[i]);
+ break;
+ case 10: /* vrfip */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfiz(x): rfii(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 11: /* vrfim */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfii(x): rfiz(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 14: /* vctuxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctuxs(vrs[vb].u[i], va,
+ &current->thread.vscr.u[3]);
+ break;
+ case 15: /* vctsxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctsxs(vrs[vb].u[i], va,
+ &current->thread.vscr.u[3]);
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case 46: /* vmaddfp */
+ vmaddfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ case 47: /* vnmsubfp */
+ vnmsubfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
--- /dev/null
+#include <asm/ppc_asm.h>
+#include <asm/processor.h>
+
+/*
+ * The routines below are in assembler so we can closely control the
+ * usage of floating-point registers. These routines must be called
+ * with preempt disabled.
+ */
+ .section ".toc","aw"
+fpzero:
+ .tc FD_0_0[TC],0
+fpone:
+ .tc FD_3ff00000_0[TC],0x3ff0000000000000 /* 1.0 */
+fphalf:
+ .tc FD_3fe00000_0[TC],0x3fe0000000000000 /* 0.5 */
+
+ .text
+/*
+ * Internal routine to enable floating point and set FPSCR to 0.
+ * Don't call it from C; it doesn't use the normal calling convention.
+ */
+fpenable:
+ mfmsr r10
+ ori r11,r10,MSR_FP
+ mtmsr r11
+ isync
+ stfd fr31,-8(r1)
+ stfd fr0,-16(r1)
+ stfd fr1,-24(r1)
+ mffs fr31
+ lfd fr1,fpzero@toc(r2)
+ mtfsf 0xff,fr1
+ blr
+
+fpdisable:
+ mtlr r12
+ mtfsf 0xff,fr31
+ lfd fr1,-24(r1)
+ lfd fr0,-16(r1)
+ lfd fr31,-8(r1)
+ mtmsr r10
+ isync
+ blr
+
+/*
+ * Vector add, floating point.
+ */
+_GLOBAL(vaddfp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fadds fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector subtract, floating point.
+ */
+_GLOBAL(vsubfp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fsubs fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector multiply and add, floating point.
+ */
+_GLOBAL(vmaddfp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fmadds fr0,fr0,fr1,fr2
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,-32(r1)
+ b fpdisable
+
+/*
+ * Vector negative multiply and subtract, floating point.
+ */
+_GLOBAL(vnmsubfp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fnmsubs fr0,fr0,fr1,fr2
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,-32(r1)
+ b fpdisable
+
+/*
+ * Vector reciprocal estimate. We just compute 1.0/x.
+ * r3 -> destination, r4 -> source.
+ */
+_GLOBAL(vrefp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ lfd fr1,fpone@toc(r2)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ fdivs fr0,fr1,fr0
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector reciprocal square-root estimate, floating point.
+ * We use the frsqrte instruction for the initial estimate followed
+ * by 2 iterations of Newton-Raphson to get sufficient accuracy.
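+ * Each iteration computes r <- r + (r/2)*(1 - s*r*r), the standard
+ * Newton-Raphson update for f(r) = 1/r^2 - s.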
+ * r3 -> destination, r4 -> source.
+ */
+_GLOBAL(vrsqrtefp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ stfd fr3,-40(r1)
+ stfd fr4,-48(r1)
+ stfd fr5,-56(r1)
+ li r0,4
+ lfd fr4,fpone@toc(r2)
+ lfd fr5,fphalf@toc(r2)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ frsqrte fr1,fr0 /* r = frsqrte(s) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ stfsx fr1,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ lfd fr5,-56(r1)
+ lfd fr4,-48(r1)
+ lfd fr3,-40(r1)
+ lfd fr2,-32(r1)
+ b fpdisable
--- /dev/null
+/*
+ * arch/ppc64/lib/e2a.c
+ *
+ * EBCDIC to ASCII conversion
+ *
+ * This function moved here from arch/ppc64/kernel/viopath.c
+ *
+ * (C) Copyright 2000-2004 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/module.h>
+
+unsigned char e2a(unsigned char x)
+{
+ switch (x) {
+ case 0xF0:
+ return '0';
+ case 0xF1:
+ return '1';
+ case 0xF2:
+ return '2';
+ case 0xF3:
+ return '3';
+ case 0xF4:
+ return '4';
+ case 0xF5:
+ return '5';
+ case 0xF6:
+ return '6';
+ case 0xF7:
+ return '7';
+ case 0xF8:
+ return '8';
+ case 0xF9:
+ return '9';
+ case 0xC1:
+ return 'A';
+ case 0xC2:
+ return 'B';
+ case 0xC3:
+ return 'C';
+ case 0xC4:
+ return 'D';
+ case 0xC5:
+ return 'E';
+ case 0xC6:
+ return 'F';
+ case 0xC7:
+ return 'G';
+ case 0xC8:
+ return 'H';
+ case 0xC9:
+ return 'I';
+ case 0xD1:
+ return 'J';
+ case 0xD2:
+ return 'K';
+ case 0xD3:
+ return 'L';
+ case 0xD4:
+ return 'M';
+ case 0xD5:
+ return 'N';
+ case 0xD6:
+ return 'O';
+ case 0xD7:
+ return 'P';
+ case 0xD8:
+ return 'Q';
+ case 0xD9:
+ return 'R';
+ case 0xE2:
+ return 'S';
+ case 0xE3:
+ return 'T';
+ case 0xE4:
+ return 'U';
+ case 0xE5:
+ return 'V';
+ case 0xE6:
+ return 'W';
+ case 0xE7:
+ return 'X';
+ case 0xE8:
+ return 'Y';
+ case 0xE9:
+ return 'Z';
+ }
+ return ' ';
+}
+EXPORT_SYMBOL(e2a);
+
+
--- /dev/null
+/*
+ * Spin and read/write lock operations.
+ *
+ * Copyright (C) 2001-2004 Paul Mackerras <paulus@au.ibm.com>, IBM
+ * Copyright (C) 2001 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2002 Dave Engebretsen <engebret@us.ibm.com>, IBM
+ * Rework to support virtual processors
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <asm/hvcall.h>
+#include <asm/iSeries/HvCall.h>
+
+#ifndef CONFIG_SPINLINE
+
+/*
+ * On a system with shared processors (that is, where a physical
+ * processor is multiplexed between several virtual processors),
+ * there is no point spinning on a lock if the holder of the lock
+ * isn't currently scheduled on a physical processor. Instead
+ * we detect this situation and ask the hypervisor to give the
+ * rest of our timeslice to the lock holder.
+ *
+ * So that we can tell which virtual processor is holding a lock,
+ * we put 0x80000000 | smp_processor_id() in the lock when it is
+ * held. Conveniently, we have a word in the paca that holds this
+ * value.
+ */
+
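+/*
+ * Illustrative example: a lock held by virtual cpu 5 reads 0x80000005.
+ * __spin_yield() extracts holder_cpu = 5, reads that cpu's paca yield
+ * count, and if the count is odd (the holder is preempted) confers the
+ * rest of our timeslice to it.
+ */
+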
+/* waiting for a spinlock... */
+#if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES)
+void __spin_yield(spinlock_t *lock)
+{
+ unsigned int lock_value, holder_cpu, yield_count;
+ struct paca_struct *holder_paca;
+
+ lock_value = lock->lock;
+ if (lock_value == 0)
+ return;
+ holder_cpu = lock_value & 0xffff;
+ BUG_ON(holder_cpu >= NR_CPUS);
+ holder_paca = &paca[holder_cpu];
+ yield_count = holder_paca->xLpPaca.xYieldCount;
+ if ((yield_count & 1) == 0)
+ return; /* virtual cpu is currently running */
+ rmb();
+ if (lock->lock != lock_value)
+ return; /* something has changed */
+#ifdef CONFIG_PPC_ISERIES
+ HvCall2(HvCallBaseYieldProcessor, HvCall_YieldToProc,
+ ((u64)holder_cpu << 32) | yield_count);
+#else
+ plpar_hcall_norets(H_CONFER, holder_cpu, yield_count);
+#endif
+}
+
+#else /* SPLPAR || ISERIES */
+#define __spin_yield(x) barrier()
+#endif
+
+/*
+ * This returns the old value in the lock, so we succeeded
+ * in getting the lock if the return value is 0.
+ */
+static __inline__ unsigned long __spin_trylock(spinlock_t *lock)
+{
+ unsigned long tmp, tmp2;
+
+ __asm__ __volatile__(
+" lwz %1,24(13) # __spin_trylock\n\
+1: lwarx %0,0,%2\n\
+ cmpwi 0,%0,0\n\
+ bne- 2f\n\
+ stwcx. %1,0,%2\n\
+ bne- 1b\n\
+ isync\n\
+2:" : "=&r" (tmp), "=&r" (tmp2)
+ : "r" (&lock->lock)
+ : "cr0", "memory");
+
+ return tmp;
+}
+
+int _raw_spin_trylock(spinlock_t *lock)
+{
+ return __spin_trylock(lock) == 0;
+}
+
+EXPORT_SYMBOL(_raw_spin_trylock);
+
+void _raw_spin_lock(spinlock_t *lock)
+{
+ while (1) {
+ if (likely(__spin_trylock(lock) == 0))
+ break;
+ do {
+ HMT_low();
+ __spin_yield(lock);
+ } while (likely(lock->lock != 0));
+ HMT_medium();
+ }
+}
+
+EXPORT_SYMBOL(_raw_spin_lock);
+
+void _raw_spin_lock_flags(spinlock_t *lock, unsigned long flags)
+{
+ unsigned long flags_dis;
+
+ while (1) {
+ if (likely(__spin_trylock(lock) == 0))
+ break;
+ local_save_flags(flags_dis);
+ local_irq_restore(flags);
+ do {
+ HMT_low();
+ __spin_yield(lock);
+ } while (likely(lock->lock != 0));
+ HMT_medium();
+ local_irq_restore(flags_dis);
+ }
+}
+
+EXPORT_SYMBOL(_raw_spin_lock_flags);
+
+void spin_unlock_wait(spinlock_t *lock)
+{
+ while (lock->lock)
+ __spin_yield(lock);
+}
+
+EXPORT_SYMBOL(spin_unlock_wait);
+
+/*
+ * Waiting for a read lock or a write lock on a rwlock...
+ * This turns out to be the same for read and write locks, since
+ * we only know the holder if it is write-locked.
+ */
+#if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES)
+void __rw_yield(rwlock_t *rw)
+{
+ int lock_value;
+ unsigned int holder_cpu, yield_count;
+ struct paca_struct *holder_paca;
+
+ lock_value = rw->lock;
+ if (lock_value >= 0)
+ return; /* no write lock at present */
+ holder_cpu = lock_value & 0xffff;
+ BUG_ON(holder_cpu >= NR_CPUS);
+ holder_paca = &paca[holder_cpu];
+ yield_count = holder_paca->xLpPaca.xYieldCount;
+ if ((yield_count & 1) == 0)
+ return; /* virtual cpu is currently running */
+ rmb();
+ if (rw->lock != lock_value)
+ return; /* something has changed */
+#ifdef CONFIG_PPC_ISERIES
+ HvCall2(HvCallBaseYieldProcessor, HvCall_YieldToProc,
+ ((u64)holder_cpu << 32) | yield_count);
+#else
+ plpar_hcall_norets(H_CONFER, holder_cpu, yield_count);
+#endif
+}
+
+#else /* SPLPAR || ISERIES */
+#define __rw_yield(x) barrier()
+#endif
+
+/*
+ * This returns the old value in the lock + 1,
+ * so we got a read lock if the return value is > 0.
+ */
+static __inline__ long __read_trylock(rwlock_t *rw)
+{
+ long tmp;
+
+ __asm__ __volatile__(
+"1: lwarx %0,0,%1 # read_trylock\n\
+ extsw %0,%0\n\
+ addic. %0,%0,1\n\
+ ble- 2f\n\
+ stwcx. %0,0,%1\n\
+ bne- 1b\n\
+ isync\n\
+2:" : "=&r" (tmp)
+ : "r" (&rw->lock)
+ : "cr0", "xer", "memory");
+
+ return tmp;
+}
+
+int _raw_read_trylock(rwlock_t *rw)
+{
+ return __read_trylock(rw) > 0;
+}
+
+EXPORT_SYMBOL(_raw_read_trylock);
+
+void _raw_read_lock(rwlock_t *rw)
+{
+ while (1) {
+ if (likely(__read_trylock(rw) > 0))
+ break;
+ do {
+ HMT_low();
+ __rw_yield(rw);
+ } while (likely(rw->lock < 0));
+ HMT_medium();
+ }
+}
+
+EXPORT_SYMBOL(_raw_read_lock);
+
+void _raw_read_unlock(rwlock_t *rw)
+{
+ long tmp;
+
+ __asm__ __volatile__(
+ "eieio # read_unlock\n\
+1: lwarx %0,0,%1\n\
+ addic %0,%0,-1\n\
+ stwcx. %0,0,%1\n\
+ bne- 1b"
+ : "=&r"(tmp)
+ : "r"(&rw->lock)
+ : "cr0", "memory");
+}
+
+EXPORT_SYMBOL(_raw_read_unlock);
+
+/*
+ * This returns the old value in the lock,
+ * so we got the write lock if the return value is 0.
+ */
+static __inline__ long __write_trylock(rwlock_t *rw)
+{
+ long tmp, tmp2;
+
+ __asm__ __volatile__(
+" lwz %1,24(13) # write_trylock\n\
+1: lwarx %0,0,%2\n\
+ cmpwi 0,%0,0\n\
+ bne- 2f\n\
+ stwcx. %1,0,%2\n\
+ bne- 1b\n\
+ isync\n\
+2:" : "=&r" (tmp), "=&r" (tmp2)
+ : "r" (&rw->lock)
+ : "cr0", "memory");
+
+ return tmp;
+}
+
+int _raw_write_trylock(rwlock_t *rw)
+{
+ return __write_trylock(rw) == 0;
+}
+
+EXPORT_SYMBOL(_raw_write_trylock);
+
+void _raw_write_lock(rwlock_t *rw)
+{
+ while (1) {
+ if (likely(__write_trylock(rw) == 0))
+ break;
+ do {
+ HMT_low();
+ __rw_yield(rw);
+ } while (likely(rw->lock != 0));
+ HMT_medium();
+ }
+}
+
+EXPORT_SYMBOL(_raw_write_lock);
+
+#endif /* CONFIG_SPINLINE */
--- /dev/null
+/*
+ * Functions which are too large to be inlined.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <linux/module.h>
+#include <asm/uaccess.h>
+
+unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+ if (likely(access_ok(VERIFY_READ, from, n)))
+ n = __copy_from_user(to, from, n);
+ else
+ memset(to, 0, n);
+ return n;
+}
+
+unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+ if (likely(access_ok(VERIFY_WRITE, to, n)))
+ n = __copy_to_user(to, from, n);
+ return n;
+}
+
+unsigned long copy_in_user(void __user *to, const void __user *from,
+ unsigned long n)
+{
+ might_sleep();
+ if (likely(access_ok(VERIFY_READ, from, n) &&
+ access_ok(VERIFY_WRITE, to, n)))
+ n = __copy_tofrom_user(to, from, n);
+ return n;
+}
+
+EXPORT_SYMBOL(copy_from_user);
+EXPORT_SYMBOL(copy_to_user);
+EXPORT_SYMBOL(copy_in_user);
+
--- /dev/null
+/*
+ * PowerPC64 SLB support.
+ *
+ * Copyright (C) 2004 David Gibson <dwg@au.ibm.com>, IBM
+ * Based on earlier code written by:
+ * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
+ * Copyright (c) 2001 Dave Engebretsen
+ * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/mmu_context.h>
+#include <asm/paca.h>
+#include <asm/naca.h>
+#include <asm/cputable.h>
+
+extern void slb_allocate(unsigned long ea);
+
+static inline void create_slbe(unsigned long ea, unsigned long vsid,
+ unsigned long flags, unsigned long entry)
+{
+ ea = (ea & ESID_MASK) | SLB_ESID_V | entry;
+ vsid = (vsid << SLB_VSID_SHIFT) | flags;
+ asm volatile("slbmte %0,%1" :
+ : "r" (vsid), "r" (ea)
+ : "memory" );
+}
+
+static void slb_add_bolted(void)
+{
+#ifndef CONFIG_PPC_ISERIES
+ WARN_ON(!irqs_disabled());
+
+ /* If you change this make sure you change SLB_NUM_BOLTED
+ * appropriately too */
+
+ /* Slot 1 - first VMALLOC segment
+ * Since modules end up there it gets hit very heavily.
+ */
+ create_slbe(VMALLOCBASE, get_kernel_vsid(VMALLOCBASE),
+ SLB_VSID_KERNEL, 1);
+
+ asm volatile("isync":::"memory");
+#endif
+}
+
+/* Flush all user entries from the segment table of the current processor. */
+void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
+{
+ unsigned long offset = get_paca()->slb_cache_ptr;
+ unsigned long esid_data;
+ unsigned long pc = KSTK_EIP(tsk);
+ unsigned long stack = KSTK_ESP(tsk);
+ unsigned long unmapped_base;
+
+ if (offset <= SLB_CACHE_ENTRIES) {
+ int i;
+ asm volatile("isync" : : : "memory");
+ for (i = 0; i < offset; i++) {
+ esid_data = (unsigned long)get_paca()->slb_cache[i]
+ << SID_SHIFT;
+ asm volatile("slbie %0" : : "r" (esid_data));
+ }
+ asm volatile("isync" : : : "memory");
+ } else {
+ asm volatile("isync; slbia; isync" : : : "memory");
+ slb_add_bolted();
+ }
+
+ /* Workaround POWER5 < DD2.1 issue */
+ if (offset == 1 || offset > SLB_CACHE_ENTRIES) {
+ /* flush segment in EEH region, we shouldn't ever
+ * access addresses in this region. */
+ asm volatile("slbie %0" : : "r"(EEHREGIONBASE));
+ }
+
+ get_paca()->slb_cache_ptr = 0;
+ get_paca()->context = mm->context;
+
+ /*
+ * preload some userspace segments into the SLB.
+ */
+ if (test_tsk_thread_flag(tsk, TIF_32BIT))
+ unmapped_base = TASK_UNMAPPED_BASE_USER32;
+ else
+ unmapped_base = TASK_UNMAPPED_BASE_USER64;
+
+ if (pc >= KERNELBASE)
+ return;
+ slb_allocate(pc);
+
+ if (GET_ESID(pc) == GET_ESID(stack))
+ return;
+
+ if (stack >= KERNELBASE)
+ return;
+ slb_allocate(stack);
+
+ if ((GET_ESID(pc) == GET_ESID(unmapped_base))
+ || (GET_ESID(stack) == GET_ESID(unmapped_base)))
+ return;
+
+ if (unmapped_base >= KERNELBASE)
+ return;
+ slb_allocate(unmapped_base);
+}
+
+void slb_initialize(void)
+{
+#ifdef CONFIG_PPC_ISERIES
+ asm volatile("isync; slbia; isync":::"memory");
+#else
+ unsigned long flags = SLB_VSID_KERNEL;
+
+ if (cur_cpu_spec->cpu_features & CPU_FTR_16M_PAGE)
+ flags |= SLB_VSID_L;
+
+ /* Invalidate the entire SLB (even slot 0) & all the ERATs */
+ asm volatile("isync":::"memory");
+ asm volatile("slbmte %0,%0"::"r" (0) : "memory");
+ asm volatile("isync; slbia; isync":::"memory");
+ create_slbe(KERNELBASE, get_kernel_vsid(KERNELBASE),
+ flags, 0);
+
+#endif
+ slb_add_bolted();
+ get_paca()->stab_rr = SLB_NUM_BOLTED;
+}
--- /dev/null
+/*
+ * arch/ppc64/mm/slb_low.S
+ *
+ * Low-level SLB routines
+ *
+ * Copyright (C) 2004 David Gibson <dwg@au.ibm.com>, IBM
+ *
+ * Based on earlier C version:
+ * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
+ * Copyright (c) 2001 Dave Engebretsen
+ * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/mmu.h>
+#include <asm/ppc_asm.h>
+#include <asm/offsets.h>
+#include <asm/cputable.h>
+
+/* void slb_allocate(unsigned long ea);
+ *
+ * Create an SLB entry for the given EA (user or kernel).
+ * r3 = faulting address, r13 = PACA
+ * r9, r10, r11 are clobbered by this function
+ * No other registers are examined or changed.
+ */
+_GLOBAL(slb_allocate)
+	/*
+	 * First find a slot, round robin.  Previously we tried to find
+	 * a free slot first but that took too long.  Unfortunately we
+	 * don't have any LRU information to help us choose a slot.
+	 */
+ ld r10,PACASTABRR(r13)
+3:
+ addi r10,r10,1
+ /* use a cpu feature mask if we ever change our slb size */
+ cmpldi r10,SLB_NUM_ENTRIES
+
+ blt+ 4f
+ li r10,SLB_NUM_BOLTED
+
+	/*
+	 * Never cast out the segment for our kernel stack.  Since we
+	 * don't invalidate the ERAT we could have a valid translation
+	 * for the kernel stack during the first part of exception exit,
+	 * which gets invalidated due to a tlbie from another cpu at a
+	 * non-recoverable point (after setting srr0/1) - Anton
+	 */
+4: slbmfee r11,r10
+ srdi r11,r11,27
+ /*
+ * Use paca->ksave as the value of the kernel stack pointer,
+ * because this is valid at all times.
+ * The >> 27 (rather than >> 28) is so that the LSB is the
+ * valid bit - this way we check valid and ESID in one compare.
+ * In order to completely close the tiny race in the context
+ * switch (between updating r1 and updating paca->ksave),
+ * we check against both r1 and paca->ksave.
+ */
+ srdi r9,r1,27
+ ori r9,r9,1 /* mangle SP for later compare */
+ cmpd r11,r9
+ beq- 3b
+ ld r9,PACAKSAVE(r13)
+ srdi r9,r9,27
+ ori r9,r9,1
+ cmpd r11,r9
+ beq- 3b
+
+ std r10,PACASTABRR(r13)
+
+ /* r3 = faulting address, r10 = entry */
+
+ srdi r9,r3,60 /* get region */
+ srdi r3,r3,28 /* get esid */
+ cmpldi cr7,r9,0xc /* cmp KERNELBASE for later use */
+
+ /* r9 = region, r3 = esid, cr7 = <>KERNELBASE */
+
+ rldicr. r11,r3,32,16
+ bne- 8f /* invalid ea bits set */
+ addi r11,r9,-1
+ cmpldi r11,0xb
+ blt- 8f /* invalid region */
+
+ /* r9 = region, r3 = esid, r10 = entry, cr7 = <>KERNELBASE */
+
+ blt cr7,0f /* user or kernel? */
+
+ /* kernel address */
+ li r11,SLB_VSID_KERNEL
+BEGIN_FTR_SECTION
+ bne cr7,9f
+ li r11,(SLB_VSID_KERNEL|SLB_VSID_L)
+END_FTR_SECTION_IFSET(CPU_FTR_16M_PAGE)
+ b 9f
+
+0: /* user address */
+ li r11,SLB_VSID_USER
+#ifdef CONFIG_HUGETLB_PAGE
+BEGIN_FTR_SECTION
+ /* check against the hugepage ranges */
+ cmpldi r3,(TASK_HPAGE_END>>SID_SHIFT)
+ bge 6f /* >= TASK_HPAGE_END */
+ cmpldi r3,(TASK_HPAGE_BASE>>SID_SHIFT)
+ bge 5f /* TASK_HPAGE_BASE..TASK_HPAGE_END */
+ cmpldi r3,16
+ bge 6f /* 4GB..TASK_HPAGE_BASE */
+
+ lhz r9,PACAHTLBSEGS(r13)
+ srd r9,r9,r3
+ andi. r9,r9,1
+ beq 6f
+
+5: /* this is a hugepage user address */
+ li r11,(SLB_VSID_USER|SLB_VSID_L)
+END_FTR_SECTION_IFSET(CPU_FTR_16M_PAGE)
+#endif /* CONFIG_HUGETLB_PAGE */
+
+6: ld r9,PACACONTEXTID(r13)
+
+9: /* r9 = "context", r3 = esid, r11 = flags, r10 = entry */
+
+ rldimi r9,r3,15,0 /* r9= VSID ordinal */
+
+7: rldimi r10,r3,28,0 /* r10= ESID<<28 | entry */
+ oris r10,r10,SLB_ESID_V@h /* r10 |= SLB_ESID_V */
+
+ /* r9 = ordinal, r3 = esid, r11 = flags, r10 = esid_data */
+
+ li r3,VSID_RANDOMIZER@higher
+ sldi r3,r3,32
+ oris r3,r3,VSID_RANDOMIZER@h
+ ori r3,r3,VSID_RANDOMIZER@l
+
+ mulld r9,r3,r9 /* r9 = ordinal * VSID_RANDOMIZER */
+ clrldi r9,r9,28 /* r9 &= VSID_MASK */
+ sldi r9,r9,SLB_VSID_SHIFT /* r9 <<= SLB_VSID_SHIFT */
+ or r9,r9,r11 /* r9 |= flags */
+
+ /* r9 = vsid_data, r10 = esid_data, cr7 = <>KERNELBASE */
+
+ /*
+ * No need for an isync before or after this slbmte. The exception
+ * we enter with and the rfid we exit with are context synchronizing.
+ */
+ slbmte r9,r10
+
+ bgelr cr7 /* we're done for kernel addresses */
+
+ /* Update the slb cache */
+ lhz r3,PACASLBCACHEPTR(r13) /* offset = paca->slb_cache_ptr */
+ cmpldi r3,SLB_CACHE_ENTRIES
+ bge 1f
+
+ /* still room in the slb cache */
+ sldi r11,r3,1 /* r11 = offset * sizeof(u16) */
+ rldicl r10,r10,36,28 /* get low 16 bits of the ESID */
+ add r11,r11,r13 /* r11 = (u16 *)paca + offset */
+ sth r10,PACASLBCACHE(r11) /* paca->slb_cache[offset] = esid */
+ addi r3,r3,1 /* offset++ */
+ b 2f
+1: /* offset >= SLB_CACHE_ENTRIES */
+ li r3,SLB_CACHE_ENTRIES+1
+2:
+ sth r3,PACASLBCACHEPTR(r13) /* paca->slb_cache_ptr = offset */
+ blr
+
+8: /* invalid EA */
+ li r9,0 /* 0 VSID ordinal -> BAD_VSID */
+ li r11,SLB_VSID_USER /* flags don't much matter */
+ b 7b
--- /dev/null
+/*
+ * arch/s390/kernel/vtime.c
+ * Virtual cpu timer based timer functions.
+ *
+ * S390 version
+ * Copyright (C) 2004 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Jan Glauber <jan.glauber@de.ibm.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/types.h>
+#include <linux/timex.h>
+#include <linux/notifier.h>
+
+#include <asm/s390_ext.h>
+#include <asm/timer.h>
+
+#define VTIMER_MAGIC (0x4b87ad6e + 1)	/* arbitrary value used to detect initialized timers */
+static ext_int_info_t ext_int_info_timer;
+DEFINE_PER_CPU(struct vtimer_queue, virt_cpu_timer);
+
+void start_cpu_timer(void)
+{
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+ set_vtimer(vt_list->idle);
+}
+
+void stop_cpu_timer(void)
+{
+ __u64 done;
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+
+ /* nothing to do */
+ if (list_empty(&vt_list->list)) {
+ vt_list->idle = VTIMER_MAX_SLICE;
+ goto fire;
+ }
+
+ /* store progress */
+ asm volatile ("STPT %0" : "=m" (done));
+
+	/*
+	 * If done is negative we do not stop the CPU timer,
+	 * because we will instantly get an interrupt that
+	 * will start the CPU timer again.
+	 */
+	if (done & (1LL << 63))
+		return;
+	else
+		vt_list->offset += vt_list->to_expire - done;
+
+ /* save the actual expire value */
+ vt_list->idle = done;
+
+	/*
+	 * We cannot halt the CPU timer, so we write a value that
+	 * nearly never expires (only after about 71 years) and restore
+	 * the stored expire value when the timer is continued.
+	 */
+ fire:
+ set_vtimer(VTIMER_MAX_SLICE);
+}
+
+void set_vtimer(__u64 expires)
+{
+ asm volatile ("SPT %0" : : "m" (expires));
+
+ /* store expire time for this CPU timer */
+ per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
+}
+
+/*
+ * Sorted add to a list. List is linear searched until first bigger
+ * element is found.
+ */
+void list_add_sorted(struct vtimer_list *timer, struct list_head *head)
+{
+ struct vtimer_list *event;
+
+ list_for_each_entry(event, head, entry) {
+ if (event->expires > timer->expires) {
+ list_add_tail(&timer->entry, &event->entry);
+ return;
+ }
+ }
+ list_add_tail(&timer->entry, head);
+}
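+
+/*
+ * Worked example (illustrative): adding a timer with expires == 5 to a
+ * queue holding timers with expires 3, 7 and 9 links it between 3 and
+ * 7.  An equal key is queued after the existing entry, because only a
+ * strictly greater expiry terminates the scan:
+ *
+ *	timer->expires = 5;
+ *	list_add_sorted(timer, &vt_list->list);	(queue: 3, 5, 7, 9)
+ */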
+
+/*
+ * Do the callback functions of expired vtimer events.
+ * Called from within the interrupt handler.
+ */
+static void do_callbacks(struct list_head *cb_list, struct pt_regs *regs)
+{
+ struct vtimer_queue *vt_list;
+ struct vtimer_list *event, *tmp;
+ void (*fn)(unsigned long, struct pt_regs*);
+ unsigned long data;
+
+ if (list_empty(cb_list))
+ return;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+
+ list_for_each_entry_safe(event, tmp, cb_list, entry) {
+ fn = event->function;
+ data = event->data;
+ fn(data, regs);
+
+ if (!event->interval)
+ /* delete one shot timer */
+ list_del_init(&event->entry);
+ else {
+ /* move interval timer back to list */
+ spin_lock(&vt_list->lock);
+ list_del_init(&event->entry);
+ list_add_sorted(event, &vt_list->list);
+ spin_unlock(&vt_list->lock);
+ }
+ }
+}
+
+/*
+ * Handler for the virtual CPU timer.
+ */
+static void do_cpu_timer_interrupt(struct pt_regs *regs, __u16 error_code)
+{
+ int cpu;
+ __u64 next, delta;
+ struct vtimer_queue *vt_list;
+ struct vtimer_list *event, *tmp;
+ struct list_head *ptr;
+ /* the callback queue */
+ struct list_head cb_list;
+
+ INIT_LIST_HEAD(&cb_list);
+ cpu = smp_processor_id();
+ vt_list = &per_cpu(virt_cpu_timer, cpu);
+
+ /* walk timer list, fire all expired events */
+ spin_lock(&vt_list->lock);
+
+ if (vt_list->to_expire < VTIMER_MAX_SLICE)
+ vt_list->offset += vt_list->to_expire;
+
+ list_for_each_entry_safe(event, tmp, &vt_list->list, entry) {
+ if (event->expires > vt_list->offset)
+ /* found first unexpired event, leave */
+ break;
+
+ /* re-charge interval timer, we have to add the offset */
+ if (event->interval)
+ event->expires = event->interval + vt_list->offset;
+
+ /* move expired timer to the callback queue */
+ list_move_tail(&event->entry, &cb_list);
+ }
+ spin_unlock(&vt_list->lock);
+ do_callbacks(&cb_list, regs);
+
+ /* next event is first in list */
+ spin_lock(&vt_list->lock);
+ if (!list_empty(&vt_list->list)) {
+ ptr = vt_list->list.next;
+ event = list_entry(ptr, struct vtimer_list, entry);
+ next = event->expires - vt_list->offset;
+
+ /* add the expired time from this interrupt handler
+ * and the callback functions
+ */
+		asm volatile ("STPT %0" : "=m" (delta));
+		/* the timer ran past zero; compute the elapsed time as -delta */
+		delta = 0xffffffffffffffffLL - delta + 1;
+		vt_list->offset += delta;
+ next -= delta;
+ } else {
+ vt_list->offset = 0;
+ next = VTIMER_MAX_SLICE;
+ }
+ spin_unlock(&vt_list->lock);
+ set_vtimer(next);
+}
+
+void init_virt_timer(struct vtimer_list *timer)
+{
+ timer->magic = VTIMER_MAGIC;
+ timer->function = NULL;
+ INIT_LIST_HEAD(&timer->entry);
+ spin_lock_init(&timer->lock);
+}
+EXPORT_SYMBOL(init_virt_timer);
+
+static inline int check_vtimer(struct vtimer_list *timer)
+{
+ if (timer->magic != VTIMER_MAGIC)
+ return -EINVAL;
+ return 0;
+}
+
+static inline int vtimer_pending(struct vtimer_list *timer)
+{
+ return (!list_empty(&timer->entry));
+}
+
+/*
+ * this function should only run on the specified CPU
+ */
+static void internal_add_vtimer(struct vtimer_list *timer)
+{
+ unsigned long flags;
+ __u64 done;
+ struct vtimer_list *event;
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, timer->cpu);
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ if (timer->cpu != smp_processor_id())
+		printk("internal_add_vtimer: BUG, running on wrong CPU\n");
+
+ /* if list is empty we only have to set the timer */
+ if (list_empty(&vt_list->list)) {
+ /* reset the offset, this may happen if the last timer was
+ * just deleted by mod_virt_timer and the interrupt
+ * didn't happen until here
+ */
+ vt_list->offset = 0;
+ goto fire;
+ }
+
+ /* save progress */
+ asm volatile ("STPT %0" : "=m" (done));
+
+ /* calculate completed work */
+ done = vt_list->to_expire - done + vt_list->offset;
+ vt_list->offset = 0;
+
+ list_for_each_entry(event, &vt_list->list, entry)
+ event->expires -= done;
+
+ fire:
+ list_add_sorted(timer, &vt_list->list);
+
+ /* get first element, which is the next vtimer slice */
+ event = list_entry(vt_list->list.next, struct vtimer_list, entry);
+
+ set_vtimer(event->expires);
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+	/* release CPU acquired in prepare_vtimer or mod_virt_timer() */
+ put_cpu();
+}
+
+static inline int prepare_vtimer(struct vtimer_list *timer)
+{
+ if (check_vtimer(timer) || !timer->function) {
+ printk("add_virt_timer: uninitialized timer\n");
+ return -EINVAL;
+ }
+
+ if (!timer->expires || timer->expires > VTIMER_MAX_SLICE) {
+ printk("add_virt_timer: invalid timer expire value!\n");
+ return -EINVAL;
+ }
+
+ if (vtimer_pending(timer)) {
+ printk("add_virt_timer: timer pending\n");
+ return -EBUSY;
+ }
+
+ timer->cpu = get_cpu();
+ return 0;
+}
+
+/*
+ * add_virt_timer - add a one-shot virtual CPU timer
+ */
+void add_virt_timer(void *new)
+{
+ struct vtimer_list *timer;
+
+ timer = (struct vtimer_list *)new;
+
+ if (prepare_vtimer(timer) < 0)
+ return;
+
+ timer->interval = 0;
+ internal_add_vtimer(timer);
+}
+EXPORT_SYMBOL(add_virt_timer);
+
+/*
+ * add_virt_timer_periodic - add an interval virtual CPU timer
+ */
+void add_virt_timer_periodic(void *new)
+{
+ struct vtimer_list *timer;
+
+ timer = (struct vtimer_list *)new;
+
+ if (prepare_vtimer(timer) < 0)
+ return;
+
+ timer->interval = timer->expires;
+ internal_add_vtimer(timer);
+}
+EXPORT_SYMBOL(add_virt_timer_periodic);
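+
+/*
+ * Usage sketch (illustrative only; my_timer, my_tick and MY_INTERVAL
+ * are hypothetical names).  A caller would set up a periodic vtimer
+ * roughly like this:
+ *
+ *	static struct vtimer_list my_timer;
+ *
+ *	static void my_tick(unsigned long data, struct pt_regs *regs)
+ *	{
+ *		... runs from the CPU timer interrupt ...
+ *	}
+ *
+ *	init_virt_timer(&my_timer);
+ *	my_timer.function = my_tick;
+ *	my_timer.data = 0;
+ *	my_timer.expires = MY_INTERVAL;	(first expiry, also the period)
+ *	add_virt_timer_periodic(&my_timer);
+ */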
+
+/*
+ * If we change a pending timer the function must be called on the CPU
+ * where the timer is running on, e.g. by smp_call_function_on()
+ *
+ * The original mod_timer adds the timer if it is not pending.  For compatibility
+ * we do the same.  The timer will be added on the current CPU as a one-shot timer.
+ *
+ * returns whether it has modified a pending timer (1) or not (0)
+ */
+int mod_virt_timer(struct vtimer_list *timer, __u64 expires)
+{
+ struct vtimer_queue *vt_list;
+ unsigned long flags;
+ int cpu;
+
+ if (check_vtimer(timer) || !timer->function) {
+ printk("mod_virt_timer: uninitialized timer\n");
+ return -EINVAL;
+ }
+
+ if (!expires || expires > VTIMER_MAX_SLICE) {
+ printk("mod_virt_timer: invalid expire range\n");
+ return -EINVAL;
+ }
+
+	/*
+	 * This is a common optimization triggered by the
+	 * networking code - if the timer is re-modified
+	 * with the same expiry value then just return:
+	 */
+ if (timer->expires == expires && vtimer_pending(timer))
+ return 1;
+
+ cpu = get_cpu();
+ vt_list = &per_cpu(virt_cpu_timer, cpu);
+
+ /* disable interrupts before test if timer is pending */
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ /* if timer isn't pending add it on the current CPU */
+ if (!vtimer_pending(timer)) {
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ /* we do not activate an interval timer with mod_virt_timer */
+ timer->interval = 0;
+ timer->expires = expires;
+ timer->cpu = cpu;
+ internal_add_vtimer(timer);
+ return 0;
+ }
+
+ /* check if we run on the right CPU */
+ if (timer->cpu != cpu) {
+ printk("mod_virt_timer: running on wrong CPU, check your code\n");
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ put_cpu();
+ return -EINVAL;
+ }
+
+ list_del_init(&timer->entry);
+ timer->expires = expires;
+
+ /* also change the interval if we have an interval timer */
+ if (timer->interval)
+ timer->interval = expires;
+
+ /* the timer can't expire anymore so we can release the lock */
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ internal_add_vtimer(timer);
+ return 1;
+}
+EXPORT_SYMBOL(mod_virt_timer);
+
+/*
+ * delete a virtual timer
+ *
+ * returns whether the deleted timer was pending (1) or not (0)
+ */
+int del_virt_timer(struct vtimer_list *timer)
+{
+ unsigned long flags;
+ struct vtimer_queue *vt_list;
+
+ if (check_vtimer(timer)) {
+ printk("del_virt_timer: timer not initialized\n");
+ return -EINVAL;
+ }
+
+ /* check if timer is pending */
+ if (!vtimer_pending(timer))
+ return 0;
+
+ vt_list = &per_cpu(virt_cpu_timer, timer->cpu);
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ /* we don't interrupt a running timer, just let it expire! */
+ list_del_init(&timer->entry);
+
+ /* last timer removed */
+ if (list_empty(&vt_list->list)) {
+ vt_list->to_expire = 0;
+ vt_list->offset = 0;
+ }
+
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ return 1;
+}
+EXPORT_SYMBOL(del_virt_timer);
+
+/*
+ * Start the virtual CPU timer on the current CPU.
+ */
+void init_cpu_vtimer(void)
+{
+ struct vtimer_queue *vt_list;
+ unsigned long cr0;
+ __u64 timer;
+
+ /* kick the virtual timer */
+ timer = VTIMER_MAX_SLICE;
+ asm volatile ("SPT %0" : : "m" (timer));
+ __ctl_store(cr0, 0, 0);
+	cr0 |= 0x400;		/* enable the CPU timer external interrupt subclass */
+ __ctl_load(cr0, 0, 0);
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+ INIT_LIST_HEAD(&vt_list->list);
+ spin_lock_init(&vt_list->lock);
+ vt_list->to_expire = 0;
+ vt_list->offset = 0;
+ vt_list->idle = 0;
+}
+
+static int vtimer_idle_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ switch (action) {
+ case CPU_IDLE:
+ stop_cpu_timer();
+ break;
+ case CPU_NOT_IDLE:
+ start_cpu_timer();
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block vtimer_idle_nb = {
+ .notifier_call = vtimer_idle_notify,
+};
+
+void __init vtime_init(void)
+{
+ /* request the cpu timer external interrupt */
+ if (register_early_external_interrupt(0x1005, do_cpu_timer_interrupt,
+ &ext_int_info_timer) != 0)
+ panic("Couldn't request external interrupt 0x1005");
+
+ if (register_idle_notifier(&vtimer_idle_nb))
+ panic("Couldn't register idle notifier");
+
+ init_cpu_vtimer();
+}
+
--- /dev/null
+/*
+ * arch/s390/lib/string.c
+ * Optimized string functions
+ *
+ * S390 version
+ * Copyright (C) 2004 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com)
+ */
+
+#define IN_ARCH_STRING_C 1
+
+#include <linux/types.h>
+#include <linux/module.h>
+
+/*
+ * Helper functions to find the end of a string
+ */
+static inline char *__strend(const char *s)
+{
+ register unsigned long r0 asm("0") = 0;
+
+ asm volatile ("0: srst %0,%1\n"
+ " jo 0b"
+ : "+d" (r0), "+a" (s) : : "cc" );
+ return (char *) r0;
+}
+
+static inline char *__strnend(const char *s, size_t n)
+{
+ register unsigned long r0 asm("0") = 0;
+ const char *p = s + n;
+
+ asm volatile ("0: srst %0,%1\n"
+ " jo 0b"
+ : "+d" (p), "+a" (s) : "d" (r0) : "cc" );
+ return (char *) p;
+}
+
+/**
+ * strlen - Find the length of a string
+ * @s: The string to be sized
+ *
+ * returns the length of @s
+ */
+size_t strlen(const char *s)
+{
+ return __strend(s) - s;
+}
+EXPORT_SYMBOL_NOVERS(strlen);
+
+/**
+ * strnlen - Find the length of a length-limited string
+ * @s: The string to be sized
+ * @n: The maximum number of bytes to search
+ *
+ * returns the minimum of the length of @s and @n
+ */
+size_t strnlen(const char *s, size_t n)
+{
+ return __strnend(s, n) - s;
+}
+EXPORT_SYMBOL_NOVERS(strnlen);
+
+/**
+ * strcpy - Copy a %NUL terminated string
+ * @dest: Where to copy the string to
+ * @src: Where to copy the string from
+ *
+ * returns a pointer to @dest
+ */
+char *strcpy(char *dest, const char *src)
+{
+ register int r0 asm("0") = 0;
+ char *ret = dest;
+
+ asm volatile ("0: mvst %0,%1\n"
+ " jo 0b"
+ : "+&a" (dest), "+&a" (src) : "d" (r0)
+ : "cc", "memory" );
+ return ret;
+}
+EXPORT_SYMBOL_NOVERS(strcpy);
+
+/**
+ * strlcpy - Copy a %NUL terminated string into a sized buffer
+ * @dest: Where to copy the string to
+ * @src: Where to copy the string from
+ * @size: size of destination buffer
+ *
+ * Compatible with *BSD: the result is always a valid
+ * NUL-terminated string that fits in the buffer (unless,
+ * of course, the buffer size is zero). It does not pad
+ * out the result like strncpy() does.
+ */
+size_t strlcpy(char *dest, const char *src, size_t size)
+{
+ size_t ret = __strend(src) - src;
+
+ if (size) {
+ size_t len = (ret >= size) ? size-1 : ret;
+ dest[len] = '\0';
+ __builtin_memcpy(dest, src, len);
+ }
+ return ret;
+}
+EXPORT_SYMBOL_NOVERS(strlcpy);
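+
+/*
+ * Worked example (illustrative): with a four byte buffer,
+ *
+ *	strlcpy(buf, "hello", 4)
+ *
+ * stores "hel" plus the terminating NUL and returns 5, the full source
+ * length, so truncation can be detected by checking ret >= size.
+ */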
+
+/**
+ * strncpy - Copy a length-limited, %NUL-terminated string
+ * @dest: Where to copy the string to
+ * @src: Where to copy the string from
+ * @n: The maximum number of bytes to copy
+ *
+ * The result is not %NUL-terminated if the source exceeds
+ * @n bytes.
+ */
+char *strncpy(char *dest, const char *src, size_t n)
+{
+ size_t len = __strnend(src, n) - src;
+ __builtin_memset(dest + len, 0, n - len);
+ __builtin_memcpy(dest, src, len);
+ return dest;
+}
+EXPORT_SYMBOL_NOVERS(strncpy);
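+
+/*
+ * Pitfall example (illustrative): strncpy(buf, "hello", 3) copies just
+ * 'h', 'e' and 'l'; since the source exceeds @n no NUL is written, so
+ * the caller must terminate buf before using it as a C string.
+ */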
+
+/**
+ * strcat - Append one %NUL-terminated string to another
+ * @dest: The string to be appended to
+ * @src: The string to append to it
+ *
+ * returns a pointer to @dest
+ */
+char *strcat(char *dest, const char *src)
+{
+ register int r0 asm("0") = 0;
+ unsigned long dummy;
+ char *ret = dest;
+
+ asm volatile ("0: srst %0,%1\n"
+ " jo 0b\n"
+ "1: mvst %0,%2\n"
+ " jo 1b"
+ : "=&a" (dummy), "+a" (dest), "+a" (src)
+ : "d" (r0), "0" (0UL) : "cc", "memory" );
+ return ret;
+}
+EXPORT_SYMBOL_NOVERS(strcat);
+
+/**
+ * strlcat - Append a length-limited, %NUL-terminated string to another
+ * @dest: The string to be appended to
+ * @src: The string to append to it
+ * @n: The size of the destination buffer.
+ */
+size_t strlcat(char *dest, const char *src, size_t n)
+{
+ size_t dsize = __strend(dest) - dest;
+ size_t len = __strend(src) - src;
+ size_t res = dsize + len;
+
+ if (dsize < n) {
+ dest += dsize;
+ n -= dsize;
+ if (len >= n)
+ len = n - 1;
+ dest[len] = '\0';
+ __builtin_memcpy(dest, src, len);
+ }
+ return res;
+}
+EXPORT_SYMBOL_NOVERS(strlcat);
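+
+/*
+ * Worked example (illustrative): with char buf[8] holding "foo",
+ *
+ *	strlcat(buf, "barbaz", sizeof(buf))
+ *
+ * leaves the NUL-terminated string "foobarb" in buf and returns 9, the
+ * length the concatenation would have had without truncation.
+ */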
+
+/**
+ * strncat - Append a length-limited, %NUL-terminated string to another
+ * @dest: The string to be appended to
+ * @src: The string to append to it
+ * @n: The maximum numbers of bytes to copy
+ *
+ * returns a pointer to @dest
+ *
+ * Note that in contrast to strncpy, strncat ensures the result is
+ * terminated.
+ */
+char *strncat(char *dest, const char *src, size_t n)
+{
+ size_t len = __strnend(src, n) - src;
+ char *p = __strend(dest);
+
+ p[len] = '\0';
+ __builtin_memcpy(p, src, len);
+ return dest;
+}
+EXPORT_SYMBOL_NOVERS(strncat);
+
+/**
+ * strcmp - Compare two strings
+ * @cs: One string
+ * @ct: Another string
+ *
+ * returns 0 if @cs and @ct are equal,
+ * < 0 if @cs is less than @ct
+ * > 0 if @cs is greater than @ct
+ */
+int strcmp(const char *cs, const char *ct)
+{
+ register int r0 asm("0") = 0;
+ int ret = 0;
+
+ asm volatile ("0: clst %2,%3\n"
+ " jo 0b\n"
+ " je 1f\n"
+ " ic %0,0(%2)\n"
+ " ic %1,0(%3)\n"
+ " sr %0,%1\n"
+ "1:"
+ : "+d" (ret), "+d" (r0), "+a" (cs), "+a" (ct)
+ : : "cc" );
+ return ret;
+}
+EXPORT_SYMBOL_NOVERS(strcmp);
+
+/**
+ * strrchr - Find the last occurrence of a character in a string
+ * @s: The string to be searched
+ * @c: The character to search for
+ */
+char *strrchr(const char *s, int c)
+{
+	size_t len = __strend(s) - s;
+
+	do {
+		/* scan backwards from the terminating NUL down to s[0] */
+		if (s[len] == (char) c)
+			return (char *) s + len;
+	} while (len-- > 0);
+	return NULL;
+}
+EXPORT_SYMBOL_NOVERS(strrchr);
+
+/**
+ * strstr - Find the first substring in a %NUL terminated string
+ * @s1: The string to be searched
+ * @s2: The string to search for
+ */
+char *strstr(const char *s1, const char *s2)
+{
+ int l1, l2;
+
+ l2 = __strend(s2) - s2;
+ if (!l2)
+ return (char *) s1;
+ l1 = __strend(s1) - s1;
+ while (l1-- >= l2) {
+ register unsigned long r2 asm("2") = (unsigned long) s1;
+ register unsigned long r3 asm("3") = (unsigned long) l2;
+ register unsigned long r4 asm("4") = (unsigned long) s2;
+ register unsigned long r5 asm("5") = (unsigned long) l2;
+ int cc;
+
+ asm volatile ("0: clcle %1,%3,0\n"
+ " jo 0b\n"
+ " ipm %0\n"
+ " srl %0,28"
+ : "=&d" (cc), "+a" (r2), "+a" (r3),
+ "+a" (r4), "+a" (r5) : : "cc" );
+ if (!cc)
+ return (char *) s1;
+ s1++;
+ }
+	return NULL;
+}
+EXPORT_SYMBOL_NOVERS(strstr);
+
+/**
+ * memchr - Find a character in an area of memory.
+ * @s: The memory area
+ * @c: The byte to search for
+ * @n: The size of the area.
+ *
+ * returns the address of the first occurrence of @c, or %NULL
+ * if @c is not found
+ */
+void *memchr(const void *s, int c, size_t n)
+{
+ register int r0 asm("0") = (char) c;
+ const void *ret = s + n;
+
+ asm volatile ("0: srst %0,%1\n"
+ " jo 0b\n"
+ " jl 1f\n"
+ " la %0,0\n"
+ "1:"
+ : "+a" (ret), "+&a" (s) : "d" (r0) : "cc" );
+ return (void *) ret;
+}
+EXPORT_SYMBOL_NOVERS(memchr);
+
+/**
+ * memcmp - Compare two areas of memory
+ * @cs: One area of memory
+ * @ct: Another area of memory
+ * @n: The size of the area.
+ */
+int memcmp(const void *cs, const void *ct, size_t n)
+{
+ register unsigned long r2 asm("2") = (unsigned long) cs;
+ register unsigned long r3 asm("3") = (unsigned long) n;
+ register unsigned long r4 asm("4") = (unsigned long) ct;
+ register unsigned long r5 asm("5") = (unsigned long) n;
+ int ret;
+
+ asm volatile ("0: clcle %1,%3,0\n"
+ " jo 0b\n"
+ " ipm %0\n"
+ " srl %0,28"
+ : "=&d" (ret), "+a" (r2), "+a" (r3), "+a" (r4), "+a" (r5)
+ : : "cc" );
+ if (ret)
+ ret = *(char *) r2 - *(char *) r4;
+ return ret;
+}
+EXPORT_SYMBOL_NOVERS(memcmp);
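+
+/*
+ * Example (illustrative): on a mismatch the sign of the result is the
+ * sign of the first differing byte pair, so memcmp("abc", "abd", 3)
+ * is negative ('c' - 'd').
+ */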
+
+/**
+ * memscan - Find a character in an area of memory.
+ * @s: The memory area
+ * @c: The byte to search for
+ * @n: The size of the area.
+ *
+ * returns the address of the first occurrence of @c, or 1 byte past
+ * the area if @c is not found
+ */
+void *memscan(void *s, int c, size_t n)
+{
+ register int r0 asm("0") = (char) c;
+ const void *ret = s + n;
+
+ asm volatile ("0: srst %0,%1\n"
+ " jo 0b\n"
+ : "+a" (ret), "+&a" (s) : "d" (r0) : "cc" );
+ return (void *) ret;
+}
+EXPORT_SYMBOL_NOVERS(memscan);
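+
+/*
+ * Contrast with memchr() (illustrative): on a miss memchr("abc", 'z', 3)
+ * returns NULL, whereas memscan("abc", 'z', 3) returns the address one
+ * byte past the area, i.e. s + 3.
+ */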
+
+/**
+ * memcpy - Copy one area of memory to another
+ * @dest: Where to copy to
+ * @src: Where to copy from
+ * @n: The size of the area.
+ *
+ * returns a pointer to @dest
+ */
+void *memcpy(void *dest, const void *src, size_t n)
+{
+ return __builtin_memcpy(dest, src, n);
+}
+EXPORT_SYMBOL_NOVERS(memcpy);
+
+/**
+ * bcopy - Copy one area of memory to another
+ * @src: Where to copy from
+ * @dest: Where to copy to
+ * @n: The size of the area.
+ *
+ * Note that this is the same as memcpy(), with the arguments reversed.
+ * memcpy() is the standard, bcopy() is a legacy BSD function.
+ */
+void bcopy(const void *srcp, void *destp, size_t n)
+{
+ __builtin_memcpy(destp, srcp, n);
+}
+EXPORT_SYMBOL_NOVERS(bcopy);
+
+/**
+ * memset - Fill a region of memory with the given value
+ * @s: Pointer to the start of the area.
+ * @c: The byte to fill the area with
+ * @n: The size of the area.
+ *
+ * returns a pointer to @s
+ */
+void *memset(void *s, int c, size_t n)
+{
+ char *xs;
+
+ if (c == 0)
+ return __builtin_memset(s, 0, n);
+
+ xs = (char *) s;
+ if (n > 0)
+ do {
+ *xs++ = c;
+ } while (--n > 0);
+ return s;
+}
+EXPORT_SYMBOL_NOVERS(memset);
+
+/*
+ * missing exports for string functions defined in lib/string.c
+ */
+EXPORT_SYMBOL_NOVERS(memmove);
+EXPORT_SYMBOL_NOVERS(strchr);
+EXPORT_SYMBOL_NOVERS(strnchr);
+EXPORT_SYMBOL_NOVERS(strncmp);
+EXPORT_SYMBOL_NOVERS(strpbrk);
--- /dev/null
+#
+# Makefile for the HS7751RVoIP specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+obj-y := mach.o setup.o io.o irq.o led.o
+
+obj-$(CONFIG_PCI) += pci.o
+
--- /dev/null
+/*
+ * linux/arch/sh/kernel/io_hs7751rvoip.c
+ *
+ * Copyright (C) 2001 Ian da Silva, Jeremy Siegel
+ * Based largely on io_se.c.
+ *
+ * I/O routine for Renesas Technology sales HS7751RVoIP
+ *
+ * Initial version only to support LAN access; some
+ * placeholder code from io_se.c left in with the
+ * expectation of later SuperIO and PCMCIA access.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <asm/io.h>
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+#include <asm/addrspace.h>
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include "../../../drivers/pci/pci-sh7751.h"
+
+extern void *area5_io8_base; /* Area 5 8bit I/O Base address */
+extern void *area6_io8_base; /* Area 6 8bit I/O Base address */
+extern void *area5_io16_base; /* Area 5 16bit I/O Base address */
+extern void *area6_io16_base; /* Area 6 16bit I/O Base address */
+
+/*
+ * The 7751R HS7751RVoIP uses the built-in PCI controller (PCIC)
+ * of the 7751R processor, and has a SuperIO accessible via the PCI.
+ * The board also includes a PCMCIA controller on its memory bus,
+ * like the other Solution Engine boards.
+ */
+
+#define PCIIOBR (volatile long *)PCI_REG(SH7751_PCIIOBR)
+#define PCIMBR (volatile long *)PCI_REG(SH7751_PCIMBR)
+#define PCI_IO_AREA SH7751_PCI_IO_BASE
+#define PCI_MEM_AREA SH7751_PCI_CONFIG_BASE
+
+#define PCI_IOMAP(adr) (PCI_IO_AREA + (adr & ~SH7751_PCIIOBR_MASK))
+
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+#define CODEC_IO_BASE 0x1000
+#endif
+
+#define maybebadio(name,port) \
+ printk("bad PC-like io %s for port 0x%lx at 0x%08x\n", \
+ #name, (port), (__u32) __builtin_return_address(0))
+
+static inline void delay(void)
+{
+ ctrl_inw(0xa0000000);
+}
+
+static inline unsigned long port2adr(unsigned int port)
+{
+ if ((0x1f0 <= port && port < 0x1f8) || port == 0x3f6)
+ if (port == 0x3f6)
+ return ((unsigned long)area5_io16_base + 0x0c);
+ else
+ return ((unsigned long)area5_io16_base + 0x800 + ((port-0x1f0) << 1));
+ else
+ maybebadio(port2adr, (unsigned long)port);
+ return port;
+}
+
+/* The 7751R HS7751RVoIP seems to have everything hooked */
+/* up pretty normally (nothing on high-bytes only...) so this */
+/* shouldn't be needed */
+static inline int shifted_port(unsigned long port)
+{
+ /* For IDE registers, value is not shifted */
+ if ((0x1f0 <= port && port < 0x1f8) || port == 0x3f6)
+ return 0;
+ else
+ return 1;
+}
+
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+static inline int
+codec_port(unsigned long port)
+{
+ if (CODEC_IO_BASE <= port && port < (CODEC_IO_BASE+0x20))
+ return 1;
+ else
+ return 0;
+}
+#endif
+
+/* In case someone configures the kernel w/o PCI support: in that */
+/* scenario, don't ever bother to check for PCI-window addresses */
+
+/* NOTE: WINDOW CHECK MAY BE A BIT OFF, HIGH PCIBIOS_MIN_IO WRAPS? */
+#if defined(CONFIG_PCI)
+#define CHECK_SH7751_PCIIO(port) \
+ ((port >= PCIBIOS_MIN_IO) && (port < (PCIBIOS_MIN_IO + SH7751_PCI_IO_SIZE)))
+#else
+#define CHECK_SH7751_PCIIO(port) (0)
+#endif
+
+/*
+ * General outline: remap really low stuff [eventually] to SuperIO,
+ * stuff in PCI IO space (at or above window at pci.h:PCIBIOS_MIN_IO)
+ * is mapped through the PCI IO window. Stuff with high bits (PXSEG)
+ * should be way beyond the window, and is used w/o translation for
+ * compatibility.
+ */
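+/*
+ * Decode example (illustrative): an address with PXSEG bits set is
+ * dereferenced directly; a port at or above PCIBIOS_MIN_IO goes through
+ * PCI_IOMAP(); an IDE register port such as 0x1f0 falls through to
+ * port2adr() and is accessed in area 5.
+ */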
+unsigned char hs7751rvoip_inb(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned char *)port;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+ else if (codec_port(port))
+ return *(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE));
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned char *)PCI_IOMAP(port);
+ else
+ return (*(volatile unsigned short *)port2adr(port) & 0xff);
+}
+
+unsigned char hs7751rvoip_inb_p(unsigned long port)
+{
+ unsigned char v;
+
+ if (PXSEG(port))
+ v = *(volatile unsigned char *)port;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+ else if (codec_port(port))
+ v = *(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE));
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ v = *(volatile unsigned char *)PCI_IOMAP(port);
+ else
+ v = (*(volatile unsigned short *)port2adr(port) & 0xff);
+ delay();
+ return v;
+}
+
+unsigned short hs7751rvoip_inw(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned short *)PCI_IOMAP(port);
+ else
+ maybebadio(inw, port);
+ return 0;
+}
+
+unsigned int hs7751rvoip_inl(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned long *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned long *)PCI_IOMAP(port);
+ else
+ maybebadio(inl, port);
+ return 0;
+}
+
+void hs7751rvoip_outb(unsigned char value, unsigned long port)
+{
+	if (PXSEG(port))
+		*(volatile unsigned char *)port = value;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+	else if (codec_port(port))
+		*(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE)) = value;
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(unsigned char *)PCI_IOMAP(port) = value;
+ else
+ *(volatile unsigned short *)port2adr(port) = value;
+}
+
+void hs7751rvoip_outb_p(unsigned char value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned char *)port = value;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+	else if (codec_port(port))
+		*(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE)) = value;
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(unsigned char *)PCI_IOMAP(port) = value;
+ else
+ *(volatile unsigned short *)port2adr(port) = value;
+ delay();
+}
+
+void hs7751rvoip_outw(unsigned short value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned short *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(unsigned short *)PCI_IOMAP(port) = value;
+ else
+ maybebadio(outw, port);
+}
+
+void hs7751rvoip_outl(unsigned int value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned long *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *((unsigned long *)PCI_IOMAP(port)) = value;
+ else
+ maybebadio(outl, port);
+}
+
+void hs7751rvoip_insb(unsigned long port, void *addr, unsigned long count)
+{
+ if (PXSEG(port))
+ while (count--) *((unsigned char *) addr)++ = *(volatile unsigned char *)port;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+ else if (codec_port(port))
+ while (count--) *((unsigned char *) addr)++ = *(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE));
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u8 *bp = (__u8 *)PCI_IOMAP(port);
+
+ while (count--) *((volatile unsigned char *) addr)++ = *bp;
+ } else {
+ volatile __u16 *p = (volatile unsigned short *)port2adr(port);
+
+ while (count--) *((unsigned char *) addr)++ = *p & 0xff;
+ }
+}
+
+void hs7751rvoip_insw(unsigned long port, void *addr, unsigned long count)
+{
+ volatile __u16 *p;
+
+ if (PXSEG(port))
+ p = (volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ p = (volatile unsigned short *)PCI_IOMAP(port);
+ else
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *((__u16 *) addr)++ = *p;
+}
+
+void hs7751rvoip_insl(unsigned long port, void *addr, unsigned long count)
+{
+ if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u32 *p = (__u32 *)PCI_IOMAP(port);
+
+ while (count--) *((__u32 *) addr)++ = *p;
+ } else
+ maybebadio(insl, port);
+}
+
+void hs7751rvoip_outsb(unsigned long port, const void *addr, unsigned long count)
+{
+ if (PXSEG(port))
+ while (count--) *(volatile unsigned char *)port = *((unsigned char *) addr)++;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+ else if (codec_port(port))
+ while (count--) *(volatile unsigned char *)((unsigned long)area6_io8_base+(port-CODEC_IO_BASE)) = *((unsigned char *) addr)++;
+#endif
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u8 *bp = (__u8 *)PCI_IOMAP(port);
+
+ while (count--) *bp = *((volatile unsigned char *) addr)++;
+ } else {
+ volatile __u16 *p = (volatile unsigned short *)port2adr(port);
+
+ while (count--) *p = *((unsigned char *) addr)++;
+ }
+}
+
+void hs7751rvoip_outsw(unsigned long port, const void *addr, unsigned long count)
+{
+ volatile __u16 *p;
+
+ if (PXSEG(port))
+ p = (volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ p = (volatile unsigned short *)PCI_IOMAP(port);
+ else
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *p = *((__u16 *) addr)++;
+}
+
+void hs7751rvoip_outsl(unsigned long port, const void *addr, unsigned long count)
+{
+ if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u32 *p = (__u32 *)PCI_IOMAP(port);
+
+ while (count--) *p = *((__u32 *) addr)++;
+ } else
+ maybebadio(outsl, port);
+}
+
+void *hs7751rvoip_ioremap(unsigned long offset, unsigned long size)
+{
+ if (offset >= 0xfd000000)
+ return (void *)offset;
+ else
+ return (void *)P2SEGADDR(offset);
+}
+EXPORT_SYMBOL(hs7751rvoip_ioremap);
+
+unsigned long hs7751rvoip_isa_port2addr(unsigned long offset)
+{
+ return port2adr(offset);
+}
--- /dev/null
+/*
+ * linux/arch/sh/boards/renesas/hs7751rvoip/irq.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Renesas Technology Sales HS7751RVoIP Support.
+ *
+ * Modified for HS7751RVoIP by
+ * Atom Create Engineering Co., Ltd. 2002.
+ * Lineo uSolutions, Inc. 2003.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+
+static int mask_pos[] = {8, 9, 10, 11, 12, 13, 0, 1, 2, 3, 4, 5, 6, 7};
+
+static void enable_hs7751rvoip_irq(unsigned int irq);
+static void disable_hs7751rvoip_irq(unsigned int irq);
+
+/* shutdown is same as "disable" */
+#define shutdown_hs7751rvoip_irq disable_hs7751rvoip_irq
+
+static void ack_hs7751rvoip_irq(unsigned int irq);
+static void end_hs7751rvoip_irq(unsigned int irq);
+
+static unsigned int startup_hs7751rvoip_irq(unsigned int irq)
+{
+ enable_hs7751rvoip_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void disable_hs7751rvoip_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned short val;
+ unsigned short mask = 0xffff ^ (0x0001 << mask_pos[irq]);
+
+ /* Set the priority in IPR to 0 */
+ local_irq_save(flags);
+ val = ctrl_inw(IRLCNTR3);
+ val &= mask;
+ ctrl_outw(val, IRLCNTR3);
+ local_irq_restore(flags);
+}
+
+static void enable_hs7751rvoip_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned short val;
+ unsigned short value = (0x0001 << mask_pos[irq]);
+
+ /* Set priority in IPR back to original value */
+ local_irq_save(flags);
+ val = ctrl_inw(IRLCNTR3);
+ val |= value;
+ ctrl_outw(val, IRLCNTR3);
+ local_irq_restore(flags);
+}
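+
+/*
+ * Example (illustrative): for irq 0, mask_pos[0] == 8, so these two
+ * helpers clear or set bit 8 of IRLCNTR3 to mask or unmask that source.
+ */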
+
+static void ack_hs7751rvoip_irq(unsigned int irq)
+{
+ disable_hs7751rvoip_irq(irq);
+}
+
+static void end_hs7751rvoip_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_hs7751rvoip_irq(irq);
+}
+
+static struct hw_interrupt_type hs7751rvoip_irq_type = {
+ "HS7751RVoIP IRQ",
+ startup_hs7751rvoip_irq,
+ shutdown_hs7751rvoip_irq,
+ enable_hs7751rvoip_irq,
+ disable_hs7751rvoip_irq,
+ ack_hs7751rvoip_irq,
+ end_hs7751rvoip_irq,
+};
+
+static void make_hs7751rvoip_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ irq_desc[irq].handler = &hs7751rvoip_irq_type;
+ disable_hs7751rvoip_irq(irq);
+}
+
+/*
+ * Initialize IRQ setting
+ */
+void __init init_hs7751rvoip_IRQ(void)
+{
+ int i;
+
+ /* IRL0=ON HOOK1
+ * IRL1=OFF HOOK1
+ * IRL2=ON HOOK2
+ * IRL3=OFF HOOK2
+ * IRL4=Ringing Detection
+ * IRL5=CODEC
+ * IRL6=Ethernet
+ * IRL7=Ethernet Hub
+ * IRL8=USB Communication
+ * IRL9=USB Connection
+ * IRL10=USB DMA
+ * IRL11=CF Card
+ * IRL12=PCMCIA
+ * IRL13=PCI Slot
+ */
+ ctrl_outw(0x9876, IRLCNTR1);
+ ctrl_outw(0xdcba, IRLCNTR2);
+ ctrl_outw(0x0050, IRLCNTR4);
+ ctrl_outw(0x4321, IRLCNTR5);
+
+ for (i=0; i<14; i++)
+ make_hs7751rvoip_irq(i);
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/led_hs7751rvoip.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Renesas Technology Sales HS7751RVoIP Support.
+ *
+ * Modified for HS7751RVoIP by
+ * Atom Create Engineering Co., Ltd. 2002.
+ * Lineo uSolutions, Inc. 2003.
+ */
+
+#include <linux/config.h>
+#include <asm/io.h>
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+
+extern unsigned int debug_counter;
+
+void debug_led_disp(void)
+{
+ unsigned short value;
+
+ value = (unsigned char)debug_counter++;
+ ctrl_outb((0xf0|value), PA_OUTPORTR);
+ if (value == 0x0f)
+ debug_counter = 0;
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/mach_hs7751rvoip.c
+ *
+ * Minor tweak of mach_se.c file to reference hs7751rvoip-specific items.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Machine vector for the Renesas Technology sales HS7751RVoIP
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+
+#include <asm/machvec.h>
+#include <asm/rtc.h>
+#include <asm/irq.h>
+#include <asm/hs7751rvoip/io.h>
+
+extern void init_hs7751rvoip_IRQ(void);
+extern void *hs7751rvoip_ioremap(unsigned long, unsigned long);
+
+/*
+ * The Machine Vector
+ */
+
+struct sh_machine_vector mv_hs7751rvoip __initmv = {
+ .mv_nr_irqs = 72,
+
+ .mv_inb = hs7751rvoip_inb,
+ .mv_inw = hs7751rvoip_inw,
+ .mv_inl = hs7751rvoip_inl,
+ .mv_outb = hs7751rvoip_outb,
+ .mv_outw = hs7751rvoip_outw,
+ .mv_outl = hs7751rvoip_outl,
+
+ .mv_inb_p = hs7751rvoip_inb_p,
+ .mv_inw_p = hs7751rvoip_inw,
+ .mv_inl_p = hs7751rvoip_inl,
+ .mv_outb_p = hs7751rvoip_outb_p,
+ .mv_outw_p = hs7751rvoip_outw,
+ .mv_outl_p = hs7751rvoip_outl,
+
+ .mv_insb = hs7751rvoip_insb,
+ .mv_insw = hs7751rvoip_insw,
+ .mv_insl = hs7751rvoip_insl,
+ .mv_outsb = hs7751rvoip_outsb,
+ .mv_outsw = hs7751rvoip_outsw,
+ .mv_outsl = hs7751rvoip_outsl,
+
+ .mv_ioremap = hs7751rvoip_ioremap,
+ .mv_isa_port2addr = hs7751rvoip_isa_port2addr,
+ .mv_init_irq = init_hs7751rvoip_IRQ,
+};
+ALIAS_MV(hs7751rvoip)
--- /dev/null
+/*
+ * linux/arch/sh/kernel/pci-hs7751rvoip.c
+ *
+ * Author: Ian DaSilva (idasilva@mvista.com)
+ *
+ * Highly leveraged from pci-bigsur.c, written by Dustin McIntire.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * PCI initialization for the Renesas SH7751R HS7751RVoIP board
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+
+#include <asm/io.h>
+#include "../../../drivers/pci/pci-sh7751.h"
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+
+#define PCIMCR_MRSET_OFF 0xBFFFFFFF
+#define PCIMCR_RFSH_OFF 0xFFFFFFFB
+
+/*
+ * Only long word accesses of the PCIC's internal local registers and the
+ * configuration registers from the CPU are supported.
+ */
+#define PCIC_WRITE(x,v) writel((v), PCI_REG(x))
+#define PCIC_READ(x) readl(PCI_REG(x))
+
+/*
+ * Description: This function sets up and initializes the pcic, sets
+ * up the BARs, maps the DRAM into the address space, etc.
+ */
+int __init pcibios_init_platform(void)
+{
+ unsigned long bcr1, wcr1, wcr2, wcr3, mcr;
+ unsigned short bcr2, bcr3;
+
+ /*
+ * Initialize the slave bus controller on the pcic. The values used
+ * here should not be hardcoded, but they should be taken from the bsc
+ * on the processor, to make this function as generic as possible.
+	 * (e.g. another sbc may use different SDRAM timing settings -- in order
+ * for the pcic to work, its settings need to be exactly the same.)
+ */
+ bcr1 = (*(volatile unsigned long *)(SH7751_BCR1));
+ bcr2 = (*(volatile unsigned short *)(SH7751_BCR2));
+ bcr3 = (*(volatile unsigned short *)(SH7751_BCR3));
+ wcr1 = (*(volatile unsigned long *)(SH7751_WCR1));
+ wcr2 = (*(volatile unsigned long *)(SH7751_WCR2));
+ wcr3 = (*(volatile unsigned long *)(SH7751_WCR3));
+ mcr = (*(volatile unsigned long *)(SH7751_MCR));
+
+ bcr1 = bcr1 | 0x00080000; /* Enable Bit 19, BREQEN */
+ (*(volatile unsigned long *)(SH7751_BCR1)) = bcr1;
+
+ bcr1 = bcr1 | 0x40080000; /* Enable Bit 19 BREQEN, set PCIC to slave */
+ PCIC_WRITE(SH7751_PCIBCR1, bcr1); /* PCIC BCR1 */
+ PCIC_WRITE(SH7751_PCIBCR2, bcr2); /* PCIC BCR2 */
+ PCIC_WRITE(SH7751_PCIBCR3, bcr3); /* PCIC BCR3 */
+ PCIC_WRITE(SH7751_PCIWCR1, wcr1); /* PCIC WCR1 */
+ PCIC_WRITE(SH7751_PCIWCR2, wcr2); /* PCIC WCR2 */
+ PCIC_WRITE(SH7751_PCIWCR3, wcr3); /* PCIC WCR3 */
+ mcr = (mcr & PCIMCR_MRSET_OFF) & PCIMCR_RFSH_OFF;
+ PCIC_WRITE(SH7751_PCIMCR, mcr); /* PCIC MCR */
+
+ /* Enable all interrupts, so we know what to fix */
+ PCIC_WRITE(SH7751_PCIINTM, 0x0000c3ff);
+ PCIC_WRITE(SH7751_PCIAINTM, 0x0000380f);
+
+ /* Set up standard PCI config registers */
+ PCIC_WRITE(SH7751_PCICONF1, 0xFB900047); /* Bus Master, Mem & I/O access */
+ PCIC_WRITE(SH7751_PCICONF2, 0x00000000); /* PCI Class code & Revision ID */
+ PCIC_WRITE(SH7751_PCICONF4, 0xab000001); /* PCI I/O address (local regs) */
+ PCIC_WRITE(SH7751_PCICONF5, 0x0c000000); /* PCI MEM address (local RAM) */
+ PCIC_WRITE(SH7751_PCICONF6, 0xd0000000); /* PCI MEM address (unused) */
+ PCIC_WRITE(SH7751_PCICONF11, 0x35051054); /* PCI Subsystem ID & Vendor ID */
+ PCIC_WRITE(SH7751_PCILSR0, 0x03f00000); /* MEM (full 64M exposed) */
+ PCIC_WRITE(SH7751_PCILSR1, 0x00000000); /* MEM (unused) */
+ PCIC_WRITE(SH7751_PCILAR0, 0x0c000000); /* MEM (direct map from PCI) */
+ PCIC_WRITE(SH7751_PCILAR1, 0x00000000); /* MEM (unused) */
+
+ /* Now turn it on... */
+ PCIC_WRITE(SH7751_PCICR, 0xa5000001);
+
+ /*
+ * Set PCIMBR and PCIIOBR here, assuming a single window
+ * (16M MEM, 256K IO) is enough. If a larger space is
+ * needed, the readx/writex and inx/outx functions will
+ * have to do more (e.g. setting registers for each call).
+ */
+
+ /*
+ * Set the MBR so PCI address is one-to-one with window,
+ * meaning all calls go straight through... use ifdef to
+ * catch erroneous assumption.
+ */
+ BUG_ON(PCIBIOS_MIN_MEM != SH7751_PCI_MEMORY_BASE);
+
+ PCIC_WRITE(SH7751_PCIMBR, PCIBIOS_MIN_MEM);
+
+ /* Set IOBR for window containing area specified in pci.h */
+ PCIC_WRITE(SH7751_PCIIOBR, (PCIBIOS_MIN_IO & SH7751_PCIIOBR_MASK));
+
+ /* All done, may as well say so... */
+ printk("SH7751R PCI: Finished initialization of the PCI controller\n");
+
+ return 1;
+}
+
+int __init pcibios_map_platform_irq(u8 slot, u8 pin)
+{
+ switch (slot) {
+ case 0: return IRQ_PCISLOT; /* PCI Extend slot */
+ case 1: return IRQ_PCMCIA; /* PCI Cardbus Bridge */
+ case 2: return IRQ_PCIETH; /* Realtek Ethernet controller */
+ case 3: return IRQ_PCIHUB; /* Realtek Ethernet Hub controller */
+ default:
+ printk("PCI: Bad IRQ mapping request for slot %d\n", slot);
+ return -1;
+ }
+}
+
+static struct resource sh7751_io_resource = {
+ .name = "SH7751_IO",
+ .start = 0x4000,
+ .end = 0x4000 + SH7751_PCI_IO_SIZE - 1,
+ .flags = IORESOURCE_IO
+};
+
+static struct resource sh7751_mem_resource = {
+ .name = "SH7751_mem",
+ .start = SH7751_PCI_MEMORY_BASE,
+ .end = SH7751_PCI_MEMORY_BASE + SH7751_PCI_MEM_SIZE - 1,
+ .flags = IORESOURCE_MEM
+};
+
+extern struct pci_ops sh7751_pci_ops;
+
+struct pci_channel board_pci_channels[] = {
+ { &sh7751_pci_ops, &sh7751_io_resource, &sh7751_mem_resource, 0, 0xff },
+ { NULL, NULL, NULL, 0, 0 },
+};
+EXPORT_SYMBOL(board_pci_channels);
--- /dev/null
+/*
+ * linux/arch/sh/kernel/setup_hs7751rvoip.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Renesas Technology Sales HS7751RVoIP Support.
+ *
+ * Modified for HS7751RVoIP by
+ * Atom Create Engineering Co., Ltd. 2002.
+ * Lineo uSolutions, Inc. 2003.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <linux/hdreg.h>
+#include <linux/ide.h>
+#include <asm/io.h>
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+
+/* defined in mm/ioremap.c */
+extern void * p3_ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags);
+
+unsigned int debug_counter;
+
+const char *get_system_type(void)
+{
+ return "HS7751RVoIP";
+}
+
+/*
+ * Initialize the board
+ */
+void __init platform_setup(void)
+{
+ printk(KERN_INFO "Renesas Technology Sales HS7751RVoIP-2 support.\n");
+ ctrl_outb(0xf0, PA_OUTPORTR);
+ debug_counter = 0;
+}
+
+void *area5_io8_base;
+void *area6_io8_base;
+void *area5_io16_base;
+void *area6_io16_base;
+
+int __init cf_init(void)
+{
+ pgprot_t prot;
+ unsigned long paddrbase, psize;
+
+ /* open I/O area window */
+ paddrbase = virt_to_phys((void *)(PA_AREA5_IO+0x00000800));
+ psize = PAGE_SIZE;
+ prot = PAGE_KERNEL_PCC(1, _PAGE_PCC_COM16);
+ area5_io16_base = p3_ioremap(paddrbase, psize, prot.pgprot);
+ if (!area5_io16_base) {
+ printk("allocate_cf_area : can't open CF I/O window!\n");
+ return -ENOMEM;
+ }
+
+ /* XXX : do we need attribute and common-memory area also? */
+
+ paddrbase = virt_to_phys((void *)PA_AREA6_IO);
+ psize = PAGE_SIZE;
+#if defined(CONFIG_HS7751RVOIP_CODEC)
+ prot = PAGE_KERNEL_PCC(0, _PAGE_PCC_COM8);
+#else
+ prot = PAGE_KERNEL_PCC(0, _PAGE_PCC_IO8);
+#endif
+ area6_io8_base = p3_ioremap(paddrbase, psize, prot.pgprot);
+ if (!area6_io8_base) {
+ printk("allocate_cf_area : can't open CODEC I/O 8bit window!\n");
+ return -ENOMEM;
+ }
+ prot = PAGE_KERNEL_PCC(0, _PAGE_PCC_IO16);
+ area6_io16_base = p3_ioremap(paddrbase, psize, prot.pgprot);
+ if (!area6_io16_base) {
+ printk("allocate_cf_area : can't open CODEC I/O 16bit window!\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+__initcall (cf_init);
--- /dev/null
+#
+# Makefile for the RTS7751R2D specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+obj-y := mach.o setup.o io.o irq.o led.o
+
--- /dev/null
+/*
+ * linux/arch/sh/kernel/io_rts7751r2d.c
+ *
+ * Copyright (C) 2001 Ian da Silva, Jeremy Siegel
+ * Based largely on io_se.c.
+ *
+ * I/O routine for Renesas Technology sales RTS7751R2D.
+ *
+ * Initial version only to support LAN access; some
+ * placeholder code from io_se.c left in with the
+ * expectation of later SuperIO and PCMCIA access.
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <asm/io.h>
+#include <asm/rts7751r2d/rts7751r2d.h>
+#include <asm/addrspace.h>
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include "../../../drivers/pci/pci-sh7751.h"
+
+/*
+ * The 7751R RTS7751R2D uses the built-in PCI controller (PCIC)
+ * of the 7751R processor, and has a SuperIO accessible via the PCI.
+ * The board also includes a PCMCIA controller on its memory bus,
+ * like the other Solution Engine boards.
+ */
+
+#define PCIIOBR (volatile long *)PCI_REG(SH7751_PCIIOBR)
+#define PCIMBR (volatile long *)PCI_REG(SH7751_PCIMBR)
+#define PCI_IO_AREA SH7751_PCI_IO_BASE
+#define PCI_MEM_AREA SH7751_PCI_CONFIG_BASE
+
+#define PCI_IOMAP(adr) (PCI_IO_AREA + (adr & ~SH7751_PCIIOBR_MASK))
+
+#define maybebadio(name,port) \
+ printk("bad PC-like io %s for port 0x%lx at 0x%08x\n", \
+ #name, (port), (__u32) __builtin_return_address(0))
+
+static inline void delay(void)
+{
+ ctrl_inw(0xa0000000);
+}
+
+static inline unsigned long port2adr(unsigned int port)
+{
+ if ((0x1f0 <= port && port < 0x1f8) || port == 0x3f6)
+ if (port == 0x3f6)
+ return (PA_AREA5_IO + 0x80c);
+ else
+ return (PA_AREA5_IO + 0x1000 + ((port-0x1f0) << 1));
+ else
+ maybebadio(port2adr, (unsigned long)port);
+
+ return port;
+}
+
+static inline unsigned long port88796l(unsigned int port, int flag)
+{
+ unsigned long addr;
+
+ if (flag)
+ addr = PA_AX88796L + ((port - AX88796L_IO_BASE) << 1);
+ else
+ addr = PA_AX88796L + ((port - AX88796L_IO_BASE) << 1) + 0x1000;
+
+ return addr;
+}
+
+/* The 7751R RTS7751R2D seems to have everything hooked */
+/* up pretty normally (nothing on high-bytes only...) so this */
+/* shouldn't be needed */
+static inline int shifted_port(unsigned long port)
+{
+ /* For IDE registers, value is not shifted */
+ if ((0x1f0 <= port && port < 0x1f8) || port == 0x3f6)
+ return 0;
+ else
+ return 1;
+}
+
+/* In case someone configures the kernel w/o PCI support: in that */
+/* scenario, don't ever bother to check for PCI-window addresses */
+
+/* NOTE: WINDOW CHECK MAY BE A BIT OFF, HIGH PCIBIOS_MIN_IO WRAPS? */
+#if defined(CONFIG_PCI)
+#define CHECK_SH7751_PCIIO(port) \
+ ((port >= PCIBIOS_MIN_IO) && (port < (PCIBIOS_MIN_IO + SH7751_PCI_IO_SIZE)))
+#else
+#define CHECK_SH7751_PCIIO(port) (0)
+#endif
+
+#if defined(CONFIG_NE2000) || defined(CONFIG_NE2000_MODULE)
+#define CHECK_AX88796L_PORT(port) \
+ ((port >= AX88796L_IO_BASE) && (port < (AX88796L_IO_BASE+0x20)))
+#else
+#define CHECK_AX88796L_PORT(port) (0)
+#endif
+
+/*
+ * General outline: remap really low stuff [eventually] to SuperIO,
+ * stuff in PCI IO space (at or above window at pci.h:PCIBIOS_MIN_IO)
+ * is mapped through the PCI IO window. Stuff with high bits (PXSEG)
+ * should be way beyond the window, and is used w/o translation for
+ * compatibility.
+ */
+unsigned char rts7751r2d_inb(unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ return (*(volatile unsigned short *)port88796l(port, 0)) & 0xff;
+ else if (PXSEG(port))
+ return *(volatile unsigned char *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned char *)PCI_IOMAP(port);
+ else
+ return (*(volatile unsigned short *)port2adr(port) & 0xff);
+}
+
+unsigned char rts7751r2d_inb_p(unsigned long port)
+{
+ unsigned char v;
+
+ if (CHECK_AX88796L_PORT(port))
+ v = (*(volatile unsigned short *)port88796l(port, 0)) & 0xff;
+ else if (PXSEG(port))
+ v = *(volatile unsigned char *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ v = *(volatile unsigned char *)PCI_IOMAP(port);
+ else
+ v = (*(volatile unsigned short *)port2adr(port) & 0xff);
+ delay();
+
+ return v;
+}
+
+unsigned short rts7751r2d_inw(unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(inw, port);
+ else if (PXSEG(port))
+ return *(volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned short *)PCI_IOMAP(port);
+ else
+ maybebadio(inw, port);
+
+ return 0;
+}
+
+unsigned int rts7751r2d_inl(unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(inl, port);
+ else if (PXSEG(port))
+ return *(volatile unsigned long *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ return *(volatile unsigned long *)PCI_IOMAP(port);
+ else
+ maybebadio(inl, port);
+
+ return 0;
+}
+
+void rts7751r2d_outb(unsigned char value, unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ *((volatile unsigned short *)port88796l(port, 0)) = value;
+ else if (PXSEG(port))
+ *(volatile unsigned char *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(volatile unsigned char *)PCI_IOMAP(port) = value;
+ else
+ *(volatile unsigned short *)port2adr(port) = value;
+}
+
+void rts7751r2d_outb_p(unsigned char value, unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ *((volatile unsigned short *)port88796l(port, 0)) = value;
+ else if (PXSEG(port))
+ *(volatile unsigned char *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(volatile unsigned char *)PCI_IOMAP(port) = value;
+ else
+ *(volatile unsigned short *)port2adr(port) = value;
+ delay();
+}
+
+void rts7751r2d_outw(unsigned short value, unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(outw, port);
+ else if (PXSEG(port))
+ *(volatile unsigned short *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(volatile unsigned short *)PCI_IOMAP(port) = value;
+ else
+ maybebadio(outw, port);
+}
+
+void rts7751r2d_outl(unsigned int value, unsigned long port)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(outl, port);
+ else if (PXSEG(port))
+ *(volatile unsigned long *)port = value;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ *(volatile unsigned long *)PCI_IOMAP(port) = value;
+ else
+ maybebadio(outl, port);
+}
+
+void rts7751r2d_insb(unsigned long port, void *addr, unsigned long count)
+{
+ volatile __u8 *bp;
+ volatile __u16 *p;
+
+ if (CHECK_AX88796L_PORT(port)) {
+ p = (volatile unsigned short *)port88796l(port, 0);
+ while (count--) *((unsigned char *) addr)++ = *p & 0xff;
+ } else if (PXSEG(port))
+ while (count--) *((unsigned char *) addr)++ = *(volatile unsigned char *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ bp = (__u8 *)PCI_IOMAP(port);
+ while (count--) *((volatile unsigned char *) addr)++ = *bp;
+ } else {
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *((unsigned char *) addr)++ = *p & 0xff;
+ }
+}
+
+void rts7751r2d_insw(unsigned long port, void *addr, unsigned long count)
+{
+ volatile __u16 *p;
+
+ if (CHECK_AX88796L_PORT(port))
+ p = (volatile unsigned short *)port88796l(port, 1);
+ else if (PXSEG(port))
+ p = (volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ p = (volatile unsigned short *)PCI_IOMAP(port);
+ else
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *((__u16 *) addr)++ = *p;
+}
+
+void rts7751r2d_insl(unsigned long port, void *addr, unsigned long count)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(insl, port);
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u32 *p = (__u32 *)PCI_IOMAP(port);
+
+ while (count--) *((__u32 *) addr)++ = *p;
+ } else
+ maybebadio(insl, port);
+}
+
+void rts7751r2d_outsb(unsigned long port, const void *addr, unsigned long count)
+{
+ volatile __u8 *bp;
+ volatile __u16 *p;
+
+ if (CHECK_AX88796L_PORT(port)) {
+ p = (volatile unsigned short *)port88796l(port, 0);
+ while (count--) *p = *((unsigned char *) addr)++;
+ } else if (PXSEG(port))
+ while (count--) *(volatile unsigned char *)port = *((unsigned char *) addr)++;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ bp = (__u8 *)PCI_IOMAP(port);
+ while (count--) *bp = *((volatile unsigned char *) addr)++;
+ } else {
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *p = *((unsigned char *) addr)++;
+ }
+}
+
+void rts7751r2d_outsw(unsigned long port, const void *addr, unsigned long count)
+{
+ volatile __u16 *p;
+
+ if (CHECK_AX88796L_PORT(port))
+ p = (volatile unsigned short *)port88796l(port, 1);
+ else if (PXSEG(port))
+ p = (volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port))
+ p = (volatile unsigned short *)PCI_IOMAP(port);
+ else
+ p = (volatile unsigned short *)port2adr(port);
+ while (count--) *p = *((__u16 *) addr)++;
+}
+
+void rts7751r2d_outsl(unsigned long port, const void *addr, unsigned long count)
+{
+ if (CHECK_AX88796L_PORT(port))
+ maybebadio(outsl, port);
+ else if (CHECK_SH7751_PCIIO(port) || shifted_port(port)) {
+ volatile __u32 *p = (__u32 *)PCI_IOMAP(port);
+
+ while (count--) *p = *((__u32 *) addr)++;
+ } else
+ maybebadio(outsl, port);
+}
+
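+/*
+ * Trivial ioremap(): addresses at or above 0xfd000000 (apparently
+ * already-uncacheable control/PCI register space) are passed through
+ * untouched, everything else is pushed into the uncached P2 segment.
+ * No page tables are set up, so there is nothing to undo on iounmap.
+ */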
+void *rts7751r2d_ioremap(unsigned long offset, unsigned long size)
+{
+ if (offset >= 0xfd000000)
+ return (void *)offset;
+ else
+ return (void *)P2SEGADDR(offset);
+}
+EXPORT_SYMBOL(rts7751r2d_ioremap);
+
+unsigned long rts7751r2d_isa_port2addr(unsigned long offset)
+{
+ return port2adr(offset);
+}
--- /dev/null
+/*
+ * linux/arch/sh/boards/renesas/rts7751r2d/irq.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Renesas Technology Sales RTS7751R2D Support.
+ *
+ * Modified for RTS7751R2D by
+ * Atom Create Engineering Co., Ltd. 2002.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/rts7751r2d/rts7751r2d.h>
+
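+/*
+ * mask_pos[] maps an IRL interrupt number to the bit controlling it in
+ * the IRLCNTR1 register; the two tables cover the Rev. 1.1 and earlier
+ * board wirings.  The trailing zeroes are presumably unwired lines.
+ */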
+#if defined(CONFIG_RTS7751R2D_REV11)
+static int mask_pos[] = {11, 9, 8, 12, 10, 6, 5, 4, 7, 14, 13, 0, 0, 0, 0};
+#else
+static int mask_pos[] = {6, 11, 9, 8, 12, 10, 5, 4, 7, 14, 13, 0, 0, 0, 0};
+#endif
+
+extern int voyagergx_irq_demux(int irq);
+extern void setup_voyagergx_irq(void);
+
+static void enable_rts7751r2d_irq(unsigned int irq);
+static void disable_rts7751r2d_irq(unsigned int irq);
+
+/* shutdown is same as "disable" */
+#define shutdown_rts7751r2d_irq disable_rts7751r2d_irq
+
+static void ack_rts7751r2d_irq(unsigned int irq);
+static void end_rts7751r2d_irq(unsigned int irq);
+
+static unsigned int startup_rts7751r2d_irq(unsigned int irq)
+{
+ enable_rts7751r2d_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void disable_rts7751r2d_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned short val;
+ unsigned short mask = 0xffff ^ (0x0001 << mask_pos[irq]);
+
+ /* Mask the line by clearing its bit in IRLCNTR1 */
+ local_irq_save(flags);
+ val = ctrl_inw(IRLCNTR1);
+ val &= mask;
+ ctrl_outw(val, IRLCNTR1);
+ local_irq_restore(flags);
+}
+
+static void enable_rts7751r2d_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned short val;
+ unsigned short value = (0x0001 << mask_pos[irq]);
+
+ /* Unmask the line by setting its bit in IRLCNTR1 again */
+ local_irq_save(flags);
+ val = ctrl_inw(IRLCNTR1);
+ val |= value;
+ ctrl_outw(val, IRLCNTR1);
+ local_irq_restore(flags);
+}
+
+int rts7751r2d_irq_demux(int irq)
+{
+ return voyagergx_irq_demux(irq);
+}
+
+static void ack_rts7751r2d_irq(unsigned int irq)
+{
+ disable_rts7751r2d_irq(irq);
+}
+
+static void end_rts7751r2d_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_rts7751r2d_irq(irq);
+}
+
+static struct hw_interrupt_type rts7751r2d_irq_type = {
+ "RTS7751R2D IRQ",
+ startup_rts7751r2d_irq,
+ shutdown_rts7751r2d_irq,
+ enable_rts7751r2d_irq,
+ disable_rts7751r2d_irq,
+ ack_rts7751r2d_irq,
+ end_rts7751r2d_irq,
+};
+
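+/*
+ * Attach our hw_interrupt_type to a line and leave it masked until a
+ * driver actually request_irq()s it.
+ */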
+static void make_rts7751r2d_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ irq_desc[irq].handler = &rts7751r2d_irq_type;
+ disable_rts7751r2d_irq(irq);
+}
+
+/*
+ * Initialize IRQ setting
+ */
+void __init init_rts7751r2d_IRQ(void)
+{
+ int i;
+
+ /* IRL0=KEY Input
+ * IRL1=Ethernet
+ * IRL2=CF Card
+ * IRL3=CF Card Insert
+ * IRL4=PCMCIA
+ * IRL5=VOYAGER
+ * IRL6=RTC Alarm
+ * IRL7=RTC Timer
+ * IRL8=SD Card
+ * IRL9=PCI Slot #1
+ * IRL10=PCI Slot #2
+ * IRL11=Extension #0
+ * IRL12=Extension #1
+ * IRL13=Extension #2
+ * IRL14=Extension #3
+ */
+
+ for (i=0; i<15; i++)
+ make_rts7751r2d_irq(i);
+
+ setup_voyagergx_irq();
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/led_rts7751r2d.c
+ *
+ * Copyright (C) Atom Create Engineering Co., Ltd.
+ *
+ * May be copied or modified under the terms of GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * This file contains Renesas Technology Sales RTS7751R2D specific LED code.
+ */
+
+#include <linux/config.h>
+#include <asm/io.h>
+#include <asm/rts7751r2d/rts7751r2d.h>
+
+extern unsigned int debug_counter;
+
+#ifdef CONFIG_HEARTBEAT
+
+#include <linux/sched.h>
+
+/* Cycle the LEDs in the classic Knight Rider/Sun pattern */
+void heartbeat_rts7751r2d(void)
+{
+ static unsigned int cnt = 0, period = 0;
+ volatile unsigned short *p = (volatile unsigned short *)PA_OUTPORT;
+ static unsigned bit = 0, up = 1;
+
+ cnt += 1;
+ if (cnt < period)
+ return;
+
+ cnt = 0;
+
+ /* Go through the points (roughly!):
+ * f(0)=10, f(1)=16, f(2)=20, f(5)=35, f(inf)->110
+ */
+ period = 110 - ((300 << FSHIFT)/((avenrun[0]/5) + (3<<FSHIFT)));
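+ /* e.g. when idle (avenrun[0] == 0): period = 110 - (300<<FSHIFT)/(3<<FSHIFT) = 110 - 100 = 10 */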
+
+ *p = 1 << bit;
+ if (up) {
+ if (bit == 7) {
+ bit--;
+ up = 0;
+ } else
+ bit++;
+ } else if (bit == 0)
+ up = 1;
+ else
+ bit--;
+}
+#endif /* CONFIG_HEARTBEAT */
+
+void rts7751r2d_led(unsigned short value)
+{
+ ctrl_outw(value, PA_OUTPORT);
+}
+
+void debug_led_disp(void)
+{
+ unsigned short value;
+
+ value = (unsigned short)debug_counter++;
+ rts7751r2d_led(value);
+ if (value == 0xff)
+ debug_counter = 0;
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/mach_rts7751r2d.c
+ *
+ * Minor tweak of mach_se.c file to reference rts7751r2d-specific items.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Machine vector for the Renesas Technology sales RTS7751R2D
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/types.h>
+
+#include <asm/machvec.h>
+#include <asm/rtc.h>
+#include <asm/irq.h>
+#include <asm/rts7751r2d/io.h>
+
+extern void heartbeat_rts7751r2d(void);
+extern void init_rts7751r2d_IRQ(void);
+extern void *rts7751r2d_ioremap(unsigned long, unsigned long);
+extern int rts7751r2d_irq_demux(int irq);
+
+extern void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, int);
+extern void voyagergx_consistent_free(struct device *, size_t, void *, dma_addr_t);
+
+/*
+ * The Machine Vector
+ */
+
+struct sh_machine_vector mv_rts7751r2d __initmv = {
+ .mv_nr_irqs = 72,
+
+ .mv_inb = rts7751r2d_inb,
+ .mv_inw = rts7751r2d_inw,
+ .mv_inl = rts7751r2d_inl,
+ .mv_outb = rts7751r2d_outb,
+ .mv_outw = rts7751r2d_outw,
+ .mv_outl = rts7751r2d_outl,
+
+ .mv_inb_p = rts7751r2d_inb_p,
+ .mv_inw_p = rts7751r2d_inw,
+ .mv_inl_p = rts7751r2d_inl,
+ .mv_outb_p = rts7751r2d_outb_p,
+ .mv_outw_p = rts7751r2d_outw,
+ .mv_outl_p = rts7751r2d_outl,
+
+ .mv_insb = rts7751r2d_insb,
+ .mv_insw = rts7751r2d_insw,
+ .mv_insl = rts7751r2d_insl,
+ .mv_outsb = rts7751r2d_outsb,
+ .mv_outsw = rts7751r2d_outsw,
+ .mv_outsl = rts7751r2d_outsl,
+
+ .mv_ioremap = rts7751r2d_ioremap,
+ .mv_isa_port2addr = rts7751r2d_isa_port2addr,
+ .mv_init_irq = init_rts7751r2d_IRQ,
+#ifdef CONFIG_HEARTBEAT
+ .mv_heartbeat = heartbeat_rts7751r2d,
+#endif
+ .mv_irq_demux = rts7751r2d_irq_demux,
+
+ .mv_consistent_alloc = voyagergx_consistent_alloc,
+ .mv_consistent_free = voyagergx_consistent_free,
+};
+ALIAS_MV(rts7751r2d)
--- /dev/null
+/*
+ * linux/arch/sh/kernel/setup_rts7751r2d.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Renesas Technology Sales RTS7751R2D Support.
+ *
+ * Modified for RTS7751R2D by
+ * Atom Create Engineering Co., Ltd. 2002.
+ */
+
+#include <linux/init.h>
+#include <asm/io.h>
+#include <asm/rts7751r2d/rts7751r2d.h>
+
+unsigned int debug_counter;
+
+const char *get_system_type(void)
+{
+ return "RTS7751R2D";
+}
+
+/*
+ * Initialize the board
+ */
+void __init platform_setup(void)
+{
+ printk(KERN_INFO "Renesas Technology Sales RTS7751R2D support.\n");
+ ctrl_outw(0x0000, PA_OUTPORT);
+ debug_counter = 0;
+}
--- /dev/null
+#
+# Makefile for the SystemH specific parts of the kernel
+#
+
+obj-y := setup.o irq.o io.o
+
+# XXX: This wants to be consolidated in arch/sh/drivers/pci, and more
+# importantly, with the generic sh7751_pcic_init() code. For now, we'll
+# just abuse the hell out of kbuild, because we can..
+
+obj-$(CONFIG_PCI) += pci.o
+pci-y := ../../se/7751/pci.o
+
--- /dev/null
+/*
+ * linux/arch/sh/boards/systemh/io.c
+ *
+ * Copyright (C) 2001 Ian da Silva, Jeremy Siegel
+ * Based largely on io_se.c.
+ *
+ * I/O routine for Hitachi 7751 Systemh.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <asm/systemh/7751systemh.h>
+#include <asm/addrspace.h>
+#include <asm/io.h>
+
+#include <linux/pci.h>
+#include "../../drivers/pci/pci-sh7751.h"
+
+/*
+ * The 7751 SystemH Engine uses the built-in PCI controller (PCIC)
+ * of the 7751 processor, and has a SuperIO accessible on its memory
+ * bus.
+ */
+
+#define PCIIOBR (volatile long *)PCI_REG(SH7751_PCIIOBR)
+#define PCIMBR (volatile long *)PCI_REG(SH7751_PCIMBR)
+#define PCI_IO_AREA SH7751_PCI_IO_BASE
+#define PCI_MEM_AREA SH7751_PCI_CONFIG_BASE
+
+#define PCI_IOMAP(adr) (PCI_IO_AREA + (adr & ~SH7751_PCIIOBR_MASK))
+#define ETHER_IOMAP(adr) (0xB3000000 + (adr)) /* map to the 16-bit access area of the SMC LAN chip */
+
+#define maybebadio(name,port) \
+ printk("bad PC-like io %s for port 0x%lx at 0x%08x\n", \
+ #name, (port), (__u32) __builtin_return_address(0))
+
+static inline void delay(void)
+{
+ ctrl_inw(0xa0000000);
+}
+
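+/*
+ * PC-like ports from 0x2000 up are redirected into the MRSHPC (PCMCIA
+ * controller) window; anything below has no backing device wired up
+ * here, so we just complain via maybebadio().
+ */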
+static inline volatile __u16 *
+port2adr(unsigned int port)
+{
+ if (port >= 0x2000)
+ return (volatile __u16 *) (PA_MRSHPC + (port - 0x2000));
+#if 0
+ else
+ return (volatile __u16 *) (PA_SUPERIO + (port << 1));
+#endif
+ maybebadio(port2adr, (unsigned long)port);
+ return (volatile __u16*)port;
+}
+
+/* In case someone configures the kernel w/o PCI support: in that */
+/* scenario, don't ever bother to check for PCI-window addresses */
+
+/* NOTE: WINDOW CHECK MAY BE A BIT OFF, HIGH PCIBIOS_MIN_IO WRAPS? */
+#if defined(CONFIG_PCI)
+#define CHECK_SH7751_PCIIO(port) \
+ ((port >= PCIBIOS_MIN_IO) && (port < (PCIBIOS_MIN_IO + SH7751_PCI_IO_SIZE)))
+#else
+#define CHECK_SH7751_PCIIO(port) (0)
+#endif
+
+/*
+ * General outline: remap really low stuff [eventually] to SuperIO,
+ * stuff in PCI IO space (at or above window at pci.h:PCIBIOS_MIN_IO)
+ * is mapped through the PCI IO window. Stuff with high bits (PXSEG)
+ * should be way beyond the window, and is used w/o translation for
+ * compatibility.
+ */
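+/* For example, port 0x300 (the SMC LAN chip) falls through to the
+ * ETHER_IOMAP() case below, while a port inside [PCIBIOS_MIN_IO,
+ * PCIBIOS_MIN_IO + SH7751_PCI_IO_SIZE) goes out through the PCI I/O
+ * window -- assuming PCIBIOS_MIN_IO sits above the legacy ISA range. */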
+unsigned char sh7751systemh_inb(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned char *)port;
+ else if (CHECK_SH7751_PCIIO(port))
+ return *(volatile unsigned char *)PCI_IOMAP(port);
+ else if (port <= 0x3F1)
+ return *(volatile unsigned char *)ETHER_IOMAP(port);
+ else
+ return (*port2adr(port))&0xff;
+}
+
+unsigned char sh7751systemh_inb_p(unsigned long port)
+{
+ unsigned char v;
+
+ if (PXSEG(port))
+ v = *(volatile unsigned char *)port;
+ else if (CHECK_SH7751_PCIIO(port))
+ v = *(volatile unsigned char *)PCI_IOMAP(port);
+ else if (port <= 0x3F1)
+ v = *(volatile unsigned char *)ETHER_IOMAP(port);
+ else
+ v = (*port2adr(port))&0xff;
+ delay();
+ return v;
+}
+
+unsigned short sh7751systemh_inw(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned short *)port;
+ else if (CHECK_SH7751_PCIIO(port))
+ return *(volatile unsigned short *)PCI_IOMAP(port);
+ else if (port >= 0x2000)
+ return *port2adr(port);
+ else if (port <= 0x3F1)
+ return *(volatile unsigned short *)ETHER_IOMAP(port);
+ else
+ maybebadio(inw, port);
+ return 0;
+}
+
+unsigned int sh7751systemh_inl(unsigned long port)
+{
+ if (PXSEG(port))
+ return *(volatile unsigned long *)port;
+ else if (CHECK_SH7751_PCIIO(port))
+ return *(volatile unsigned int *)PCI_IOMAP(port);
+ else if (port >= 0x2000)
+ return *port2adr(port);
+ else if (port <= 0x3F1)
+ return *(volatile unsigned int *)ETHER_IOMAP(port);
+ else
+ maybebadio(inl, port);
+ return 0;
+}
+
+void sh7751systemh_outb(unsigned char value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned char *)port = value;
+ else if (CHECK_SH7751_PCIIO(port))
+ *((volatile unsigned char *)PCI_IOMAP(port)) = value;
+ else if (port <= 0x3F1)
+ *(volatile unsigned char *)ETHER_IOMAP(port) = value;
+ else
+ *(port2adr(port)) = value;
+}
+
+void sh7751systemh_outb_p(unsigned char value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned char *)port = value;
+ else if (CHECK_SH7751_PCIIO(port))
+ *((volatile unsigned char *)PCI_IOMAP(port)) = value;
+ else if (port <= 0x3F1)
+ *(volatile unsigned char *)ETHER_IOMAP(port) = value;
+ else
+ *(port2adr(port)) = value;
+ delay();
+}
+
+void sh7751systemh_outw(unsigned short value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned short *)port = value;
+ else if (CHECK_SH7751_PCIIO(port))
+ *((volatile unsigned short *)PCI_IOMAP(port)) = value;
+ else if (port >= 0x2000)
+ *port2adr(port) = value;
+ else if (port <= 0x3F1)
+ *(volatile unsigned short *)ETHER_IOMAP(port) = value;
+ else
+ maybebadio(outw, port);
+}
+
+void sh7751systemh_outl(unsigned int value, unsigned long port)
+{
+ if (PXSEG(port))
+ *(volatile unsigned long *)port = value;
+ else if (CHECK_SH7751_PCIIO(port))
+ *((volatile unsigned long *)PCI_IOMAP(port)) = value;
+ else
+ maybebadio(outl, port);
+}
+
+void sh7751systemh_insb(unsigned long port, void *addr, unsigned long count)
+{
+ unsigned char *p = addr;
+ while (count--) *p++ = sh7751systemh_inb(port);
+}
+
+void sh7751systemh_insw(unsigned long port, void *addr, unsigned long count)
+{
+ unsigned short *p = addr;
+ while (count--) *p++ = sh7751systemh_inw(port);
+}
+
+void sh7751systemh_insl(unsigned long port, void *addr, unsigned long count)
+{
+ maybebadio(insl, port);
+}
+
+void sh7751systemh_outsb(unsigned long port, const void *addr, unsigned long count)
+{
+ unsigned char *p = (unsigned char*)addr;
+ while (count--) sh7751systemh_outb(*p++, port);
+}
+
+void sh7751systemh_outsw(unsigned long port, const void *addr, unsigned long count)
+{
+ unsigned short *p = (unsigned short*)addr;
+ while (count--) sh7751systemh_outw(*p++, port);
+}
+
+void sh7751systemh_outsl(unsigned long port, const void *addr, unsigned long count)
+{
+ maybebadio(outsl, port);
+}
+
+/* For read/write calls, just copy generic (pass-thru); PCIMBR is */
+/* already set up. For a larger memory space, these would need to */
+/* reset PCIMBR as needed on a per-call basis... */
+
+unsigned char sh7751systemh_readb(unsigned long addr)
+{
+ return *(volatile unsigned char*)addr;
+}
+
+unsigned short sh7751systemh_readw(unsigned long addr)
+{
+ return *(volatile unsigned short*)addr;
+}
+
+unsigned int sh7751systemh_readl(unsigned long addr)
+{
+ return *(volatile unsigned long*)addr;
+}
+
+void sh7751systemh_writeb(unsigned char b, unsigned long addr)
+{
+ *(volatile unsigned char*)addr = b;
+}
+
+void sh7751systemh_writew(unsigned short b, unsigned long addr)
+{
+ *(volatile unsigned short*)addr = b;
+}
+
+void sh7751systemh_writel(unsigned int b, unsigned long addr)
+{
+ *(volatile unsigned long*)addr = b;
+}
+
+
+/* Map ISA bus address to the real address. Only for PCMCIA. */
+
+/* ISA page descriptor. */
+static __u32 sh_isa_memmap[256];
+
+#if 0
+static int
+sh_isa_mmap(__u32 start, __u32 length, __u32 offset)
+{
+ int idx;
+
+ if (start >= 0x100000 || (start & 0xfff) || (length != 0x1000))
+ return -1;
+
+ idx = start >> 12;
+ sh_isa_memmap[idx] = 0xb8000000 + (offset &~ 0xfff);
+ printk("sh_isa_mmap: start %x len %x offset %x (idx %x paddr %x)\n",
+ start, length, offset, idx, sh_isa_memmap[idx]);
+ return 0;
+}
+#endif
+
+unsigned long
+sh7751systemh_isa_port2addr(unsigned long offset)
+{
+ int idx;
+
+ idx = (offset >> 12) & 0xff;
+ offset &= 0xfff;
+ return sh_isa_memmap[idx] + offset;
+}
--- /dev/null
+/*
+ * linux/arch/sh/boards/systemh/irq.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ *
+ * Hitachi SystemH Support.
+ *
+ * Modified for 7751 SystemH by
+ * Jonathan Short.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <linux/hdreg.h>
+#include <linux/ide.h>
+#include <asm/io.h>
+#include <asm/mach/7751systemh.h>
+#include <asm/smc37c93x.h>
+
+/* Addresses of the external interrupt mask/request registers.
+ * These must be set before use (perhaps in init_XXX_irq()).
+ * XXX: would it be better to take these from .config than to hardcode them? */
+static unsigned long *systemh_irq_mask_register = (unsigned long *)0xB3F10004;
+static unsigned long *systemh_irq_request_register = (unsigned long *)0xB3F10000;
+
+/* forward declaration */
+static unsigned int startup_systemh_irq(unsigned int irq);
+static void shutdown_systemh_irq(unsigned int irq);
+static void enable_systemh_irq(unsigned int irq);
+static void disable_systemh_irq(unsigned int irq);
+static void mask_and_ack_systemh(unsigned int);
+static void end_systemh_irq(unsigned int irq);
+
+/* hw_interrupt_type */
+static struct hw_interrupt_type systemh_irq_type = {
+ " SystemH Register",
+ startup_systemh_irq,
+ shutdown_systemh_irq,
+ enable_systemh_irq,
+ disable_systemh_irq,
+ mask_and_ack_systemh,
+ end_systemh_irq
+};
+
+static unsigned int startup_systemh_irq(unsigned int irq)
+{
+ enable_systemh_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void shutdown_systemh_irq(unsigned int irq)
+{
+ disable_systemh_irq(irq);
+}
+
+static void disable_systemh_irq(unsigned int irq)
+{
+ if (systemh_irq_mask_register) {
+ unsigned long flags;
+ unsigned long val, mask = 0x01 << 1;
+
+ /* Clear the "irq"th bit in the mask and set it in the request */
+ local_irq_save(flags);
+
+ val = ctrl_inl((unsigned long)systemh_irq_mask_register);
+ val &= ~mask;
+ ctrl_outl(val, (unsigned long)systemh_irq_mask_register);
+
+ val = ctrl_inl((unsigned long)systemh_irq_request_register);
+ val |= mask;
+ ctrl_outl(val, (unsigned long)systemh_irq_request_register);
+
+ local_irq_restore(flags);
+ }
+}
+
+static void enable_systemh_irq(unsigned int irq)
+{
+ if (systemh_irq_mask_register) {
+ unsigned long flags;
+ unsigned long val, mask = 0x01 << 1;
+
+ /* Set "irq"th bit in the mask register */
+ local_irq_save(flags);
+ val = ctrl_inl((unsigned long)systemh_irq_mask_register);
+ val |= mask;
+ ctrl_outl(val, (unsigned long)systemh_irq_mask_register);
+ local_irq_restore(flags);
+ }
+}
+
+static void mask_and_ack_systemh(unsigned int irq)
+{
+ disable_systemh_irq(irq);
+}
+
+static void end_systemh_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_systemh_irq(irq);
+}
+
+void make_systemh_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ irq_desc[irq].handler = &systemh_irq_type;
+ disable_systemh_irq(irq);
+}
+
--- /dev/null
+/*
+ * linux/arch/sh/boards/systemh/setup.c
+ *
+ * Copyright (C) 2000 Kazumoto Kojima
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Hitachi SystemH Support.
+ *
+ * Modified for 7751 SystemH by Jonathan Short.
+ *
+ * Rewritten for 2.6 by Paul Mundt.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/init.h>
+#include <asm/mach/7751systemh.h>
+#include <asm/mach/io.h>
+#include <asm/machvec.h>
+
+extern void make_systemh_irq(unsigned int irq);
+
+const char *get_system_type(void)
+{
+ return "7751 SystemH";
+}
+
+/*
+ * Initialize IRQ setting
+ */
+void __init init_7751systemh_IRQ(void)
+{
+/* make_ipr_irq(10, BCR_ILCRD, 1, 0x0f-10); LAN */
+/* make_ipr_irq(14, BCR_ILCRA, 2, 0x0f-4); */
+ make_systemh_irq(0xb); /* Ethernet interrupt */
+}
+
+struct sh_machine_vector mv_7751systemh __initmv = {
+ .mv_nr_irqs = 72,
+
+ .mv_inb = sh7751systemh_inb,
+ .mv_inw = sh7751systemh_inw,
+ .mv_inl = sh7751systemh_inl,
+ .mv_outb = sh7751systemh_outb,
+ .mv_outw = sh7751systemh_outw,
+ .mv_outl = sh7751systemh_outl,
+
+ .mv_inb_p = sh7751systemh_inb_p,
+ .mv_inw_p = sh7751systemh_inw,
+ .mv_inl_p = sh7751systemh_inl,
+ .mv_outb_p = sh7751systemh_outb_p,
+ .mv_outw_p = sh7751systemh_outw,
+ .mv_outl_p = sh7751systemh_outl,
+
+ .mv_insb = sh7751systemh_insb,
+ .mv_insw = sh7751systemh_insw,
+ .mv_insl = sh7751systemh_insl,
+ .mv_outsb = sh7751systemh_outsb,
+ .mv_outsw = sh7751systemh_outsw,
+ .mv_outsl = sh7751systemh_outsl,
+
+ .mv_readb = sh7751systemh_readb,
+ .mv_readw = sh7751systemh_readw,
+ .mv_readl = sh7751systemh_readl,
+ .mv_writeb = sh7751systemh_writeb,
+ .mv_writew = sh7751systemh_writew,
+ .mv_writel = sh7751systemh_writel,
+
+ .mv_isa_port2addr = sh7751systemh_isa_port2addr,
+
+ .mv_init_irq = init_7751systemh_IRQ,
+};
+ALIAS_MV(7751systemh)
+
+int __init platform_setup(void)
+{
+ return 0;
+}
+
--- /dev/null
+#
+# Makefile for the 7300 SolutionEngine specific parts of the kernel
+#
+
+obj-y := setup.o io.o irq.o
+
+obj-$(CONFIG_HEARTBEAT) += led.o
--- /dev/null
+/*
+ * linux/arch/sh/boards/se/7300/io.c
+ *
+ * Copyright (C) 2003 YOSHII Takashi <yoshii-takashi@hitachi-ul.co.jp>
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <asm/se7300/se7300.h>
+#include <asm/io.h>
+
+#define badio(fn, a) panic("bad i/o operation %s for %08lx.", #fn, a)
+
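+/*
+ * Every I/O range on this board is described by a struct iop: a window
+ * check plus byte/word accessors.  port2iop() below returns the first
+ * descriptor whose check() accepts the port, so supporting another
+ * device is a matter of adding one more table entry.
+ */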
+struct iop {
+ unsigned long start, end;
+ unsigned long base;
+ struct iop *(*check) (struct iop * p, unsigned long port);
+ unsigned char (*inb) (struct iop * p, unsigned long port);
+ unsigned short (*inw) (struct iop * p, unsigned long port);
+ void (*outb) (struct iop * p, unsigned char value, unsigned long port);
+ void (*outw) (struct iop * p, unsigned short value, unsigned long port);
+};
+
+struct iop *
+simple_check(struct iop *p, unsigned long port)
+{
+ if ((p->start <= port) && (port <= p->end))
+ return p;
+ else
+ badio(check, port);
+}
+
+struct iop *
+ide_check(struct iop *p, unsigned long port)
+{
+ if (((0x1f0 <= port) && (port <= 0x1f7)) || (port == 0x3f7))
+ return p;
+ return NULL;
+}
+
+unsigned char
+simple_inb(struct iop *p, unsigned long port)
+{
+ return *(unsigned char *) (p->base + port);
+}
+
+unsigned short
+simple_inw(struct iop *p, unsigned long port)
+{
+ return *(unsigned short *) (p->base + port);
+}
+
+void
+simple_outb(struct iop *p, unsigned char value, unsigned long port)
+{
+ *(unsigned char *) (p->base + port) = value;
+}
+
+void
+simple_outw(struct iop *p, unsigned short value, unsigned long port)
+{
+ *(unsigned short *) (p->base + port) = value;
+}
+
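+/*
+ * PC card window accessors: even and odd bytes live in two windows
+ * 0x00400000 apart (presumably the two byte lanes of the 16-bit card
+ * bus), hence the extra offset for odd port numbers.
+ */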
+unsigned char
+pcc_inb(struct iop *p, unsigned long port)
+{
+ unsigned long addr = p->base + port + 0x40000;
+ unsigned long v;
+
+ if (port & 1)
+ addr += 0x00400000;
+ v = *(volatile unsigned char *) addr;
+ return v;
+}
+
+void
+pcc_outb(struct iop *p, unsigned char value, unsigned long port)
+{
+ unsigned long addr = p->base + port + 0x40000;
+
+ if (port & 1)
+ addr += 0x00400000;
+ *(volatile unsigned char *) addr = value;
+}
+
+unsigned char
+bad_inb(struct iop *p, unsigned long port)
+{
+ badio(inb, port);
+}
+
+void
+bad_outb(struct iop *p, unsigned char value, unsigned long port)
+{
+ badio(outb, port);
+}
+
+/* MSTLANEX01 LAN at 0xb4000000 */
+static struct iop laniop = {
+ .start = 0x300,
+ .end = 0x30f,
+ .base = 0xb4000000,
+ .check = simple_check,
+ .inb = simple_inb,
+ .inw = simple_inw,
+ .outb = simple_outb,
+ .outw = simple_outw,
+};
+
+/* NE2000 pc card NIC */
+static struct iop neiop = {
+ .start = 0x280,
+ .end = 0x29f,
+ .base = 0xb0600000 + 0x80, /* soft 0x280 -> hard 0x300 */
+ .check = simple_check,
+ .inb = pcc_inb,
+ .inw = simple_inw,
+ .outb = pcc_outb,
+ .outw = simple_outw,
+};
+
+/* CF in CF slot */
+static struct iop cfiop = {
+ .base = 0xb0600000,
+ .check = ide_check,
+ .inb = pcc_inb,
+ .inw = simple_inw,
+ .outb = pcc_outb,
+ .outw = simple_outw,
+};
+
+static __inline__ struct iop *
+port2iop(unsigned long port)
+{
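+ /* dummy head so every real case below can be an "else if" inside its #ifdef */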
+ if (0) ;
+#if defined(CONFIG_SMC91111)
+ else if (laniop.check(&laniop, port))
+ return &laniop;
+#endif
+#if defined(CONFIG_NE2000)
+ else if (neiop.check(&neiop, port))
+ return &neiop;
+#endif
+#if defined(CONFIG_IDE)
+ else if (cfiop.check(&cfiop, port))
+ return &cfiop;
+#endif
+ else
+ return &neiop; /* fallback */
+}
+
+static inline void
+delay(void)
+{
+ ctrl_inw(0xac000000);
+ ctrl_inw(0xac000000);
+}
+
+unsigned char
+sh7300se_inb(unsigned long port)
+{
+ struct iop *p = port2iop(port);
+ return (p->inb) (p, port);
+}
+
+unsigned char
+sh7300se_inb_p(unsigned long port)
+{
+ unsigned char v = sh7300se_inb(port);
+ delay();
+ return v;
+}
+
+unsigned short
+sh7300se_inw(unsigned long port)
+{
+ struct iop *p = port2iop(port);
+ return (p->inw) (p, port);
+}
+
+unsigned int
+sh7300se_inl(unsigned long port)
+{
+ badio(inl, port);
+}
+
+void
+sh7300se_outb(unsigned char value, unsigned long port)
+{
+ struct iop *p = port2iop(port);
+ (p->outb) (p, value, port);
+}
+
+void
+sh7300se_outb_p(unsigned char value, unsigned long port)
+{
+ sh7300se_outb(value, port);
+ delay();
+}
+
+void
+sh7300se_outw(unsigned short value, unsigned long port)
+{
+ struct iop *p = port2iop(port);
+ (p->outw) (p, value, port);
+}
+
+void
+sh7300se_outl(unsigned int value, unsigned long port)
+{
+ badio(outl, port);
+}
+
+void
+sh7300se_insb(unsigned long port, void *addr, unsigned long count)
+{
+ unsigned char *a = addr;
+ struct iop *p = port2iop(port);
+ while (count--)
+ *a++ = (p->inb) (p, port);
+}
+
+void
+sh7300se_insw(unsigned long port, void *addr, unsigned long count)
+{
+ unsigned short *a = addr;
+ struct iop *p = port2iop(port);
+ while (count--)
+ *a++ = (p->inw) (p, port);
+}
+
+void
+sh7300se_insl(unsigned long port, void *addr, unsigned long count)
+{
+ badio(insl, port);
+}
+
+void
+sh7300se_outsb(unsigned long port, const void *addr, unsigned long count)
+{
+ unsigned char *a = (unsigned char *) addr;
+ struct iop *p = port2iop(port);
+ while (count--)
+ (p->outb) (p, *a++, port);
+}
+
+void
+sh7300se_outsw(unsigned long port, const void *addr, unsigned long count)
+{
+ unsigned short *a = (unsigned short *) addr;
+ struct iop *p = port2iop(port);
+ while (count--)
+ (p->outw) (p, *a++, port);
+}
+
+void
+sh7300se_outsl(unsigned long port, const void *addr, unsigned long count)
+{
+ badio(outsl, port);
+}
--- /dev/null
+/*
+ * linux/arch/sh/boards/se/7300/irq.c
+ *
+ * Copyright (C) 2003 Takashi Kusuda <kusuda-takashi@hitachi-ul.co.jp>
+ *
+ * SH-Mobile SolutionEngine 7300 Support.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+#include <asm/mach/se7300.h>
+
+/*
+ * Initialize IRQ setting
+ */
+void __init
+init_7300se_IRQ(void)
+{
+ ctrl_outw(0x0028, PA_EPLD_MODESET); /* mode set IRQ0,1 active low. */
+ ctrl_outw(0xa000, INTC_ICR1); /* IRQ mode; IRQ0,1 enable. */
+ ctrl_outw(0x0000, PORT_PFCR); /* use F for IRQ[3:0] and SIU. */
+
+ /* PC_IRQ[0-3] -> IRQ0 (32) */
+ make_ipr_irq(IRQ0_IRQ, IRQ0_IPR_ADDR, IRQ0_IPR_POS, 0x0f - IRQ0_IRQ);
+ /* A_IRQ[0-3] -> IRQ1 (33) */
+ make_ipr_irq(IRQ1_IRQ, IRQ1_IPR_ADDR, IRQ1_IPR_POS, 0x0f - IRQ1_IRQ);
+ make_ipr_irq(SIOF0_IRQ, SIOF0_IPR_ADDR, SIOF0_IPR_POS, SIOF0_PRIORITY);
+ make_ipr_irq(DMTE2_IRQ, DMA1_IPR_ADDR, DMA1_IPR_POS, DMA1_PRIORITY);
+ make_ipr_irq(DMTE3_IRQ, DMA1_IPR_ADDR, DMA1_IPR_POS, DMA1_PRIORITY);
+ make_ipr_irq(VIO_IRQ, VIO_IPR_ADDR, VIO_IPR_POS, VIO_PRIORITY);
+
+ ctrl_outw(0x2000, PA_MRSHPC + 0x0c); /* mrshpc irq enable */
+}
--- /dev/null
+/*
+ * linux/arch/sh/boards/se/7300/led.c
+ *
+ * Derived from linux/arch/sh/boards/se/770x/led.c
+ *
+ * Copyright (C) 2000 Stuart Menefy <stuart.menefy@st.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * This file contains Solution Engine specific LED code.
+ */
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <asm/mach/se7300.h>
+
+static void
+mach_led(int position, int value)
+{
+ volatile unsigned short *p = (volatile unsigned short *) PA_LED;
+
+ if (value) {
+ *p |= (1 << 8);
+ } else {
+ *p &= ~(1 << 8);
+ }
+}
+
+
+/* Cycle the LEDs in the classic Knight Rider/Sun pattern */
+void
+heartbeat_7300se(void)
+{
+ static unsigned int cnt = 0, period = 0;
+ volatile unsigned short *p = (volatile unsigned short *) PA_LED;
+ static unsigned bit = 0, up = 1;
+
+ cnt += 1;
+ if (cnt < period) {
+ return;
+ }
+
+ cnt = 0;
+
+ /* Go through the points (roughly!):
+ * f(0)=10, f(1)=16, f(2)=20, f(5)=35, f(inf)->110
+ */
+ period = 110 - ((300 << FSHIFT) / ((avenrun[0] / 5) + (3 << FSHIFT)));
+
+ if (up) {
+ if (bit == 7) {
+ bit--;
+ up = 0;
+ } else {
+ bit++;
+ }
+ } else {
+ if (bit == 0) {
+ bit++;
+ up = 1;
+ } else {
+ bit--;
+ }
+ }
+ *p = 1 << (bit + 8);
+
+}
+
--- /dev/null
+/*
+ * linux/arch/sh/boards/se/7300/setup.c
+ *
+ * Copyright (C) 2003 Takashi Kusuda <kusuda-takashi@hitachi-ul.co.jp>
+ *
+ * SH-Mobile SolutionEngine 7300 Support.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <asm/machvec.h>
+#include <asm/machvec_init.h>
+#include <asm/mach/io.h>
+
+void heartbeat_7300se(void);
+void init_7300se_IRQ(void);
+
+const char *
+get_system_type(void)
+{
+ return "SolutionEngine 7300";
+}
+
+/*
+ * The Machine Vector
+ */
+
+struct sh_machine_vector mv_7300se __initmv = {
+ .mv_nr_irqs = 109,
+ .mv_inb = sh7300se_inb,
+ .mv_inw = sh7300se_inw,
+ .mv_inl = sh7300se_inl,
+ .mv_outb = sh7300se_outb,
+ .mv_outw = sh7300se_outw,
+ .mv_outl = sh7300se_outl,
+
+ .mv_inb_p = sh7300se_inb_p,
+ .mv_inw_p = sh7300se_inw,
+ .mv_inl_p = sh7300se_inl,
+ .mv_outb_p = sh7300se_outb_p,
+ .mv_outw_p = sh7300se_outw,
+ .mv_outl_p = sh7300se_outl,
+
+ .mv_insb = sh7300se_insb,
+ .mv_insw = sh7300se_insw,
+ .mv_insl = sh7300se_insl,
+ .mv_outsb = sh7300se_outsb,
+ .mv_outsw = sh7300se_outsw,
+ .mv_outsl = sh7300se_outsl,
+
+ .mv_init_irq = init_7300se_IRQ,
+#ifdef CONFIG_HEARTBEAT
+ .mv_heartbeat = heartbeat_7300se,
+#endif
+};
+
+ALIAS_MV(7300se)
+/*
+ * Initialize the board
+ */
+void __init
+platform_setup(void)
+{
+
+}
--- /dev/null
+#
+# Makefile for VoyagerGX
+#
+
+obj-y := irq.o setup.o
+
+obj-$(CONFIG_USB_OHCI_HCD) += consistent.o
+
--- /dev/null
+/*
+ * arch/sh/cchips/voyagergx/consistent.c
+ *
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <asm/io.h>
+#include <asm/bus-sh.h>
+
+struct voya_alloc_entry {
+ struct list_head list;
+ unsigned long ofs;
+ unsigned long len;
+};
+
+static spinlock_t voya_list_lock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(voya_alloc_list);
+
+#define OHCI_SRAM_START 0xb0000000
+#define OHCI_HCCA_SIZE 0x100
+#define OHCI_SRAM_SIZE 0x10000
+
+void *voyagergx_consistent_alloc(struct device *dev, size_t size,
+ dma_addr_t *handle, int flag)
+{
+ struct list_head *list = &voya_alloc_list;
+ struct voya_alloc_entry *entry;
+ struct sh_dev *shdev = to_sh_dev(dev);
+ unsigned long start, end;
+ unsigned long flags;
+
+ /*
+ * The SM501 contains an integrated 8051 with its own SRAM.
+ * Devices within the cchip can all hook into the 8051 SRAM.
+ * We presently use this for the OHCI.
+ *
+ * Everything else goes through consistent_alloc().
+ */
+ if (!dev || dev->bus != &sh_bus_types[SH_BUS_VIRT] ||
+ (dev->bus == &sh_bus_types[SH_BUS_VIRT] &&
+ shdev->dev_id != SH_DEV_ID_USB_OHCI))
+ return consistent_alloc(flag, size, handle);
+
+ start = OHCI_SRAM_START + OHCI_HCCA_SIZE;
+
+ entry = kmalloc(sizeof(struct voya_alloc_entry), GFP_ATOMIC);
+ if (!entry)
+ return NULL;
+
+ entry->len = (size + 15) & ~15;
+
+ /*
+ * The basis for this allocator is dwmw2's malloc.. the
+ * Matrox allocator :-)
+ */
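+ /* First fit: walk the offset-sorted list and take the first gap
+ * large enough; "start" trails just behind the scan. */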
+ spin_lock_irqsave(&voya_list_lock, flags);
+ list_for_each(list, &voya_alloc_list) {
+ struct voya_alloc_entry *p;
+
+ p = list_entry(list, struct voya_alloc_entry, list);
+
+ if (p->ofs - start >= size)
+ goto out;
+
+ start = p->ofs + p->len;
+ }
+
+ end = start + (OHCI_SRAM_SIZE - OHCI_HCCA_SIZE);
+ list = &voya_alloc_list;
+
+ if (end - start >= size) {
+out:
+ entry->ofs = start;
+ list_add_tail(&entry->list, list);
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+
+ *handle = start;
+ return (void *)start;
+ }
+
+ kfree(entry);
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+
+ return NULL;
+}
+
+void voyagergx_consistent_free(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t handle)
+{
+ struct voya_alloc_entry *entry;
+ struct sh_dev *shdev = to_sh_dev(dev);
+ unsigned long flags;
+
+ if (!dev || dev->bus != &sh_bus_types[SH_BUS_VIRT] ||
+ (dev->bus == &sh_bus_types[SH_BUS_VIRT] &&
+ shdev->dev_id != SH_DEV_ID_USB_OHCI)) {
+ consistent_free(vaddr, size);
+ return;
+ }
+
+ spin_lock_irqsave(&voya_list_lock, flags);
+ list_for_each_entry(entry, &voya_alloc_list, list) {
+ if (entry->ofs != handle)
+ continue;
+
+ list_del(&entry->list);
+ kfree(entry);
+
+ break;
+ }
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+}
+
+EXPORT_SYMBOL(voyagergx_consistent_alloc);
+EXPORT_SYMBOL(voyagergx_consistent_free);
+
--- /dev/null
+/* -------------------------------------------------------------------- */
+/* setup_voyagergx.c: */
+/* -------------------------------------------------------------------- */
+/* This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ Copyright (c) 2003 Lineo uSolutions, Inc.
+*/
+/* -------------------------------------------------------------------- */
+
+#undef DEBUG
+
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/rts7751r2d/rts7751r2d.h>
+#include <asm/rts7751r2d/voyagergx_reg.h>
+
+static void disable_voyagergx_irq(unsigned int irq)
+{
+ unsigned long flags, val;
+ unsigned long mask = 1 << (irq - VOYAGER_IRQ_BASE);
+
+ pr_debug("disable_voyagergx_irq(%d): mask=%x\n", irq, mask);
+ local_irq_save(flags);
+ val = inl(VOYAGER_INT_MASK);
+ val &= ~mask;
+ outl(val, VOYAGER_INT_MASK);
+ local_irq_restore(flags);
+}
+
+
+static void enable_voyagergx_irq(unsigned int irq)
+{
+ unsigned long flags, val;
+ unsigned long mask = 1 << (irq - VOYAGER_IRQ_BASE);
+
+ pr_debug("disable_voyagergx_irq(%d): mask=%x\n", irq, mask);
+ local_irq_save(flags);
+ val = inl(VOYAGER_INT_MASK);
+ val |= mask;
+ outl(val, VOYAGER_INT_MASK);
+ local_irq_restore(flags);
+}
+
+
+static void mask_and_ack_voyagergx(unsigned int irq)
+{
+ disable_voyagergx_irq(irq);
+}
+
+static void end_voyagergx_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_voyagergx_irq(irq);
+}
+
+static unsigned int startup_voyagergx_irq(unsigned int irq)
+{
+ enable_voyagergx_irq(irq);
+ return 0;
+}
+
+static void shutdown_voyagergx_irq(unsigned int irq)
+{
+ disable_voyagergx_irq(irq);
+}
+
+static struct hw_interrupt_type voyagergx_irq_type = {
+ "VOYAGERGX-IRQ",
+ startup_voyagergx_irq,
+ shutdown_voyagergx_irq,
+ enable_voyagergx_irq,
+ disable_voyagergx_irq,
+ mask_and_ack_voyagergx,
+ end_voyagergx_irq,
+};
+
+static irqreturn_t voyagergx_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ printk(KERN_INFO
+ "VoyagerGX: spurious interrupt, status: 0x%x\n",
+ inl(INT_STATUS));
+ return IRQ_HANDLED;
+}
+
+
+/*====================================================*/
+
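+/*
+ * Secondary demux hooks: a sub-driver behind one of the cascaded
+ * VoyagerGX sources can register a function here to translate the
+ * cascade interrupt into its own final irq number.
+ */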
+static struct {
+ int (*func)(int, void *);
+ void *dev;
+} voyagergx_demux[VOYAGER_IRQ_NUM];
+
+void voyagergx_register_irq_demux(int irq,
+ int (*demux)(int irq, void *dev), void *dev)
+{
+ voyagergx_demux[irq - VOYAGER_IRQ_BASE].func = demux;
+ voyagergx_demux[irq - VOYAGER_IRQ_BASE].dev = dev;
+}
+
+void voyagergx_unregister_irq_demux(int irq)
+{
+ voyagergx_demux[irq - VOYAGER_IRQ_BASE].func = 0;
+}
+
+int voyagergx_irq_demux(int irq)
+{
+ if (irq == IRQ_VOYAGER) {
+ unsigned long i = 0, bit __attribute__ ((unused));
+ unsigned long val = inl(INT_STATUS);
+#if 1
+ if (val & (1 << 1))
+ i = 1;
+ else if (val & (1 << 2))
+ i = 2;
+ else if (val & (1 << 6))
+ i = 6;
+ else if (val & (1 << 10))
+ i = 10;
+ else if (val & (1 << 11))
+ i = 11;
+ else if (val & (1 << 12))
+ i = 12;
+ else if (val & (1 << 17))
+ i = 17;
+ else
+ printk("Unexpected IRQ irq = %d status = 0x%08lx\n", irq, val);
+ pr_debug("voyagergx_irq_demux %ld\n", i);
+#else
+ for (bit = 1, i = 0; i < VOYAGER_IRQ_NUM; bit <<= 1, i++)
+ if (val & bit)
+ break;
+#endif
+ if (i < VOYAGER_IRQ_NUM) {
+ irq = VOYAGER_IRQ_BASE + i;
+ if (voyagergx_demux[i].func != 0)
+ irq = voyagergx_demux[i].func(irq, voyagergx_demux[i].dev);
+ }
+ }
+ return irq;
+}
+
+static struct irqaction irq0 = { voyagergx_interrupt, SA_INTERRUPT, 0, "VOYAGERGX", NULL, NULL};
+
+void __init setup_voyagergx_irq(void)
+{
+ int i, flag;
+
+ printk(KERN_INFO "VoyagerGX configured at 0x%x on irq %d(mapped into %d to %d)\n",
+ VOYAGER_BASE,
+ IRQ_VOYAGER,
+ VOYAGER_IRQ_BASE,
+ VOYAGER_IRQ_BASE + VOYAGER_IRQ_NUM - 1);
+
+ for (i=0; i<VOYAGER_IRQ_NUM; i++) {
+ flag = 0;
+ switch (VOYAGER_IRQ_BASE + i) {
+ case VOYAGER_USBH_IRQ:
+ case VOYAGER_8051_IRQ:
+ case VOYAGER_UART0_IRQ:
+ case VOYAGER_UART1_IRQ:
+ case VOYAGER_AC97_IRQ:
+ flag = 1;
+ }
+ if (flag == 1)
+ irq_desc[VOYAGER_IRQ_BASE + i].handler = &voyagergx_irq_type;
+ }
+
+ setup_irq(IRQ_VOYAGER, &irq0);
+}
+
--- /dev/null
+/*
+ * arch/sh/cchips/voyagergx/setup.c
+ *
+ * Setup routines for VoyagerGX cchip.
+ *
+ * Copyright (C) 2003 Lineo uSolutions, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/io.h>
+#include <asm/rts7751r2d/voyagergx_reg.h>
+
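+/*
+ * One-shot bring-up of the SM501/VoyagerGX internal DRAM controller;
+ * the individual DRAM_CTRL_* bits come from voyagergx_reg.h.
+ */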
+static int __init setup_voyagergx(void)
+{
+ unsigned long val;
+
+ val = inl(DRAM_CTRL);
+ val |= (DRAM_CTRL_CPU_COLUMN_SIZE_256 |
+ DRAM_CTRL_CPU_ACTIVE_PRECHARGE |
+ DRAM_CTRL_CPU_RESET |
+ DRAM_CTRL_REFRESH_COMMAND |
+ DRAM_CTRL_BLOCK_WRITE_TIME |
+ DRAM_CTRL_BLOCK_WRITE_PRECHARGE |
+ DRAM_CTRL_ACTIVE_PRECHARGE |
+ DRAM_CTRL_RESET |
+ DRAM_CTRL_REMAIN_ACTIVE);
+ outl(val, DRAM_CTRL);
+
+ return 0;
+}
+
+module_init(setup_voyagergx);
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_SUPERH=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_KMOD is not set
+
+#
+# System type
+#
+# CONFIG_SH_SOLUTION_ENGINE is not set
+# CONFIG_SH_7751_SOLUTION_ENGINE is not set
+# CONFIG_SH_7751_SYSTEMH is not set
+# CONFIG_SH_STB1_HARP is not set
+# CONFIG_SH_STB1_OVERDRIVE is not set
+# CONFIG_SH_HP620 is not set
+# CONFIG_SH_HP680 is not set
+# CONFIG_SH_HP690 is not set
+# CONFIG_SH_CQREEK is not set
+# CONFIG_SH_DMIDA is not set
+# CONFIG_SH_EC3104 is not set
+# CONFIG_SH_SATURN is not set
+# CONFIG_SH_DREAMCAST is not set
+# CONFIG_SH_CAT68701 is not set
+# CONFIG_SH_BIGSUR is not set
+# CONFIG_SH_SH2000 is not set
+# CONFIG_SH_ADX is not set
+# CONFIG_SH_MPC1211 is not set
+# CONFIG_SH_SECUREEDGE5410 is not set
+CONFIG_SH_RTS7751R2D=y
+# CONFIG_SH_UNKNOWN is not set
+# CONFIG_CPU_SH2 is not set
+# CONFIG_CPU_SH3 is not set
+CONFIG_CPU_SH4=y
+# CONFIG_CPU_SUBTYPE_SH7604 is not set
+# CONFIG_CPU_SUBTYPE_SH7300 is not set
+# CONFIG_CPU_SUBTYPE_SH7707 is not set
+# CONFIG_CPU_SUBTYPE_SH7708 is not set
+# CONFIG_CPU_SUBTYPE_SH7709 is not set
+# CONFIG_CPU_SUBTYPE_SH7750 is not set
+CONFIG_CPU_SUBTYPE_SH7751=y
+# CONFIG_CPU_SUBTYPE_SH7760 is not set
+# CONFIG_CPU_SUBTYPE_ST40STB1 is not set
+CONFIG_MMU=y
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="mem=64M console=ttySC0,115200 root=/dev/hda1"
+CONFIG_MEMORY_START=0x0c000000
+CONFIG_MEMORY_SIZE=0x04000000
+CONFIG_MEMORY_SET=y
+# CONFIG_MEMORY_OVERRIDE is not set
+CONFIG_SH_RTC=y
+CONFIG_ZERO_PAGE_OFFSET=0x00010000
+CONFIG_BOOT_LINK_OFFSET=0x00800000
+CONFIG_CPU_LITTLE_ENDIAN=y
+# CONFIG_PREEMPT is not set
+# CONFIG_UBC_WAKEUP is not set
+# CONFIG_SH_WRITETHROUGH is not set
+# CONFIG_SH_OCRAM is not set
+# CONFIG_SH_STORE_QUEUES is not set
+# CONFIG_SMP is not set
+CONFIG_VOYAGERGX=y
+CONFIG_RTS7751R2D_REV11=y
+CONFIG_SH_PCLK_FREQ=60000000
+# CONFIG_CPU_FREQ is not set
+CONFIG_SH_DMA=y
+CONFIG_NR_ONCHIP_DMA_CHANNELS=8
+# CONFIG_NR_DMA_CHANNELS_BOOL is not set
+# CONFIG_DMA_PAGE_OPS is not set
+
+#
+# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
+#
+CONFIG_ISA=y
+CONFIG_PCI=y
+CONFIG_SH_PCIDMA_NONCOHERENT=y
+CONFIG_PCI_AUTO=y
+CONFIG_PCI_AUTO_UPDATE_RESOURCES=y
+CONFIG_PCI_DMA=y
+# CONFIG_PCI_LEGACY_PROC is not set
+CONFIG_PCI_NAMES=y
+CONFIG_HOTPLUG=y
+
+#
+# PCMCIA/CardBus support
+#
+CONFIG_PCMCIA=m
+CONFIG_YENTA=m
+CONFIG_CARDBUS=y
+# CONFIG_I82092 is not set
+# CONFIG_I82365 is not set
+# CONFIG_TCIC is not set
+CONFIG_PCMCIA_PROBE=y
+
+#
+# PCI Hotplug Support
+#
+CONFIG_HOTPLUG_PCI=y
+# CONFIG_HOTPLUG_PCI_FAKE is not set
+# CONFIG_HOTPLUG_PCI_CPCI is not set
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_FW_LOADER is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_XD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_IDEDISK_STROKE is not set
+CONFIG_BLK_DEV_IDECS=m
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+# CONFIG_BLK_DEV_IDEPCI is not set
+# CONFIG_IDE_CHIPSETS is not set
+# CONFIG_BLK_DEV_IDEDMA is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_DMA_NONPCI is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Old CD-ROM drivers (not SCSI, not IDE)
+#
+# CONFIG_CD_NO_IDESCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# IEEE 1394 (FireWire) support (EXPERIMENTAL)
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_INET_ECN is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_DECNET is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+CONFIG_IPV6_SCTP__=y
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+CONFIG_NETDEVICES=y
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_STNIC is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_LANCE is not set
+# CONFIG_NET_VENDOR_SMC is not set
+# CONFIG_NET_VENDOR_RACAL is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_AT1700 is not set
+# CONFIG_DEPCA is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_ISA=y
+# CONFIG_E2100 is not set
+# CONFIG_EWRK3 is not set
+# CONFIG_EEXPRESS is not set
+# CONFIG_EEXPRESS_PRO is not set
+# CONFIG_HPLAN_PLUS is not set
+# CONFIG_HPLAN is not set
+# CONFIG_LP486E is not set
+# CONFIG_ETH16I is not set
+CONFIG_NE2000=m
+# CONFIG_ZNET is not set
+# CONFIG_SEEQ8005 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_AC3200 is not set
+# CONFIG_APRICOT is not set
+# CONFIG_B44 is not set
+# CONFIG_CS89x0 is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+CONFIG_8139TOO=y
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_8139_OLD_RX_RESET is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+# CONFIG_NET_POCKET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SIS190 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+# CONFIG_ARLAN is not set
+# CONFIG_WAVELAN is not set
+# CONFIG_PCMCIA_WAVELAN is not set
+# CONFIG_PCMCIA_NETWAVE is not set
+
+#
+# Wireless 802.11 Frequency Hopping cards support
+#
+# CONFIG_PCMCIA_RAYCS is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+# CONFIG_AIRO is not set
+CONFIG_HERMES=m
+# CONFIG_PLX_HERMES is not set
+# CONFIG_TMD_HERMES is not set
+# CONFIG_PCI_HERMES is not set
+
+#
+# Wireless 802.11b Pcmcia/Cardbus cards support
+#
+CONFIG_PCMCIA_HERMES=m
+# CONFIG_AIRO_CS is not set
+# CONFIG_PCMCIA_ATMEL is not set
+# CONFIG_PCMCIA_WL3501 is not set
+CONFIG_NET_WIRELESS=y
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
+#
+# Amateur Radio support
+#
+# CONFIG_HAMRADIO is not set
+
+#
+# IrDA (infrared) support
+#
+# CONFIG_IRDA is not set
+
+#
+# Bluetooth support
+#
+# CONFIG_BT is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN_BOOL is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+# CONFIG_INPUT is not set
+
+#
+# Userland interfaces
+#
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL is not set
+CONFIG_SH_SCI=y
+CONFIG_SERIAL_CONSOLE=y
+CONFIG_RTC_9701JE=y
+
+#
+# Unix 98 PTY support
+#
+# CONFIG_UNIX98_PTYS is not set
+CONFIG_HEARTBEAT=y
+# CONFIG_PSMOUSE is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_SH_SCI is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# I2C Algorithms
+#
+
+#
+# I2C Hardware Bus support
+#
+
+#
+# I2C Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+CONFIG_MINIX_FS=y
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+# CONFIG_NFS_FS is not set
+# CONFIG_NFSD is not set
+# CONFIG_EXPORTFS is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+CONFIG_NLS=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+CONFIG_NLS_CODEPAGE_932=y
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+CONFIG_SOUND=y
+
+#
+# Advanced Linux Sound Architecture
+#
+CONFIG_SND=m
+# CONFIG_SND_SEQUENCER is not set
+# CONFIG_SND_OSSEMUL is not set
+# CONFIG_SND_VERBOSE_PRINTK is not set
+# CONFIG_SND_DEBUG is not set
+
+#
+# Generic devices
+#
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+
+#
+# ISA devices
+#
+# CONFIG_SND_AD1848 is not set
+# CONFIG_SND_CS4231 is not set
+# CONFIG_SND_CS4232 is not set
+# CONFIG_SND_CS4236 is not set
+# CONFIG_SND_ES1688 is not set
+# CONFIG_SND_ES18XX is not set
+# CONFIG_SND_GUSCLASSIC is not set
+# CONFIG_SND_GUSEXTREME is not set
+# CONFIG_SND_GUSMAX is not set
+# CONFIG_SND_INTERWAVE is not set
+# CONFIG_SND_INTERWAVE_STB is not set
+# CONFIG_SND_OPTI92X_AD1848 is not set
+# CONFIG_SND_OPTI92X_CS4231 is not set
+# CONFIG_SND_OPTI93X is not set
+# CONFIG_SND_SB8 is not set
+# CONFIG_SND_SB16 is not set
+# CONFIG_SND_SBAWE is not set
+# CONFIG_SND_WAVEFRONT is not set
+# CONFIG_SND_CMI8330 is not set
+# CONFIG_SND_OPL3SA2 is not set
+# CONFIG_SND_SGALAXY is not set
+# CONFIG_SND_SSCAPE is not set
+
+#
+# PCI devices
+#
+# CONFIG_SND_ALI5451 is not set
+# CONFIG_SND_AZT3328 is not set
+# CONFIG_SND_CS46XX is not set
+# CONFIG_SND_CS4281 is not set
+# CONFIG_SND_EMU10K1 is not set
+# CONFIG_SND_KORG1212 is not set
+# CONFIG_SND_NM256 is not set
+# CONFIG_SND_RME32 is not set
+# CONFIG_SND_RME96 is not set
+# CONFIG_SND_RME9652 is not set
+# CONFIG_SND_HDSP is not set
+# CONFIG_SND_TRIDENT is not set
+CONFIG_SND_YMFPCI=m
+# CONFIG_SND_ALS4000 is not set
+# CONFIG_SND_CMIPCI is not set
+# CONFIG_SND_ENS1370 is not set
+# CONFIG_SND_ENS1371 is not set
+# CONFIG_SND_ES1938 is not set
+# CONFIG_SND_ES1968 is not set
+# CONFIG_SND_MAESTRO3 is not set
+# CONFIG_SND_FM801 is not set
+# CONFIG_SND_ICE1712 is not set
+# CONFIG_SND_ICE1724 is not set
+# CONFIG_SND_INTEL8X0 is not set
+# CONFIG_SND_SONICVIBES is not set
+# CONFIG_SND_VIA82XX is not set
+# CONFIG_SND_VX222 is not set
+
+#
+# PCMCIA devices
+#
+# CONFIG_SND_VXPOCKET is not set
+# CONFIG_SND_VXP440 is not set
+
+#
+# Open Sound System
+#
+CONFIG_SOUND_PRIME=m
+# CONFIG_SOUND_BT878 is not set
+CONFIG_SOUND_CMPCI=m
+# CONFIG_SOUND_CMPCI_FM is not set
+# CONFIG_SOUND_CMPCI_MIDI is not set
+# CONFIG_SOUND_CMPCI_JOYSTICK is not set
+# CONFIG_SOUND_CMPCI_CM8738 is not set
+# CONFIG_SOUND_EMU10K1 is not set
+# CONFIG_SOUND_FUSION is not set
+# CONFIG_SOUND_CS4281 is not set
+# CONFIG_SOUND_ES1370 is not set
+# CONFIG_SOUND_ES1371 is not set
+# CONFIG_SOUND_ESSSOLO1 is not set
+# CONFIG_SOUND_MAESTRO is not set
+# CONFIG_SOUND_MAESTRO3 is not set
+# CONFIG_SOUND_ICH is not set
+# CONFIG_SOUND_SONICVIBES is not set
+# CONFIG_SOUND_TRIDENT is not set
+# CONFIG_SOUND_MSNDCLAS is not set
+# CONFIG_SOUND_MSNDPIN is not set
+# CONFIG_SOUND_VIA82CXXX is not set
+# CONFIG_SOUND_OSS is not set
+# CONFIG_SOUND_ALI5455 is not set
+# CONFIG_SOUND_FORTE is not set
+# CONFIG_SOUND_RME96XX is not set
+# CONFIG_SOUND_AD1980 is not set
+CONFIG_SOUND_VOYAGERGX=m
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+# CONFIG_USB_GADGET is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_SH_STANDARD_BIOS is not set
+# CONFIG_KGDB is not set
+# CONFIG_FRAME_POINTER is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_SUPERH=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+# CONFIG_SWAP is not set
+# CONFIG_SYSVIPC is not set
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+# CONFIG_KALLSYMS is not set
+# CONFIG_FUTEX is not set
+# CONFIG_EPOLL is not set
+CONFIG_IOSCHED_NOOP=y
+# CONFIG_IOSCHED_AS is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# System type
+#
+# CONFIG_SH_SOLUTION_ENGINE is not set
+# CONFIG_SH_7751_SOLUTION_ENGINE is not set
+CONFIG_SH_7300_SOLUTION_ENGINE=y
+# CONFIG_SH_7751_SYSTEMH is not set
+# CONFIG_SH_STB1_HARP is not set
+# CONFIG_SH_STB1_OVERDRIVE is not set
+# CONFIG_SH_HP620 is not set
+# CONFIG_SH_HP680 is not set
+# CONFIG_SH_HP690 is not set
+# CONFIG_SH_CQREEK is not set
+# CONFIG_SH_DMIDA is not set
+# CONFIG_SH_EC3104 is not set
+# CONFIG_SH_SATURN is not set
+# CONFIG_SH_DREAMCAST is not set
+# CONFIG_SH_CAT68701 is not set
+# CONFIG_SH_BIGSUR is not set
+# CONFIG_SH_SH2000 is not set
+# CONFIG_SH_ADX is not set
+# CONFIG_SH_MPC1211 is not set
+# CONFIG_SH_SECUREEDGE5410 is not set
+# CONFIG_SH_HS7751RVOIP is not set
+# CONFIG_SH_RTS7751R2D is not set
+# CONFIG_SH_UNKNOWN is not set
+# CONFIG_CPU_SH2 is not set
+CONFIG_CPU_SH3=y
+# CONFIG_CPU_SH4 is not set
+# CONFIG_CPU_SUBTYPE_SH7604 is not set
+CONFIG_CPU_SUBTYPE_SH7300=y
+# CONFIG_CPU_SUBTYPE_SH7707 is not set
+# CONFIG_CPU_SUBTYPE_SH7708 is not set
+# CONFIG_CPU_SUBTYPE_SH7709 is not set
+# CONFIG_CPU_SUBTYPE_SH7750 is not set
+# CONFIG_CPU_SUBTYPE_SH7751 is not set
+# CONFIG_CPU_SUBTYPE_SH7760 is not set
+# CONFIG_CPU_SUBTYPE_ST40STB1 is not set
+# CONFIG_CPU_SUBTYPE_ST40GX1 is not set
+CONFIG_MMU=y
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="console=ttySC0,38400 root=/dev/ram0"
+CONFIG_MEMORY_START=0x0c000000
+CONFIG_MEMORY_SIZE=0x04000000
+# CONFIG_MEMORY_OVERRIDE is not set
+CONFIG_SH_DSP=y
+# CONFIG_SH_ADC is not set
+CONFIG_ZERO_PAGE_OFFSET=0x00001000
+CONFIG_BOOT_LINK_OFFSET=0x00210000
+CONFIG_CPU_LITTLE_ENDIAN=y
+# CONFIG_PREEMPT is not set
+# CONFIG_UBC_WAKEUP is not set
+# CONFIG_SH_WRITETHROUGH is not set
+# CONFIG_SH_OCRAM is not set
+# CONFIG_SMP is not set
+# CONFIG_SH_PCLK_CALC is not set
+CONFIG_SH_PCLK_FREQ=33333333
+
+#
+# CPU Frequency scaling
+#
+# CONFIG_CPU_FREQ is not set
+
+#
+# DMA support
+#
+# CONFIG_SH_DMA is not set
+
+#
+# Companion Chips
+#
+# CONFIG_HD6446X_SERIES is not set
+CONFIG_HEARTBEAT=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
+#
+# CONFIG_PCI is not set
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# SH initrd options
+#
+CONFIG_EMBEDDED_RAMDISK=y
+CONFIG_EMBEDDED_RAMDISK_IMAGE="ramdisk.gz"
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_LOOP is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+
+#
+# Networking support
+#
+# CONFIG_NET is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+
+#
+# ISDN subsystem
+#
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+# CONFIG_SERIO_SERPORT is not set
+# CONFIG_SERIO_CT82C710 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_SH_SCI=y
+CONFIG_SERIAL_SH_SCI_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_UNIX98_PTYS is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+CONFIG_IPMI_HANDLER=y
+# CONFIG_IPMI_PANIC_EVENT is not set
+CONFIG_IPMI_DEVICE_INTERFACE=y
+# CONFIG_IPMI_SI is not set
+CONFIG_IPMI_WATCHDOG=y
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+CONFIG_SOFT_WATCHDOG=y
+# CONFIG_SH_WDT is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+CONFIG_DEVFS_FS=y
+CONFIG_DEVFS_MOUNT=y
+# CONFIG_DEVFS_DEBUG is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLBFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_INFO is not set
+CONFIG_SH_STANDARD_BIOS=y
+CONFIG_EARLY_PRINTK=y
+CONFIG_KGDB=y
+
+#
+# KGDB configuration options
+#
+# CONFIG_MORE_COMPILE_OPTIONS is not set
+# CONFIG_KGDB_NMI is not set
+# CONFIG_KGDB_THREAD is not set
+# CONFIG_SH_KGDB_CONSOLE is not set
+# CONFIG_KGDB_SYSRQ is not set
+# CONFIG_KGDB_KERNEL_ASSERTS is not set
+
+#
+# Serial port setup
+#
+CONFIG_KGDB_DEFPORT=1
+CONFIG_KGDB_DEFBAUD=115200
+CONFIG_KGDB_DEFPARITY_N=y
+# CONFIG_KGDB_DEFPARITY_E is not set
+# CONFIG_KGDB_DEFPARITY_O is not set
+CONFIG_KGDB_DEFBITS_8=y
+# CONFIG_KGDB_DEFBITS_7 is not set
+# CONFIG_FRAME_POINTER is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
--- /dev/null
+/*
+ * arch/sh/drivers/dma/dma-sysfs.c
+ *
+ * sysfs interface for SH DMA API
+ *
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sysdev.h>
+#include <linux/module.h>
+#include <asm/dma.h>
+
+struct sysdev_class dma_sysclass = {
+ set_kset_name("dma"),
+};
+
+EXPORT_SYMBOL(dma_sysclass);
+
+static ssize_t dma_show_devices(struct sys_device *dev, char *buf)
+{
+ ssize_t len = 0;
+ int i;
+
+ for (i = 0; i < MAX_DMA_CHANNELS; i++) {
+ struct dma_info *info = get_dma_info(i);
+ struct dma_channel *channel = &info->channels[i];
+
+ len += sprintf(buf + len, "%2d: %14s %s\n",
+ channel->chan, info->name,
+ channel->dev_id);
+ }
+
+ return len;
+}
+
+static SYSDEV_ATTR(devices, S_IRUGO, dma_show_devices, NULL);
+
+static int __init dma_sysclass_init(void)
+{
+ int ret;
+
+ ret = sysdev_class_register(&dma_sysclass);
+ if (ret == 0)
+ sysfs_create_file(&dma_sysclass.kset.kobj, &attr_devices.attr);
+
+ return ret;
+}
+
+postcore_initcall(dma_sysclass_init);
+
+static ssize_t dma_show_dev_id(struct sys_device *dev, char *buf)
+{
+ struct dma_channel *channel = to_dma_channel(dev);
+ return sprintf(buf, "%s\n", channel->dev_id);
+}
+
+static ssize_t dma_store_dev_id(struct sys_device *dev,
+ const char *buf, size_t count)
+{
+ struct dma_channel *channel = to_dma_channel(dev);
+ strcpy(channel->dev_id, buf);
+ return count;
+}
+
+static SYSDEV_ATTR(dev_id, S_IRUGO | S_IWUSR, dma_show_dev_id, dma_store_dev_id);
+
+static ssize_t dma_store_config(struct sys_device *dev,
+ const char *buf, size_t count)
+{
+ struct dma_channel *channel = to_dma_channel(dev);
+ unsigned long config;
+
+ config = simple_strtoul(buf, NULL, 0);
+ dma_configure_channel(channel->chan, config);
+
+ return count;
+}
+
+static SYSDEV_ATTR(config, S_IWUSR, NULL, dma_store_config);
+
+static ssize_t dma_show_mode(struct sys_device *dev, char *buf)
+{
+ struct dma_channel *channel = to_dma_channel(dev);
+ return sprintf(buf, "0x%08x\n", channel->mode);
+}
+
+static ssize_t dma_store_mode(struct sys_device *dev,
+ const char *buf, size_t count)
+{
+ struct dma_channel *channel = to_dma_channel(dev);
+ channel->mode = simple_strtoul(buf, NULL, 0);
+ return count;
+}
+
+static SYSDEV_ATTR(mode, S_IRUGO | S_IWUSR, dma_show_mode, dma_store_mode);
+
+#define dma_ro_attr(field, fmt) \
+static ssize_t dma_show_##field(struct sys_device *dev, char *buf) \
+{ \
+ struct dma_channel *channel = to_dma_channel(dev); \
+ return sprintf(buf, fmt, channel->field); \
+} \
+static SYSDEV_ATTR(field, S_IRUGO, dma_show_##field, NULL);
+
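+/* Instantiate read-only 'count' and 'flags' attributes using the macro above */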
+dma_ro_attr(count, "0x%08x\n");
+dma_ro_attr(flags, "0x%08lx\n");
+
+int __init dma_create_sysfs_files(struct dma_channel *chan)
+{
+ struct sys_device *dev = &chan->dev;
+ int ret;
+
+ dev->id = chan->chan;
+ dev->cls = &dma_sysclass;
+
+ ret = sysdev_register(dev);
+ if (ret)
+ return ret;
+
+ sysdev_create_file(dev, &attr_dev_id);
+ sysdev_create_file(dev, &attr_count);
+ sysdev_create_file(dev, &attr_mode);
+ sysdev_create_file(dev, &attr_flags);
+ sysdev_create_file(dev, &attr_config);
+
+ return 0;
+}
+
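+/*
+ * Illustrative sketch (not part of this driver): how userspace might read
+ * the per-channel dev_id attribute exported above.  The sysfs path is an
+ * assumption derived from the "dma" sysdev class name; verify it on a
+ * running system before relying on it.
+ */
+#if 0
+#include <stdio.h>
+
+static int print_dma_dev_id(int chan)
+{
+	char path[64], id[64];
+	FILE *f;
+
+	snprintf(path, sizeof(path),
+		 "/sys/devices/system/dma/dma%d/dev_id", chan);
+	f = fopen(path, "r");
+	if (!f)
+		return -1;
+	if (fgets(id, sizeof(id), f))
+		printf("channel %d: %s", chan, id);
+	fclose(f);
+	return 0;
+}
+#endif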
--- /dev/null
+/*
+ * arch/sh/drivers/pci/fixups-rts7751r2d.c
+ *
+ * RTS7751R2D PCI fixups
+ *
+ * Copyright (C) 2003 Lineo uSolutions, Inc.
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include "pci-sh7751.h"
+#include <asm/io.h>
+
+#define PCIMCR_MRSET_OFF 0xBFFFFFFF
+#define PCIMCR_RFSH_OFF 0xFFFFFFFB
+
+int pci_fixup_pcic(void)
+{
+ unsigned long mcr;
+
+ outl(0xfb900047, SH7751_PCICONF1);
+ outl(0xab000001, SH7751_PCICONF4);
+
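+	/* Copy MCR to PCIMCR with the MRSET and RFSH bits masked off */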
+ mcr = inl(SH7751_MCR);
+ mcr = (mcr & PCIMCR_MRSET_OFF) & PCIMCR_RFSH_OFF;
+ outl(mcr, SH7751_PCIMCR);
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * linux/arch/sh/kernel/pci-rts7751r2d.c
+ *
+ * Author: Ian DaSilva (idasilva@mvista.com)
+ *
+ * Highly leveraged from pci-bigsur.c, written by Dustin McIntire.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * PCI initialization for the Renesas SH7751R RTS7751R2D board
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+
+#include <asm/io.h>
+#include "pci-sh7751.h"
+#include <asm/rts7751r2d/rts7751r2d.h>
+
+int __init pcibios_map_platform_irq(u8 slot, u8 pin)
+{
+	switch (slot) {
+	case 0: return IRQ_PCISLOT1;	/* PCI extension slot #1 */
+	case 1: return IRQ_PCISLOT2;	/* PCI extension slot #2 */
+	case 2: return IRQ_PCMCIA;	/* PCI CardBus bridge */
+	case 3: return IRQ_PCIETH;	/* Realtek Ethernet controller */
+	default:
+		printk(KERN_ERR "PCI: Bad IRQ mapping request for slot %d\n",
+		       slot);
+		return -1;
+	}
+}
+
+static struct resource sh7751_io_resource = {
+ .name = "SH7751_IO",
+ .start = 0x4000,
+ .end = 0x4000 + SH7751_PCI_IO_SIZE - 1,
+ .flags = IORESOURCE_IO
+};
+
+static struct resource sh7751_mem_resource = {
+ .name = "SH7751_mem",
+ .start = SH7751_PCI_MEMORY_BASE,
+ .end = SH7751_PCI_MEMORY_BASE + SH7751_PCI_MEM_SIZE - 1,
+ .flags = IORESOURCE_MEM
+};
+
+extern struct pci_ops sh7751_pci_ops;
+
+struct pci_channel board_pci_channels[] = {
+ { &sh7751_pci_ops, &sh7751_io_resource, &sh7751_mem_resource, 0, 0xff },
+ { NULL, NULL, NULL, 0, 0 },
+};
+EXPORT_SYMBOL(board_pci_channels);
+
+static struct sh7751_pci_address_map sh7751_pci_map = {
+ .window0 = {
+ .base = SH7751_CS3_BASE_ADDR,
+ .size = 0x03f00000,
+ },
+
+ .flags = SH7751_PCIC_NO_RESET,
+};
+
+int __init pcibios_init_platform(void)
+{
+ return sh7751_pcic_init(&sh7751_pci_map);
+}
+
--- /dev/null
+/*
+ * linux/arch/sh/kernel/adc.c -- SH3 on-chip ADC support
+ *
+ * Copyright (C) 2004 Andriy Skulysh <askulysh@image.kiev.ua>
+ */
+
+#include <linux/module.h>
+#include <asm/adc.h>
+#include <asm/io.h>
+
+
+int adc_single(unsigned int channel)
+{
+	int off;
+	unsigned char csr;
+
+	if (channel >= 8)
+		return -1;
+
+	off = (channel & 0x03) << 2;
+
+	/* Select the channel and start a single conversion */
+	csr = channel | ADCSR_ADST | ADCSR_CKS;
+	ctrl_outb(csr, ADCSR);
+
+	/* Busy-wait for the conversion-end flag */
+	do {
+		csr = ctrl_inb(ADCSR);
+	} while ((csr & ADCSR_ADF) == 0);
+
+	/* Acknowledge the flag and stop the converter */
+	csr &= ~(ADCSR_ADF | ADCSR_ADST);
+	ctrl_outb(csr, ADCSR);
+
+	/* The 10-bit result is left-justified in ADDRAH:ADDRAL */
+	return (((ctrl_inb(ADDRAH + off) << 8) |
+		 ctrl_inb(ADDRAL + off)) >> 6);
+}
+
+EXPORT_SYMBOL(adc_single);
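+
+/*
+ * Illustrative sketch (not part of this file): scaling the 10-bit result
+ * of adc_single() to millivolts.  The 3300mV full-scale value is an
+ * assumption; substitute the analog reference voltage of the board.
+ */
+#if 0
+static int adc_read_mv(unsigned int channel)
+{
+	int raw = adc_single(channel);	/* 0..1023, or -1 on error */
+
+	if (raw < 0)
+		return raw;
+	return (raw * 3300) / 1023;
+}
+#endif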
--- /dev/null
+/*
+ * arch/sh/kernel/cpu/bus.c
+ *
+ * Virtual bus for SuperH.
+ *
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * Shamelessly cloned from arch/arm/mach-omap/bus.c, which was written
+ * by:
+ *
+ * Copyright (C) 2003 - 2004 Nokia Corporation
+ * Written by Tony Lindgren <tony@atomide.com>
+ * Portions of code based on sa1111.c.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/bus-sh.h>
+
+static int sh_bus_match(struct device *dev, struct device_driver *drv)
+{
+ struct sh_driver *shdrv = to_sh_driver(drv);
+ struct sh_dev *shdev = to_sh_dev(dev);
+
+ return shdev->dev_id == shdrv->dev_id;
+}
+
+static int sh_bus_suspend(struct device *dev, u32 state)
+{
+ struct sh_dev *shdev = to_sh_dev(dev);
+ struct sh_driver *shdrv = to_sh_driver(dev->driver);
+
+ if (shdrv && shdrv->suspend)
+ return shdrv->suspend(shdev, state);
+
+ return 0;
+}
+
+static int sh_bus_resume(struct device *dev)
+{
+ struct sh_dev *shdev = to_sh_dev(dev);
+ struct sh_driver *shdrv = to_sh_driver(dev->driver);
+
+ if (shdrv && shdrv->resume)
+ return shdrv->resume(shdev);
+
+ return 0;
+}
+
+static struct device sh_bus_devices[SH_NR_BUSES] = {
+ {
+ .bus_id = SH_BUS_NAME_VIRT,
+ },
+};
+
+struct bus_type sh_bus_types[SH_NR_BUSES] = {
+ {
+ .name = SH_BUS_NAME_VIRT,
+ .match = sh_bus_match,
+ .suspend = sh_bus_suspend,
+ .resume = sh_bus_resume,
+ },
+};
+
+static int sh_device_probe(struct device *dev)
+{
+ struct sh_dev *shdev = to_sh_dev(dev);
+ struct sh_driver *shdrv = to_sh_driver(dev->driver);
+
+ if (shdrv && shdrv->probe)
+ return shdrv->probe(shdev);
+
+ return -ENODEV;
+}
+
+static int sh_device_remove(struct device *dev)
+{
+ struct sh_dev *shdev = to_sh_dev(dev);
+ struct sh_driver *shdrv = to_sh_driver(dev->driver);
+
+ if (shdrv && shdrv->remove)
+ return shdrv->remove(shdev);
+
+ return 0;
+}
+
+int sh_device_register(struct sh_dev *dev)
+{
+ if (!dev)
+ return -EINVAL;
+
+ if (dev->bus_id < 0 || dev->bus_id >= SH_NR_BUSES) {
+ printk(KERN_ERR "%s: bus_id invalid: %s bus: %d\n",
+ __FUNCTION__, dev->name, dev->bus_id);
+ return -EINVAL;
+ }
+
+ dev->dev.parent = &sh_bus_devices[dev->bus_id];
+ dev->dev.bus = &sh_bus_types[dev->bus_id];
+
+ /* This is needed for USB OHCI to work */
+ if (dev->dma_mask)
+ dev->dev.dma_mask = dev->dma_mask;
+
+ snprintf(dev->dev.bus_id, BUS_ID_SIZE, "%s%u",
+ dev->name, dev->dev_id);
+
+ printk(KERN_INFO "Registering SH device '%s'. Parent at %s\n",
+ dev->dev.bus_id, dev->dev.parent->bus_id);
+
+ return device_register(&dev->dev);
+}
+
+void sh_device_unregister(struct sh_dev *dev)
+{
+ device_unregister(&dev->dev);
+}
+
+int sh_driver_register(struct sh_driver *drv)
+{
+ if (!drv)
+ return -EINVAL;
+
+ if (drv->bus_id < 0 || drv->bus_id >= SH_NR_BUSES) {
+ printk(KERN_ERR "%s: bus_id invalid: bus: %d device %d\n",
+ __FUNCTION__, drv->bus_id, drv->dev_id);
+ return -EINVAL;
+ }
+
+ drv->drv.probe = sh_device_probe;
+ drv->drv.remove = sh_device_remove;
+ drv->drv.bus = &sh_bus_types[drv->bus_id];
+
+ return driver_register(&drv->drv);
+}
+
+void sh_driver_unregister(struct sh_driver *drv)
+{
+ driver_unregister(&drv->drv);
+}
+
+static int __init sh_bus_init(void)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < SH_NR_BUSES; i++) {
+ ret = device_register(&sh_bus_devices[i]);
+ if (ret != 0) {
+ printk(KERN_ERR "Unable to register bus device %s\n",
+ sh_bus_devices[i].bus_id);
+ continue;
+ }
+
+ ret = bus_register(&sh_bus_types[i]);
+ if (ret != 0) {
+ printk(KERN_ERR "Unable to register bus %s\n",
+ sh_bus_types[i].name);
+ device_unregister(&sh_bus_devices[i]);
+ }
+ }
+
+ printk(KERN_INFO "SH Virtual Bus initialized\n");
+
+ return ret;
+}
+
+static void __exit sh_bus_exit(void)
+{
+ int i;
+
+ for (i = 0; i < SH_NR_BUSES; i++) {
+ bus_unregister(&sh_bus_types[i]);
+ device_unregister(&sh_bus_devices[i]);
+ }
+}
+
+module_init(sh_bus_init);
+module_exit(sh_bus_exit);
+
+MODULE_AUTHOR("Paul Mundt <lethal@linux-sh.org>");
+MODULE_DESCRIPTION("SH Virtual Bus");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(sh_bus_types);
+EXPORT_SYMBOL(sh_device_register);
+EXPORT_SYMBOL(sh_device_unregister);
+EXPORT_SYMBOL(sh_driver_register);
+EXPORT_SYMBOL(sh_driver_unregister);
+
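+/*
+ * Illustrative sketch (not part of this file): the minimal shape of a
+ * client driver for this virtual bus.  The SH_BUS_VIRT index constant,
+ * driver name and device id are assumptions for illustration; consult
+ * <asm/bus-sh.h> for the real definitions.
+ */
+#if 0
+static int example_probe(struct sh_dev *dev)
+{
+	/* Called once a device with a matching dev_id is registered */
+	return 0;
+}
+
+static struct sh_driver example_driver = {
+	.drv	= { .name = "example" },
+	.bus_id	= SH_BUS_VIRT,	/* assumed bus index */
+	.dev_id	= 0,		/* assumed device id */
+	.probe	= example_probe,
+};
+
+static int __init example_init(void)
+{
+	return sh_driver_register(&example_driver);
+}
+__initcall(example_init);
+#endif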
--- /dev/null
+/*
+ * arch/sh/kernel/early_printk.c
+ *
+ * Copyright (C) 1999, 2000 Niibe Yutaka
+ * Copyright (C) 2002 M. R. Brown
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/console.h>
+#include <linux/tty.h>
+#include <linux/init.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_SH_STANDARD_BIOS
+#include <asm/sh_bios.h>
+
+/*
+ * Print a string through the BIOS
+ */
+static void sh_console_write(struct console *co, const char *s,
+ unsigned count)
+{
+ sh_bios_console_write(s, count);
+}
+
+/*
+ * Set up the initial baud/bits/parity. We do two things here:
+ * - construct a cflag setting for the first rs_open()
+ * - initialize the serial port
+ * Returns 0; the BIOS console is assumed to always be present.
+ */
+static int __init sh_console_setup(struct console *co, char *options)
+{
+ int cflag = CREAD | HUPCL | CLOCAL;
+
+ /*
+ * Now construct a cflag setting.
+ * TODO: this is a totally bogus cflag, as we have
+ * no idea what serial settings the BIOS is using, or
+	 * even if it's using the serial port at all.
+ */
+ cflag |= B115200 | CS8 | /*no parity*/0;
+
+ co->cflag = cflag;
+
+ return 0;
+}
+
+static struct console early_console = {
+ .name = "bios",
+ .write = sh_console_write,
+ .setup = sh_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+#endif
+
+#ifdef CONFIG_EARLY_SCIF_CONSOLE
+#define SCIF_REG 0xffe80000
+
+static void scif_sercon_putc(int c)
+{
+	/* Wait for room in the transmit FIFO (TDFE set) */
+	while (!(ctrl_inw(SCIF_REG + 0x10) & 0x20))
+		;
+
+	ctrl_outb(c, SCIF_REG + 12);
+	/* Clear TDFE/TEND so the next poll reflects this character */
+	ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0x9f), SCIF_REG + 0x10);
+
+	if (c == '\n')
+		scif_sercon_putc('\r');
+}
+
+static void scif_sercon_flush(void)
+{
+	/* Clear TEND, wait for the transmitter to drain, then clear it again */
+	ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0xbf), SCIF_REG + 0x10);
+
+	while (!(ctrl_inw(SCIF_REG + 0x10) & 0x40))
+		;
+
+	ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0xbf), SCIF_REG + 0x10);
+}
+
+static void scif_sercon_write(struct console *con, const char *s, unsigned count)
+{
+ while (count-- > 0)
+ scif_sercon_putc(*s++);
+
+ scif_sercon_flush();
+}
+
+static int __init scif_sercon_setup(struct console *con, char *options)
+{
+ con->cflag = CREAD | HUPCL | CLOCAL | B115200 | CS8;
+
+ return 0;
+}
+
+static struct console early_console = {
+ .name = "sercon",
+ .write = scif_sercon_write,
+ .setup = scif_sercon_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+void scif_sercon_init(int baud)
+{
+	ctrl_outw(0, SCIF_REG + 8);		/* SCSCR: disable TE/RE */
+	ctrl_outw(0, SCIF_REG);			/* SCSMR: 8N1, internal clock */
+
+	/*
+	 * Set the bit rate register: SCBRR = pclk / (32 * baud) - 1,
+	 * assuming a 50MHz peripheral clock.
+	 */
+	ctrl_outb((50000000 / (32 * baud)) - 1, SCIF_REG + 4);
+
+	ctrl_outw(12, SCIF_REG + 24);		/* SCFCR: reset/set up FIFOs */
+	ctrl_outw(8, SCIF_REG + 24);
+	ctrl_outw(0, SCIF_REG + 32);		/* SCSPTR */
+	ctrl_outw(0x60, SCIF_REG + 16);		/* SCFSR */
+	ctrl_outw(0, SCIF_REG + 36);		/* SCLSR */
+	ctrl_outw(0x30, SCIF_REG + 8);		/* SCSCR: enable TE/RE */
+}
+#endif
+
+void __init enable_early_printk(void)
+{
+#ifdef CONFIG_EARLY_SCIF_CONSOLE
+ scif_sercon_init(115200);
+#endif
+ register_console(&early_console);
+}
+
+void disable_early_printk(void)
+{
+ unregister_console(&early_console);
+}
+
--- /dev/null
+#
+# Makefile for a ramdisk image
+#
+
+obj-y += ramdisk.o
+
+
+O_FORMAT = $(shell $(OBJDUMP) -i | head -n 2 | grep elf32)
+img := $(subst ",,$(CONFIG_EMBEDDED_RAMDISK_IMAGE))
+# add $(src) when $(img) is relative
+img := $(subst $(src)//,/,$(src)/$(img))
+
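+# Wrap the raw ramdisk image in an ELF object: ld reads the image with
+# "-b binary" and ld.script places the resulting .data payload into an
+# .initrd section.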
+quiet_cmd_ramdisk = LD $@
+define cmd_ramdisk
+ $(LD) -T $(src)/ld.script -b binary --oformat $(O_FORMAT) -o $@ $(img)
+endef
+
+$(obj)/ramdisk.o: $(img) $(src)/ld.script
+ $(call cmd,ramdisk)
--- /dev/null
+OUTPUT_ARCH(sh)
+SECTIONS
+{
+ .initrd :
+ {
+ *(.data)
+ }
+}
+
--- /dev/null
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/config-language.txt.
+#
+
+mainmenu "Linux/SH64 Kernel Configuration"
+
+config SUPERH
+ bool
+ default y
+
+config SUPERH64
+ bool
+ default y
+
+config MMU
+ bool
+ default y
+
+config UID16
+ bool
+ default y
+
+config RWSEM_GENERIC_SPINLOCK
+ bool
+ default y
+
+config LOG_BUF_SHIFT
+ int
+ default 14
+
+config RWSEM_XCHGADD_ALGORITHM
+ bool
+
+config GENERIC_ISA_DMA
+ bool
+
+source init/Kconfig
+
+menu "System type"
+
+choice
+ prompt "SuperH system type"
+ default SH_SIMULATOR
+
+config SH_GENERIC
+ bool "Generic"
+
+config SH_SIMULATOR
+ bool "Simulator"
+
+config SH_CAYMAN
+ bool "Cayman"
+
+config SH_ROMRAM
+ bool "ROM/RAM"
+
+config SH_HARP
+ bool "ST50-Harp"
+
+endchoice
+
+choice
+ prompt "Processor family"
+ default CPU_SH5
+
+config CPU_SH5
+ bool "SH-5"
+
+endchoice
+
+choice
+ prompt "Processor type"
+
+config CPU_SUBTYPE_SH5_101
+ bool "SH5-101"
+ depends on CPU_SH5
+
+config CPU_SUBTYPE_SH5_103
+ bool "SH5-103"
+ depends on CPU_SH5
+
+endchoice
+
+choice
+ prompt "Endianness"
+ default LITTLE_ENDIAN
+
+config LITTLE_ENDIAN
+ bool "Little-Endian"
+
+config BIG_ENDIAN
+ bool "Big-Endian"
+
+endchoice
+
+config SH64_FPU_DENORM_FLUSH
+ bool "Flush floating point denorms to zero"
+
+choice
+ prompt "Page table levels"
+ default SH64_PGTABLE_2_LEVEL
+
+config SH64_PGTABLE_2_LEVEL
+ bool "2"
+
+config SH64_PGTABLE_3_LEVEL
+ bool "3"
+
+endchoice
+
+choice
+ prompt "HugeTLB page size"
+ depends on HUGETLB_PAGE && MMU
+ default HUGETLB_PAGE_SIZE_64K
+
+config HUGETLB_PAGE_SIZE_64K
+ bool "64K"
+
+config HUGETLB_PAGE_SIZE_1MB
+ bool "1MB"
+
+config HUGETLB_PAGE_SIZE_512MB
+ bool "512MB"
+
+endchoice
+
+config SH64_USER_MISALIGNED_FIXUP
+ bool "Fixup misaligned loads/stores occurring in user mode"
+
+comment "Memory options"
+
+config CACHED_MEMORY_OFFSET
+ hex "Cached Area Offset"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "20000000"
+
+config MEMORY_START
+ hex "Physical memory start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "80000000"
+
+config MEMORY_SIZE_IN_MB
+ int "Memory size (in MB)" if SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "64" if SH_HARP || SH_CAYMAN
+ default "8" if SH_SIMULATOR
+
+comment "Cache options"
+
+config DCACHE_DISABLED
+ bool "DCache Disabling"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+
+choice
+ prompt "DCache mode"
+ depends on !DCACHE_DISABLED && !SH_SIMULATOR
+ default DCACHE_WRITE_BACK
+
+config DCACHE_WRITE_BACK
+ bool "Write-back"
+
+config DCACHE_WRITE_THROUGH
+ bool "Write-through"
+
+endchoice
+
+config ICACHE_DISABLED
+ bool "ICache Disabling"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+
+config PCIDEVICE_MEMORY_START
+ hex
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "C0000000"
+
+config DEVICE_MEMORY_START
+ hex
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "E0000000"
+
+config FLASH_MEMORY_START
+ hex "Flash memory/on-chip devices start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "00000000"
+
+config PCI_BLOCK_START
+ hex "PCI block start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "40000000"
+
+comment "CPU Subtype specific options"
+
+config SH64_ID2815_WORKAROUND
+ bool "Include workaround for SH5-101 cut2 silicon defect ID2815"
+
+comment "Misc options"
+config HEARTBEAT
+ bool "Heartbeat LED"
+
+config HDSP253_LED
+ bool "Support for HDSP-253 LED"
+ depends on SH_CAYMAN
+
+config SH_DMA
+ tristate "DMA controller (DMAC) support"
+
+config PREEMPT
+ bool "Preemptible Kernel (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+
+endmenu
+
+menu "Bus options (PCI, PCMCIA, EISA, MCA, ISA)"
+
+config ISA
+ bool
+
+config SBUS
+ bool
+
+config PCI
+ bool "PCI support"
+ help
+ Find out whether you have a PCI motherboard. PCI is the name of a
+ bus system, i.e. the way the CPU talks to the other stuff inside
+ your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or
+ VESA. If you have PCI, say Y, otherwise N.
+
+ The PCI-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>, contains valuable
+ information about which PCI hardware does work under Linux and which
+ doesn't.
+
+config SH_PCIDMA_NONCOHERENT
+ bool "Cache and PCI noncoherent"
+ depends on PCI
+ default y
+ help
+ Enable this option if your platform does not have a CPU cache which
+ remains coherent with PCI DMA. It is safest to say 'Y', although you
+ will see better performance if you can say 'N', because the PCI DMA
+ code will not have to flush the CPU's caches. If you have a PCI host
+ bridge integrated with your SH CPU, refer carefully to the chip specs
+ to see if you can say 'N' here. Otherwise, leave it as 'Y'.
+
+source "drivers/pci/Kconfig"
+
+source "drivers/pcmcia/Kconfig"
+
+source "drivers/pci/hotplug/Kconfig"
+
+endmenu
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+source "arch/sh64/oprofile/Kconfig"
+
+menu "Kernel hacking"
+
+config MAGIC_SYSRQ
+ bool "Magic SysRq key"
+ help
+ If you say Y here, you will have some control over the system even
+ if the system crashes for example during kernel debugging (e.g., you
+ will be able to flush the buffer cache to disk, reboot the system
+ immediately or dump some status information). This is accomplished
+ by pressing various keys while holding SysRq (Alt+PrintScreen). It
+ also works on a serial console (on PC hardware at least), if you
+ send a BREAK and then within 5 seconds a command keypress. The
+ keys are documented in Documentation/sysrq.txt. Don't say Y unless
+ you really know what this hack does.
+
+config EARLY_PRINTK
+ bool "Early SCIF console support"
+
+config DEBUG_KERNEL_WITH_GDB_STUB
+ bool "GDB Stub kernel debug"
+
+config SH64_PROC_TLB
+ bool "Debug: report TLB fill/purge activity through /proc/tlb"
+ depends on PROC_FS
+
+config SH64_PROC_ASIDS
+ bool "Debug: report ASIDs through /proc/asids"
+ depends on PROC_FS
+
+config SH64_SR_WATCH
+ bool "Debug: set SR.WATCH to enable hardware watchpoints and trace"
+
+config SH_ALPHANUMERIC
+ bool "Enable debug outputs to on-board alphanumeric display"
+
+config SH_NO_BSS_INIT
+	bool "Avoid zeroing BSS (to speed up startup on suitable platforms)"
+
+config FRAME_POINTER
+ bool "Compile the kernel with frame pointers"
+ default y if KGDB
+ help
+ If you say Y here the resulting kernel image will be slightly larger
+ and slower, but it will give very useful debugging information.
+ If you don't debug the kernel, you can say N, but we may not be able
+ to solve problems without frame pointers.
+
+endmenu
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
+
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000, 2001 Paolo Alberelli
+# Copyright (C) 2003, 2004 Paul Mundt
+#
+# This file is included by the global makefile so that you can add your own
+# architecture-specific flags and dependencies. Remember to have actions
+# for "archclean" and "archdep" for cleaning up and making dependencies for
+# this architecture.
+#
+# Note that the top-level Makefile automagically builds dependencies for
+# SUBDIRS but does not automagically clean them. Therefore "archclean" should
+# clean up everything, while "archdep" does nothing for added SUBDIRS.
+#
+ifndef include_config
+-include .config
+endif
+
+cpu-y := -mb
+cpu-$(CONFIG_LITTLE_ENDIAN) := -ml
+
+cpu-$(CONFIG_CPU_SH5) += -m5-32media-nofpu
+
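+# 'jiffies' is aliased to the least-significant 32-bit word of jiffies_64,
+# which lives at offset 0 on little-endian and offset 4 on big-endian.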
+ifdef CONFIG_LITTLE_ENDIAN
+LDFLAGS_vmlinux += --defsym 'jiffies=jiffies_64'
+LDFLAGS += -EL -mshlelf32_linux
+else
+LDFLAGS_vmlinux += --defsym 'jiffies=jiffies_64+4'
+LDFLAGS += -EB -mshelf32_linux
+endif
+
+# No endianness flags are needed in AFLAGS; 'as' is always run through gcc.
+AFLAGS += -m5 -isa=sh64 -traditional
+CFLAGS += $(cpu-y)
+
+LDFLAGS_vmlinux += --defsym phys_stext=_stext-$(CONFIG_CACHED_MEMORY_OFFSET) \
+ -e phys_stext
+
+OBJCOPYFLAGS := -O binary -R .note -R .comment -R .stab -R .stabstr -S
+
+ifdef LOADADDR
+LINKFLAGS += -Ttext $(word 1,$(LOADADDR))
+endif
+
+machine-$(CONFIG_SH_CAYMAN) := cayman
+machine-$(CONFIG_SH_SIMULATOR) := sim
+machine-$(CONFIG_SH_HARP) := harp
+machine-$(CONFIG_SH_ROMRAM) := romram
+
+head-y := arch/$(ARCH)/kernel/head.o arch/$(ARCH)/kernel/init_task.o
+
+core-y += $(addprefix arch/$(ARCH)/, kernel/ mm/ mach-$(machine-y)/)
+
+LIBGCC := $(shell $(CC) $(CFLAGS) -print-libgcc-file-name)
+libs-y += arch/$(ARCH)/lib/ $(LIBGCC)
+
+drivers-$(CONFIG_OPROFILE) += arch/sh64/oprofile/
+
+boot := arch/$(ARCH)/boot
+
+zImage: vmlinux
+ $(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
+
+compressed: zImage
+
+archclean:
+ $(Q)$(MAKE) $(clean)=$(boot)
+
+prepare: include/asm-$(ARCH)/asm-offsets.h arch/$(ARCH)/lib/syscalltab.h
+
+include/asm-$(ARCH)/asm-offsets.h: arch/$(ARCH)/kernel/asm-offsets.s \
+ include/asm include/linux/version.h
+ $(call filechk,gen-asm-offsets)
+
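+# The sed script in gen-syscalltab below extracts the syscall names from
+# the .long entries of syscalls.S and emits them as C string initializers.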
+define filechk_gen-syscalltab
+ (set -e; \
+ echo "/*"; \
+ echo " * DO NOT MODIFY."; \
+ echo " *"; \
+ echo " * This file was generated by arch/$(ARCH)/Makefile"; \
+ echo " * Any changes will be reverted at build time."; \
+ echo " */"; \
+ echo ""; \
+ echo "#ifndef __SYSCALLTAB_H"; \
+ echo "#define __SYSCALLTAB_H"; \
+ echo ""; \
+ echo "#include <linux/kernel.h>"; \
+ echo ""; \
+ echo "struct syscall_info {"; \
+ echo " const char *name;"; \
+ echo "} syscall_info_table[] = {"; \
+ sed -e '/^.*\.long /!d;s//\t{ "/;s/\(\([^/]*\)\/\)\{1\}.*/\2/; \
+ s/[ \t]*$$//g;s/$$/" },/;s/\("\)sys_/\1/g'; \
+ echo "};"; \
+ echo ""; \
+ echo "#define NUM_SYSCALL_INFO_ENTRIES ARRAY_SIZE(syscall_info_table)"; \
+ echo ""; \
+ echo "#endif /* __SYSCALLTAB_H */" )
+endef
+
+arch/$(ARCH)/lib/syscalltab.h: arch/sh64/kernel/syscalls.S
+ $(call filechk,gen-syscalltab)
+
+CLEAN_FILES += include/asm-$(ARCH)/asm-offsets.h arch/$(ARCH)/lib/syscalltab.h
+
+define archhelp
+ @echo ' zImage - Compressed kernel image (arch/sh64/boot/zImage)'
+endef
+
--- /dev/null
+#
+# arch/sh64/boot/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002 Stuart Menefy
+#
+
+targets := zImage
+subdir- := compressed
+
+$(obj)/zImage: $(obj)/compressed/vmlinux FORCE
+ $(call if_changed,objcopy)
+ @echo 'Kernel: $@ is ready'
+
+$(obj)/compressed/vmlinux: FORCE
+ $(Q)$(MAKE) $(build)=$(obj)/compressed $@
+
--- /dev/null
+#
+# linux/arch/sh64/boot/compressed/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002 Stuart Menefy
+# Copyright (C) 2004 Paul Mundt
+#
+# create a compressed vmlinux image from the original vmlinux
+#
+
+targets := vmlinux vmlinux.bin vmlinux.bin.gz \
+ head.o misc.o cache.o piggy.o vmlinux.lds.o
+
+EXTRA_AFLAGS := -traditional
+
+OBJECTS := $(obj)/head.o $(obj)/misc.o $(obj)/cache.o
+
+#
+# ZIMAGE_OFFSET is the load offset of the compression loader
+# (4M for the kernel plus 64K for this loader)
+#
+ZIMAGE_OFFSET = $(shell printf "0x%8x" $$[$(CONFIG_MEMORY_START)+0x400000+0x10000])
+
+LDFLAGS_vmlinux := -Ttext $(ZIMAGE_OFFSET) -e startup \
+ -T $(obj)/../../kernel/vmlinux.lds.s \
+ --no-warn-mismatch
+
+$(obj)/vmlinux: $(OBJECTS) $(obj)/piggy.o FORCE
+ $(call if_changed,ld)
+ @:
+
+$(obj)/vmlinux.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,gzip)
+
+LDFLAGS_piggy.o := -r --format binary --oformat elf32-sh64-linux -T
+OBJCOPYFLAGS += -R .empty_zero_page
+
+$(obj)/piggy.o: $(obj)/vmlinux.lds.s $(obj)/vmlinux.bin.gz FORCE
+ $(call if_changed,ld)
+
--- /dev/null
+/*
+ * arch/shmedia/boot/compressed/cache.c -- simple cache management functions
+ *
+ * Code extracted from sh-ipl+g, sh-stub.c, which has the copyright:
+ *
+ * This is originally based on an m68k software stub written by Glenn
+ * Engel at HP, but has changed quite a bit.
+ *
+ * Modifications for the SH by Ben Lee and Steve Chamberlain
+ *
+****************************************************************************
+
+ THIS SOFTWARE IS NOT COPYRIGHTED
+
+ HP offers the following for use in the public domain. HP makes no
+ warranty with regard to the software or its performance and the
+ user accepts the software "AS IS" with all faults.
+
+ HP DISCLAIMS ANY WARRANTIES, EXPRESS OR IMPLIED, WITH REGARD
+ TO THIS SOFTWARE INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+
+****************************************************************************/
+
+#define CACHE_ENABLE 0
+#define CACHE_DISABLE 1
+
+int cache_control(unsigned int command)
+{
+	volatile unsigned int *p = (volatile unsigned int *) 0x80000000;
+	int i;
+
+	/*
+	 * Touch one word in each cache line of a 32kB region so that the
+	 * operand cache is filled/written back.  Note that the command
+	 * argument is currently ignored: enable and disable take the
+	 * same path.
+	 */
+	for (i = 0; i < (32 * 1024); i += 32) {
+		(void) *p;
+		p += (32 / sizeof(int));
+	}
+
+	return 0;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/shmedia/boot/compressed/head.S
+ *
+ * Copied from
+ * arch/shmedia/kernel/head.S
+ * which carried the copyright:
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * Modification for compressed loader:
+ * Copyright (C) 2002 Stuart Menefy (stuart.menefy@st.com)
+ */
+
+#include <linux/linkage.h>
+#include <asm/registers.h>
+#include <asm/cache.h>
+#include <asm/mmu_context.h>
+
+/*
+ * Fixed TLB entries to identity map the beginning of RAM
+ */
+#define MMUIR_TEXT_H 0x0000000000000003 | CONFIG_MEMORY_START
+ /* Enabled, Shared, ASID 0, Eff. Add. 0xA0000000 */
+#define MMUIR_TEXT_L 0x000000000000009a | CONFIG_MEMORY_START
+ /* 512 Mb, Cacheable (Write-back), execute, Not User, Ph. Add. */
+
+#define MMUDR_CACHED_H 0x0000000000000003 | CONFIG_MEMORY_START
+ /* Enabled, Shared, ASID 0, Eff. Add. 0xA0000000 */
+#define MMUDR_CACHED_L 0x000000000000015a | CONFIG_MEMORY_START
+ /* 512 Mb, Cacheable (Write-back), read/write, Not User, Ph. Add. */
+
+#define ICCR0_INIT_VAL ICCR0_ON | ICCR0_ICI /* ICE + ICI */
+#define ICCR1_INIT_VAL ICCR1_NOLOCK /* No locking */
+
+#if 1
+#define OCCR0_INIT_VAL OCCR0_ON | OCCR0_OCI | OCCR0_WB /* OCE + OCI + WB */
+#else
+#define OCCR0_INIT_VAL OCCR0_OFF
+#endif
+#define OCCR1_INIT_VAL OCCR1_NOLOCK /* No locking */
+
+ .text
+
+ .global startup
+startup:
+ /*
+ * Prevent speculative fetch on device memory due to
+ * uninitialized target registers.
+ * This must be executed before the first branch.
+ */
+ ptabs/u ZERO, tr0
+ ptabs/u ZERO, tr1
+ ptabs/u ZERO, tr2
+ ptabs/u ZERO, tr3
+ ptabs/u ZERO, tr4
+ ptabs/u ZERO, tr5
+ ptabs/u ZERO, tr6
+ ptabs/u ZERO, tr7
+ synci
+
+ /*
+ * Set initial TLB entries for cached and uncached regions.
+ * Note: PTA/BLINK is PIC code, PTABS/BLINK isn't !
+ */
+ /* Clear ITLBs */
+ pta 1f, tr1
+ movi ITLB_FIXED, r21
+ movi ITLB_LAST_VAR_UNRESTRICTED+TLB_STEP, r22
+1: putcfg r21, 0, ZERO /* Clear MMUIR[n].PTEH.V */
+ addi r21, TLB_STEP, r21
+ bne r21, r22, tr1
+
+ /* Clear DTLBs */
+ pta 1f, tr1
+ movi DTLB_FIXED, r21
+ movi DTLB_LAST_VAR_UNRESTRICTED+TLB_STEP, r22
+1: putcfg r21, 0, ZERO /* Clear MMUDR[n].PTEH.V */
+ addi r21, TLB_STEP, r21
+ bne r21, r22, tr1
+
+ /* Map one big (512Mb) page for ITLB */
+ movi ITLB_FIXED, r21
+ movi MMUIR_TEXT_L, r22 /* PTEL first */
+ putcfg r21, 1, r22 /* Set MMUIR[0].PTEL */
+ movi MMUIR_TEXT_H, r22 /* PTEH last */
+ putcfg r21, 0, r22 /* Set MMUIR[0].PTEH */
+
+ /* Map one big CACHED (512Mb) page for DTLB */
+ movi DTLB_FIXED, r21
+ movi MMUDR_CACHED_L, r22 /* PTEL first */
+ putcfg r21, 1, r22 /* Set MMUDR[0].PTEL */
+ movi MMUDR_CACHED_H, r22 /* PTEH last */
+ putcfg r21, 0, r22 /* Set MMUDR[0].PTEH */
+
+ /* ICache */
+ movi ICCR_BASE, r21
+ movi ICCR0_INIT_VAL, r22
+ movi ICCR1_INIT_VAL, r23
+ putcfg r21, ICCR_REG0, r22
+ putcfg r21, ICCR_REG1, r23
+ synci
+
+ /* OCache */
+ movi OCCR_BASE, r21
+ movi OCCR0_INIT_VAL, r22
+ movi OCCR1_INIT_VAL, r23
+ putcfg r21, OCCR_REG0, r22
+ putcfg r21, OCCR_REG1, r23
+ synco
+
+ /*
+ * Enable the MMU.
+ * From here-on code can be non-PIC.
+ */
+ movi SR_HARMLESS | SR_ENABLE_MMU, r22
+ putcon r22, SSR
+ movi 1f, r22
+ putcon r22, SPC
+ synco
+ rte /* And now go into the hyperspace ... */
+1: /* ... that's the next instruction ! */
+
+ /* Set initial stack pointer */
+ movi datalabel stack_start, r0
+ ld.l r0, 0, r15
+
+ /*
+ * Clear bss
+ */
+ pt 1f, tr1
+ movi datalabel __bss_start, r22
+ movi datalabel _end, r23
+1: st.l r22, 0, ZERO
+ addi r22, 4, r22
+ bne r22, r23, tr1
+
+ /*
+ * Decompress the kernel.
+ */
+ pt decompress_kernel, tr0
+ blink tr0, r18
+
+ /*
+ * Disable the MMU.
+ */
+ movi SR_HARMLESS, r22
+ putcon r22, SSR
+ movi 1f, r22
+ putcon r22, SPC
+ synco
+ rte /* And now go into the hyperspace ... */
+1: /* ... that's the next instruction ! */
+
+	/* Jump into the decompressed kernel (low bit set selects SHmedia mode) */
+	movi	datalabel (CONFIG_MEMORY_START + 0x2000)+1, r19
+ ptabs r19, tr0
+ blink tr0, r18
+
+ /* Shouldn't return here, but just in case, loop forever */
+ pt 1f, tr0
+1: blink tr0, ZERO
--- /dev/null
+#!/bin/sh
+#
+# arch/sh/boot/install.sh
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 1995 by Linus Torvalds
+#
+# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
+# Adapted from code in arch/i386/boot/install.sh by Russell King
+# Adapted from code in arch/arm/boot/install.sh by Stuart Menefy
+#
+# "make install" script for sh architecture
+#
+# Arguments:
+# $1 - kernel version
+# $2 - kernel image file
+# $3 - kernel map file
+# $4 - default install path (blank if root directory)
+#
+
+# User may have a custom install script
+
+if [ -x /sbin/installkernel ]; then
+ exec /sbin/installkernel "$@"
+fi
+
+if [ "$2" = "zImage" ]; then
+# Compressed install
+  echo "Installing compressed kernel"
+  if [ -f "$4/vmlinuz-$1" ]; then
+    mv "$4/vmlinuz-$1" "$4/vmlinuz.old"
+  fi
+
+  if [ -f "$4/System.map-$1" ]; then
+    mv "$4/System.map-$1" "$4/System.old"
+  fi
+
+  cat "$2" > "$4/vmlinuz-$1"
+  cp "$3" "$4/System.map-$1"
+else
+# Normal install
+  echo "Installing normal kernel"
+  if [ -f "$4/vmlinux-$1" ]; then
+    mv "$4/vmlinux-$1" "$4/vmlinux.old"
+  fi
+
+  if [ -f "$4/System.map" ]; then
+    mv "$4/System.map" "$4/System.old"
+  fi
+
+  cat "$2" > "$4/vmlinux-$1"
+  cp "$3" "$4/System.map"
+fi
--- /dev/null
+/*
+ * arch/shmedia/boot/compressed/misc.c
+ *
+ * This is a collection of several routines from gzip-1.0.3
+ * adapted for Linux.
+ *
+ * malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
+ *
+ * Adapted for SHmedia from sh by Stuart Menefy, May 2002
+ */
+
+#include <linux/config.h>
+#include <asm/uaccess.h>
+
+/* cache.c */
+#define CACHE_ENABLE 0
+#define CACHE_DISABLE 1
+int cache_control(unsigned int command);
+
+/*
+ * gzip declarations
+ */
+
+#define OF(args) args
+#define STATIC static
+
+#undef memset
+#undef memcpy
+#define memzero(s, n) memset ((s), 0, (n))
+
+typedef unsigned char uch;
+typedef unsigned short ush;
+typedef unsigned long ulg;
+
+#define WSIZE 0x8000 /* Window size must be at least 32k, */
+ /* and a power of two */
+
+static uch *inbuf; /* input buffer */
+static uch window[WSIZE]; /* Sliding window buffer */
+
+static unsigned insize = 0; /* valid bytes in inbuf */
+static unsigned inptr = 0; /* index of next byte to be processed in inbuf */
+static unsigned outcnt = 0; /* bytes in output buffer */
+
+/* gzip flag byte */
+#define ASCII_FLAG 0x01 /* bit 0 set: file probably ASCII text */
+#define CONTINUATION 0x02 /* bit 1 set: continuation of multi-part gzip file */
+#define EXTRA_FIELD 0x04 /* bit 2 set: extra field present */
+#define ORIG_NAME 0x08 /* bit 3 set: original file name present */
+#define COMMENT 0x10 /* bit 4 set: file comment present */
+#define ENCRYPTED 0x20 /* bit 5 set: file is encrypted */
+#define RESERVED 0xC0 /* bit 6,7: reserved */
+
+#define get_byte() (inptr < insize ? inbuf[inptr++] : fill_inbuf())
+
+/* Diagnostic functions */
+#ifdef DEBUG
+# define Assert(cond,msg) {if(!(cond)) error(msg);}
+# define Trace(x) fprintf x
+# define Tracev(x) {if (verbose) fprintf x ;}
+# define Tracevv(x) {if (verbose>1) fprintf x ;}
+# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
+# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
+#else
+# define Assert(cond,msg)
+# define Trace(x)
+# define Tracev(x)
+# define Tracevv(x)
+# define Tracec(c,x)
+# define Tracecv(c,x)
+#endif
+
+static int fill_inbuf(void);
+static void flush_window(void);
+static void error(char *m);
+static void gzip_mark(void **);
+static void gzip_release(void **);
+
+extern char input_data[];
+extern int input_len;
+
+static long bytes_out = 0;
+static uch *output_data;
+static unsigned long output_ptr = 0;
+
+static void *malloc(int size);
+static void free(void *where);
+
+static void puts(const char *);
+
+extern int _text; /* Defined in vmlinux.lds.S */
+extern int _end;
+static unsigned long free_mem_ptr;
+static unsigned long free_mem_end_ptr;
+
+#define HEAP_SIZE 0x10000
+
+#include "../../../../lib/inflate.c"
+
+static void *malloc(int size)
+{
+ void *p;
+
+ if (size < 0)
+ error("Malloc error\n");
+ if (free_mem_ptr == 0)
+ error("Memory error\n");
+
+ free_mem_ptr = (free_mem_ptr + 3) & ~3; /* Align */
+
+ p = (void *) free_mem_ptr;
+ free_mem_ptr += size;
+
+ if (free_mem_ptr >= free_mem_end_ptr)
+ error("\nOut of memory\n");
+
+ return p;
+}
+
+static void free(void *where)
+{ /* Don't care */
+}
+
+static void gzip_mark(void **ptr)
+{
+ *ptr = (void *) free_mem_ptr;
+}
+
+static void gzip_release(void **ptr)
+{
+ free_mem_ptr = (long) *ptr;
+}
+
+/* This stripped-down loader has no console of its own; output is dropped. */
+static void puts(const char *s)
+{
+}
+
+void *memset(void *s, int c, size_t n)
+{
+	size_t i;
+	char *ss = (char *) s;
+
+	for (i = 0; i < n; i++)
+		ss[i] = c;
+	return s;
+}
+
+void *memcpy(void *__dest, __const void *__src, size_t __n)
+{
+	size_t i;
+	char *d = (char *) __dest;
+	const char *s = (const char *) __src;
+
+	for (i = 0; i < __n; i++)
+		d[i] = s[i];
+	return __dest;
+}
+
+/* ===========================================================================
+ * Fill the input buffer. This is called only when the buffer is empty
+ * and at least one byte is really needed.
+ */
+static int fill_inbuf(void)
+{
+ if (insize != 0) {
+ error("ran out of input data\n");
+ }
+
+ inbuf = input_data;
+ insize = input_len;
+ inptr = 1;
+ return inbuf[0];
+}
+
+/* ===========================================================================
+ * Write the output window window[0..outcnt-1] and update crc and bytes_out.
+ * (Used for the decompressed data only.)
+ */
+static void flush_window(void)
+{
+ ulg c = crc; /* temporary variable */
+ unsigned n;
+ uch *in, *out, ch;
+
+ in = window;
+ out = &output_data[output_ptr];
+ for (n = 0; n < outcnt; n++) {
+ ch = *out++ = *in++;
+ c = crc_32_tab[((int) c ^ ch) & 0xff] ^ (c >> 8);
+ }
+ crc = c;
+ bytes_out += (ulg) outcnt;
+ output_ptr += (ulg) outcnt;
+ outcnt = 0;
+ puts(".");
+}
+
+static void error(char *x)
+{
+ puts("\n\n");
+ puts(x);
+ puts("\n\n -- System halted");
+
+ while (1) ; /* Halt */
+}
+
+#define STACK_SIZE (4096)
+long __attribute__ ((aligned(8))) user_stack[STACK_SIZE];
+long *stack_start = &user_stack[STACK_SIZE];
+
+void decompress_kernel(void)
+{
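+	/* Decompress to MEMORY_START + 0x2000; head.S jumps there afterwards */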
+ output_data = (uch *) (CONFIG_MEMORY_START + 0x2000);
+ free_mem_ptr = (unsigned long) &_end;
+ free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
+
+ makecrc();
+ puts("Uncompressing Linux... ");
+ cache_control(CACHE_ENABLE);
+ gunzip();
+ puts("\n");
+
+#if 0
+ /* When booting from ROM may want to do something like this if the
+ * boot loader doesn't.
+ */
+
+ /* Set up the parameters and command line */
+ {
+ volatile unsigned int *parambase =
+ (int *) (CONFIG_MEMORY_START + 0x1000);
+
+ parambase[0] = 0x1; /* MOUNT_ROOT_RDONLY */
+ parambase[1] = 0x0; /* RAMDISK_FLAGS */
+ parambase[2] = 0x0200; /* ORIG_ROOT_DEV */
+ parambase[3] = 0x0; /* LOADER_TYPE */
+ parambase[4] = 0x0; /* INITRD_START */
+ parambase[5] = 0x0; /* INITRD_SIZE */
+ parambase[6] = 0;
+
+ strcpy((char *) ((int) parambase + 0x100),
+ "console=ttySC0,38400");
+ }
+#endif
+
+ puts("Ok, booting the kernel.\n");
+
+ cache_control(CACHE_DISABLE);
+}
--- /dev/null
+/*
+ * ld script to make compressed SuperH/shmedia Linux kernel+decompression
+ * bootstrap
+ * Modified by Stuart Menefy from arch/sh/vmlinux.lds.S written by Niibe Yutaka
+ */
+
+#include <linux/config.h>
+
+#ifdef CONFIG_LITTLE_ENDIAN
+/* OUTPUT_FORMAT("elf32-sh64l-linux", "elf32-sh64l-linux", "elf32-sh64l-linux") */
+#define NOP 0x6ff0fff0
+#else
+/* OUTPUT_FORMAT("elf32-sh64", "elf32-sh64", "elf32-sh64") */
+#define NOP 0xf0fff06f
+#endif
+
+OUTPUT_FORMAT("elf32-sh64-linux")
+OUTPUT_ARCH(sh)
+ENTRY(_start)
+
+#define ALIGNED_GAP(section, align) (((ADDR(section)+SIZEOF(section)+(align)-1) & ~((align)-1))-ADDR(section))
+#define FOLLOWING(section, align) AT (LOADADDR(section) + ALIGNED_GAP(section,align))
+
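+/*
+ * ALIGNED_GAP() rounds the end of "section" up to "align" and returns the
+ * distance from the section's start; FOLLOWING() uses it to place the next
+ * section's load address immediately after the aligned end of "section".
+ */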
+SECTIONS
+{
+ _text = .; /* Text and read-only data */
+
+ .text : {
+ *(.text)
+ *(.text64)
+ *(.text..SHmedia32)
+ *(.fixup)
+ *(.gnu.warning)
+ } = NOP
+ . = ALIGN(4);
+ .rodata : { *(.rodata) }
+
+ /* There is no 'real' reason for eight byte alignment, four would work
+ * as well, but gdb downloads much (*4) faster with this.
+ */
+ . = ALIGN(8);
+ .image : { *(.image) }
+ . = ALIGN(4);
+ _etext = .; /* End of text section */
+
+ .data : /* Data */
+ FOLLOWING(.image, 4)
+ {
+ _data = .;
+ *(.data)
+ }
+ _data_image = LOADADDR(.data);/* Address of data section in ROM */
+
+ _edata = .; /* End of data section */
+
+ .stack : { stack = .; _stack = .; }
+
+ . = ALIGN(4);
+ __bss_start = .; /* BSS */
+ .bss : {
+ *(.bss)
+ }
+ . = ALIGN(4);
+ _end = . ;
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_SUPERH=y
+CONFIG_SUPERH64=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_LOG_BUF_SHIFT=14
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+# CONFIG_SYSVIPC is not set
+CONFIG_POSIX_MQUEUE=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# System type
+#
+# CONFIG_SH_GENERIC is not set
+# CONFIG_SH_SIMULATOR is not set
+CONFIG_SH_CAYMAN=y
+# CONFIG_SH_ROMRAM is not set
+# CONFIG_SH_HARP is not set
+
+#
+# Processor type and features
+#
+CONFIG_CPU_SH5=y
+CONFIG_CPU_SUBTYPE_SH5_101=y
+# CONFIG_CPU_SUBTYPE_SH5_103 is not set
+CONFIG_LITTLE_ENDIAN=y
+# CONFIG_BIG_ENDIAN is not set
+# CONFIG_SH64_FPU_DENORM_FLUSH is not set
+CONFIG_SH64_PGTABLE_2_LEVEL=y
+# CONFIG_SH64_PGTABLE_3_LEVEL is not set
+CONFIG_HUGETLB_PAGE_SIZE_64K=y
+# CONFIG_HUGETLB_PAGE_SIZE_1MB is not set
+# CONFIG_HUGETLB_PAGE_SIZE_512MB is not set
+CONFIG_SH64_USER_MISALIGNED_FIXUP=y
+
+#
+# Memory options
+#
+CONFIG_CACHED_MEMORY_OFFSET=0x20000000
+CONFIG_MEMORY_START=0x80000000
+CONFIG_MEMORY_SIZE_IN_MB=128
+
+#
+# Cache options
+#
+# CONFIG_DCACHE_DISABLED is not set
+CONFIG_DCACHE_WRITE_BACK=y
+# CONFIG_DCACHE_WRITE_THROUGH is not set
+# CONFIG_ICACHE_DISABLED is not set
+CONFIG_PCIDEVICE_MEMORY_START=C0000000
+CONFIG_DEVICE_MEMORY_START=E0000000
+CONFIG_FLASH_MEMORY_START=0x00000000
+CONFIG_PCI_BLOCK_START=0x40000000
+
+#
+# CPU Subtype specific options
+#
+CONFIG_SH64_ID2815_WORKAROUND=y
+
+#
+# Misc options
+#
+CONFIG_HEARTBEAT=y
+CONFIG_HDSP253_LED=y
+CONFIG_SH_DMA=y
+CONFIG_PREEMPT=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
+#
+CONFIG_PCI=y
+CONFIG_SH_PCIDMA_NONCOHERENT=y
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
+# CONFIG_STNIC is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+CONFIG_NET_TULIP=y
+# CONFIG_DE2104X is not set
+CONFIG_TULIP=y
+# CONFIG_TULIP_MWI is not set
+# CONFIG_TULIP_MMIO is not set
+# CONFIG_TULIP_NAPI is not set
+# CONFIG_DE4X5 is not set
+# CONFIG_WINBOND_840 is not set
+# CONFIG_DM9102 is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PCIPS2 is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_SH_SCI=y
+CONFIG_SERIAL_SH_SCI_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+# CONFIG_SH_WDT is not set
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_ASILIANT is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_E1355 is not set
+# CONFIG_FB_RIVA is not set
+# CONFIG_FB_MATROX is not set
+# CONFIG_FB_RADEON_OLD is not set
+# CONFIG_FB_RADEON is not set
+# CONFIG_FB_ATY128 is not set
+# CONFIG_FB_ATY is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+CONFIG_FB_KYRO=y
+# CONFIG_FB_3DFX is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_VIRTUAL is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_PCI_CONSOLE=y
+CONFIG_FONTS=y
+# CONFIG_FONT_8x8 is not set
+CONFIG_FONT_8x16=y
+# CONFIG_FONT_6x11 is not set
+# CONFIG_FONT_PEARL_8x8 is not set
+# CONFIG_FONT_ACORN_8x8 is not set
+# CONFIG_FONT_MINI_4x6 is not set
+# CONFIG_FONT_SUN8x16 is not set
+# CONFIG_FONT_SUN12x22 is not set
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+# CONFIG_LOGO_LINUX_CLUT224 is not set
+# CONFIG_LOGO_SUPERH_MONO is not set
+# CONFIG_LOGO_SUPERH_VGA16 is not set
+CONFIG_LOGO_SUPERH_CLUT224=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+CONFIG_MINIX_FS=y
+CONFIG_ROMFS_FS=y
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Kernel hacking
+#
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_DEBUG_KERNEL_WITH_GDB_STUB is not set
+# CONFIG_SH64_PROC_TLB is not set
+# CONFIG_SH64_PROC_ASIDS is not set
+CONFIG_SH64_SR_WATCH=y
+# CONFIG_SH_ALPHANUMERIC is not set
+# CONFIG_SH_NO_BSS_INIT is not set
+CONFIG_FRAME_POINTER=y
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_SUPERH=y
+CONFIG_SUPERH64=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_LOG_BUF_SHIFT=14
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+# CONFIG_SYSVIPC is not set
+CONFIG_POSIX_MQUEUE=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# System type
+#
+# CONFIG_SH_GENERIC is not set
+# CONFIG_SH_SIMULATOR is not set
+CONFIG_SH_CAYMAN=y
+# CONFIG_SH_ROMRAM is not set
+# CONFIG_SH_HARP is not set
+CONFIG_CPU_SH5=y
+CONFIG_CPU_SUBTYPE_SH5_101=y
+# CONFIG_CPU_SUBTYPE_SH5_103 is not set
+CONFIG_LITTLE_ENDIAN=y
+# CONFIG_BIG_ENDIAN is not set
+# CONFIG_SH64_FPU_DENORM_FLUSH is not set
+CONFIG_SH64_PGTABLE_2_LEVEL=y
+# CONFIG_SH64_PGTABLE_3_LEVEL is not set
+CONFIG_HUGETLB_PAGE_SIZE_64K=y
+# CONFIG_HUGETLB_PAGE_SIZE_1MB is not set
+# CONFIG_HUGETLB_PAGE_SIZE_512MB is not set
+CONFIG_SH64_USER_MISALIGNED_FIXUP=y
+
+#
+# Memory options
+#
+CONFIG_CACHED_MEMORY_OFFSET=0x20000000
+CONFIG_MEMORY_START=0x80000000
+CONFIG_MEMORY_SIZE_IN_MB=128
+
+#
+# Cache options
+#
+# CONFIG_DCACHE_DISABLED is not set
+CONFIG_DCACHE_WRITE_BACK=y
+# CONFIG_DCACHE_WRITE_THROUGH is not set
+# CONFIG_ICACHE_DISABLED is not set
+CONFIG_PCIDEVICE_MEMORY_START=C0000000
+CONFIG_DEVICE_MEMORY_START=E0000000
+CONFIG_FLASH_MEMORY_START=0x00000000
+CONFIG_PCI_BLOCK_START=0x40000000
+
+#
+# CPU Subtype specific options
+#
+CONFIG_SH64_ID2815_WORKAROUND=y
+
+#
+# Misc options
+#
+CONFIG_HEARTBEAT=y
+CONFIG_HDSP253_LED=y
+CONFIG_SH_DMA=y
+CONFIG_PREEMPT=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
+#
+CONFIG_PCI=y
+CONFIG_SH_PCIDMA_NONCOHERENT=y
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+# CONFIG_MII is not set
+# CONFIG_STNIC is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+CONFIG_NET_TULIP=y
+# CONFIG_DE2104X is not set
+CONFIG_TULIP=y
+# CONFIG_TULIP_MWI is not set
+# CONFIG_TULIP_MMIO is not set
+# CONFIG_TULIP_NAPI is not set
+# CONFIG_DE4X5 is not set
+# CONFIG_WINBOND_840 is not set
+# CONFIG_DM9102 is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+# CONFIG_VIA_VELOCITY is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PCIPS2 is not set
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_SH_SCI=y
+CONFIG_SERIAL_SH_SCI_CONSOLE=y
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+# CONFIG_SH_WDT is not set
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+CONFIG_FB=y
+# CONFIG_FB_CIRRUS is not set
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_ASILIANT is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_E1355 is not set
+# CONFIG_FB_RIVA is not set
+# CONFIG_FB_MATROX is not set
+# CONFIG_FB_RADEON_OLD is not set
+# CONFIG_FB_RADEON is not set
+# CONFIG_FB_ATY128 is not set
+# CONFIG_FB_ATY is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+CONFIG_FB_KYRO=y
+# CONFIG_FB_3DFX is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_VIRTUAL is not set
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+# CONFIG_MDA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_PCI_CONSOLE=y
+CONFIG_FONTS=y
+# CONFIG_FONT_8x8 is not set
+CONFIG_FONT_8x16=y
+# CONFIG_FONT_6x11 is not set
+# CONFIG_FONT_PEARL_8x8 is not set
+# CONFIG_FONT_ACORN_8x8 is not set
+# CONFIG_FONT_MINI_4x6 is not set
+# CONFIG_FONT_SUN8x16 is not set
+# CONFIG_FONT_SUN12x22 is not set
+
+#
+# Logo configuration
+#
+CONFIG_LOGO=y
+# CONFIG_LOGO_LINUX_MONO is not set
+# CONFIG_LOGO_LINUX_VGA16 is not set
+# CONFIG_LOGO_LINUX_CLUT224 is not set
+# CONFIG_LOGO_SUPERH_MONO is not set
+# CONFIG_LOGO_SUPERH_VGA16 is not set
+CONFIG_LOGO_SUPERH_CLUT224=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+CONFIG_MINIX_FS=y
+CONFIG_ROMFS_FS=y
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+CONFIG_HUGETLBFS=y
+CONFIG_HUGETLB_PAGE=y
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+CONFIG_PROFILING=y
+# CONFIG_OPROFILE is not set
+
+#
+# Kernel hacking
+#
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_EARLY_PRINTK is not set
+# CONFIG_DEBUG_KERNEL_WITH_GDB_STUB is not set
+# CONFIG_SH64_PROC_TLB is not set
+# CONFIG_SH64_PROC_ASIDS is not set
+CONFIG_SH64_SR_WATCH=y
+# CONFIG_SH_ALPHANUMERIC is not set
+# CONFIG_SH_NO_BSS_INIT is not set
+CONFIG_FRAME_POINTER=y
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC16 is not set
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000, 2001 Paolo Alberelli
+# Copyright (C) 2003 Paul Mundt
+#
+# Makefile for the Linux sh64 kernel.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+extra-y := head.o init_task.o vmlinux.lds.s
+
+obj-y := process.o signal.o entry.o traps.o irq.o irq_intc.o \
+ ptrace.o setup.o time.o sys_sh64.o semaphore.o sh_ksyms.o \
+ switchto.o syscalls.o
+
+obj-$(CONFIG_HEARTBEAT) += led.o
+obj-$(CONFIG_SH_ALPHANUMERIC) += alphanum.o
+obj-$(CONFIG_SH_DMA) += dma.o
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_KALLSYMS) += unwind.o
+obj-$(CONFIG_PCI) += pci-dma.o pcibios.o
+
+ifeq ($(CONFIG_PCI),y)
+obj-$(CONFIG_CPU_SH5) += pci_sh5.o
+endif
+
+ifndef CONFIG_NOFPU_SUPPORT
+obj-y += fpu.o
+endif
+
+USE_STANDARD_AS_RULE := true
+
--- /dev/null
+/*
+ * arch/sh64/kernel/alphanum.c
+ *
+ * Copyright (C) 2002 Stuart Menefy <stuart.menefy@st.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Machine-independent functions for handling 8-digit alphanumeric display
+ * (e.g. Agilent HDSP-253x)
+ */
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/sched.h>
+
+void mach_alphanum(int pos, unsigned char val);
+void mach_led(int pos, int val);
+
+void print_seg(char *file, int line)
+{
+ int i;
+ unsigned int nibble;
+
+ for (i = 0; i < 5; i++) {
+ mach_alphanum(i, file[i]);
+ }
+
+ for (i = 0; i < 3; i++) {
+ nibble = ((line >> (i * 4)) & 0xf);
+ mach_alphanum(7 - i, nibble + ((nibble > 9) ? 55 : 48));
+ }
+}
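+
+/*
+ * For example, print_seg("sched", 123) would show the first five
+ * characters of the name followed by "07B" (123 in hex); the 55/48
+ * constants above are ASCII 'A'-10 and '0', turning a nibble into a
+ * hex digit.
+ */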
+
+void print_seg_num(unsigned num)
+{
+ int i;
+ unsigned int nibble;
+
+ for (i = 0; i < 8; i++) {
+ nibble = ((num >> (i * 4)) & 0xf);
+
+ mach_alphanum(7 - i, nibble + ((nibble > 9) ? 55 : 48));
+ }
+}
+
--- /dev/null
+/*
+ * This program is used to generate definitions needed by
+ * assembly language modules.
+ *
+ * We use the technique used in the OSF Mach kernel code:
+ * generate asm statements containing #defines,
+ * compile this file to assembler, and then extract the
+ * #defines from the assembly-language output.
+ */
+
+#include <linux/stddef.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <asm/thread_info.h>
+
+#define DEFINE(sym, val) \
+ asm volatile("\n->" #sym " %0 " #val : : "i" (val))
+
+#define BLANK() asm volatile("\n->" : : )
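+
+/*
+ * For example, DEFINE(TI_FLAGS, offsetof(struct thread_info, flags))
+ * compiles into an assembly line of the form
+ *
+ *	->TI_FLAGS 16 offsetof(struct thread_info, flags)
+ *
+ * (the constant depends on the struct layout); the build machinery
+ * then extracts the "->" markers from the generated assembler output
+ * and turns each one into a #define in asm-offsets.h.
+ */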
+
+int main(void)
+{
+ /* offsets into the thread_info struct */
+ DEFINE(TI_TASK, offsetof(struct thread_info, task));
+ DEFINE(TI_EXEC_DOMAIN, offsetof(struct thread_info, exec_domain));
+ DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
+ DEFINE(TI_PRE_COUNT, offsetof(struct thread_info, preempt_count));
+ DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
+ DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
+ DEFINE(TI_RESTART_BLOCK,offsetof(struct thread_info, restart_block));
+
+ return 0;
+}
--- /dev/null
+/*
+ * arch/sh64/kernel/dma.c
+ *
+ * DMA routines for the SH-5 DMAC.
+ *
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/types.h>
+#include <linux/irq.h>
+#include <linux/spinlock.h>
+#include <linux/mm.h>
+#include <asm/hardware.h>
+#include <asm/dma.h>
+#include <asm/signal.h>
+#include <asm/errno.h>
+#include <asm/io.h>
+#include <asm/bitops.h>	/* for ffs(), used in dma_err() */
+
+typedef struct {
+ unsigned long dev_addr;
+ unsigned long mem_addr;
+
+ unsigned int mode;
+ unsigned int count;
+} dma_info_t;
+
+static dma_info_t dma_info[MAX_DMA_CHANNELS];
+extern spinlock_t dma_spin_lock;
+
+/* arch/sh64/kernel/irq_intc.c */
+extern void make_intc_irq(unsigned int irq);
+
+/* DMAC Interrupts */
+#define DMA_IRQ_DMTE0 18
+#define DMA_IRQ_DERR 22
+
+#define DMAC_COMMON_BASE (dmac_base + 0x08)
+#define DMAC_SAR_BASE (dmac_base + 0x10)
+#define DMAC_DAR_BASE (dmac_base + 0x18)
+#define DMAC_COUNT_BASE (dmac_base + 0x20)
+#define DMAC_CTRL_BASE (dmac_base + 0x28)
+#define DMAC_STATUS_BASE (dmac_base + 0x30)
+
+#define DMAC_SAR(n) (DMAC_SAR_BASE + ((n) * 0x28))
+#define DMAC_DAR(n) (DMAC_DAR_BASE + ((n) * 0x28))
+#define DMAC_COUNT(n) (DMAC_COUNT_BASE + ((n) * 0x28))
+#define DMAC_CTRL(n) (DMAC_CTRL_BASE + ((n) * 0x28))
+#define DMAC_STATUS(n) (DMAC_STATUS_BASE + ((n) * 0x28))
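+
+/*
+ * Each channel's register bank is laid out at a 0x28-byte stride, so
+ * e.g. DMAC_SAR(1) == dmac_base + 0x10 + 0x28.
+ */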
+
+/* DMAC.COMMON Bit Definitions */
+#define DMAC_COMMON_PR 0x00000001 /* Priority */
+ /* Bits 1-2 Reserved */
+#define DMAC_COMMON_ME 0x00000008 /* Master Enable */
+#define DMAC_COMMON_NMI 0x00000010 /* NMI Flag */
+ /* Bits 5-6 Reserved */
+#define DMAC_COMMON_ER 0x00000780 /* Error Response */
+#define DMAC_COMMON_AAE 0x00007800 /* Address Alignment Error */
+ /* Bits 15-63 Reserved */
+
+/* DMAC.SAR Bit Definitions */
+#define DMAC_SAR_ADDR 0xffffffff /* Source Address */
+
+/* DMAC.DAR Bit Definitions */
+#define DMAC_DAR_ADDR 0xffffffff /* Destination Address */
+
+/* DMAC.COUNT Bit Definitions */
+#define DMAC_COUNT_CNT 0xffffffff /* Transfer Count */
+
+/* DMAC.CTRL Bit Definitions */
+#define DMAC_CTRL_TS 0x00000007 /* Transfer Size */
+#define DMAC_CTRL_SI 0x00000018 /* Source Increment */
+#define DMAC_CTRL_DI 0x00000060 /* Destination Increment */
+#define DMAC_CTRL_RS 0x00000780 /* Resource Select */
+#define DMAC_CTRL_IE 0x00000800 /* Interrupt Enable */
+#define DMAC_CTRL_TE 0x00001000 /* Transfer Enable */
+ /* Bits 15-63 Reserved */
+
+/* DMAC.STATUS Bit Definitions */
+#define DMAC_STATUS_TE 0x00000001 /* Transfer End */
+#define DMAC_STATUS_AAE 0x00000002 /* Address Alignment Error */
+ /* Bits 2-63 Reserved */
+
+static unsigned long dmac_base;
+
+void set_dma_count(unsigned int chan, unsigned int count);
+void set_dma_addr(unsigned int chan, unsigned int addr);
+
+static irqreturn_t dma_mte(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned int chan = irq - DMA_IRQ_DMTE0;
+ dma_info_t *info = dma_info + chan;
+ u64 status;
+
+ if (info->mode & DMA_MODE_WRITE) {
+ sh64_out64(info->mem_addr & DMAC_SAR_ADDR, DMAC_SAR(chan));
+ } else {
+ sh64_out64(info->mem_addr & DMAC_DAR_ADDR, DMAC_DAR(chan));
+ }
+
+ set_dma_count(chan, info->count);
+
+ /* Clear the TE bit */
+ status = sh64_in64(DMAC_STATUS(chan));
+ status &= ~DMAC_STATUS_TE;
+ sh64_out64(status, DMAC_STATUS(chan));
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction irq_dmte = {
+ .handler = dma_mte,
+ .flags = SA_INTERRUPT,
+ .name = "DMA MTE",
+};
+
+static irqreturn_t dma_err(int irq, void *dev_id, struct pt_regs *regs)
+{
+	u64 tmp, field;
+	int chan;
+
+ printk(KERN_NOTICE "DMAC: Got a DMA Error!\n");
+
+ tmp = sh64_in64(DMAC_COMMON_BASE);
+
+ /* Check for the type of error */
+	if ((field = tmp & DMAC_COMMON_AAE)) {
+		/* It's an address alignment error.. the AAE field is
+		 * assumed to carry one status bit per channel, starting
+		 * at bit 11; report the lowest-numbered channel set. */
+		chan = ffs((int)(field >> 11)) - 1;
+		printk(KERN_NOTICE "DMAC: Alignment error on channel %d, ", chan);
+
+ printk(KERN_NOTICE "SAR: 0x%08llx, DAR: 0x%08llx, COUNT: %lld\n",
+ (sh64_in64(DMAC_SAR(chan)) & DMAC_SAR_ADDR),
+ (sh64_in64(DMAC_DAR(chan)) & DMAC_DAR_ADDR),
+ (sh64_in64(DMAC_COUNT(chan)) & DMAC_COUNT_CNT));
+
+	} else if ((field = tmp & DMAC_COMMON_ER)) {
+		/* Something else went wrong.. again assuming one status
+		 * bit per channel, starting at bit 7. */
+		chan = ffs((int)(field >> 7)) - 1;
+		printk(KERN_NOTICE "DMAC: Error on channel %d\n", chan);
+ }
+
+ /* Reset the ME bit to clear the interrupt */
+ tmp |= DMAC_COMMON_ME;
+ sh64_out64(tmp, DMAC_COMMON_BASE);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction irq_derr = {
+ .handler = dma_err,
+ .flags = SA_INTERRUPT,
+ .name = "DMA Error",
+};
+
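+/*
+ * The low bits of CTRL.TS encode log2 of the transfer unit size;
+ * set_dma_count() and get_dma_residue() use this shift to convert
+ * between a byte count and the hardware's transfer-unit count.
+ */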
+static inline unsigned long calc_xmit_shift(unsigned int chan)
+{
+ return sh64_in64(DMAC_CTRL(chan)) & 0x03;
+}
+
+void setup_dma(unsigned int chan, dma_info_t *info)
+{
+ unsigned int irq = DMA_IRQ_DMTE0 + chan;
+ dma_info_t *dma = dma_info + chan;
+
+ make_intc_irq(irq);
+ setup_irq(irq, &irq_dmte);
+	*dma = *info;	/* actually copy the parameters into this channel's slot */
+}
+
+void enable_dma(unsigned int chan)
+{
+ u64 ctrl;
+
+ ctrl = sh64_in64(DMAC_CTRL(chan));
+ ctrl |= DMAC_CTRL_TE;
+ sh64_out64(ctrl, DMAC_CTRL(chan));
+}
+
+void disable_dma(unsigned int chan)
+{
+ u64 ctrl;
+
+ ctrl = sh64_in64(DMAC_CTRL(chan));
+ ctrl &= ~DMAC_CTRL_TE;
+ sh64_out64(ctrl, DMAC_CTRL(chan));
+}
+
+void set_dma_mode(unsigned int chan, char mode)
+{
+ dma_info_t *info = dma_info + chan;
+
+ info->mode = mode;
+
+ set_dma_addr(chan, info->mem_addr);
+ set_dma_count(chan, info->count);
+}
+
+void set_dma_addr(unsigned int chan, unsigned int addr)
+{
+ dma_info_t *info = dma_info + chan;
+ unsigned long sar, dar;
+
+ info->mem_addr = addr;
+ sar = (info->mode & DMA_MODE_WRITE) ? info->mem_addr : info->dev_addr;
+ dar = (info->mode & DMA_MODE_WRITE) ? info->dev_addr : info->mem_addr;
+
+ sh64_out64(sar & DMAC_SAR_ADDR, DMAC_SAR(chan));
+	sh64_out64(dar & DMAC_DAR_ADDR, DMAC_DAR(chan));
+}
+
+void set_dma_count(unsigned int chan, unsigned int count)
+{
+ dma_info_t *info = dma_info + chan;
+ u64 tmp;
+
+ info->count = count;
+
+ tmp = (info->count >> calc_xmit_shift(chan)) & DMAC_COUNT_CNT;
+
+ sh64_out64(tmp, DMAC_COUNT(chan));
+}
+
+unsigned long claim_dma_lock(void)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dma_spin_lock, flags);
+
+ return flags;
+}
+
+void release_dma_lock(unsigned long flags)
+{
+ spin_unlock_irqrestore(&dma_spin_lock, flags);
+}
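+
+/*
+ * Typical (illustrative) driver usage, mirroring the classic DMA API;
+ * buf_phys and len stand in for the caller's DMA-able buffer:
+ *
+ *	unsigned long flags = claim_dma_lock();
+ *	disable_dma(chan);
+ *	set_dma_mode(chan, DMA_MODE_READ);
+ *	set_dma_addr(chan, buf_phys);
+ *	set_dma_count(chan, len);
+ *	enable_dma(chan);
+ *	release_dma_lock(flags);
+ */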
+
+int get_dma_residue(unsigned int chan)
+{
+	return sh64_in64(DMAC_COUNT(chan)) << calc_xmit_shift(chan);
+}
+
+int __init init_dma(void)
+{
+ struct vcr_info vcr;
+ u64 tmp;
+
+ /* Remap the DMAC */
+ dmac_base = onchip_remap(PHYS_DMAC_BLOCK, 1024, "DMAC");
+ if (!dmac_base) {
+ printk(KERN_ERR "Unable to remap DMAC\n");
+ return -ENOMEM;
+ }
+
+ /* Report DMAC.VCR Info */
+ vcr = sh64_get_vcr_info(dmac_base);
+	printk(KERN_INFO "DMAC: Module ID: 0x%04x, Module version: 0x%04x\n",
+ vcr.mod_id, vcr.mod_vers);
+
+ /* Set the ME bit */
+ tmp = sh64_in64(DMAC_COMMON_BASE);
+ tmp |= DMAC_COMMON_ME;
+ sh64_out64(tmp, DMAC_COMMON_BASE);
+
+ /* Enable the DMAC Error Interrupt */
+ make_intc_irq(DMA_IRQ_DERR);
+ setup_irq(DMA_IRQ_DERR, &irq_derr);
+
+ return 0;
+}
+
+static void __exit exit_dma(void)
+{
+ onchip_unmap(dmac_base);
+ free_irq(DMA_IRQ_DERR, 0);
+}
+
+module_init(init_dma);
+module_exit(exit_dma);
+
+MODULE_AUTHOR("Paul Mundt");
+MODULE_DESCRIPTION("DMA API for SH-5 DMAC");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(setup_dma);
+EXPORT_SYMBOL(claim_dma_lock);
+EXPORT_SYMBOL(release_dma_lock);
+EXPORT_SYMBOL(enable_dma);
+EXPORT_SYMBOL(disable_dma);
+EXPORT_SYMBOL(set_dma_mode);
+EXPORT_SYMBOL(set_dma_addr);
+EXPORT_SYMBOL(set_dma_count);
+EXPORT_SYMBOL(get_dma_residue);
+
--- /dev/null
+/*
+ * arch/sh64/kernel/early_printk.c
+ *
+ * SH-5 Early SCIF console (cloned and hacked from sh implementation)
+ *
+ * Copyright (C) 2003, 2004 Paul Mundt <lethal@linux-sh.org>
+ * Copyright (C) 2002 M. R. Brown <mrbrown@0xd6.org>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/console.h>
+#include <linux/tty.h>
+#include <linux/init.h>
+#include <asm/io.h>
+#include <asm/hardware.h>
+
+extern void cpu_relax(void);
+
+#define SCIF_BASE_ADDR 0x01030000
+#define SCIF_ADDR_SH5	(PHYS_PERIPHERAL_BLOCK + SCIF_BASE_ADDR)
+
+/*
+ * Fixed virtual address where SCIF is mapped (should already be done
+ * in arch/sh64/kernel/head.S!).
+ */
+#define SCIF_REG 0xfa030000
+
+enum {
+ SCIF_SCSMR2 = SCIF_REG + 0x00,
+ SCIF_SCBRR2 = SCIF_REG + 0x04,
+ SCIF_SCSCR2 = SCIF_REG + 0x08,
+ SCIF_SCFTDR2 = SCIF_REG + 0x0c,
+ SCIF_SCFSR2 = SCIF_REG + 0x10,
+ SCIF_SCFRDR2 = SCIF_REG + 0x14,
+ SCIF_SCFCR2 = SCIF_REG + 0x18,
+ SCIF_SCFDR2 = SCIF_REG + 0x1c,
+ SCIF_SCSPTR2 = SCIF_REG + 0x20,
+ SCIF_SCLSR2 = SCIF_REG + 0x24,
+};
+
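+/*
+ * SCFSR2 status bits used below: 0x20 is TDFE (transmit FIFO has
+ * room), 0x40 is TEND (transmission ended). Writing the register back
+ * with a bit masked off clears that flag.
+ */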
+static void sh_console_putc(int c)
+{
+ while (!(ctrl_inw(SCIF_SCFSR2) & 0x20))
+ cpu_relax();
+
+ ctrl_outb(c, SCIF_SCFTDR2);
+ ctrl_outw((ctrl_inw(SCIF_SCFSR2) & 0x9f), SCIF_SCFSR2);
+
+ if (c == '\n')
+ sh_console_putc('\r');
+}
+
+static void sh_console_flush(void)
+{
+ ctrl_outw((ctrl_inw(SCIF_SCFSR2) & 0xbf), SCIF_SCFSR2);
+
+ while (!(ctrl_inw(SCIF_SCFSR2) & 0x40))
+ cpu_relax();
+
+ ctrl_outw((ctrl_inw(SCIF_SCFSR2) & 0xbf), SCIF_SCFSR2);
+}
+
+static void sh_console_write(struct console *con, const char *s, unsigned count)
+{
+ while (count-- > 0)
+ sh_console_putc(*s++);
+
+ sh_console_flush();
+}
+
+static int __init sh_console_setup(struct console *con, char *options)
+{
+ con->cflag = CREAD | HUPCL | CLOCAL | B19200 | CS8;
+
+ return 0;
+}
+
+static struct console sh_console = {
+ .name = "scifcon",
+ .write = sh_console_write,
+ .setup = sh_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+void __init enable_early_printk(void)
+{
+ ctrl_outb(0x2a, SCIF_SCBRR2); /* 19200bps */
+
+ ctrl_outw(0x04, SCIF_SCFCR2); /* Reset TFRST */
+ ctrl_outw(0x10, SCIF_SCFCR2); /* TTRG0=1 */
+
+ ctrl_outw(0, SCIF_SCSPTR2);
+ ctrl_outw(0x60, SCIF_SCFSR2);
+ ctrl_outw(0, SCIF_SCLSR2);
+ ctrl_outw(0x30, SCIF_SCSCR2);
+
+ register_console(&sh_console);
+}
+
+void disable_early_printk(void)
+{
+ unregister_console(&sh_console);
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/entry.S
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/sys.h>
+
+#include <asm/processor.h>
+#include <asm/registers.h>
+#include <asm/unistd.h>
+#include <asm/thread_info.h>
+#include <asm/asm-offsets.h>
+
+/*
+ * SR fields.
+ */
+#define SR_ASID_MASK 0x00ff0000
+#define SR_FD_MASK 0x00008000
+#define SR_SS 0x08000000
+#define SR_BL 0x10000000
+#define SR_MD 0x40000000
+
+/*
+ * Event code.
+ */
+#define EVENT_INTERRUPT 0
+#define EVENT_FAULT_TLB 1
+#define EVENT_FAULT_NOT_TLB 2
+#define EVENT_DEBUG 3
+
+/* EXPEVT values */
+#define RESET_CAUSE 0x20
+#define DEBUGSS_CAUSE 0x980
+
+/*
+ * Frame layout. Quad index.
+ */
+#define FRAME_T(x) FRAME_TBASE+(x*8)
+#define FRAME_R(x) FRAME_RBASE+(x*8)
+#define FRAME_S(x) FRAME_SBASE+(x*8)
+#define FSPC 0
+#define FSSR 1
+#define FSYSCALL_ID 2
+
+/* Arrange the save frame to be a multiple of 32 bytes long */
+#define FRAME_SBASE 0
+#define FRAME_RBASE (FRAME_SBASE+(3*8)) /* SYSCALL_ID - SSR - SPC */
+#define FRAME_TBASE (FRAME_RBASE+(63*8)) /* r0 - r62 */
+#define FRAME_PBASE (FRAME_TBASE+(8*8)) /* tr0 - tr7 */
+#define FRAME_SIZE (FRAME_PBASE+(2*8)) /* pad0-pad1 */
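+/* That is 3*8 + 63*8 + 8*8 + 2*8 = 608 bytes, i.e. 19 * 32. */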
+
+#define FP_FRAME_SIZE FP_FRAME_BASE+(33*8) /* dr0 - dr31 + fpscr */
+#define FP_FRAME_BASE 0
+
+#define SAVED_R2 0*8
+#define SAVED_R3 1*8
+#define SAVED_R4 2*8
+#define SAVED_R5 3*8
+#define SAVED_R18 4*8
+#define SAVED_R6 5*8
+#define SAVED_TR0 6*8
+
+/* These are the registers saved in the TLB path that aren't saved in the first
+ level of the normal one. */
+#define TLB_SAVED_R25 7*8
+#define TLB_SAVED_TR1 8*8
+#define TLB_SAVED_TR2 9*8
+#define TLB_SAVED_TR3 10*8
+#define TLB_SAVED_TR4 11*8
+/* Save R0/R1: the PT-migrating compiler currently dishonours -ffixed-r0 and
+   -ffixed-r1, causing breakage otherwise. */
+#define TLB_SAVED_R0 12*8
+#define TLB_SAVED_R1 13*8
+
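+/* CLI/STI only raise/lower the SR.IMASK field (bits 4-7, hence the
+   0xf0 mask); they do not touch SR.BL. */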
+#define CLI() \
+ getcon SR, r6; \
+ ori r6, 0xf0, r6; \
+ putcon r6, SR;
+
+#define STI() \
+ getcon SR, r6; \
+ andi r6, ~0xf0, r6; \
+ putcon r6, SR;
+
+#ifdef CONFIG_PREEMPT
+# define preempt_stop() CLI()
+#else
+# define preempt_stop()
+# define resume_kernel restore_all
+#endif
+
+ .section .data, "aw"
+
+#define FAST_TLBMISS_STACK_CACHELINES 4
+#define FAST_TLBMISS_STACK_QUADWORDS (4*FAST_TLBMISS_STACK_CACHELINES)
+
+/* Register back-up area for all exceptions */
+ .balign 32
+ /* Allow for 16 quadwords to be pushed by fast tlbmiss handling
+ * register saves etc. */
+ .fill FAST_TLBMISS_STACK_QUADWORDS, 8, 0x0
+/* This is 32 byte aligned by construction */
+/* Register back-up area for all exceptions */
+reg_save_area:
+ .quad 0
+ .quad 0
+ .quad 0
+ .quad 0
+
+ .quad 0
+ .quad 0
+ .quad 0
+ .quad 0
+
+ .quad 0
+ .quad 0
+ .quad 0
+ .quad 0
+
+ .quad 0
+ .quad 0
+
+/* Save area for RESVEC exceptions. We cannot use reg_save_area because of
+ * reentrancy. Note this area may be accessed via physical address.
+ * Align so this fits a whole single cache line, for ease of purging.
+ */
+ .balign 32,0,32
+resvec_save_area:
+ .quad 0
+ .quad 0
+ .quad 0
+ .quad 0
+ .quad 0
+ .balign 32,0,32
+
+/* Jump table of 3rd level handlers */
+trap_jtable:
+ .long do_exception_error /* 0x000 */
+ .long do_exception_error /* 0x020 */
+ .long tlb_miss_load /* 0x040 */
+ .long tlb_miss_store /* 0x060 */
+ ! ARTIFICIAL pseudo-EXPEVT setting
+ .long do_debug_interrupt /* 0x080 */
+ .long tlb_miss_load /* 0x0A0 */
+ .long tlb_miss_store /* 0x0C0 */
+ .long do_address_error_load /* 0x0E0 */
+ .long do_address_error_store /* 0x100 */
+#ifndef CONFIG_NOFPU_SUPPORT
+ .long do_fpu_error /* 0x120 */
+#else
+ .long do_exception_error /* 0x120 */
+#endif
+ .long do_exception_error /* 0x140 */
+ .long system_call /* 0x160 */
+ .long do_reserved_inst /* 0x180 */
+ .long do_illegal_slot_inst /* 0x1A0 */
+ .long do_NMI /* 0x1C0 */
+ .long do_exception_error /* 0x1E0 */
+ .rept 15
+ .long do_IRQ /* 0x200 - 0x3C0 */
+ .endr
+ .long do_exception_error /* 0x3E0 */
+ .rept 32
+ .long do_IRQ /* 0x400 - 0x7E0 */
+ .endr
+ .long fpu_error_or_IRQA /* 0x800 */
+ .long fpu_error_or_IRQB /* 0x820 */
+ .long do_IRQ /* 0x840 */
+ .long do_IRQ /* 0x860 */
+ .rept 6
+ .long do_exception_error /* 0x880 - 0x920 */
+ .endr
+ .long do_software_break_point /* 0x940 */
+ .long do_exception_error /* 0x960 */
+ .long do_single_step /* 0x980 */
+
+ .rept 3
+ .long do_exception_error /* 0x9A0 - 0x9E0 */
+ .endr
+ .long do_IRQ /* 0xA00 */
+ .long do_IRQ /* 0xA20 */
+ .long itlb_miss_or_IRQ /* 0xA40 */
+ .long do_IRQ /* 0xA60 */
+ .long do_IRQ /* 0xA80 */
+ .long itlb_miss_or_IRQ /* 0xAA0 */
+ .long do_exception_error /* 0xAC0 */
+ .long do_address_error_exec /* 0xAE0 */
+ .rept 8
+ .long do_exception_error /* 0xB00 - 0xBE0 */
+ .endr
+ .rept 18
+ .long do_IRQ /* 0xC00 - 0xE20 */
+ .endr
+
+ .section .text64, "ax"
+
+/*
+ * --- Exception/Interrupt/Event Handling Section
+ */
+
+/*
+ * VBR and RESVEC blocks.
+ *
+ * First level handler for VBR-based exceptions.
+ *
+ * To avoid waste of space, align to the maximum text block size.
+ * This is assumed to be at most 128 bytes or 32 instructions.
+ * DO NOT EXCEED 32 instructions on the first level handlers !
+ *
+ * Also note that RESVEC is contained within the VBR block
+ * where the room left (1KB - TEXT_SIZE) allows placing
+ * the RESVEC block (at most 512B + TEXT_SIZE).
+ *
+ * So first (and only) level handler for RESVEC-based exceptions.
+ *
+ * Where the fault/interrupt is handled (not_a_tlb_miss, tlb_miss
+ * and interrupt) we are very tight on register space until we save
+ * onto the stack frame, which is done in handle_exception().
+ *
+ */
+
+#define TEXT_SIZE 128
+#define BLOCK_SIZE 1664 /* Dynamic check, 13*128 */
+
+ .balign TEXT_SIZE
+LVBR_block:
+ .space 256, 0 /* Power-on class handler, */
+ /* not required here */
+not_a_tlb_miss:
+ /* Save original stack pointer into KCR1 */
+ putcon SP, KCR1
+
+ /* Save other original registers into reg_save_area */
+ movi reg_save_area, SP
+ st.q SP, SAVED_R2, r2
+ st.q SP, SAVED_R3, r3
+ st.q SP, SAVED_R4, r4
+ st.q SP, SAVED_R5, r5
+ st.q SP, SAVED_R6, r6
+ st.q SP, SAVED_R18, r18
+ gettr tr0, r3
+ st.q SP, SAVED_TR0, r3
+
+ /* Set args for Non-debug, Not a TLB miss class handler */
+ getcon EXPEVT, r2
+ movi ret_from_exception, r3
+ ori r3, 1, r3
+ movi EVENT_FAULT_NOT_TLB, r4
+ or SP, ZERO, r5
+ getcon KCR1, SP
+ pta handle_exception, tr0
+ blink tr0, ZERO
+
+ .balign 256
+ ! VBR+0x200
+ nop
+ .balign 256
+ ! VBR+0x300
+ nop
+ .balign 256
+ /*
+ * Instead of the natural .balign 1024 place RESVEC here
+ * respecting the final 1KB alignment.
+ */
+ .balign TEXT_SIZE
+ /*
+ * Instead of '.space 1024-TEXT_SIZE' place the RESVEC
+ * block making sure the final alignment is correct.
+ */
+tlb_miss:
+ putcon SP, KCR1
+ movi reg_save_area, SP
+ /* SP is guaranteed 32-byte aligned. */
+ st.q SP, TLB_SAVED_R0 , r0
+ st.q SP, TLB_SAVED_R1 , r1
+ st.q SP, SAVED_R2 , r2
+ st.q SP, SAVED_R3 , r3
+ st.q SP, SAVED_R4 , r4
+ st.q SP, SAVED_R5 , r5
+ st.q SP, SAVED_R6 , r6
+ st.q SP, SAVED_R18, r18
+
+	/* Save R25 for safety; the assembler/linker may need it to reach
+	 * the code in mm/tlbmiss.c */
+ st.q SP, TLB_SAVED_R25, r25
+ gettr tr0, r2
+ gettr tr1, r3
+ gettr tr2, r4
+ gettr tr3, r5
+ gettr tr4, r18
+ st.q SP, SAVED_TR0 , r2
+ st.q SP, TLB_SAVED_TR1 , r3
+ st.q SP, TLB_SAVED_TR2 , r4
+ st.q SP, TLB_SAVED_TR3 , r5
+ st.q SP, TLB_SAVED_TR4 , r18
+
+ pt do_fast_page_fault, tr0
+ getcon SSR, r2
+ getcon EXPEVT, r3
+ getcon TEA, r4
+ shlri r2, 30, r2
+ andi r2, 1, r2 /* r2 = SSR.MD */
+ blink tr0, LINK
+
+ pt fixup_to_invoke_general_handler, tr1
+
+ /* If the fast path handler fixed the fault, just drop through quickly
+ to the restore code right away to return to the excepting context.
+ */
+ beqi/u r2, 0, tr1
+
+fast_tlb_miss_restore:
+ ld.q SP, SAVED_TR0, r2
+ ld.q SP, TLB_SAVED_TR1, r3
+ ld.q SP, TLB_SAVED_TR2, r4
+
+ ld.q SP, TLB_SAVED_TR3, r5
+ ld.q SP, TLB_SAVED_TR4, r18
+
+ ptabs r2, tr0
+ ptabs r3, tr1
+ ptabs r4, tr2
+ ptabs r5, tr3
+ ptabs r18, tr4
+
+ ld.q SP, TLB_SAVED_R0, r0
+ ld.q SP, TLB_SAVED_R1, r1
+ ld.q SP, SAVED_R2, r2
+ ld.q SP, SAVED_R3, r3
+ ld.q SP, SAVED_R4, r4
+ ld.q SP, SAVED_R5, r5
+ ld.q SP, SAVED_R6, r6
+ ld.q SP, SAVED_R18, r18
+ ld.q SP, TLB_SAVED_R25, r25
+
+ getcon KCR1, SP
+ rte
+ nop /* for safety, in case the code is run on sh5-101 cut1.x */
+
+fixup_to_invoke_general_handler:
+
+ /* OK, new method. Restore stuff that's not expected to get saved into
+ the 'first-level' reg save area, then just fall through to setting
+ up the registers and calling the second-level handler. */
+
+ /* 2nd level expects r2,3,4,5,6,18,tr0 to be saved. So we must restore
+ r25,tr1-4 and save r6 to get into the right state. */
+
+ ld.q SP, TLB_SAVED_TR1, r3
+ ld.q SP, TLB_SAVED_TR2, r4
+ ld.q SP, TLB_SAVED_TR3, r5
+ ld.q SP, TLB_SAVED_TR4, r18
+ ld.q SP, TLB_SAVED_R25, r25
+
+ ld.q SP, TLB_SAVED_R0, r0
+ ld.q SP, TLB_SAVED_R1, r1
+
+ ptabs/u r3, tr1
+ ptabs/u r4, tr2
+ ptabs/u r5, tr3
+ ptabs/u r18, tr4
+
+ /* Set args for Non-debug, TLB miss class handler */
+ getcon EXPEVT, r2
+ movi ret_from_exception, r3
+ ori r3, 1, r3
+ movi EVENT_FAULT_TLB, r4
+ or SP, ZERO, r5
+ getcon KCR1, SP
+ pta handle_exception, tr0
+ blink tr0, ZERO
+
+/* NB TAKE GREAT CARE HERE TO ENSURE THAT THE INTERRUPT CODE
+ DOES END UP AT VBR+0x600 */
+ nop
+ nop
+ nop
+ nop
+ nop
+ nop
+
+ .balign 256
+ /* VBR + 0x600 */
+
+interrupt:
+ /* Save original stack pointer into KCR1 */
+ putcon SP, KCR1
+
+ /* Save other original registers into reg_save_area */
+ movi reg_save_area, SP
+ st.q SP, SAVED_R2, r2
+ st.q SP, SAVED_R3, r3
+ st.q SP, SAVED_R4, r4
+ st.q SP, SAVED_R5, r5
+ st.q SP, SAVED_R6, r6
+ st.q SP, SAVED_R18, r18
+ gettr tr0, r3
+ st.q SP, SAVED_TR0, r3
+
+ /* Set args for interrupt class handler */
+ getcon INTEVT, r2
+ movi ret_from_irq, r3
+ ori r3, 1, r3
+ movi EVENT_INTERRUPT, r4
+ or SP, ZERO, r5
+ getcon KCR1, SP
+ pta handle_exception, tr0
+ blink tr0, ZERO
+ .balign TEXT_SIZE /* let's waste the bare minimum */
+
+LVBR_block_end: /* Marker. Used for total checking */
+
+ .balign 256
+LRESVEC_block:
+ /* Panic handler. Called with MMU off. Possible causes/actions:
+ * - Reset: Jump to program start.
+ * - Single Step: Turn off Single Step & return.
+ * - Others: Call panic handler, passing PC as arg.
+ * (this may need to be extended...)
+ */
+reset_or_panic:
+ putcon SP, DCR
+ /* First save r0-1 and tr0, as we need to use these */
+ movi resvec_save_area-CONFIG_CACHED_MEMORY_OFFSET, SP
+ st.q SP, 0, r0
+ st.q SP, 8, r1
+ gettr tr0, r0
+ st.q SP, 32, r0
+
+ /* Check cause */
+ getcon EXPEVT, r0
+ movi RESET_CAUSE, r1
+ sub r1, r0, r1 /* r1=0 if reset */
+ movi _stext-CONFIG_CACHED_MEMORY_OFFSET, r0
+ ori r0, 1, r0
+ ptabs r0, tr0
+ beqi r1, 0, tr0 /* Jump to start address if reset */
+
+ getcon EXPEVT, r0
+ movi DEBUGSS_CAUSE, r1
+ sub r1, r0, r1 /* r1=0 if single step */
+ pta single_step_panic, tr0
+ beqi r1, 0, tr0 /* jump if single step */
+
+ /* Now jump to where we save the registers. */
+ movi panic_stash_regs-CONFIG_CACHED_MEMORY_OFFSET, r1
+ ptabs r1, tr0
+ blink tr0, r63
+
+single_step_panic:
+ /* We are in a handler with Single Step set. We need to resume the
+ * handler, by turning on MMU & turning off Single Step. */
+ getcon SSR, r0
+ movi SR_MMU, r1
+ or r0, r1, r0
+ movi ~SR_SS, r1
+ and r0, r1, r0
+ putcon r0, SSR
+ /* Restore EXPEVT, as the rte won't do this */
+ getcon PEXPEVT, r0
+ putcon r0, EXPEVT
+ /* Restore regs */
+ ld.q SP, 32, r0
+ ptabs r0, tr0
+ ld.q SP, 0, r0
+ ld.q SP, 8, r1
+ getcon DCR, SP
+ synco
+ rte
+
+
+ .balign 256
+debug_exception:
+ /*
+ * Single step/software_break_point first level handler.
+ * Called with MMU off, so the first thing we do is enable it
+ * by doing an rte with appropriate SSR.
+ */
+ putcon SP, DCR
+ /* Save SSR & SPC, together with R0 & R1, as we need to use 2 regs. */
+ movi resvec_save_area-CONFIG_CACHED_MEMORY_OFFSET, SP
+
+ /* With the MMU off, we are bypassing the cache, so purge any
+ * data that will be made stale by the following stores.
+ */
+ ocbp SP, 0
+ synco
+
+ st.q SP, 0, r0
+ st.q SP, 8, r1
+ getcon SPC, r0
+ st.q SP, 16, r0
+ getcon SSR, r0
+ st.q SP, 24, r0
+
+ /* Enable MMU, block exceptions, set priv mode, disable single step */
+ movi SR_MMU | SR_BL | SR_MD, r1
+ or r0, r1, r0
+ movi ~SR_SS, r1
+ and r0, r1, r0
+ putcon r0, SSR
+ /* Force control to debug_exception_2 when rte is executed */
+	movi	debug_exception_2, r0
+ ori r0, 1, r0 /* force SHmedia, just in case */
+ putcon r0, SPC
+ getcon DCR, SP
+ synco
+ rte
+debug_exception_2:
+ /* Restore saved regs */
+ putcon SP, KCR1
+ movi resvec_save_area, SP
+ ld.q SP, 24, r0
+ putcon r0, SSR
+ ld.q SP, 16, r0
+ putcon r0, SPC
+ ld.q SP, 0, r0
+ ld.q SP, 8, r1
+
+ /* Save other original registers into reg_save_area */
+ movi reg_save_area, SP
+ st.q SP, SAVED_R2, r2
+ st.q SP, SAVED_R3, r3
+ st.q SP, SAVED_R4, r4
+ st.q SP, SAVED_R5, r5
+ st.q SP, SAVED_R6, r6
+ st.q SP, SAVED_R18, r18
+ gettr tr0, r3
+ st.q SP, SAVED_TR0, r3
+
+ /* Set args for debug class handler */
+ getcon EXPEVT, r2
+ movi ret_from_exception, r3
+ ori r3, 1, r3
+ movi EVENT_DEBUG, r4
+ or SP, ZERO, r5
+ getcon KCR1, SP
+ pta handle_exception, tr0
+ blink tr0, ZERO
+
+ .balign 256
+debug_interrupt:
+ /* !!! WE COME HERE IN REAL MODE !!! */
+ /* Hook-up debug interrupt to allow various debugging options to be
+ * hooked into its handler. */
+ /* Save original stack pointer into KCR1 */
+ synco
+ putcon SP, KCR1
+ movi resvec_save_area-CONFIG_CACHED_MEMORY_OFFSET, SP
+ ocbp SP, 0
+ ocbp SP, 32
+ synco
+
+ /* Save other original registers into reg_save_area thru real addresses */
+ st.q SP, SAVED_R2, r2
+ st.q SP, SAVED_R3, r3
+ st.q SP, SAVED_R4, r4
+ st.q SP, SAVED_R5, r5
+ st.q SP, SAVED_R6, r6
+ st.q SP, SAVED_R18, r18
+ gettr tr0, r3
+ st.q SP, SAVED_TR0, r3
+
+ /* move (spc,ssr)->(pspc,pssr). The rte will shift
+ them back again, so that they look like the originals
+ as far as the real handler code is concerned. */
+ getcon spc, r6
+ putcon r6, pspc
+ getcon ssr, r6
+ putcon r6, pssr
+
+ ! construct useful SR for handle_exception
+ movi 3, r6
+ shlli r6, 30, r6
+ getcon sr, r18
+ or r18, r6, r6
+ putcon r6, ssr
+
+ ! SSR is now the current SR with the MD and MMU bits set
+ ! i.e. the rte will switch back to priv mode and put
+ ! the mmu back on
+
+ ! construct spc
+ movi handle_exception, r18
+ ori r18, 1, r18 ! for safety (do we need this?)
+ putcon r18, spc
+
+ /* Set args for Non-debug, Not a TLB miss class handler */
+
+ ! EXPEVT==0x80 is unused, so 'steal' this value to put the
+ ! debug interrupt handler in the vectoring table
+ movi 0x80, r2
+ movi ret_from_exception, r3
+ ori r3, 1, r3
+ movi EVENT_FAULT_NOT_TLB, r4
+
+ or SP, ZERO, r5
+ movi CONFIG_CACHED_MEMORY_OFFSET, r6
+ add r6, r5, r5
+ getcon KCR1, SP
+
+ synco ! for safety
+ rte ! -> handle_exception, switch back to priv mode again
+
+LRESVEC_block_end: /* Marker. Unused. */
+
+ .balign TEXT_SIZE
+
+/*
+ * Second level handler for VBR-based exceptions. Pre-handler.
+ * In common to all stack-frame sensitive handlers.
+ *
+ * Inputs:
+ * (KCR0) Current [current task union]
+ * (KCR1) Original SP
+ * (r2) INTEVT/EXPEVT
+ * (r3) appropriate return address
+ * (r4) Event (0 = interrupt, 1 = TLB miss fault, 2 = Not TLB miss fault, 3=debug)
+ * (r5) Pointer to reg_save_area
+ * (SP) Original SP
+ *
+ * Available registers:
+ * (r6)
+ * (r18)
+ * (tr0)
+ *
+ */
+handle_exception:
+ /* Common 2nd level handler. */
+
+ /* First thing we need an appropriate stack pointer */
+ getcon SSR, r6
+ shlri r6, 30, r6
+ andi r6, 1, r6
+ pta stack_ok, tr0
+ bne r6, ZERO, tr0 /* Original stack pointer is fine */
+
+ /* Set stack pointer for user fault */
+ getcon KCR0, SP
+ movi THREAD_SIZE, r6 /* Point to the end */
+ add SP, r6, SP
+
+stack_ok:
+
+/* DEBUG : check for underflow/overflow of the kernel stack */
+ pta no_underflow, tr0
+ getcon KCR0, r6
+ movi 1024, r18
+ add r6, r18, r6
+ bge SP, r6, tr0 ! ? below 1k from bottom of stack : danger zone
+
+/* Just panic to cause a crash. */
+bad_sp:
+ ld.b r63, 0, r6
+ nop
+
+no_underflow:
+ pta bad_sp, tr0
+ getcon kcr0, r6
+ movi THREAD_SIZE, r18
+ add r18, r6, r6
+ bgt SP, r6, tr0 ! sp above the stack
+
+ /* Make some room for the BASIC frame. */
+ movi -(FRAME_SIZE), r6
+ add SP, r6, SP
+
+/* Could do this with no stalling if we had another spare register, but the
+ code below will be OK. */
+ ld.q r5, SAVED_R2, r6
+ ld.q r5, SAVED_R3, r18
+ st.q SP, FRAME_R(2), r6
+ ld.q r5, SAVED_R4, r6
+ st.q SP, FRAME_R(3), r18
+ ld.q r5, SAVED_R5, r18
+ st.q SP, FRAME_R(4), r6
+ ld.q r5, SAVED_R6, r6
+ st.q SP, FRAME_R(5), r18
+ ld.q r5, SAVED_R18, r18
+ st.q SP, FRAME_R(6), r6
+ ld.q r5, SAVED_TR0, r6
+ st.q SP, FRAME_R(18), r18
+ st.q SP, FRAME_T(0), r6
+
+ /* Keep old SP around */
+ getcon KCR1, r6
+
+ /* Save the rest of the general purpose registers */
+ st.q SP, FRAME_R(0), r0
+ st.q SP, FRAME_R(1), r1
+ st.q SP, FRAME_R(7), r7
+ st.q SP, FRAME_R(8), r8
+ st.q SP, FRAME_R(9), r9
+ st.q SP, FRAME_R(10), r10
+ st.q SP, FRAME_R(11), r11
+ st.q SP, FRAME_R(12), r12
+ st.q SP, FRAME_R(13), r13
+ st.q SP, FRAME_R(14), r14
+
+ /* SP is somewhere else */
+ st.q SP, FRAME_R(15), r6
+
+ st.q SP, FRAME_R(16), r16
+ st.q SP, FRAME_R(17), r17
+ /* r18 is saved earlier. */
+ st.q SP, FRAME_R(19), r19
+ st.q SP, FRAME_R(20), r20
+ st.q SP, FRAME_R(21), r21
+ st.q SP, FRAME_R(22), r22
+ st.q SP, FRAME_R(23), r23
+ st.q SP, FRAME_R(24), r24
+ st.q SP, FRAME_R(25), r25
+ st.q SP, FRAME_R(26), r26
+ st.q SP, FRAME_R(27), r27
+ st.q SP, FRAME_R(28), r28
+ st.q SP, FRAME_R(29), r29
+ st.q SP, FRAME_R(30), r30
+ st.q SP, FRAME_R(31), r31
+ st.q SP, FRAME_R(32), r32
+ st.q SP, FRAME_R(33), r33
+ st.q SP, FRAME_R(34), r34
+ st.q SP, FRAME_R(35), r35
+ st.q SP, FRAME_R(36), r36
+ st.q SP, FRAME_R(37), r37
+ st.q SP, FRAME_R(38), r38
+ st.q SP, FRAME_R(39), r39
+ st.q SP, FRAME_R(40), r40
+ st.q SP, FRAME_R(41), r41
+ st.q SP, FRAME_R(42), r42
+ st.q SP, FRAME_R(43), r43
+ st.q SP, FRAME_R(44), r44
+ st.q SP, FRAME_R(45), r45
+ st.q SP, FRAME_R(46), r46
+ st.q SP, FRAME_R(47), r47
+ st.q SP, FRAME_R(48), r48
+ st.q SP, FRAME_R(49), r49
+ st.q SP, FRAME_R(50), r50
+ st.q SP, FRAME_R(51), r51
+ st.q SP, FRAME_R(52), r52
+ st.q SP, FRAME_R(53), r53
+ st.q SP, FRAME_R(54), r54
+ st.q SP, FRAME_R(55), r55
+ st.q SP, FRAME_R(56), r56
+ st.q SP, FRAME_R(57), r57
+ st.q SP, FRAME_R(58), r58
+ st.q SP, FRAME_R(59), r59
+ st.q SP, FRAME_R(60), r60
+ st.q SP, FRAME_R(61), r61
+ st.q SP, FRAME_R(62), r62
+
+ /*
+ * Save the S* registers.
+ */
+ getcon SSR, r61
+ st.q SP, FRAME_S(FSSR), r61
+ getcon SPC, r62
+ st.q SP, FRAME_S(FSPC), r62
+ movi -1, r62 /* Reset syscall_nr */
+ st.q SP, FRAME_S(FSYSCALL_ID), r62
+
+ /* Save the rest of the target registers */
+ gettr tr1, r6
+ st.q SP, FRAME_T(1), r6
+ gettr tr2, r6
+ st.q SP, FRAME_T(2), r6
+ gettr tr3, r6
+ st.q SP, FRAME_T(3), r6
+ gettr tr4, r6
+ st.q SP, FRAME_T(4), r6
+ gettr tr5, r6
+ st.q SP, FRAME_T(5), r6
+ gettr tr6, r6
+ st.q SP, FRAME_T(6), r6
+ gettr tr7, r6
+ st.q SP, FRAME_T(7), r6
+
+ ! setup FP so that unwinder can wind back through nested kernel mode
+ ! exceptions
+ add SP, ZERO, r14
+
+#define POOR_MANS_STRACE 0
+
+#if POOR_MANS_STRACE
+ /* We've pushed all the registers now, so only r2-r4 hold anything
+ * useful. Move them into callee save registers */
+ or r2, ZERO, r28
+ or r3, ZERO, r29
+ or r4, ZERO, r30
+
+ /* Preserve r2 as the event code */
+ movi evt_debug, r3
+ ori r3, 1, r3
+ ptabs r3, tr0
+
+ or SP, ZERO, r6
+ getcon TRA, r5
+ blink tr0, LINK
+
+ or r28, ZERO, r2
+ or r29, ZERO, r3
+ or r30, ZERO, r4
+#endif
+
+
+ /* For syscall and debug race condition, get TRA now */
+ getcon TRA, r5
+
+ /* We are in a safe position to turn SR.BL off, but set IMASK=0xf
+ * Also set FD, to catch FPU usage in the kernel.
+ *
+ * benedict.gaster@superh.com 29/07/2002
+ *
+ * On all SH5-101 revisions it is unsafe to raise the IMASK and at the
+ * same time change BL from 1->0, as any pending interrupt of a level
+ * higher than he previous value of IMASK will leak through and be
+ * taken unexpectedly.
+ *
+ * To avoid this we raise the IMASK and then issue another PUTCON to
+ * enable interrupts.
+ */
+ getcon SR, r6
+ movi SR_IMASK | SR_FD, r7
+ or r6, r7, r6
+ putcon r6, SR
+ movi SR_UNBLOCK_EXC, r7
+ and r6, r7, r6
+ putcon r6, SR
+
+
+ /* Now call the appropriate 3rd level handler */
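+	/* Event codes step by 0x20 and trap_jtable holds 4-byte entries,
+	   so code >> 3 gives the byte offset into the table; the second
+	   shift below leaves code/0x20 in r2 as the entry number passed
+	   to the handler. */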
+ or r3, ZERO, LINK
+ movi trap_jtable, r3
+ shlri r2, 3, r2
+ ldx.l r2, r3, r3
+ shlri r2, 2, r2
+ ptabs r3, tr0
+ or SP, ZERO, r3
+ blink tr0, ZERO
+
+/*
+ * Second level handler for VBR-based exceptions. Post-handlers.
+ *
+ * Post-handlers for interrupts (ret_from_irq), exceptions
+ * (ret_from_exception) and common reentrance doors (restore_all
+ * to get back to the original context, ret_from_syscall loop to
+ * check kernel exiting).
+ *
+ * ret_with_reschedule and work_notifysig are inner labels of
+ * the ret_from_syscall loop.
+ *
+ * In common to all stack-frame sensitive handlers.
+ *
+ * Inputs:
+ * (SP) struct pt_regs *, original registers' frame pointer (basic)
+ *
+ */
+ .global ret_from_irq
+ret_from_irq:
+#if POOR_MANS_STRACE
+ pta evt_debug_ret_from_irq, tr0
+ ori SP, 0, r2
+ blink tr0, LINK
+#endif
+ ld.q SP, FRAME_S(FSSR), r6
+ shlri r6, 30, r6
+ andi r6, 1, r6
+ pta resume_kernel, tr0
+ bne r6, ZERO, tr0 /* no further checks */
+ STI()
+ pta ret_with_reschedule, tr0
+ blink tr0, ZERO /* Do not check softirqs */
+
+ .global ret_from_exception
+ret_from_exception:
+ preempt_stop()
+
+#if POOR_MANS_STRACE
+ pta evt_debug_ret_from_exc, tr0
+ ori SP, 0, r2
+ blink tr0, LINK
+#endif
+
+ ld.q SP, FRAME_S(FSSR), r6
+ shlri r6, 30, r6
+ andi r6, 1, r6
+ pta resume_kernel, tr0
+ bne r6, ZERO, tr0 /* no further checks */
+
+ /* Check softirqs */
+
+#ifdef CONFIG_PREEMPT
+ pta ret_from_syscall, tr0
+ blink tr0, ZERO
+
+resume_kernel:
+ pta restore_all, tr0
+
+ getcon KCR0, r6
+ ld.l r6, TI_PRE_COUNT, r7
+	bne/u	r7, ZERO, tr0	/* preempt_count != 0: no preemption */
+
+need_resched:
+ ld.l r6, TI_FLAGS, r7
+ movi (1 << TIF_NEED_RESCHED), r8
+ and r8, r7, r8
+	beq	r8, ZERO, tr0	/* nothing to reschedule */
+
+ getcon SR, r7
+ andi r7, 0xf0, r7
+ bne r7, ZERO, tr0
+
+ movi ((PREEMPT_ACTIVE >> 16) & 65535), r8
+ shori (PREEMPT_ACTIVE & 65535), r8
+ st.l r6, TI_PRE_COUNT, r8
+
+ STI()
+ movi schedule, r7
+ ori r7, 1, r7
+ ptabs r7, tr1
+ blink tr1, LINK
+
+ st.l r6, TI_PRE_COUNT, ZERO
+ CLI()
+
+ pta need_resched, tr1
+ blink tr1, ZERO
+#endif
+
+ .global ret_from_syscall
+ret_from_syscall:
+
+ret_with_reschedule:
+ getcon KCR0, r6 ! r6 contains current_thread_info
+ ld.l r6, TI_FLAGS, r7 ! r7 contains current_thread_info->flags
+
+ ! FIXME:!!!
+ ! no handling of TIF_SYSCALL_TRACE yet!!
+
+ movi (1 << TIF_NEED_RESCHED), r8
+ and r8, r7, r8
+ pta work_resched, tr0
+ bne r8, ZERO, tr0
+
+ pta restore_all, tr1
+
+ movi (1 << TIF_SIGPENDING), r8
+ and r8, r7, r8
+ pta work_notifysig, tr0
+ bne r8, ZERO, tr0
+
+ blink tr1, ZERO
+
+work_resched:
+ pta ret_from_syscall, tr0
+ gettr tr0, LINK
+ movi schedule, r6
+ ptabs r6, tr0
+ blink tr0, ZERO /* Call schedule(), return on top */
+
+work_notifysig:
+ gettr tr1, LINK
+
+ movi do_signal, r6
+ ptabs r6, tr0
+ or SP, ZERO, r2
+ or ZERO, ZERO, r3
+ blink tr0, LINK /* Call do_signal(regs, 0), return here */
+
+restore_all:
+ /* Do prefetches */
+
+ ld.q SP, FRAME_T(0), r6
+ ld.q SP, FRAME_T(1), r7
+ ld.q SP, FRAME_T(2), r8
+ ld.q SP, FRAME_T(3), r9
+ ptabs r6, tr0
+ ptabs r7, tr1
+ ptabs r8, tr2
+ ptabs r9, tr3
+ ld.q SP, FRAME_T(4), r6
+ ld.q SP, FRAME_T(5), r7
+ ld.q SP, FRAME_T(6), r8
+ ld.q SP, FRAME_T(7), r9
+ ptabs r6, tr4
+ ptabs r7, tr5
+ ptabs r8, tr6
+ ptabs r9, tr7
+
+ ld.q SP, FRAME_R(0), r0
+ ld.q SP, FRAME_R(1), r1
+ ld.q SP, FRAME_R(2), r2
+ ld.q SP, FRAME_R(3), r3
+ ld.q SP, FRAME_R(4), r4
+ ld.q SP, FRAME_R(5), r5
+ ld.q SP, FRAME_R(6), r6
+ ld.q SP, FRAME_R(7), r7
+ ld.q SP, FRAME_R(8), r8
+ ld.q SP, FRAME_R(9), r9
+ ld.q SP, FRAME_R(10), r10
+ ld.q SP, FRAME_R(11), r11
+ ld.q SP, FRAME_R(12), r12
+ ld.q SP, FRAME_R(13), r13
+ ld.q SP, FRAME_R(14), r14
+
+ ld.q SP, FRAME_R(16), r16
+ ld.q SP, FRAME_R(17), r17
+ ld.q SP, FRAME_R(18), r18
+ ld.q SP, FRAME_R(19), r19
+ ld.q SP, FRAME_R(20), r20
+ ld.q SP, FRAME_R(21), r21
+ ld.q SP, FRAME_R(22), r22
+ ld.q SP, FRAME_R(23), r23
+ ld.q SP, FRAME_R(24), r24
+ ld.q SP, FRAME_R(25), r25
+ ld.q SP, FRAME_R(26), r26
+ ld.q SP, FRAME_R(27), r27
+ ld.q SP, FRAME_R(28), r28
+ ld.q SP, FRAME_R(29), r29
+ ld.q SP, FRAME_R(30), r30
+ ld.q SP, FRAME_R(31), r31
+ ld.q SP, FRAME_R(32), r32
+ ld.q SP, FRAME_R(33), r33
+ ld.q SP, FRAME_R(34), r34
+ ld.q SP, FRAME_R(35), r35
+ ld.q SP, FRAME_R(36), r36
+ ld.q SP, FRAME_R(37), r37
+ ld.q SP, FRAME_R(38), r38
+ ld.q SP, FRAME_R(39), r39
+ ld.q SP, FRAME_R(40), r40
+ ld.q SP, FRAME_R(41), r41
+ ld.q SP, FRAME_R(42), r42
+ ld.q SP, FRAME_R(43), r43
+ ld.q SP, FRAME_R(44), r44
+ ld.q SP, FRAME_R(45), r45
+ ld.q SP, FRAME_R(46), r46
+ ld.q SP, FRAME_R(47), r47
+ ld.q SP, FRAME_R(48), r48
+ ld.q SP, FRAME_R(49), r49
+ ld.q SP, FRAME_R(50), r50
+ ld.q SP, FRAME_R(51), r51
+ ld.q SP, FRAME_R(52), r52
+ ld.q SP, FRAME_R(53), r53
+ ld.q SP, FRAME_R(54), r54
+ ld.q SP, FRAME_R(55), r55
+ ld.q SP, FRAME_R(56), r56
+ ld.q SP, FRAME_R(57), r57
+ ld.q SP, FRAME_R(58), r58
+
+ getcon SR, r59
+ movi SR_BLOCK_EXC, r60
+ or r59, r60, r59
+ putcon r59, SR /* SR.BL = 1, keep nesting out */
+ ld.q SP, FRAME_S(FSSR), r61
+ ld.q SP, FRAME_S(FSPC), r62
+ movi SR_ASID_MASK, r60
+ and r59, r60, r59
+ andc r61, r60, r61 /* Clear out older ASID */
+ or r59, r61, r61 /* Retain current ASID */
+ putcon r61, SSR
+ putcon r62, SPC
+
+ /* Ignore FSYSCALL_ID */
+
+ ld.q SP, FRAME_R(59), r59
+ ld.q SP, FRAME_R(60), r60
+ ld.q SP, FRAME_R(61), r61
+ ld.q SP, FRAME_R(62), r62
+
+ /* Last touch */
+ ld.q SP, FRAME_R(15), SP
+ rte
+ nop
+
+/*
+ * Third level handlers for VBR-based exceptions. Adapting args to
+ * and/or deflecting to fourth level handlers.
+ *
+ * Fourth level handlers interface.
+ * Most are C-coded handlers directly pointed by the trap_jtable.
+ * (Third = Fourth level)
+ * Inputs:
+ * (r2) fault/interrupt code, entry number (e.g. NMI = 14,
+ * IRL0-3 (0000) = 16, RTLBMISS = 2, SYSCALL = 11, etc ...)
+ * (r3) struct pt_regs *, original registers' frame pointer
+ * (r4) Event (0 = interrupt, 1 = TLB miss fault, 2 = Not TLB miss fault)
+ * (r5) TRA control register (for syscall/debug benefit only)
+ * (LINK) return address
+ * (SP) = r3
+ *
+ * Kernel TLB fault handlers will get a slightly different interface.
+ * (r2) struct pt_regs *, original registers' frame pointer
+ * (r3) writeaccess, whether it's a store fault as opposed to load fault
+ * (r4) execaccess, whether it's a ITLB fault as opposed to DTLB fault
+ * (r5) Effective Address of fault
+ * (LINK) return address
+ * (SP) = r2
+ *
+ * fpu_error_or_IRQ? is a helper to deflect to the right cause.
+ *
+ */
+tlb_miss_load:
+ or SP, ZERO, r2
+ or ZERO, ZERO, r3 /* Read */
+ or ZERO, ZERO, r4 /* Data */
+ getcon TEA, r5
+ pta call_do_page_fault, tr0
+ beq ZERO, ZERO, tr0
+
+tlb_miss_store:
+ or SP, ZERO, r2
+ movi 1, r3 /* Write */
+ or ZERO, ZERO, r4 /* Data */
+ getcon TEA, r5
+ pta call_do_page_fault, tr0
+ beq ZERO, ZERO, tr0
+
+itlb_miss_or_IRQ:
+ pta its_IRQ, tr0
+ beqi/u r4, EVENT_INTERRUPT, tr0
+ or SP, ZERO, r2
+ or ZERO, ZERO, r3 /* Read */
+ movi 1, r4 /* Text */
+ getcon TEA, r5
+ /* Fall through */
+
+call_do_page_fault:
+ movi do_page_fault, r6
+ ptabs r6, tr0
+ blink tr0, ZERO
+
+fpu_error_or_IRQA:
+ pta its_IRQ, tr0
+ beqi/l r4, EVENT_INTERRUPT, tr0
+#ifndef CONFIG_NOFPU_SUPPORT
+ movi do_fpu_state_restore, r6
+#else
+ movi do_exception_error, r6
+#endif
+ ptabs r6, tr0
+ blink tr0, ZERO
+
+fpu_error_or_IRQB:
+ pta its_IRQ, tr0
+ beqi/l r4, EVENT_INTERRUPT, tr0
+#ifndef CONFIG_NOFPU_SUPPORT
+ movi do_fpu_state_restore, r6
+#else
+ movi do_exception_error, r6
+#endif
+ ptabs r6, tr0
+ blink tr0, ZERO
+
+its_IRQ:
+ movi do_IRQ, r6
+ ptabs r6, tr0
+ blink tr0, ZERO
+
+/*
+ * system_call/unknown_trap third level handler:
+ *
+ * Inputs:
+ * (r2) fault/interrupt code, entry number (TRAP = 11)
+ * (r3) struct pt_regs *, original registers' frame pointer
+ * (r4) Not used. Event (0=interrupt, 1=TLB miss fault, 2=Not TLB miss fault)
+ * (r5) TRA Control Reg (0x00xyzzzz: x=1 SYSCALL, y = #args, z=nr)
+ * (SP) = r3
+ * (LINK) return address: ret_from_exception
+ * (*r3) Syscall parms: SC#, arg0, arg1, ..., arg5 in order (Saved r2/r7)
+ *
+ * Outputs:
+ * (*r3) Syscall reply (Saved r2)
+ * (LINK) In the case of a syscall only, it can be scrapped.
+ * Common second level post handler will be ret_from_syscall.
+ * Common (non-trace) exit point to that is syscall_ret (saving
+ * result to r2). Common bad exit point is syscall_bad (returning
+ * ENOSYS then saved to r2).
+ *
+ */
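+
+/*
+ * Worked example (illustrative): a trap issued with r9 = 0x00130004
+ * yields TRA = 0x00130004, i.e. x = 1 (SYSCALL), y = 3 (#args) and
+ * zzzz = 0x0004 (syscall #). The dispatch in system_call below checks
+ * (TRA >> 20) == 1 and masks the syscall number with 0x1ff.
+ */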
+
+unknown_trap:
+ /* Unknown Trap or User Trace */
+ movi do_unknown_trapa, r6
+ ptabs r6, tr0
+ ld.q r3, FRAME_R(9), r2 /* r2 = #arg << 16 | syscall # */
+ andi r2, 0x1ff, r2 /* r2 = syscall # */
+ blink tr0, LINK
+
+ pta syscall_ret, tr0
+ blink tr0, ZERO
+
+	/* New syscall implementation */
+system_call:
+ pta unknown_trap, tr0
+ or r5, ZERO, r4 /* TRA (=r5) -> r4 */
+ shlri r4, 20, r4
+ bnei r4, 1, tr0 /* unknown_trap if not 0x1yzzzz */
+
+ /* It's a system call */
+ st.q r3, FRAME_S(FSYSCALL_ID), r5 /* ID (0x1yzzzz) -> stack */
+ andi r5, 0x1ff, r5 /* syscall # -> r5 */
+
+ STI()
+
+ pta syscall_allowed, tr0
+ movi NR_syscalls - 1, r4 /* Last valid */
+ bgeu/l r4, r5, tr0
+
+syscall_bad:
+ /* Return ENOSYS ! */
+ movi -(ENOSYS), r2 /* Fall-through */
+
+ .global syscall_ret
+syscall_ret:
+ st.q SP, FRAME_R(9), r2 /* Expecting SP back to BASIC frame */
+
+#if POOR_MANS_STRACE
+ /* nothing useful in registers at this point */
+
+ movi evt_debug2, r5
+ ori r5, 1, r5
+ ptabs r5, tr0
+ ld.q SP, FRAME_R(9), r2
+ or SP, ZERO, r3
+ blink tr0, LINK
+#endif
+
+ ld.q SP, FRAME_S(FSPC), r2
+ addi r2, 4, r2 /* Move PC, being pre-execution event */
+ st.q SP, FRAME_S(FSPC), r2
+ pta ret_from_syscall, tr0
+ blink tr0, ZERO
+
+
+/* A different return path for ret_from_fork, because we now need
+ * to call schedule_tail with the later kernels. Because prev is
+ * loaded into r2 by switch_to(), we can just call it straight away.
+ */
+
+.global ret_from_fork
+ret_from_fork:
+
+ movi schedule_tail,r5
+ ori r5, 1, r5
+ ptabs r5, tr0
+ blink tr0, LINK
+
+#if POOR_MANS_STRACE
+ /* nothing useful in registers at this point */
+
+ movi evt_debug2, r5
+ ori r5, 1, r5
+ ptabs r5, tr0
+ ld.q SP, FRAME_R(9), r2
+ or SP, ZERO, r3
+ blink tr0, LINK
+#endif
+
+ ld.q SP, FRAME_S(FSPC), r2
+ addi r2, 4, r2 /* Move PC, being pre-execution event */
+ st.q SP, FRAME_S(FSPC), r2
+ pta ret_from_syscall, tr0
+ blink tr0, ZERO
+
+
+
+syscall_allowed:
+ /* Use LINK to deflect the exit point, default is syscall_ret */
+ pta syscall_ret, tr0
+ gettr tr0, LINK
+ pta syscall_notrace, tr0
+
+ getcon KCR0, r2
+ ld.l r2, TI_FLAGS, r4
+ movi (1 << TIF_SYSCALL_TRACE), r6
+ and r6, r4, r6
+ beq/l r6, ZERO, tr0
+
+ /* Trace it by calling syscall_trace before and after */
+ movi syscall_trace, r4
+ ptabs r4, tr0
+ blink tr0, LINK
+ /* Reload syscall number as r5 is trashed by syscall_trace */
+ ld.q SP, FRAME_S(FSYSCALL_ID), r5
+ andi r5, 0x1ff, r5
+
+ pta syscall_ret_trace, tr0
+ gettr tr0, LINK
+
+syscall_notrace:
+ /* Now point to the appropriate 4th level syscall handler */
+ movi sys_call_table, r4
+ shlli r5, 2, r5
+ ldx.l r4, r5, r5
+ ptabs r5, tr0
+
+ /* Prepare original args */
+ ld.q SP, FRAME_R(2), r2
+ ld.q SP, FRAME_R(3), r3
+ ld.q SP, FRAME_R(4), r4
+ ld.q SP, FRAME_R(5), r5
+ ld.q SP, FRAME_R(6), r6
+ ld.q SP, FRAME_R(7), r7
+
+ /* And now the trick for those syscalls requiring regs * ! */
+ or SP, ZERO, r8
+
+ /* Call it */
+ blink tr0, ZERO /* LINK is already properly set */
+
+syscall_ret_trace:
+ /* We get back here only if under trace */
+ st.q SP, FRAME_R(9), r2 /* Save return value */
+
+ movi syscall_trace, LINK
+ ptabs LINK, tr0
+ blink tr0, LINK
+
+ /* This needs to be done after any syscall tracing */
+ ld.q SP, FRAME_S(FSPC), r2
+ addi r2, 4, r2 /* Move PC, being pre-execution event */
+ st.q SP, FRAME_S(FSPC), r2
+
+ pta ret_from_syscall, tr0
+ blink tr0, ZERO /* Resume normal return sequence */
+
+/*
+ * --- Switch to running under a particular ASID and return the previous ASID value
+ * --- The caller is assumed to have done a cli before calling this.
+ *
+ * Input r2 : new ASID
+ * Output r2 : old ASID
+ */
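+
+/*
+ * Worked example (illustrative): with SR.ASID occupying bits 16..23,
+ * if the old SR holds ASID 7 and the new ASID is 5, then r3 = 7 << 16
+ * is the shifted old value, SR becomes (SR & ~0x00ff0000) | (5 << 16),
+ * and r2 = 7 is handed back to the caller after the rte.
+ */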
+
+ .global switch_and_save_asid
+switch_and_save_asid:
+ getcon sr, r0
+ movi 255, r4
+ shlli r4, 16, r4 /* r4 = mask to select ASID */
+ and r0, r4, r3 /* r3 = shifted old ASID */
+ andi r2, 255, r2 /* mask down new ASID */
+ shlli r2, 16, r2 /* align new ASID against SR.ASID */
+ andc r0, r4, r0 /* efface old ASID from SR */
+ or r0, r2, r0 /* insert the new ASID */
+ putcon r0, ssr
+ movi 1f, r0
+ putcon r0, spc
+ rte
+ nop
+1:
+ ptabs LINK, tr0
+ shlri r3, 16, r2 /* r2 = old ASID */
+ blink tr0, r63
+
+ .global route_to_panic_handler
+route_to_panic_handler:
+ /* Switch to real mode, goto panic_handler, don't return. Useful for
+ last-chance debugging, e.g. if no output wants to go to the console.
+ */
+
+ movi panic_handler - CONFIG_CACHED_MEMORY_OFFSET, r1
+ ptabs r1, tr0
+ pta 1f, tr1
+ gettr tr1, r0
+ putcon r0, spc
+ getcon sr, r0
+ movi 1, r1
+ shlli r1, 31, r1
+ andc r0, r1, r0
+ putcon r0, ssr
+ rte
+ nop
+1: /* Now in real mode */
+ blink tr0, r63
+ nop
+
+ .global peek_real_address_q
+peek_real_address_q:
+ /* Two args:
+ r2 : real mode address to peek
+ r2(out) : result quadword
+
+ This is provided as a cheapskate way of manipulating device
+ registers for debugging (to avoid the need to onchip_remap the debug
+ module, and to avoid the need to onchip_remap the watchpoint
+ controller in a way that identity maps sufficient bits to avoid the
+ SH5-101 cut2 silicon defect).
+
+ This code is not performance critical
+ */
+
+ add.l r2, r63, r2 /* sign extend address */
+ getcon sr, r0 /* r0 = saved original SR */
+ movi 1, r1
+ shlli r1, 28, r1
+	or r0, r1, r1		/* r1 = r0 with block bit set */
+ putcon r1, sr /* now in critical section */
+ movi 1, r36
+ shlli r36, 31, r36
+ andc r1, r36, r1 /* turn sr.mmu off in real mode section */
+
+ putcon r1, ssr
+ movi .peek0 - CONFIG_CACHED_MEMORY_OFFSET, r36 /* real mode target address */
+ movi 1f, r37 /* virtual mode return addr */
+ putcon r36, spc
+
+ synco
+ rte
+ nop
+
+.peek0: /* come here in real mode, don't touch caches!!
+ still in critical section (sr.bl==1) */
+ putcon r0, ssr
+ putcon r37, spc
+	/* Here's the actual peek. If the address is bad, all bets are off
+	 * as to what will happen (handlers invoked in real-mode = bad news) */
+ ld.q r2, 0, r2
+ synco
+ rte /* Back to virtual mode */
+ nop
+
+1:
+ ptabs LINK, tr0
+ blink tr0, r63
+
+ .global poke_real_address_q
+poke_real_address_q:
+ /* Two args:
+ r2 : real mode address to poke
+ r3 : quadword value to write.
+
+ This is provided as a cheapskate way of manipulating device
+ registers for debugging (to avoid the need to onchip_remap the debug
+ module, and to avoid the need to onchip_remap the watchpoint
+ controller in a way that identity maps sufficient bits to avoid the
+ SH5-101 cut2 silicon defect).
+
+ This code is not performance critical
+ */
+
+ add.l r2, r63, r2 /* sign extend address */
+ getcon sr, r0 /* r0 = saved original SR */
+ movi 1, r1
+ shlli r1, 28, r1
+	or r0, r1, r1		/* r1 = r0 with block bit set */
+ putcon r1, sr /* now in critical section */
+ movi 1, r36
+ shlli r36, 31, r36
+ andc r1, r36, r1 /* turn sr.mmu off in real mode section */
+
+ putcon r1, ssr
+ movi .poke0-CONFIG_CACHED_MEMORY_OFFSET, r36 /* real mode target address */
+ movi 1f, r37 /* virtual mode return addr */
+ putcon r36, spc
+
+ synco
+ rte
+ nop
+
+.poke0: /* come here in real mode, don't touch caches!!
+ still in critical section (sr.bl==1) */
+ putcon r0, ssr
+ putcon r37, spc
+	/* Here's the actual poke. If the address is bad, all bets are off
+	 * as to what will happen (handlers invoked in real-mode = bad news) */
+ st.q r2, 0, r3
+ synco
+ rte /* Back to virtual mode */
+ nop
+
+1:
+ ptabs LINK, tr0
+ blink tr0, r63
+
+/*
+ * --- User Access Handling Section
+ */
+
+/*
+ * User Access support. It has all moved to non-inlined assembler
+ * functions in here.
+ *
+ * __kernel_size_t __copy_user(void *__to, const void *__from,
+ * __kernel_size_t __n)
+ *
+ * Inputs:
+ * (r2) target address
+ * (r3) source address
+ * (r4) size in bytes
+ *
+ * Outputs:
+ * (*r2) target data
+ * (r2) non-copied bytes
+ *
+ * If a fault occurs on the user pointer, bail out early and return the
+ * number of bytes not copied in r2.
+ * Strategy : for large blocks, call a real memcpy function which can
+ * move >1 byte at a time using unaligned ld/st instructions, and can
+ * manipulate the cache using prefetch + alloco to improve the speed
+ * further. If a fault occurs in that function, just revert to the
+ * byte-by-byte approach used for small blocks; this is rare so the
+ * performance hit for that case does not matter.
+ *
+ * For small blocks it's not worth the overhead of setting up and calling
+ * the memcpy routine; do the copy a byte at a time.
+ *
+ */
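+
+/*
+ * Example (illustrative): a 256-byte copy fails the "bge/u r0, r4"
+ * test below (16 >= 256 is false) and so takes the copy_user_memcpy
+ * fast path; an 8-byte copy satisfies it and is done byte-by-byte,
+ * avoiding the call setup overhead.
+ */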
+ .global __copy_user
+__copy_user:
+ pta __copy_user_byte_by_byte, tr1
+ movi 16, r0 ! this value is a best guess, should tune it by benchmarking
+ bge/u r0, r4, tr1
+ pta copy_user_memcpy, tr0
+ addi SP, -32, SP
+ /* Save arguments in case we have to fix-up unhandled page fault */
+ st.q SP, 0, r2
+ st.q SP, 8, r3
+ st.q SP, 16, r4
+ st.q SP, 24, r35 ! r35 is callee-save
+ /* Save LINK in a register to reduce RTS time later (otherwise
+ ld SP,*,LINK;ptabs LINK;trn;blink trn,r63 becomes a critical path) */
+ ori LINK, 0, r35
+ blink tr0, LINK
+
+ /* Copy completed normally if we get back here */
+ ptabs r35, tr0
+ ld.q SP, 24, r35
+ /* don't restore r2-r4, pointless */
+ /* set result=r2 to zero as the copy must have succeeded. */
+ or r63, r63, r2
+ addi SP, 32, SP
+ blink tr0, r63 ! RTS
+
+ .global __copy_user_fixup
+__copy_user_fixup:
+ /* Restore stack frame */
+ ori r35, 0, LINK
+ ld.q SP, 24, r35
+ ld.q SP, 16, r4
+ ld.q SP, 8, r3
+ ld.q SP, 0, r2
+ addi SP, 32, SP
+ /* Fall through to original code, in the 'same' state we entered with */
+
+/* The slow byte-by-byte method is used if the fast copy traps due to a bad
+ user address. In that rare case, the speed drop can be tolerated. */
+__copy_user_byte_by_byte:
+ pta ___copy_user_exit, tr1
+ pta ___copy_user1, tr0
+ beq/u r4, r63, tr1 /* early exit for zero length copy */
+ sub r2, r3, r0
+ addi r0, -1, r0
+
+___copy_user1:
+ ld.b r3, 0, r5 /* Fault address 1 */
+
+ /* Could rewrite this to use just 1 add, but the second comes 'free'
+ due to load latency */
+ addi r3, 1, r3
+ addi r4, -1, r4 /* No real fixup required */
+___copy_user2:
+ stx.b r3, r0, r5 /* Fault address 2 */
+ bne r4, ZERO, tr0
+
+___copy_user_exit:
+ or r4, ZERO, r2
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+/*
+ * __kernel_size_t __clear_user(void *addr, __kernel_size_t size)
+ *
+ * Inputs:
+ * (r2) target address
+ * (r3) size in bytes
+ *
+ * Outputs:
+ * (*r2) zero-ed target data
+ * (r2) non-zero-ed bytes
+ */
+ .global __clear_user
+__clear_user:
+ pta ___clear_user_exit, tr1
+ pta ___clear_user1, tr0
+ beq/u r3, r63, tr1
+
+___clear_user1:
+ st.b r2, 0, ZERO /* Fault address */
+ addi r2, 1, r2
+ addi r3, -1, r3 /* No real fixup required */
+ bne r3, ZERO, tr0
+
+___clear_user_exit:
+ or r3, ZERO, r2
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+/*
+ * int __strncpy_from_user(unsigned long __dest, unsigned long __src,
+ * int __count)
+ *
+ * Inputs:
+ * (r2) target address
+ * (r3) source address
+ * (r4) maximum size in bytes
+ *
+ * Outputs:
+ * (*r2) copied data
+ * (r2) -EFAULT (in case of faulting)
+ * copied data (otherwise)
+ */
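+
+/*
+ * Worked example (illustrative): copying the 3-byte string "hi\0"
+ * with a maximum count of 10 leaves r4 = 8 when the NUL is seen, so
+ * the routine returns r5 - r4 = 2, i.e. the string length excluding
+ * the terminating NUL, matching strncpy_from_user() semantics.
+ */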
+ .global __strncpy_from_user
+__strncpy_from_user:
+ pta ___strncpy_from_user1, tr0
+ pta ___strncpy_from_user_done, tr1
+ or r4, ZERO, r5 /* r5 = original count */
+ beq/u r4, r63, tr1 /* early exit if r4==0 */
+ movi -(EFAULT), r6 /* r6 = reply, no real fixup */
+ or ZERO, ZERO, r7 /* r7 = data, clear top byte of data */
+
+___strncpy_from_user1:
+ ld.b r3, 0, r7 /* Fault address: only in reading */
+ st.b r2, 0, r7
+ addi r2, 1, r2
+ addi r3, 1, r3
+ beq/u ZERO, r7, tr1
+ addi r4, -1, r4 /* return real number of copied bytes */
+ bne/l ZERO, r4, tr0
+
+___strncpy_from_user_done:
+ sub r5, r4, r6 /* If done, return copied */
+
+___strncpy_from_user_exit:
+ or r6, ZERO, r2
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+/*
+ * extern long __strnlen_user(const char *__s, long __n)
+ *
+ * Inputs:
+ * (r2) source address
+ * (r3) source size in bytes
+ *
+ * Outputs:
+ * (r2) -EFAULT (in case of faulting)
+ * string length (otherwise)
+ */
+ .global __strnlen_user
+__strnlen_user:
+ pta ___strnlen_user_set_reply, tr0
+ pta ___strnlen_user1, tr1
+ or ZERO, ZERO, r5 /* r5 = counter */
+ movi -(EFAULT), r6 /* r6 = reply, no real fixup */
+ or ZERO, ZERO, r7 /* r7 = data, clear top byte of data */
+ beq r3, ZERO, tr0
+
+___strnlen_user1:
+ ldx.b r2, r5, r7 /* Fault address: only in reading */
+ addi r3, -1, r3 /* No real fixup */
+ addi r5, 1, r5
+ beq r3, ZERO, tr0
+ bne r7, ZERO, tr1
+! The line below used to be active. This led to a junk byte lying between each pair
+! of entries in the argv & envp structures in memory. Whilst the program saw the right data
+! via the argv and envp arguments to main, it meant the 'flat' representation visible through
+! /proc/$pid/cmdline was corrupt, causing trouble with ps, for example.
+! addi r5, 1, r5 /* Include '\0' */
+
+___strnlen_user_set_reply:
+ or r5, ZERO, r6 /* If done, return counter */
+
+___strnlen_user_exit:
+ or r6, ZERO, r2
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+/*
+ * extern long __get_user_asm_?(void *val, long addr)
+ *
+ * Inputs:
+ * (r2) dest address
+ * (r3) source address (in User Space)
+ *
+ * Outputs:
+ * (r2) -EFAULT (faulting)
+ * 0 (not faulting)
+ */
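+
+/*
+ * Usage sketch (illustrative, per the generic uaccess layer):
+ * get_user(x, ptr) with a 4-byte *ptr typically expands to a call of
+ * __get_user_asm_l(&x, (long)ptr); a fault inside ___get_user_asm_l1
+ * is redirected by the matching __ex_table entry below to the _exit
+ * label, leaving -EFAULT in r2.
+ */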
+ .global __get_user_asm_b
+__get_user_asm_b:
+ or r2, ZERO, r4
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___get_user_asm_b1:
+ ld.b r3, 0, r5 /* r5 = data */
+ st.b r4, 0, r5
+ or ZERO, ZERO, r2
+
+___get_user_asm_b_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __get_user_asm_w
+__get_user_asm_w:
+ or r2, ZERO, r4
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___get_user_asm_w1:
+ ld.w r3, 0, r5 /* r5 = data */
+ st.w r4, 0, r5
+ or ZERO, ZERO, r2
+
+___get_user_asm_w_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __get_user_asm_l
+__get_user_asm_l:
+ or r2, ZERO, r4
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___get_user_asm_l1:
+ ld.l r3, 0, r5 /* r5 = data */
+ st.l r4, 0, r5
+ or ZERO, ZERO, r2
+
+___get_user_asm_l_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __get_user_asm_q
+__get_user_asm_q:
+ or r2, ZERO, r4
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___get_user_asm_q1:
+ ld.q r3, 0, r5 /* r5 = data */
+ st.q r4, 0, r5
+ or ZERO, ZERO, r2
+
+___get_user_asm_q_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+/*
+ * extern long __put_user_asm_?(void *pval, long addr)
+ *
+ * Inputs:
+ * (r2) kernel pointer to value
+ * (r3) dest address (in User Space)
+ *
+ * Outputs:
+ * (r2) -EFAULT (faulting)
+ * 0 (not faulting)
+ */
+ .global __put_user_asm_b
+__put_user_asm_b:
+ ld.b r2, 0, r4 /* r4 = data */
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___put_user_asm_b1:
+ st.b r3, 0, r4
+ or ZERO, ZERO, r2
+
+___put_user_asm_b_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __put_user_asm_w
+__put_user_asm_w:
+ ld.w r2, 0, r4 /* r4 = data */
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___put_user_asm_w1:
+ st.w r3, 0, r4
+ or ZERO, ZERO, r2
+
+___put_user_asm_w_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __put_user_asm_l
+__put_user_asm_l:
+ ld.l r2, 0, r4 /* r4 = data */
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___put_user_asm_l1:
+ st.l r3, 0, r4
+ or ZERO, ZERO, r2
+
+___put_user_asm_l_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+
+ .global __put_user_asm_q
+__put_user_asm_q:
+ ld.q r2, 0, r4 /* r4 = data */
+ movi -(EFAULT), r2 /* r2 = reply, no real fixup */
+
+___put_user_asm_q1:
+ st.q r3, 0, r4
+ or ZERO, ZERO, r2
+
+___put_user_asm_q_exit:
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
+panic_stash_regs:
+	/* The idea is: when we get an unhandled panic, we dump the registers
+	   to a known memory location, then just sit in a tight loop.
+ This allows the human to look at the memory region through the GDB
+ session (assuming the debug module's SHwy initiator isn't locked up
+ or anything), to hopefully analyze the cause of the panic. */
+
+ /* On entry, former r15 (SP) is in DCR
+ former r0 is at resvec_saved_area + 0
+ former r1 is at resvec_saved_area + 8
+ former tr0 is at resvec_saved_area + 32
+ DCR is the only register whose value is lost altogether.
+ */
+
+ movi 0xffffffff80000000, r0 ! phy of dump area
+ ld.q SP, 0x000, r1 ! former r0
+ st.q r0, 0x000, r1
+ ld.q SP, 0x008, r1 ! former r1
+ st.q r0, 0x008, r1
+ st.q r0, 0x010, r2
+ st.q r0, 0x018, r3
+ st.q r0, 0x020, r4
+ st.q r0, 0x028, r5
+ st.q r0, 0x030, r6
+ st.q r0, 0x038, r7
+ st.q r0, 0x040, r8
+ st.q r0, 0x048, r9
+ st.q r0, 0x050, r10
+ st.q r0, 0x058, r11
+ st.q r0, 0x060, r12
+ st.q r0, 0x068, r13
+ st.q r0, 0x070, r14
+ getcon dcr, r14
+ st.q r0, 0x078, r14
+ st.q r0, 0x080, r16
+ st.q r0, 0x088, r17
+ st.q r0, 0x090, r18
+ st.q r0, 0x098, r19
+ st.q r0, 0x0a0, r20
+ st.q r0, 0x0a8, r21
+ st.q r0, 0x0b0, r22
+ st.q r0, 0x0b8, r23
+ st.q r0, 0x0c0, r24
+ st.q r0, 0x0c8, r25
+ st.q r0, 0x0d0, r26
+ st.q r0, 0x0d8, r27
+ st.q r0, 0x0e0, r28
+ st.q r0, 0x0e8, r29
+ st.q r0, 0x0f0, r30
+ st.q r0, 0x0f8, r31
+ st.q r0, 0x100, r32
+ st.q r0, 0x108, r33
+ st.q r0, 0x110, r34
+ st.q r0, 0x118, r35
+ st.q r0, 0x120, r36
+ st.q r0, 0x128, r37
+ st.q r0, 0x130, r38
+ st.q r0, 0x138, r39
+ st.q r0, 0x140, r40
+ st.q r0, 0x148, r41
+ st.q r0, 0x150, r42
+ st.q r0, 0x158, r43
+ st.q r0, 0x160, r44
+ st.q r0, 0x168, r45
+ st.q r0, 0x170, r46
+ st.q r0, 0x178, r47
+ st.q r0, 0x180, r48
+ st.q r0, 0x188, r49
+ st.q r0, 0x190, r50
+ st.q r0, 0x198, r51
+ st.q r0, 0x1a0, r52
+ st.q r0, 0x1a8, r53
+ st.q r0, 0x1b0, r54
+ st.q r0, 0x1b8, r55
+ st.q r0, 0x1c0, r56
+ st.q r0, 0x1c8, r57
+ st.q r0, 0x1d0, r58
+ st.q r0, 0x1d8, r59
+ st.q r0, 0x1e0, r60
+ st.q r0, 0x1e8, r61
+ st.q r0, 0x1f0, r62
+ st.q r0, 0x1f8, r63 ! bogus, but for consistency's sake...
+
+ ld.q SP, 0x020, r1 ! former tr0
+ st.q r0, 0x200, r1
+ gettr tr1, r1
+ st.q r0, 0x208, r1
+ gettr tr2, r1
+ st.q r0, 0x210, r1
+ gettr tr3, r1
+ st.q r0, 0x218, r1
+ gettr tr4, r1
+ st.q r0, 0x220, r1
+ gettr tr5, r1
+ st.q r0, 0x228, r1
+ gettr tr6, r1
+ st.q r0, 0x230, r1
+ gettr tr7, r1
+ st.q r0, 0x238, r1
+
+ getcon sr, r1
+ getcon ssr, r2
+ getcon pssr, r3
+ getcon spc, r4
+ getcon pspc, r5
+ getcon intevt, r6
+ getcon expevt, r7
+ getcon pexpevt, r8
+ getcon tra, r9
+ getcon tea, r10
+ getcon kcr0, r11
+ getcon kcr1, r12
+ getcon vbr, r13
+ getcon resvec, r14
+
+ st.q r0, 0x240, r1
+ st.q r0, 0x248, r2
+ st.q r0, 0x250, r3
+ st.q r0, 0x258, r4
+ st.q r0, 0x260, r5
+ st.q r0, 0x268, r6
+ st.q r0, 0x270, r7
+ st.q r0, 0x278, r8
+ st.q r0, 0x280, r9
+ st.q r0, 0x288, r10
+ st.q r0, 0x290, r11
+ st.q r0, 0x298, r12
+ st.q r0, 0x2a0, r13
+ st.q r0, 0x2a8, r14
+
+ getcon SPC,r2
+ getcon SSR,r3
+ getcon EXPEVT,r4
+ /* Prepare to jump to C - physical address */
+ movi panic_handler-CONFIG_CACHED_MEMORY_OFFSET, r1
+ ori r1, 1, r1
+ ptabs r1, tr0
+ getcon DCR, SP
+ blink tr0, ZERO
+ nop
+ nop
+ nop
+ nop
+
+
+
+
+/*
+ * --- Signal Handling Section
+ */
+
+/*
+ * extern long long _sa_default_rt_restorer
+ * extern long long _sa_default_restorer
+ *
+ * or, better,
+ *
+ * extern void _sa_default_rt_restorer(void)
+ * extern void _sa_default_restorer(void)
+ *
+ * Code prototypes to do a sys_rt_sigreturn() or sys_sysreturn()
+ * from user space. Copied into user space by signal management.
+ * Both must be quad aligned and 2 quad long (4 instructions).
+ *
+ */
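+
+/*
+ * Worked example (illustrative): "movi 0x10, r9" followed by
+ * "shori __NR_rt_sigreturn, r9" composes r9 = (0x10 << 16) |
+ * __NR_rt_sigreturn, i.e. a TRA value of the 0x1yzzzz form (here with
+ * y = 0) that the system_call dispatcher above recognises as a syscall.
+ */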
+ .balign 8
+ .global sa_default_rt_restorer
+sa_default_rt_restorer:
+ movi 0x10, r9
+ shori __NR_rt_sigreturn, r9
+ trapa r9
+ nop
+
+ .balign 8
+ .global sa_default_restorer
+sa_default_restorer:
+ movi 0x10, r9
+ shori __NR_sigreturn, r9
+ trapa r9
+ nop
+
+/*
+ * --- __ex_table Section
+ */
+
+/*
+ * User Access Exception Table.
+ */
+ .section __ex_table, "a"
+
+ .global asm_uaccess_start /* Just a marker */
+asm_uaccess_start:
+
+ .long ___copy_user1, ___copy_user_exit
+ .long ___copy_user2, ___copy_user_exit
+ .long ___clear_user1, ___clear_user_exit
+ .long ___strncpy_from_user1, ___strncpy_from_user_exit
+ .long ___strnlen_user1, ___strnlen_user_exit
+ .long ___get_user_asm_b1, ___get_user_asm_b_exit
+ .long ___get_user_asm_w1, ___get_user_asm_w_exit
+ .long ___get_user_asm_l1, ___get_user_asm_l_exit
+ .long ___get_user_asm_q1, ___get_user_asm_q_exit
+ .long ___put_user_asm_b1, ___put_user_asm_b_exit
+ .long ___put_user_asm_w1, ___put_user_asm_w_exit
+ .long ___put_user_asm_l1, ___put_user_asm_l_exit
+ .long ___put_user_asm_q1, ___put_user_asm_q_exit
+
+ .global asm_uaccess_end /* Just a marker */
+asm_uaccess_end:
+
+
+
+
+/*
+ * --- .text.init Section
+ */
+
+ .section .text.init, "ax"
+
+/*
+ * void trap_init (void)
+ *
+ */
+ .global trap_init
+trap_init:
+ addi SP, -24, SP /* Room to save r28/r29/r30 */
+ st.q SP, 0, r28
+ st.q SP, 8, r29
+ st.q SP, 16, r30
+
+ /* Set VBR and RESVEC */
+ movi LVBR_block, r19
+ andi r19, -4, r19 /* reset MMUOFF + reserved */
+ /* For RESVEC exceptions we force the MMU off, which means we need the
+ physical address. */
+ movi LRESVEC_block-CONFIG_CACHED_MEMORY_OFFSET, r20
+ andi r20, -4, r20 /* reset reserved */
+ ori r20, 1, r20 /* set MMUOFF */
+ putcon r19, VBR
+ putcon r20, RESVEC
+
+ /* Sanity check */
+ movi LVBR_block_end, r21
+ andi r21, -4, r21
+ movi BLOCK_SIZE, r29 /* r29 = expected size */
+ or r19, ZERO, r30
+ add r19, r29, r19
+
+ /*
+	 * Ugly, but better to loop forever now than to crash afterwards.
+ * We should print a message, but if we touch LVBR or
+ * LRESVEC blocks we should not be surprised if we get stuck
+ * in trap_init().
+ */
+ pta trap_init_loop, tr1
+ gettr tr1, r28 /* r28 = trap_init_loop */
+ sub r21, r30, r30 /* r30 = actual size */
+
+ /*
+ * VBR/RESVEC handlers overlap by being bigger than
+ * allowed. Very bad. Just loop forever.
+ * (r28) panic/loop address
+ * (r29) expected size
+ * (r30) actual size
+ */
+trap_init_loop:
+ bne r19, r21, tr1
+
+ /* Now that exception vectors are set up reset SR.BL */
+ getcon SR, r22
+ movi SR_UNBLOCK_EXC, r23
+ and r22, r23, r22
+ putcon r22, SR
+
+ addi SP, 24, SP
+ ptabs LINK, tr0
+ blink tr0, ZERO
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/fpu.c
+ *
+ * Copyright (C) 2001 Manuela Cirronis, Paolo Alberelli
+ * Copyright (C) 2002 STMicroelectronics Limited
+ * Author : Stuart Menefy
+ *
+ * Started from SH4 version:
+ * Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka
+ *
+ */
+
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <asm/processor.h>
+#include <asm/user.h>
+#include <asm/io.h>
+
+/*
+ * Initially load the FPU with signalling NaNs. This bit pattern
+ * has the property that, whether considered as single or as double
+ * precision, it still represents a signalling NaN.
+ */
+#define sNAN64 0xFFFFFFFFFFFFFFFFULL
+#define sNAN32 0xFFFFFFFFUL
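+
+/*
+ * Worked example (illustrative): as an IEEE single, 0xFFFFFFFF has an
+ * all-ones exponent (0xFF) and a non-zero fraction, hence a NaN; as a
+ * double, 0xFFFFFFFFFFFFFFFF likewise has exponent 0x7FF and a
+ * non-zero fraction. Either reading is therefore a NaN, and per the
+ * comment above the SH FPU treats this pattern as the signalling kind.
+ */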
+
+static union sh_fpu_union init_fpuregs = {
+ .hard = {
+ .fp_regs = { [0 ... 63] = sNAN32 },
+ .fpscr = FPSCR_INIT
+ }
+};
+
+inline void fpsave(struct sh_fpu_hard_struct *fpregs)
+{
+ asm volatile("fst.p %0, (0*8), fp0\n\t"
+ "fst.p %0, (1*8), fp2\n\t"
+ "fst.p %0, (2*8), fp4\n\t"
+ "fst.p %0, (3*8), fp6\n\t"
+ "fst.p %0, (4*8), fp8\n\t"
+ "fst.p %0, (5*8), fp10\n\t"
+ "fst.p %0, (6*8), fp12\n\t"
+ "fst.p %0, (7*8), fp14\n\t"
+ "fst.p %0, (8*8), fp16\n\t"
+ "fst.p %0, (9*8), fp18\n\t"
+ "fst.p %0, (10*8), fp20\n\t"
+ "fst.p %0, (11*8), fp22\n\t"
+ "fst.p %0, (12*8), fp24\n\t"
+ "fst.p %0, (13*8), fp26\n\t"
+ "fst.p %0, (14*8), fp28\n\t"
+ "fst.p %0, (15*8), fp30\n\t"
+ "fst.p %0, (16*8), fp32\n\t"
+ "fst.p %0, (17*8), fp34\n\t"
+ "fst.p %0, (18*8), fp36\n\t"
+ "fst.p %0, (19*8), fp38\n\t"
+ "fst.p %0, (20*8), fp40\n\t"
+ "fst.p %0, (21*8), fp42\n\t"
+ "fst.p %0, (22*8), fp44\n\t"
+ "fst.p %0, (23*8), fp46\n\t"
+ "fst.p %0, (24*8), fp48\n\t"
+ "fst.p %0, (25*8), fp50\n\t"
+ "fst.p %0, (26*8), fp52\n\t"
+ "fst.p %0, (27*8), fp54\n\t"
+ "fst.p %0, (28*8), fp56\n\t"
+ "fst.p %0, (29*8), fp58\n\t"
+ "fst.p %0, (30*8), fp60\n\t"
+ "fst.p %0, (31*8), fp62\n\t"
+
+ "fgetscr fr63\n\t"
+ "fst.s %0, (32*8), fr63\n\t"
+ : /* no output */
+ : "r" (fpregs)
+ : "memory");
+}
+
+
+static inline void
+fpload(struct sh_fpu_hard_struct *fpregs)
+{
+ asm volatile("fld.p %0, (0*8), fp0\n\t"
+ "fld.p %0, (1*8), fp2\n\t"
+ "fld.p %0, (2*8), fp4\n\t"
+ "fld.p %0, (3*8), fp6\n\t"
+ "fld.p %0, (4*8), fp8\n\t"
+ "fld.p %0, (5*8), fp10\n\t"
+ "fld.p %0, (6*8), fp12\n\t"
+ "fld.p %0, (7*8), fp14\n\t"
+ "fld.p %0, (8*8), fp16\n\t"
+ "fld.p %0, (9*8), fp18\n\t"
+ "fld.p %0, (10*8), fp20\n\t"
+ "fld.p %0, (11*8), fp22\n\t"
+ "fld.p %0, (12*8), fp24\n\t"
+ "fld.p %0, (13*8), fp26\n\t"
+ "fld.p %0, (14*8), fp28\n\t"
+ "fld.p %0, (15*8), fp30\n\t"
+ "fld.p %0, (16*8), fp32\n\t"
+ "fld.p %0, (17*8), fp34\n\t"
+ "fld.p %0, (18*8), fp36\n\t"
+ "fld.p %0, (19*8), fp38\n\t"
+ "fld.p %0, (20*8), fp40\n\t"
+ "fld.p %0, (21*8), fp42\n\t"
+ "fld.p %0, (22*8), fp44\n\t"
+ "fld.p %0, (23*8), fp46\n\t"
+ "fld.p %0, (24*8), fp48\n\t"
+ "fld.p %0, (25*8), fp50\n\t"
+ "fld.p %0, (26*8), fp52\n\t"
+ "fld.p %0, (27*8), fp54\n\t"
+ "fld.p %0, (28*8), fp56\n\t"
+ "fld.p %0, (29*8), fp58\n\t"
+ "fld.p %0, (30*8), fp60\n\t"
+
+ "fld.s %0, (32*8), fr63\n\t"
+ "fputscr fr63\n\t"
+
+ "fld.p %0, (31*8), fp62\n\t"
+ : /* no output */
+ : "r" (fpregs) );
+}
+
+void fpinit(struct sh_fpu_hard_struct *fpregs)
+{
+ *fpregs = init_fpuregs.hard;
+}
+
+asmlinkage void
+do_fpu_error(unsigned long ex, struct pt_regs *regs)
+{
+ struct task_struct *tsk = current;
+
+ regs->pc += 4;
+
+ tsk->thread.trap_no = 11;
+ tsk->thread.error_code = 0;
+ force_sig(SIGFPE, tsk);
+}
+
+
+asmlinkage void
+do_fpu_state_restore(unsigned long ex, struct pt_regs *regs)
+{
+ void die(const char *str, struct pt_regs *regs, long err);
+
+ if (! user_mode(regs))
+ die("FPU used in kernel", regs, ex);
+
+ regs->sr &= ~SR_FD;
+
+ if (last_task_used_math == current)
+ return;
+
+ grab_fpu();
+ if (last_task_used_math != NULL) {
+		/* Another process's FPU state: save it away */
+ fpsave(&last_task_used_math->thread.fpu.hard);
+ }
+ last_task_used_math = current;
+ if (current->used_math) {
+		fpload(&current->thread.fpu.hard);
+ } else {
+ /* First time FPU user. */
+ fpload(&init_fpuregs.hard);
+ current->used_math = 1;
+ }
+ release_fpu();
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/head.S
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ *
+ * benedict.gaster@superh.com: 2nd May 2002
+ * Moved definition of empty_zero_page to its own section allowing
+ * it to be placed at an absolute address known at load time.
+ *
+ * lethal@linux-sh.org: 9th May 2003
+ * Kill off GLOBAL_NAME() usage.
+ *
+ * lethal@linux-sh.org: 8th May 2004
+ * Add early SCIF console DTLB mapping.
+ */
+
+#include <linux/config.h>
+
+#include <asm/page.h>
+#include <asm/mmu_context.h>
+#include <asm/cache.h>
+#include <asm/tlb.h>
+#include <asm/processor.h>
+#include <asm/registers.h>
+#include <asm/thread_info.h>
+
+/*
+ * MMU defines: TLB boundaries.
+ */
+
+#define MMUIR_FIRST ITLB_FIXED
+#define MMUIR_END ITLB_LAST_VAR_UNRESTRICTED+TLB_STEP
+#define MMUIR_STEP TLB_STEP
+
+#define MMUDR_FIRST DTLB_FIXED
+#define MMUDR_END DTLB_LAST_VAR_UNRESTRICTED+TLB_STEP
+#define MMUDR_STEP TLB_STEP
+
+/* Safety check : CONFIG_CACHED_MEMORY_OFFSET has to be a multiple of 512Mb */
+#if (CONFIG_CACHED_MEMORY_OFFSET & ((1UL<<29)-1))
+#error "CONFIG_CACHED_MEMORY_OFFSET must be a multiple of 512Mb"
+#endif
+
+/*
+ * MMU defines: Fixed TLBs.
+ */
+/* Deal safely with the case where the base of RAM is not 512Mb aligned */
+
+#define ALIGN_512M_MASK (0xffffffffe0000000)
+#define ALIGNED_EFFECTIVE ((CONFIG_CACHED_MEMORY_OFFSET + CONFIG_MEMORY_START) & ALIGN_512M_MASK)
+#define ALIGNED_PHYSICAL (CONFIG_MEMORY_START & ALIGN_512M_MASK)
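+
+/* Worked example (illustrative, hypothetical values): with
+   CONFIG_MEMORY_START = 0x8c000000 the 512Mb mask keeps only bits 29
+   and up, so ALIGNED_PHYSICAL = 0x8c000000 & 0xffffffffe0000000
+   = 0x80000000; the fixed TLB entries below then cover the RAM base
+   even when it is not itself 512Mb aligned. */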
+
+#define MMUIR_TEXT_H (0x0000000000000003 | ALIGNED_EFFECTIVE)
+ /* Enabled, Shared, ASID 0, Eff. Add. 0xA0000000 */
+
+#define MMUIR_TEXT_L (0x000000000000009a | ALIGNED_PHYSICAL)
+ /* 512 Mb, Cacheable, Write-back, execute, Not User, Ph. Add. */
+
+#define MMUDR_CACHED_H	(0x0000000000000003 | ALIGNED_EFFECTIVE)
+ /* Enabled, Shared, ASID 0, Eff. Add. 0xA0000000 */
+#define MMUDR_CACHED_L	(0x000000000000015a | ALIGNED_PHYSICAL)
+ /* 512 Mb, Cacheable, Write-back, read/write, Not User, Ph. Add. */
+
+#ifdef CONFIG_ICACHE_DISABLED
+#define ICCR0_INIT_VAL ICCR0_OFF /* ICACHE off */
+#else
+#define ICCR0_INIT_VAL ICCR0_ON | ICCR0_ICI /* ICE + ICI */
+#endif
+#define ICCR1_INIT_VAL ICCR1_NOLOCK /* No locking */
+
+#if defined (CONFIG_DCACHE_DISABLED)
+#define OCCR0_INIT_VAL OCCR0_OFF /* D-cache: off */
+#elif defined (CONFIG_DCACHE_WRITE_THROUGH)
+#define OCCR0_INIT_VAL OCCR0_ON | OCCR0_OCI | OCCR0_WT /* D-cache: on, */
+ /* WT, invalidate */
+#elif defined (CONFIG_DCACHE_WRITE_BACK)
+#define OCCR0_INIT_VAL OCCR0_ON | OCCR0_OCI | OCCR0_WB /* D-cache: on, */
+ /* WB, invalidate */
+#else
+#error preprocessor flag CONFIG_DCACHE_... not recognized!
+#endif
+
+#define OCCR1_INIT_VAL OCCR1_NOLOCK /* No locking */
+
+ .section .empty_zero_page, "aw"
+ .global empty_zero_page
+
+empty_zero_page:
+ .long 1 /* MOUNT_ROOT_RDONLY */
+ .long 0 /* RAMDISK_FLAGS */
+ .long 0x0200 /* ORIG_ROOT_DEV */
+ .long 1 /* LOADER_TYPE */
+ .long 0x00360000 /* INITRD_START */
+ .long 0x000a0000 /* INITRD_SIZE */
+ .long 0
+
+ .text
+ .balign 4096,0,4096
+
+ .section .data, "aw"
+ .balign PAGE_SIZE
+
+ .global swapper_pg_dir
+swapper_pg_dir:
+ .space PAGE_SIZE, 0
+
+ .global empty_bad_page
+empty_bad_page:
+ .space PAGE_SIZE, 0
+
+ .global empty_bad_pte_table
+empty_bad_pte_table:
+ .space PAGE_SIZE, 0
+
+ .global fpu_in_use
+fpu_in_use: .quad 0
+
+
+ .section .text, "ax"
+ .balign L1_CACHE_BYTES
+/*
+ * Condition at the entry of __stext:
+ * . Reset state:
+ * . SR.FD = 1 (FPU disabled)
+ * . SR.BL = 1 (Exceptions disabled)
+ * . SR.MD = 1 (Privileged Mode)
+ * . SR.MMU = 0 (MMU Disabled)
+ * . SR.CD = 0 (CTC User Visible)
+ * . SR.IMASK = Undefined (Interrupt Mask)
+ *
+ * Operations supposed to be performed by __stext:
+ * . prevent speculative fetch onto device memory while MMU is off
+ * . reflect the SH5 ABI as much as possible (r15, r26, r27, r18)
+ * . first, save CPU state and set it to something harmless
+ * . any CPU detection and/or endianness settings (?)
+ * . initialize EMI/LMI (but not TMU/RTC/INTC/SCIF): TBD
+ * . set initial TLB entries for cached and uncached regions
+ * (no fine granularity paging)
+ * . set initial cache state
+ * . enable MMU and caches
+ * . set CPU to a consistent state
+ * . registers (including stack pointer and current/KCR0)
+ * . NOT expecting to set Exception handling nor VBR/RESVEC/DCR
+ * at this stage. This is all left to later Linux initialization steps.
+ * . initialize FPU
+ * . clear BSS
+ * . jump into start_kernel()
+ * . be prepared for the hopeless case that start_kernel() returns.
+ *
+ */
+ .global _stext
+_stext:
+ /*
+ * Prevent speculative fetch on device memory due to
+ * uninitialized target registers.
+ */
+ ptabs/u ZERO, tr0
+ ptabs/u ZERO, tr1
+ ptabs/u ZERO, tr2
+ ptabs/u ZERO, tr3
+ ptabs/u ZERO, tr4
+ ptabs/u ZERO, tr5
+ ptabs/u ZERO, tr6
+ ptabs/u ZERO, tr7
+ synci
+
+ /*
+ * Read/Set CPU state. After this block:
+ * r29 = Initial SR
+ */
+ getcon SR, r29
+ movi SR_HARMLESS, r20
+ putcon r20, SR
+
+ /*
+ * Initialize EMI/LMI. To Be Done.
+ */
+
+ /*
+ * CPU detection and/or endianness settings (?). To Be Done.
+ * Pure PIC code here, please ! Just save state into r30.
+ * After this block:
+ * r30 = CPU type/Platform Endianness
+ */
+
+ /*
+ * Set initial TLB entries for cached and uncached regions.
+ * Note: PTA/BLINK is PIC code, PTABS/BLINK isn't !
+ */
+ /* Clear ITLBs */
+ pta clear_ITLB, tr1
+ movi MMUIR_FIRST, r21
+ movi MMUIR_END, r22
+clear_ITLB:
+ putcfg r21, 0, ZERO /* Clear MMUIR[n].PTEH.V */
+ addi r21, MMUIR_STEP, r21
+ bne r21, r22, tr1
+
+ /* Clear DTLBs */
+ pta clear_DTLB, tr1
+ movi MMUDR_FIRST, r21
+ movi MMUDR_END, r22
+clear_DTLB:
+ putcfg r21, 0, ZERO /* Clear MMUDR[n].PTEH.V */
+ addi r21, MMUDR_STEP, r21
+ bne r21, r22, tr1
+
+ /* Map one big (512Mb) page for ITLB */
+ movi MMUIR_FIRST, r21
+ movi MMUIR_TEXT_L, r22 /* PTEL first */
+ add.l r22, r63, r22 /* Sign extend */
+ putcfg r21, 1, r22 /* Set MMUIR[0].PTEL */
+ movi MMUIR_TEXT_H, r22 /* PTEH last */
+ add.l r22, r63, r22 /* Sign extend */
+ putcfg r21, 0, r22 /* Set MMUIR[0].PTEH */
+
+ /* Map one big CACHED (512Mb) page for DTLB */
+ movi MMUDR_FIRST, r21
+ movi MMUDR_CACHED_L, r22 /* PTEL first */
+ add.l r22, r63, r22 /* Sign extend */
+ putcfg r21, 1, r22 /* Set MMUDR[0].PTEL */
+ movi MMUDR_CACHED_H, r22 /* PTEH last */
+ add.l r22, r63, r22 /* Sign extend */
+ putcfg r21, 0, r22 /* Set MMUDR[0].PTEH */
+
+#ifdef CONFIG_EARLY_PRINTK
+ /*
+ * Setup a DTLB translation for SCIF phys.
+ */
+ addi r21, MMUDR_STEP, r21
+ movi 0x0a03, r22 /* SCIF phys */
+ shori 0x0148, r22
+ putcfg r21, 1, r22 /* PTEL first */
+ movi 0xfa03, r22 /* 0xfa030000, fixed SCIF virt */
+ shori 0x0003, r22
+ putcfg r21, 0, r22 /* PTEH last */
+#endif
+
+ /*
+ * Set cache behaviours.
+ */
+ /* ICache */
+ movi ICCR_BASE, r21
+ movi ICCR0_INIT_VAL, r22
+ movi ICCR1_INIT_VAL, r23
+ putcfg r21, ICCR_REG0, r22
+ putcfg r21, ICCR_REG1, r23
+
+ /* OCache */
+ movi OCCR_BASE, r21
+ movi OCCR0_INIT_VAL, r22
+ movi OCCR1_INIT_VAL, r23
+ putcfg r21, OCCR_REG0, r22
+ putcfg r21, OCCR_REG1, r23
+
+
+ /*
+ * Enable Caches and MMU. Do the first non-PIC jump.
+ * Now head.S global variables, constants and externs
+ * can be used.
+ */
+ getcon SR, r21
+ movi SR_ENABLE_MMU, r22
+ or r21, r22, r21
+ putcon r21, SSR
+ movi hyperspace, r22
+ ori r22, 1, r22 /* Make it SHmedia, not required but..*/
+ putcon r22, SPC
+ synco
+ rte /* And now go into the hyperspace ... */
+hyperspace: /* ... that's the next instruction ! */
+
+ /*
+ * Set CPU to a consistent state.
+ * r31 = FPU support flag
+ * tr0/tr7 in use. Others give a chance to loop somewhere safe
+ */
+ movi start_kernel, r32
+ ori r32, 1, r32
+
+ ptabs r32, tr0 /* r32 = _start_kernel address */
+ pta/u hopeless, tr1
+ pta/u hopeless, tr2
+ pta/u hopeless, tr3
+ pta/u hopeless, tr4
+ pta/u hopeless, tr5
+ pta/u hopeless, tr6
+ pta/u hopeless, tr7
+ gettr tr1, r28 /* r28 = hopeless address */
+
+ /* Set initial stack pointer */
+ movi init_thread_union, SP
+ putcon SP, KCR0 /* Set current to init_task */
+ movi THREAD_SIZE, r22 /* Point to the end */
+ add SP, r22, SP
+
+ /*
+ * Initialize FPU.
+ * Keep FPU flag in r31. After this block:
+ * r31 = FPU flag
+ */
+ movi fpu_in_use, r31 /* Temporary */
+
+#ifndef CONFIG_NOFPU_SUPPORT
+ getcon SR, r21
+ movi SR_ENABLE_FPU, r22
+ and r21, r22, r22
+ putcon r22, SR /* Try to enable */
+ getcon SR, r22
+ xor r21, r22, r21
+ shlri r21, 15, r21 /* Supposedly 0/1 */
+ st.q r31, 0 , r21 /* Set fpu_in_use */
+#else
+ movi 0, r21
+ st.q r31, 0 , r21 /* Set fpu_in_use */
+#endif
+ or r21, ZERO, r31 /* Set FPU flag at last */
+
+#ifndef CONFIG_SH_NO_BSS_INIT
+/* Don't clear BSS if running on slow platforms such as an RTL simulation,
+ remote memory via SHdebug link, etc. For these the memory can be guaranteed
+ to be all zero on boot anyway. */
+ /*
+ * Clear bss
+ */
+ pta clear_quad, tr1
+ movi __bss_start, r22
+ movi _end, r23
+clear_quad:
+ st.q r22, 0, ZERO
+ addi r22, 8, r22
+ bne r22, r23, tr1 /* Both quad aligned, see vmlinux.lds.S */
+#endif
+ pta/u hopeless, tr1
+
+ /* Say bye to head.S but be prepared to wrongly get back ... */
+ blink tr0, LINK
+
+ /* If we ever get back here through LINK/tr1-tr7 */
+ pta/u hopeless, tr7
+
+hopeless:
+ /*
+ * Something's badly wrong here. Loop endlessly,
+ * there's nothing more we can do about it.
+ *
+	 * Note on hopeless: it can be jumped into either before
+	 * or after jumping into hyperspace. The only requirement
+	 * is that it is PIC called (PTA) before, and either way
+	 * (PTA/PTABS) after. From the virtual-to-physical mapping,
+	 * a simulator/emulator can easily tell where we came from
+	 * just by looking at the hopeless (PC) address.
+ *
+ * For debugging purposes:
+ * (r28) hopeless/loop address
+ * (r29) Original SR
+ * (r30) CPU type/Platform endianness
+ * (r31) FPU Support
+ * (r32) _start_kernel address
+ */
+ blink tr7, ZERO
+
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/init_task.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+#include <linux/rwsem.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/init_task.h>
+#include <linux/mqueue.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+static struct fs_struct init_fs = INIT_FS;
+static struct files_struct init_files = INIT_FILES;
+static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
+static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
+struct mm_struct init_mm = INIT_MM(init_mm);
+
+struct pt_regs fake_swapper_regs;
+
+/*
+ * Initial thread structure.
+ *
+ * We need to make sure that this is THREAD_SIZE-byte aligned due
+ * to the way process stacks are handled. This is done by having a
+ * special "init_task" linker map entry..
+ */
+union thread_union init_thread_union
+ __attribute__((__section__(".data.init_task"))) =
+ { INIT_THREAD_INFO(init_task) };
+
+/*
+ * Initial task structure.
+ *
+ * All other task structs will be allocated on slabs in fork.c
+ */
+struct task_struct init_task = INIT_TASK(init_task);
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/irq.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * IRQs are in fact implemented a bit like signal handlers for the kernel.
+ * Naturally it's not a 1:1 relation, but there are similarities.
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/kernel_stat.h>
+#include <linux/signal.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/bitops.h>
+#include <asm/smp.h>
+#include <asm/pgalloc.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <linux/irq.h>
+
+/*
+ * Controller mappings for all interrupt sources:
+ */
+irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = {
+ [0 ... NR_IRQS-1] = {
+ .handler = &no_irq_type,
+ .lock = SPIN_LOCK_UNLOCKED
+ }
+};
+
+
+/*
+ * Special irq handlers.
+ */
+
+irqreturn_t no_action(int cpl, void *dev_id, struct pt_regs *regs)
+{
+ return IRQ_NONE;
+}
+
+/*
+ * Generic no controller code
+ */
+
+static void enable_none(unsigned int irq) { }
+static unsigned int startup_none(unsigned int irq) { return 0; }
+static void disable_none(unsigned int irq) { }
+static void ack_none(unsigned int irq)
+{
+/*
+ * 'what should we do if we get a hw irq event on an illegal vector'.
+ * each architecture has to answer this themselves, it doesnt deserve
+ * a generic callback i think.
+ */
+ printk("unexpected IRQ trap at irq %02x\n", irq);
+}
+
+/* startup is the same as "enable", shutdown is same as "disable" */
+#define shutdown_none disable_none
+#define end_none enable_none
+
+struct hw_interrupt_type no_irq_type = {
+ "none",
+ startup_none,
+ shutdown_none,
+ enable_none,
+ disable_none,
+ ack_none,
+ end_none
+};
+
+#if defined(CONFIG_PROC_FS)
+int show_interrupts(struct seq_file *p, void *v)
+{
+ int i = *(loff_t *) v, j;
+ struct irqaction * action;
+ unsigned long flags;
+
+ if (i == 0) {
+ seq_puts(p, " ");
+ for (j=0; j<NR_CPUS; j++)
+ if (cpu_online(j))
+ seq_printf(p, "CPU%d ",j);
+ seq_putc(p, '\n');
+ }
+
+ if (i < NR_IRQS) {
+ spin_lock_irqsave(&irq_desc[i].lock, flags);
+ action = irq_desc[i].action;
+ if (!action)
+ goto unlock;
+ seq_printf(p, "%3d: ",i);
+ seq_printf(p, "%10u ", kstat_irqs(i));
+ seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %s", action->name);
+
+ for (action=action->next; action; action = action->next)
+ seq_printf(p, ", %s", action->name);
+ seq_putc(p, '\n');
+unlock:
+ spin_unlock_irqrestore(&irq_desc[i].lock, flags);
+ }
+ return 0;
+}
+#endif
+
+/*
+ * do_NMI handles all Non-Maskable Interrupts.
+ */
+asmlinkage void do_NMI(unsigned long vector_num, struct pt_regs * regs)
+{
+ if (regs->sr & 0x40000000)
+ printk("unexpected NMI trap in system mode\n");
+ else
+ printk("unexpected NMI trap in user mode\n");
+
+ /* No statistics */
+}
+
+/*
+ * This should really return information about whether
+ * we should do bottom half handling etc. Right now we
+ * end up _always_ checking the bottom half, which is a
+ * waste of time and is not what some drivers would
+ * prefer.
+ */
+int handle_IRQ_event(unsigned int irq, struct pt_regs * regs, struct irqaction * action)
+{
+ int status;
+
+ status = 1; /* Force the "do bottom halves" bit */
+
+ if (!(action->flags & SA_INTERRUPT))
+ local_irq_enable();
+
+ do {
+ status |= action->flags;
+ action->handler(irq, action->dev_id, regs);
+ action = action->next;
+ } while (action);
+ if (status & SA_SAMPLE_RANDOM)
+ add_interrupt_randomness(irq);
+
+ local_irq_disable();
+
+ return status;
+}
+
+/*
+ * Generic enable/disable code: this just calls
+ * down into the PIC-specific version for the actual
+ * hardware disable after having gotten the irq
+ * controller lock.
+ */
+
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables of an interrupt
+ * line nest. Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
+ */
+void disable_irq_nosync(unsigned int irq)
+{
+ irq_desc_t *desc = irq_desc + irq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&desc->lock, flags);
+ if (!desc->depth++) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->disable(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables of an interrupt
+ * line nest; that is, for two disables you need two enables. This
+ * function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
+ */
+void disable_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ synchronize_irq(irq);
+}
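+
+/*
+ * Example (illustrative): because disables nest, the sequence
+ *
+ *	disable_irq(irq);
+ *	disable_irq(irq);
+ *	enable_irq(irq);	- still disabled (depth 2 -> 1)
+ *	enable_irq(irq);	- re-enabled   (depth 1 -> 0)
+ *
+ * only re-enables the line on the second enable_irq() call.
+ */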
+
+/**
+ * enable_irq - enable interrupt handling on an irq
+ * @irq: Interrupt to enable
+ *
+ * Re-enables the processing of interrupts on this IRQ line
+ * providing no disable_irq calls are now in effect.
+ *
+ * This function may be called from IRQ context.
+ */
+void enable_irq(unsigned int irq)
+{
+ irq_desc_t *desc = irq_desc + irq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&desc->lock, flags);
+ switch (desc->depth) {
+ case 1: {
+ unsigned int status = desc->status & ~IRQ_DISABLED;
+ desc->status = status;
+ if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
+ desc->status = status | IRQ_REPLAY;
+ hw_resend_irq(desc->handler,irq);
+ }
+ desc->handler->enable(irq);
+ /* fall-through */
+ }
+ default:
+ desc->depth--;
+ break;
+ case 0:
+ printk("enable_irq() unbalanced from %p\n",
+ __builtin_return_address(0));
+ }
+ spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+/*
+ * do_IRQ handles all normal device IRQ's.
+ */
+asmlinkage int do_IRQ(unsigned long vector_num, struct pt_regs * regs)
+{
+ /*
+ * We ack quickly, we don't want the irq controller
+ * thinking we're snobs just because some other CPU has
+ * disabled global interrupts (we have already done the
+ * INT_ACK cycles, it's too late to try to pretend to the
+ * controller that we aren't taking the interrupt).
+ *
+ * 0 return value means that this irq is already being
+ * handled by some other CPU. (or is disabled)
+ */
+ int irq;
+ int cpu = smp_processor_id();
+ irq_desc_t *desc = NULL;
+ struct irqaction * action;
+ unsigned int status;
+
+ irq_enter();
+
+#ifdef CONFIG_PREEMPT
+ /*
+ * At this point we're now about to actually call handlers,
+ * and interrupts might get reenabled during them... bump
+ * preempt_count to prevent any preemption while the handler
+ * called here is pending...
+ */
+ preempt_disable();
+#endif
+
+ irq = irq_demux(vector_num);
+
+ /*
+ * Should never happen, if it does check
+ * vectorN_to_IRQ[] against trap_jtable[].
+ */
+ if (irq == -1) {
+ printk("unexpected IRQ trap at vector %03lx\n", vector_num);
+ goto out;
+ }
+
+ desc = irq_desc + irq;
+
+ kstat_cpu(cpu).irqs[irq]++;
+ spin_lock(&desc->lock);
+ desc->handler->ack(irq);
+ /*
+ REPLAY is when Linux resends an IRQ that was dropped earlier
+ WAITING is used by probe to mark irqs that are being tested
+ */
+ status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING | IRQ_INPROGRESS);
+ status |= IRQ_PENDING; /* we _want_ to handle it */
+
+ /*
+ * If the IRQ is disabled for whatever reason, we cannot
+ * use the action we have.
+ */
+ action = NULL;
+ if (!(status & (IRQ_DISABLED | IRQ_INPROGRESS))) {
+ action = desc->action;
+ status &= ~IRQ_PENDING; /* we commit to handling */
+ status |= IRQ_INPROGRESS; /* we are handling it */
+ }
+ desc->status = status;
+
+ /*
+ * If there is no IRQ handler or it was disabled, exit early.
+	 * Since we set PENDING, if another processor is handling
+	 * a different instance of this same irq, the other processor
+	 * will take care of it.
+ */
+ if (!action)
+ goto out;
+
+ /*
+ * Edge triggered interrupts need to remember
+ * pending events.
+ * This applies to any hw interrupts that allow a second
+ * instance of the same irq to arrive while we are in do_IRQ
+ * or in the handler. But the code here only handles the _second_
+ * instance of the irq, not the third or fourth. So it is mostly
+ * useful for irq hardware that does not mask cleanly in an
+ * SMP environment.
+ */
+ for (;;) {
+ spin_unlock(&desc->lock);
+ handle_IRQ_event(irq, regs, action);
+ spin_lock(&desc->lock);
+
+ if (!(desc->status & IRQ_PENDING))
+ break;
+ desc->status &= ~IRQ_PENDING;
+ }
+ desc->status &= ~IRQ_INPROGRESS;
+out:
+ /*
+ * The ->end() handler has to deal with interrupts which got
+ * disabled while the handler was running.
+ */
+ if (desc) {
+ desc->handler->end(irq);
+ spin_unlock(&desc->lock);
+ }
+
+ irq_exit();
+
+#ifdef CONFIG_PREEMPT
+ /*
+ * We're done with the handlers, interrupts should be
+	 * currently disabled; decrement preempt_count now so
+	 * that preemption may be allowed once we return...
+ */
+ preempt_enable_no_resched();
+#endif
+
+ return 1;
+}
+
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+int request_irq(unsigned int irq,
+ irqreturn_t (*handler)(int, void *, struct pt_regs *),
+ unsigned long irqflags,
+ const char * devname,
+ void *dev_id)
+{
+ int retval;
+ struct irqaction * action;
+
+#if 1
+ /*
+ * Sanity-check: shared interrupts should REALLY pass in
+ * a real dev-ID, otherwise we'll have trouble later trying
+ * to figure out which interrupt is which (messes up the
+ * interrupt freeing logic etc).
+ */
+ if (irqflags & SA_SHIRQ) {
+ if (!dev_id)
+ printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n", devname, (&irq)[-1]);
+ }
+#endif
+
+ if (irq >= NR_IRQS)
+ return -EINVAL;
+ if (!handler)
+ return -EINVAL;
+
+ action = (struct irqaction *)
+ kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ if (!action)
+ return -ENOMEM;
+
+ action->handler = handler;
+ action->flags = irqflags;
+ cpus_clear(action->mask);
+ action->name = devname;
+ action->next = NULL;
+ action->dev_id = dev_id;
+
+ retval = setup_irq(irq, action);
+ if (retval)
+ kfree(action);
+ return retval;
+}
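+
+/*
+ * Usage sketch (illustrative; foo_interrupt/foo_dev are hypothetical):
+ *
+ *	static irqreturn_t foo_interrupt(int irq, void *dev_id,
+ *					 struct pt_regs *regs)
+ *	{
+ *		struct foo_dev *dev = dev_id;
+ *		... acknowledge and service the device ...
+ *		return IRQ_HANDLED;
+ *	}
+ *
+ *	err = request_irq(dev->irq, foo_interrupt, SA_SHIRQ,
+ *			  "foo", dev);
+ *	...
+ *	free_irq(dev->irq, dev);
+ *
+ * Passing dev as dev_id is what lets free_irq() find the right
+ * action on a shared line.
+ */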
+
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+void free_irq(unsigned int irq, void *dev_id)
+{
+ irq_desc_t *desc;
+ struct irqaction **p;
+ unsigned long flags;
+
+ if (irq >= NR_IRQS)
+ return;
+
+ desc = irq_desc + irq;
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ for (;;) {
+ struct irqaction * action = *p;
+ if (action) {
+ struct irqaction **pp = p;
+ p = &action->next;
+ if (action->dev_id != dev_id)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *pp = action->next;
+ if (!desc->action) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->shutdown(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+ kfree(action);
+ return;
+ }
+ printk("Trying to free free IRQ%d\n",irq);
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return;
+ }
+}
+
+/*
+ * IRQ autodetection code..
+ *
+ * This depends on the fact that any interrupt that
+ * comes in on to an unassigned handler will get stuck
+ * with "IRQ_WAITING" cleared and the interrupt
+ * disabled.
+ */
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+unsigned long probe_irq_on(void)
+{
+	int i;		/* must be signed: the loops below count down to -1 */
+ irq_desc_t *desc;
+ unsigned long val;
+ unsigned long delay;
+
+ /*
+ * something may have generated an irq long ago and we want to
+ * flush such a longstanding irq before considering it as spurious.
+ */
+ for (i = NR_IRQS-1; i >= 0; i--) {
+ desc = irq_desc + i;
+
+ spin_lock_irq(&desc->lock);
+ if (!irq_desc[i].action) {
+ irq_desc[i].handler->startup(i);
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ /* Wait for longstanding interrupts to trigger. */
+ for (delay = jiffies + HZ/50; time_after(delay, jiffies); )
+ /* about 20ms delay */ synchronize_irq();
+
+ /*
+ * enable any unassigned irqs
+ * (we must startup again here because if a longstanding irq
+ * happened in the previous stage, it may have masked itself)
+ */
+ for (i = NR_IRQS-1; i >= 0; i--) {
+		desc = irq_desc + i;
+
+ spin_lock_irq(&desc->lock);
+ if (!desc->action) {
+ desc->status |= IRQ_AUTODETECT | IRQ_WAITING;
+ if (desc->handler->startup(i))
+ desc->status |= IRQ_PENDING;
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ /*
+ * Wait for spurious interrupts to trigger
+ */
+ for (delay = jiffies + HZ/10; time_after(delay, jiffies); )
+ /* about 100ms delay */ synchronize_irq();
+
+ /*
+ * Now filter out any obviously spurious interrupts
+ */
+ val = 0;
+ for (i = 0; i < NR_IRQS; i++) {
+ irq_desc_t *desc = irq_desc + i;
+ unsigned int status;
+
+ spin_lock_irq(&desc->lock);
+ status = desc->status;
+
+ if (status & IRQ_AUTODETECT) {
+ /* It triggered already - consider it spurious. */
+ if (!(status & IRQ_WAITING)) {
+ desc->status = status & ~IRQ_AUTODETECT;
+ desc->handler->shutdown(i);
+ } else
+ if (i < 32)
+ val |= 1 << i;
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ return val;
+}
+
+/*
+ * Return the one interrupt that triggered (this can
+ * handle any interrupt source).
+ */
+
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @val: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then the negative of the first candidate is returned to
+ * indicate there is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
+ */
+int probe_irq_off(unsigned long val)
+{
+ int i, irq_found, nr_irqs;
+
+ nr_irqs = 0;
+ irq_found = 0;
+ for (i=0; i<NR_IRQS; i++) {
+ irq_desc_t *desc = irq_desc + i;
+ unsigned int status;
+
+ spin_lock_irq(&desc->lock);
+ status = desc->status;
+ if (!(status & IRQ_AUTODETECT))
+ continue;
+
+ if (status & IRQ_AUTODETECT) {
+ if (!(status & IRQ_WAITING)) {
+ if (!nr_irqs)
+ irq_found = i;
+ nr_irqs++;
+ }
+
+ desc->status = status & ~IRQ_AUTODETECT;
+ desc->handler->shutdown(i);
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ if (nr_irqs > 1)
+ irq_found = -irq_found;
+ return irq_found;
+}
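+
+/*
+ * Illustrative only: the usual driver-side autoprobe sequence built on
+ * the two helpers above (a sketch, not code from this port; the device
+ * trigger helper is hypothetical). A real driver arms its hardware to
+ * raise exactly one interrupt between the two calls:
+ *
+ *	unsigned long mask = probe_irq_on();
+ *	mydev_trigger_interrupt(dev);	(hypothetical helper)
+ *	irq = probe_irq_off(mask);
+ *	if (irq <= 0)
+ *		printk("mydev: IRQ autoprobe failed (%d)\n", irq);
+ */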
+
+int setup_irq(unsigned int irq, struct irqaction * new)
+{
+ int shared = 0;
+ unsigned long flags;
+ struct irqaction *old, **p;
+ irq_desc_t *desc = irq_desc + irq;
+
+ /*
+ * Some drivers like serial.c use request_irq() heavily,
+ * so we have to be careful not to interfere with a
+ * running system.
+ */
+ if (new->flags & SA_SAMPLE_RANDOM) {
+ /*
+ * This function might sleep, we want to call it first,
+ * outside of the atomic block.
+ * Yes, this might clear the entropy pool if the wrong
+ * driver is attempted to be loaded, without actually
+ * installing a new handler, but is this really a problem,
+ * only the sysadmin is able to do this.
+ */
+ rand_initialize_irq(irq);
+ }
+
+ /*
+ * The following block of code has to be executed atomically
+ */
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ if ((old = *p) != NULL) {
+ /* Can't share interrupts unless both agree to */
+ if (!(old->flags & new->flags & SA_SHIRQ)) {
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return -EBUSY;
+ }
+
+ /* add new interrupt at end of irq queue */
+ do {
+ p = &old->next;
+ old = *p;
+ } while (old);
+ shared = 1;
+ }
+
+ *p = new;
+
+ if (!shared) {
+ desc->depth = 0;
+ desc->status &= ~IRQ_DISABLED;
+ desc->handler->startup(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+
+ /*
+ * No PROC FS support for interrupts.
+ * For improvements in this area please check
+ * the i386 branch.
+ */
+ return 0;
+}
+
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_SYSCTL)
+
+void init_irq_proc(void)
+{
+ /*
+ * No PROC FS support for interrupts.
+ * For improvements in this area please check
+ * the i386 branch.
+ */
+}
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/irq_intc.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Interrupt Controller support for SH5 INTC.
+ * Per-interrupt selective. IRLM=0 (fixed priority) is not
+ * supported, as it is useless without a cascaded interrupt
+ * controller.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+
+#include <asm/hardware.h>
+#include <asm/platform.h>
+#include <asm/bitops.h> /* this includes also <asm/registers.h */
+ /* which is required to remap register */
+ /* names used into __asm__ blocks... */
+#include <asm/page.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+/*
+ * Maybe the generic Peripheral block could move to a more
+ * generic include file. INTC Block will be defined here
+ * and only here to make INTC self-contained in a single
+ * file.
+ */
+#define INTC_BLOCK_OFFSET 0x01000000
+
+/* Base */
+#define INTC_BASE PHYS_PERIPHERAL_BLOCK + \
+ INTC_BLOCK_OFFSET
+
+/* Address */
+#define INTC_ICR_SET (intc_virt + 0x0)
+#define INTC_ICR_CLEAR (intc_virt + 0x8)
+#define INTC_INTPRI_0 (intc_virt + 0x10)
+#define INTC_INTSRC_0 (intc_virt + 0x50)
+#define INTC_INTSRC_1 (intc_virt + 0x58)
+#define INTC_INTREQ_0 (intc_virt + 0x60)
+#define INTC_INTREQ_1 (intc_virt + 0x68)
+#define INTC_INTENB_0 (intc_virt + 0x70)
+#define INTC_INTENB_1 (intc_virt + 0x78)
+#define INTC_INTDSB_0 (intc_virt + 0x80)
+#define INTC_INTDSB_1 (intc_virt + 0x88)
+
+#define INTC_ICR_IRLM 0x1
+#define INTC_INTPRI_PREGS 8 /* 8 Priority Registers */
+#define INTC_INTPRI_PPREG 8 /* 8 Priorities per Register */
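+
+/*
+ * Packing sketch implied by the setup loop in init_IRQ() below
+ * (assumption: the starting IRQ is register-aligned): each INTPRI
+ * register holds 8 four-bit priorities, so the priority of IRQ i
+ * occupies bits [4*(i%8)+3 : 4*(i%8)] of the register at
+ * INTC_INTPRI_0 + 8*(i/8).
+ */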
+
+
+/*
+ * Mapper between the vector ordinal and the IRQ number
+ * passed to kernel/device drivers.
+ */
+int intc_evt_to_irq[(0xE20/0x20)+1] = {
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x000 - 0x0E0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x100 - 0x1E0 */
+ 0, 0, 0, 0, 0, 1, 0, 0, /* 0x200 - 0x2E0 */
+ 2, 0, 0, 3, 0, 0, 0, -1, /* 0x300 - 0x3E0 */
+ 32, 33, 34, 35, 36, 37, 38, -1, /* 0x400 - 0x4E0 */
+ -1, -1, -1, 63, -1, -1, -1, -1, /* 0x500 - 0x5E0 */
+ -1, -1, 18, 19, 20, 21, 22, -1, /* 0x600 - 0x6E0 */
+ 39, 40, 41, 42, -1, -1, -1, -1, /* 0x700 - 0x7E0 */
+ 4, 5, 6, 7, -1, -1, -1, -1, /* 0x800 - 0x8E0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x900 - 0x9E0 */
+ 12, 13, 14, 15, 16, 17, -1, -1, /* 0xA00 - 0xAE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xB00 - 0xBE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xC00 - 0xCE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xD00 - 0xDE0 */
+ -1, -1 /* 0xE00 - 0xE20 */
+};
+
+/*
+ * Opposite mapper.
+ */
+static int IRQ_to_vectorN[NR_INTC_IRQS] = {
+ 0x12, 0x15, 0x18, 0x1B, 0x40, 0x41, 0x42, 0x43, /* 0- 7 */
+ -1, -1, -1, -1, 0x50, 0x51, 0x52, 0x53, /* 8-15 */
+ 0x54, 0x55, 0x32, 0x33, 0x34, 0x35, 0x36, -1, /* 16-23 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 24-31 */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x38, /* 32-39 */
+ 0x39, 0x3A, 0x3B, -1, -1, -1, -1, -1, /* 40-47 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 48-55 */
+ -1, -1, -1, -1, -1, -1, -1, 0x2B, /* 56-63 */
+};
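+
+/*
+ * For any IRQ with a valid vector the two tables are mutual inverses:
+ * the event code is IRQ_to_vectorN[irq]*0x20, and indexing
+ * intc_evt_to_irq by (event/0x20) recovers the IRQ, i.e.
+ *
+ *	intc_evt_to_irq[IRQ_to_vectorN[irq]] == irq
+ *
+ * Spot check: IRQ 0 -> vector 0x12 -> event 0x240 -> slot 0x12 -> 0.
+ */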
+
+static unsigned long intc_virt;
+
+static unsigned int startup_intc_irq(unsigned int irq);
+static void shutdown_intc_irq(unsigned int irq);
+static void enable_intc_irq(unsigned int irq);
+static void disable_intc_irq(unsigned int irq);
+static void mask_and_ack_intc(unsigned int);
+static void end_intc_irq(unsigned int irq);
+
+static struct hw_interrupt_type intc_irq_type = {
+ "INTC",
+ startup_intc_irq,
+ shutdown_intc_irq,
+ enable_intc_irq,
+ disable_intc_irq,
+ mask_and_ack_intc,
+ end_intc_irq
+};
+
+static int irlm; /* IRL mode */
+
+static unsigned int startup_intc_irq(unsigned int irq)
+{
+ enable_intc_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void shutdown_intc_irq(unsigned int irq)
+{
+ disable_intc_irq(irq);
+}
+
+static void enable_intc_irq(unsigned int irq)
+{
+ unsigned long reg;
+ unsigned long bitmask;
+
+ if ((irq <= IRQ_IRL3) && (irlm == NO_PRIORITY))
+ printk("Trying to use straight IRL0-3 with an encoding platform.\n");
+
+ if (irq < 32) {
+ reg = INTC_INTENB_0;
+ bitmask = 1 << irq;
+ } else {
+ reg = INTC_INTENB_1;
+ bitmask = 1 << (irq - 32);
+ }
+
+ ctrl_outl(bitmask, reg);
+}
+
+static void disable_intc_irq(unsigned int irq)
+{
+ unsigned long reg;
+ unsigned long bitmask;
+
+ if (irq < 32) {
+ reg = INTC_INTDSB_0;
+ bitmask = 1 << irq;
+ } else {
+ reg = INTC_INTDSB_1;
+ bitmask = 1 << (irq - 32);
+ }
+
+ ctrl_outl(bitmask, reg);
+}
+
+static void mask_and_ack_intc(unsigned int irq)
+{
+ disable_intc_irq(irq);
+}
+
+static void end_intc_irq(unsigned int irq)
+{
+ enable_intc_irq(irq);
+}
+
+/* For future use, if we ever support IRLM=0 */
+void make_intc_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ irq_desc[irq].handler = &intc_irq_type;
+ disable_intc_irq(irq);
+}
+
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_SYSCTL)
+int intc_irq_describe(char* p, int irq)
+{
+ if (irq < NR_INTC_IRQS)
+ return sprintf(p, "(0x%3x)", IRQ_to_vectorN[irq]*0x20);
+ else
+ return 0;
+}
+#endif
+
+void __init init_IRQ(void)
+{
+ unsigned long long __dummy0, __dummy1=~0x00000000100000f0;
+ unsigned long reg;
+ unsigned long data;
+ int i;
+
+ intc_virt = onchip_remap(INTC_BASE, 1024, "INTC");
+ if (!intc_virt) {
+ panic("Unable to remap INTC\n");
+ }
+
+
+ /* Set default: per-line enable/disable, priority driven ack/eoi */
+ for (i = 0; i < NR_INTC_IRQS; i++) {
+ if (platform_int_priority[i] != NO_PRIORITY) {
+ irq_desc[i].handler = &intc_irq_type;
+ }
+ }
+
+
+ /* Disable all interrupts and set all priorities to 0 to avoid trouble */
+ ctrl_outl(-1, INTC_INTDSB_0);
+ ctrl_outl(-1, INTC_INTDSB_1);
+
+ for (reg = INTC_INTPRI_0, i = 0; i < INTC_INTPRI_PREGS; i++, reg += 8)
+ ctrl_outl( NO_PRIORITY, reg);
+
+
+ /* Set IRLM */
+ /* If all the priorities are set to 'no priority', then
+ * assume we are using encoded mode.
+ */
+ irlm = platform_int_priority[IRQ_IRL0] + platform_int_priority[IRQ_IRL1] + \
+ platform_int_priority[IRQ_IRL2] + platform_int_priority[IRQ_IRL3];
+
+ if (irlm == NO_PRIORITY) {
+ /* IRLM = 0 */
+ reg = INTC_ICR_CLEAR;
+ i = IRQ_INTA;
+ printk("Trying to use encoded IRL0-3. IRLs unsupported.\n");
+ } else {
+ /* IRLM = 1 */
+ reg = INTC_ICR_SET;
+ i = IRQ_IRL0;
+ }
+ ctrl_outl(INTC_ICR_IRLM, reg);
+
+ /* Set interrupt priorities according to platform description */
+ for (data = 0, reg = INTC_INTPRI_0; i < NR_INTC_IRQS; i++) {
+ data |= platform_int_priority[i] << ((i % INTC_INTPRI_PPREG) * 4);
+ if ((i % INTC_INTPRI_PPREG) == (INTC_INTPRI_PPREG - 1)) {
+ /* On the last of every 8 entries (i%8 == 7), write the Priority Register */
+ ctrl_outl(data, reg);
+ data = 0;
+ reg += 8;
+ }
+ }
+
+#ifdef CONFIG_SH_CAYMAN
+ {
+ extern void init_cayman_irq(void);
+
+ init_cayman_irq();
+ }
+#endif
+
+ /*
+ * And now let interrupts come in.
+ * sti() is not enough, we need to
+ * lower priority, too.
+ */
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "and %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy0)
+ : "r" (__dummy1));
+}
--- /dev/null
+/*
+ * arch/sh64/kernel/led.c
+ *
+ * Copyright (C) 2002 Stuart Menefy <stuart.menefy@st.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Flash the LEDs
+ */
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/sched.h>
+
+void mach_led(int pos, int val);
+
+/* acts like an actual heart beat -- ie thump-thump-pause... */
+void heartbeat(void)
+{
+ static unsigned int cnt = 0, period = 0, dist = 0;
+
+ if (cnt == 0 || cnt == dist) {
+ mach_led(-1, 1);
+ } else if (cnt == 7 || cnt == dist + 7) {
+ mach_led(-1, 0);
+ }
+
+ if (++cnt > period) {
+ cnt = 0;
+
+ /*
+ * The hyperbolic function below modifies the heartbeat period
+ * length in dependency of the current (5min) load. It goes
+ * through the points f(0)=126, f(1)=86, f(5)=51, f(inf)->30.
+ */
+ period = ((672 << FSHIFT) / (5 * avenrun[0] +
+ (7 << FSHIFT))) + 30;
+ dist = period / 4;
+ }
+}
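+
+/*
+ * Worked values for the period formula above (avenrun[] is
+ * FSHIFT-scaled fixed point, so a load of 1.0 is 1<<FSHIFT):
+ *
+ *	load 0.0 : 672/7  + 30 = 126 ticks
+ *	load 1.0 : 672/12 + 30 =  86 ticks
+ *	load 5.0 : 672/32 + 30 =  51 ticks
+ *	load oo  :            ->  30 ticks
+ */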
+
--- /dev/null
+/*
+ * Copyright (C) 2001 David J. Mckay (david.mckay@st.com)
+ * Copyright (C) 2003 Paul Mundt (lethal@linux-sh.org)
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Dynamic DMA mapping support.
+ */
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+void *consistent_alloc(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle)
+{
+ void *ret;
+ int gfp = GFP_ATOMIC;
+ void *vp;
+
+ if (hwdev == NULL || hwdev->dma_mask != 0xffffffff)
+ gfp |= GFP_DMA;
+
+ ret = (void *)__get_free_pages(gfp, get_order(size));
+ if (ret == NULL)
+ return NULL;
+
+ /* now call our friend ioremap_nocache to give us an uncached area */
+ vp = ioremap_nocache(virt_to_phys(ret), size);
+
+ if (vp != NULL) {
+ memset(vp, 0, size);
+ *dma_handle = virt_to_bus(ret);
+ dma_cache_wback_inv((unsigned long)ret, size);
+ }
+
+ return vp;
+}
+
+void consistent_free(struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle)
+{
+ void *alloc;
+
+ alloc = bus_to_virt((unsigned long)dma_handle);
+ free_pages((unsigned long)alloc, get_order(size));
+
+ iounmap(vaddr);
+}
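+
+/*
+ * Expected pairing, as a sketch (illustrative; 'pdev' and the buffer
+ * size are hypothetical). The driver keeps both the uncached CPU
+ * pointer and the bus handle, and passes the same size back when
+ * freeing:
+ *
+ *	dma_addr_t bus;
+ *	void *buf = consistent_alloc(pdev, 4096, &bus);
+ *	if (buf) {
+ *		... program the device with 'bus', touch 'buf' from the CPU ...
+ *		consistent_free(pdev, 4096, buf, bus);
+ *	}
+ */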
+
--- /dev/null
+/*
+ * Copyright (C) 2001 David J. Mckay (david.mckay@st.com)
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Support functions for the SH5 PCI hardware.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <asm/pci.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include "pci_sh5.h"
+
+static unsigned long pcicr_virt;
+unsigned long pciio_virt;
+
+static void __init pci_fixup_ide_bases(struct pci_dev *d)
+{
+ int i;
+
+ /*
+ * PCI IDE controllers use non-standard I/O port decoding, respect it.
+ */
+ if ((d->class >> 8) != PCI_CLASS_STORAGE_IDE)
+ return;
+ printk("PCI: IDE base address fixup for %s\n", d->slot_name);
+ for(i=0; i<4; i++) {
+ struct resource *r = &d->resource[i];
+ if ((r->start & ~0x80) == 0x374) {
+ r->start |= 2;
+ r->end = r->start;
+ }
+ }
+}
+
+/* Add future fixups here... */
+struct pci_fixup pcibios_fixups[] = {
+ { PCI_FIXUP_HEADER, PCI_ANY_ID, PCI_ANY_ID, pci_fixup_ide_bases },
+ { 0 }
+};
+
+char * __init pcibios_setup(char *str)
+{
+ return str;
+}
+
+/* Rounds a number UP to the nearest power of two. Used for
+ * sizing the PCI window.
+ */
+static u32 __init r2p2(u32 num)
+{
+ int i = 31;
+ u32 tmp = num;
+
+ if (num == 0)
+ return 0;
+
+ do {
+ if (tmp & (1 << 31))
+ break;
+ i--;
+ tmp <<= 1;
+ } while (i >= 0);
+
+ tmp = 1 << i;
+ /* If the original number isn't a power of 2, round it up */
+ if (tmp != num)
+ tmp <<= 1;
+
+ return tmp;
+}
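+
+/*
+ * Spot checks: r2p2(0x00100000) == 0x00100000 (already a power of two),
+ * r2p2(0x00300000) == 0x00400000, r2p2(1) == 1.
+ */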
+
+extern unsigned long long memory_start, memory_end;
+
+int __init sh5pci_init(unsigned memStart, unsigned memSize)
+{
+ u32 lsr0;
+ u32 uval;
+
+ pcicr_virt = onchip_remap(SH5PCI_ICR_BASE, 1024, "PCICR");
+ if (!pcicr_virt) {
+ panic("Unable to remap PCICR\n");
+ }
+
+ pciio_virt = onchip_remap(SH5PCI_IO_BASE, 0x10000, "PCIIO");
+ if (!pciio_virt) {
+ panic("Unable to remap PCIIO\n");
+ }
+
+ pr_debug("Register base addres is 0x%08lx\n", pcicr_virt);
+
+ /* Clear snoop registers */
+ SH5PCI_WRITE(CSCR0, 0);
+ SH5PCI_WRITE(CSCR1, 0);
+
+ pr_debug("Wrote to reg\n");
+
+ /* Switch off interrupts */
+ SH5PCI_WRITE(INTM, 0);
+ SH5PCI_WRITE(AINTM, 0);
+ SH5PCI_WRITE(PINTM, 0);
+
+ /* Set bus active, take it out of reset */
+ uval = SH5PCI_READ(CR);
+
+ /* Set command Register */
+ SH5PCI_WRITE(CR, uval | CR_LOCK_MASK | CR_CFINT| CR_FTO | CR_PFE | CR_PFCS | CR_BMAM);
+
+ uval=SH5PCI_READ(CR);
+ pr_debug("CR is actually 0x%08x\n",uval);
+
+ /* Allow it to be a master */
+ /* NB - WE DISABLE I/O ACCESS to stop overlap */
+ /* set WAIT bit to enable stepping, an attempt to improve stability */
+ SH5PCI_WRITE_SHORT(CSR_CMD,
+ PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_WAIT);
+
+ /*
+ ** Set translation mapping memory in order to convert the address
+ ** used for the main bus, to the PCI internal address.
+ */
+ SH5PCI_WRITE(MBR,0x40000000);
+
+ /* Always set the max size 512M */
+ SH5PCI_WRITE(MBMR, PCISH5_MEM_SIZCONV(512*1024*1024));
+
+ /*
+ ** I/O addresses are mapped at an internal PCI-specific address
+ ** as described in the configuration bridge table. These are
+ ** changed to 0 to allow cards that have legacy I/O such as VGA
+ ** to function correctly. We set the SH5 IOBAR to 256K, which is
+ ** a bit big as we can only have 64K of address space.
+ */
+
+ SH5PCI_WRITE(IOBR,0x0);
+
+ pr_debug("PCI:Writing 0x%08x to IOBR\n",0);
+
+ /* Set up a 256K window. Totally pointless waste of address space */
+ SH5PCI_WRITE(IOBMR,0);
+ pr_debug("PCI:Writing 0x%08x to IOBMR\n",0);
+
+ /* The SH5 has a HUGE 256K I/O region, which breaks the PCI spec. Ideally,
+ * we would want to map the I/O region somewhere, but it is so big this is not
+ * that easy!
+ */
+ SH5PCI_WRITE(CSR_IBAR0,~0);
+ /* Set memory size value */
+ memSize = memory_end - memory_start;
+
+ /* Now we set up the mbars so the PCI bus can see the memory of the machine */
+ if (memSize < (1024 * 1024)) {
+ printk(KERN_ERR "PCISH5: Ridiculous memory size of 0x%x?\n", memSize);
+ return -EINVAL;
+ }
+
+ /* Set LSR 0 */
+ lsr0 = (memSize > (512 * 1024 * 1024)) ? 0x1ff00001 : ((r2p2(memSize) - 0x100000) | 0x1);
+ SH5PCI_WRITE(LSR0, lsr0);
+
+ pr_debug("PCI:Writing 0x%08x to LSR0\n",lsr0);
+
+ /* Set MBAR 0 */
+ SH5PCI_WRITE(CSR_MBAR0, memory_start);
+ SH5PCI_WRITE(LAR0, memory_start);
+
+ SH5PCI_WRITE(CSR_MBAR1,0);
+ SH5PCI_WRITE(LAR1,0);
+ SH5PCI_WRITE(LSR1,0);
+
+ pr_debug("PCI:Writing 0x%08llx to CSR_MBAR0\n",memory_start);
+ pr_debug("PCI:Writing 0x%08llx to LAR0\n",memory_start);
+
+ /* Enable the PCI interrupts on the device */
+ SH5PCI_WRITE(INTM, ~0);
+ SH5PCI_WRITE(AINTM, ~0);
+ SH5PCI_WRITE(PINTM, ~0);
+
+ pr_debug("Switching on all error interrupts\n");
+
+ return 0;
+}
+
+static int sh5pci_read(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 *val)
+{
+ SH5PCI_WRITE(PAR, CONFIG_CMD(bus, devfn, where));
+
+ switch (size) {
+ case 1:
+ *val = (u8)SH5PCI_READ_BYTE(PDR + (where & 3));
+ break;
+ case 2:
+ *val = (u16)SH5PCI_READ_SHORT(PDR + (where & 2));
+ break;
+ case 4:
+ *val = SH5PCI_READ(PDR);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int sh5pci_write(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 val)
+{
+ SH5PCI_WRITE(PAR, CONFIG_CMD(bus, devfn, where));
+
+ switch (size) {
+ case 1:
+ SH5PCI_WRITE_BYTE(PDR + (where & 3), (u8)val);
+ break;
+ case 2:
+ SH5PCI_WRITE_SHORT(PDR + (where & 2), (u16)val);
+ break;
+ case 4:
+ SH5PCI_WRITE(PDR, val);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static struct pci_ops pci_config_ops = {
+ .read = sh5pci_read,
+ .write = sh5pci_write,
+};
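+
+/*
+ * Access sketch (illustrative; 'bus0' is hypothetical): reading the
+ * 16-bit vendor ID of slot 0, function 0 on bus 0,
+ *
+ *	sh5pci_read(bus0, PCI_DEVFN(0, 0), PCI_VENDOR_ID, 2, &val);
+ *
+ * writes 0x80000000 to PAR (bus 0, devfn 0, dword 0) and then reads
+ * the low half-word of PDR, since (PCI_VENDOR_ID & 2) == 0.
+ */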
+
+/* Everything hangs off this */
+static struct pci_bus *pci_root_bus;
+
+
+static u8 __init no_swizzle(struct pci_dev *dev, u8 * pin)
+{
+ pr_debug("swizzle for dev %d on bus %d slot %d pin is %d\n",
+ dev->devfn,dev->bus->number, PCI_SLOT(dev->devfn),*pin);
+ return PCI_SLOT(dev->devfn);
+}
+
+static inline u8 bridge_swizzle(u8 pin, u8 slot)
+{
+ return (((pin-1) + slot) % 4) + 1;
+}
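+
+/*
+ * Worked example: a card in slot 3 asserting INTB (pin 2) appears
+ * upstream of the bridge as bridge_swizzle(2, 3) = (((2-1)+3)%4)+1 = 1,
+ * i.e. INTA.
+ */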
+
+u8 __init common_swizzle(struct pci_dev *dev, u8 *pinp)
+{
+ if (dev->bus->number != 0) {
+ u8 pin = *pinp;
+ do {
+ pin = bridge_swizzle(pin, PCI_SLOT(dev->devfn));
+ /* Move up the chain of bridges. */
+ dev = dev->bus->self;
+ } while (dev->bus->self);
+ *pinp = pin;
+
+ /* The slot is the slot of the last bridge. */
+ }
+
+ return PCI_SLOT(dev->devfn);
+}
+
+/* This needs to be shunted out of here into the board specific bit */
+
+static int __init map_cayman_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int result = -1;
+
+ /* The complication here is that the PCI IRQ lines from the Cayman's 2
+ 5V slots get into the CPU via a different path from the IRQ lines
+ from the 3 3.3V slots. Thus, we have to detect whether the card's
+ interrupts go via the 5V or 3.3V path, i.e. the 'bridge swizzling'
+ at the point where we cross from 5V to 3.3V is not the normal case.
+
+ The added complication is that we don't know that the 5V slots are
+ always bus 2, because a card containing a PCI-PCI bridge may be
+ plugged into a 3.3V slot, and this changes the bus numbering.
+
+ Also, the Cayman has an intermediate PCI bus that goes to a custom
+ expansion board header (and to the secondary bridge). This bus has
+ never been used in practice.
+
+ The primary onboard PCI-PCI bridge is device 3 on bus 0.
+ The secondary onboard PCI-PCI bridge is device 0 on the secondary bus of the primary bridge.
+ */
+
+ struct slot_pin {
+ int slot;
+ int pin;
+ } path[4];
+ int i=0;
+
+ while (dev->bus->number > 0) {
+
+ slot = path[i].slot = PCI_SLOT(dev->devfn);
+ pin = path[i].pin = bridge_swizzle(pin, slot);
+ dev = dev->bus->self;
+ i++;
+ if (i > 3) panic("PCI path to root bus too long!\n");
+ }
+
+ slot = PCI_SLOT(dev->devfn);
+ /* This is the slot on bus 0 through which the device is eventually
+ reachable. */
+
+ /* Now work back up. */
+ if ((slot < 3) || (i == 0)) {
+ /* Bus 0 (incl. PCI-PCI bridge itself) : perform the final
+ swizzle now. */
+ result = IRQ_INTA + bridge_swizzle(pin, slot) - 1;
+ } else {
+ i--;
+ slot = path[i].slot;
+ pin = path[i].pin;
+ if (slot > 0) {
+ panic("PCI expansion bus device found - not handled!\n");
+ } else {
+ if (i > 0) {
+ /* 5V slots */
+ i--;
+ slot = path[i].slot;
+ pin = path[i].pin;
+ /* 'pin' was swizzled earlier wrt slot, don't do it again. */
+ result = IRQ_P2INTA + (pin - 1);
+ } else {
+ /* IRQ for secondary PCI-PCI bridge : unused */
+ result = -1;
+ }
+ }
+ }
+
+ return result;
+}
+
+irqreturn_t pcish5_err_irq(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned pci_int, pci_air, pci_cir, pci_aint;
+
+ pci_int = SH5PCI_READ(INT);
+ pci_cir = SH5PCI_READ(CIR);
+ pci_air = SH5PCI_READ(AIR);
+
+ if (pci_int) {
+ printk("PCI INTERRUPT (at %08llx)!\n", regs->pc);
+ printk("PCI INT -> 0x%x\n", pci_int & 0xffff);
+ printk("PCI AIR -> 0x%x\n", pci_air);
+ printk("PCI CIR -> 0x%x\n", pci_cir);
+ SH5PCI_WRITE(INT, ~0);
+ }
+
+ pci_aint = SH5PCI_READ(AINT);
+ if (pci_aint) {
+ printk("PCI ARB INTERRUPT!\n");
+ printk("PCI AINT -> 0x%x\n", pci_aint);
+ printk("PCI AIR -> 0x%x\n", pci_air);
+ printk("PCI CIR -> 0x%x\n", pci_cir);
+ SH5PCI_WRITE(AINT, ~0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+irqreturn_t pcish5_serr_irq(int irq, void *dev_id, struct pt_regs *regs)
+{
+ printk("SERR IRQ\n");
+
+ return IRQ_NONE;
+}
+
+#define ROUND_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))
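+/* e.g. ROUND_UP(0x1234, 0x1000) == 0x2000, while already-aligned
+ * values are unchanged: ROUND_UP(0x2000, 0x1000) == 0x2000. */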
+
+static void __init
+pcibios_size_bridge(struct pci_bus *bus, struct resource *ior,
+ struct resource *memr)
+{
+ struct resource io_res, mem_res;
+ struct pci_dev *dev;
+ struct pci_dev *bridge = bus->self;
+ struct list_head *ln;
+
+ if (!bridge)
+ return; /* host bridge, nothing to do */
+
+ /* set reasonable default locations for pcibios_align_resource */
+ io_res.start = PCIBIOS_MIN_IO;
+ mem_res.start = PCIBIOS_MIN_MEM;
+
+ io_res.end = io_res.start;
+ mem_res.end = mem_res.start;
+
+ /* Collect information about how our direct children are laid out. */
+ for (ln=bus->devices.next; ln != &bus->devices; ln=ln->next) {
+ int i;
+ dev = pci_dev_b(ln);
+
+ /* Skip bridges for now */
+ if (dev->class >> 8 == PCI_CLASS_BRIDGE_PCI)
+ continue;
+
+ for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+ struct resource res;
+ unsigned long size;
+
+ memcpy(&res, &dev->resource[i], sizeof(res));
+ size = res.end - res.start + 1;
+
+ if (res.flags & IORESOURCE_IO) {
+ res.start = io_res.end;
+ pcibios_align_resource(dev, &res, size, 0);
+ io_res.end = res.start + size;
+ } else if (res.flags & IORESOURCE_MEM) {
+ res.start = mem_res.end;
+ pcibios_align_resource(dev, &res, size, 0);
+ mem_res.end = res.start + size;
+ }
+ }
+ }
+
+ /* And for all of the subordinate busses. */
+ for (ln=bus->children.next; ln != &bus->children; ln=ln->next)
+ pcibios_size_bridge(pci_bus_b(ln), &io_res, &mem_res);
+
+ /* turn the ending locations into sizes (subtract start) */
+ io_res.end -= io_res.start;
+ mem_res.end -= mem_res.start;
+
+ /* Align the sizes up by bridge rules */
+ io_res.end = ROUND_UP(io_res.end, 4*1024) - 1;
+ mem_res.end = ROUND_UP(mem_res.end, 1*1024*1024) - 1;
+
+ /* Adjust the bridge's allocation requirements */
+ bridge->resource[0].end = bridge->resource[0].start + io_res.end;
+ bridge->resource[1].end = bridge->resource[1].start + mem_res.end;
+
+ bridge->resource[PCI_BRIDGE_RESOURCES].end =
+ bridge->resource[PCI_BRIDGE_RESOURCES].start + io_res.end;
+ bridge->resource[PCI_BRIDGE_RESOURCES+1].end =
+ bridge->resource[PCI_BRIDGE_RESOURCES+1].start + mem_res.end;
+
+ /* adjust parent's resource requirements */
+ if (ior) {
+ ior->end = ROUND_UP(ior->end, 4*1024);
+ ior->end += io_res.end;
+ }
+
+ if (memr) {
+ memr->end = ROUND_UP(memr->end, 1*1024*1024);
+ memr->end += mem_res.end;
+ }
+}
+
+#undef ROUND_UP
+
+static void __init pcibios_size_bridges(void)
+{
+ struct resource io_res, mem_res;
+
+ memset(&io_res, 0, sizeof(io_res));
+ memset(&mem_res, 0, sizeof(mem_res));
+
+ pcibios_size_bridge(pci_root_bus, &io_res, &mem_res);
+}
+
+static int __init pcibios_init(void)
+{
+ if (request_irq(IRQ_ERR, pcish5_err_irq,
+ SA_INTERRUPT, "PCI Error",NULL) < 0) {
+ printk(KERN_ERR "PCISH5: Cannot hook PCI_PERR interrupt\n");
+ return -EINVAL;
+ }
+
+ if (request_irq(IRQ_SERR, pcish5_serr_irq,
+ SA_INTERRUPT, "PCI SERR interrupt", NULL) < 0) {
+ printk(KERN_ERR "PCISH5: Cannot hook PCI_SERR interrupt\n");
+ return -EINVAL;
+ }
+
+ /* The PCI subsystem needs to know where memory is and how much
+ * of it there is. I've simply made these globals. A better mechanism
+ * is probably needed.
+ */
+ sh5pci_init(__pa(memory_start),
+ __pa(memory_end) - __pa(memory_start));
+
+ pci_root_bus = pci_scan_bus(0, &pci_config_ops, NULL);
+ pcibios_size_bridges();
+ pci_assign_unassigned_resources();
+ pci_fixup_irqs(no_swizzle, map_cayman_irq);
+
+ return 0;
+}
+
+subsys_initcall(pcibios_init);
+
+void __init pcibios_fixup_bus(struct pci_bus *bus)
+{
+ struct pci_dev *dev = bus->self;
+ int i;
+
+#if 1
+ if(dev) {
+ for(i=0; i<3; i++) {
+ bus->resource[i] =
+ &dev->resource[PCI_BRIDGE_RESOURCES+i];
+ bus->resource[i]->name = bus->name;
+ }
+ bus->resource[0]->flags |= IORESOURCE_IO;
+ bus->resource[1]->flags |= IORESOURCE_MEM;
+
+ /* For now, propagate host limits to the bus;
+ * we'll adjust them later. */
+
+#if 1
+ bus->resource[0]->end = 64*1024 - 1 ;
+ bus->resource[1]->end = PCIBIOS_MIN_MEM+(256*1024*1024)-1;
+ bus->resource[0]->start = PCIBIOS_MIN_IO;
+ bus->resource[1]->start = PCIBIOS_MIN_MEM;
+#else
+ bus->resource[0]->end = 0
+ bus->resource[1]->end = 0
+ bus->resource[0]->start =0
+ bus->resource[1]->start = 0;
+#endif
+ /* Turn off downstream PF memory address range by default */
+ bus->resource[2]->start = 1024*1024;
+ bus->resource[2]->end = bus->resource[2]->start - 1;
+ }
+#endif
+
+}
+
--- /dev/null
+/*
+ * Copyright (C) 2001 David J. Mckay (david.mckay@st.com)
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Definitions for the SH5 PCI hardware.
+ */
+
+/* Product ID */
+#define PCISH5_PID 0x350d
+
+/* vendor ID */
+#define PCISH5_VID 0x1054
+
+/* Configuration types */
+#define ST_TYPE0 0x00 /* Configuration cycle type 0 */
+#define ST_TYPE1 0x01 /* Configuration cycle type 1 */
+
+/* VCR data */
+#define PCISH5_VCR_STATUS 0x00
+#define PCISH5_VCR_VERSION 0x08
+
+/*
+** ICR register offsets and bits
+*/
+#define PCISH5_ICR_CR 0x100 /* PCI control register values */
+#define CR_PBAM (1<<12)
+#define CR_PFCS (1<<11)
+#define CR_FTO (1<<10)
+#define CR_PFE (1<<9)
+#define CR_TBS (1<<8)
+#define CR_SPUE (1<<7)
+#define CR_BMAM (1<<6)
+#define CR_HOST (1<<5)
+#define CR_CLKEN (1<<4)
+#define CR_SOCS (1<<3)
+#define CR_IOCS (1<<2)
+#define CR_RSTCTL (1<<1)
+#define CR_CFINT (1<<0)
+#define CR_LOCK_MASK 0xa5000000
+
+#define PCISH5_ICR_INT 0x114 /* Interrupt register values */
+#define INT_MADIM (1<<2)
+
+#define PCISH5_ICR_LSR0 0x104 /* Local space register values */
+#define PCISH5_ICR_LSR1 0x108 /* Local space register values */
+#define PCISH5_ICR_LAR0 0x10c /* Local address register values */
+#define PCISH5_ICR_LAR1 0x110 /* Local address register values */
+#define PCISH5_ICR_INTM 0x118 /* Interrupt mask register values */
+#define PCISH5_ICR_AIR 0x11c /* Interrupt error address information register values */
+#define PCISH5_ICR_CIR 0x120 /* Interrupt error command information register values */
+#define PCISH5_ICR_AINT 0x130 /* Interrupt error arbiter interrupt register values */
+#define PCISH5_ICR_AINTM 0x134 /* Interrupt error arbiter interrupt mask register values */
+#define PCISH5_ICR_BMIR 0x138 /* Interrupt error info register of bus master values */
+#define PCISH5_ICR_PAR 0x1c0 /* Pio address register values */
+#define PCISH5_ICR_MBR 0x1c4 /* Memory space bank register values */
+#define PCISH5_ICR_IOBR 0x1c8 /* I/O space bank register values */
+#define PCISH5_ICR_PINT 0x1cc /* power management interrupt register values */
+#define PCISH5_ICR_PINTM 0x1d0 /* power management interrupt mask register values */
+#define PCISH5_ICR_MBMR 0x1d8 /* memory space bank mask register values */
+#define PCISH5_ICR_IOBMR 0x1dc /* I/O space bank mask register values */
+#define PCISH5_ICR_CSCR0 0x210 /* PCI cache snoop control register 0 */
+#define PCISH5_ICR_CSCR1 0x214 /* PCI cache snoop control register 1 */
+#define PCISH5_ICR_PDR 0x220 /* Pio data register values */
+
+/* These are configs space registers */
+#define PCISH5_ICR_CSR_VID 0x000 /* Vendor id */
+#define PCISH5_ICR_CSR_DID 0x002 /* Device id */
+#define PCISH5_ICR_CSR_CMD 0x004 /* Command register */
+#define PCISH5_ICR_CSR_STATUS 0x006 /* Status */
+#define PCISH5_ICR_CSR_IBAR0 0x010 /* I/O base address register */
+#define PCISH5_ICR_CSR_MBAR0 0x014 /* First Memory base address register */
+#define PCISH5_ICR_CSR_MBAR1 0x018 /* Second Memory base address register */
+
+
+
+/* Base address of registers */
+#define SH5PCI_ICR_BASE (PHYS_PCI_BLOCK + 0x00040000)
+#define SH5PCI_IO_BASE (PHYS_PCI_BLOCK + 0x00800000)
+/* #define SH5PCI_VCR_BASE (P2SEG_PCICB_BLOCK + P2SEG) */
+
+/* Register selection macro */
+#define PCISH5_ICR_REG(x) ( pcicr_virt + (PCISH5_ICR_##x))
+/* #define PCISH5_VCR_REG(x) ( SH5PCI_VCR_BASE (PCISH5_VCR_##x)) */
+
+/* Write I/O functions */
+#define SH5PCI_WRITE(reg,val) ctrl_outl((u32)(val),PCISH5_ICR_REG(reg))
+#define SH5PCI_WRITE_SHORT(reg,val) ctrl_outw((u16)(val),PCISH5_ICR_REG(reg))
+#define SH5PCI_WRITE_BYTE(reg,val) ctrl_outb((u8)(val),PCISH5_ICR_REG(reg))
+
+/* Read I/O functions */
+#define SH5PCI_READ(reg) ctrl_inl(PCISH5_ICR_REG(reg))
+#define SH5PCI_READ_SHORT(reg) ctrl_inw(PCISH5_ICR_REG(reg))
+#define SH5PCI_READ_BYTE(reg) ctrl_inb(PCISH5_ICR_REG(reg))
+
+/* Set PCI config bits */
+#define SET_CONFIG_BITS(bus,devfn,where) ((((bus) << 16) | ((devfn) << 8) | ((where) & ~3)) | 0x80000000)
+
+/* Set PCI command register */
+#define CONFIG_CMD(bus, devfn, where) SET_CONFIG_BITS(bus->number,devfn,where)
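+
+/*
+ * Worked example: a configuration access to bus 0, device 3,
+ * function 0, register 0x10 has PCI_DEVFN(3,0) == 0x18 and encodes as
+ * (0<<16) | (0x18<<8) | 0x10 | 0x80000000 == 0x80001810.
+ */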
+
+/* Size converters */
+#define PCISH5_MEM_SIZCONV(x) (((x / 0x40000) - 1) << 18)
+#define PCISH5_IO_SIZCONV(x) (((x / 0x40000) - 1) << 18)
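+
+/*
+ * e.g. PCISH5_MEM_SIZCONV(512*1024*1024): 0x20000000/0x40000 == 0x800,
+ * minus 1 is 0x7ff, shifted left by 18 gives 0x1ffc0000 -- the value
+ * sh5pci_init() writes to MBMR for its fixed 512MB window.
+ */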
+
+
--- /dev/null
+/*
+ * $Id: pcibios.c,v 1.1 2001/08/24 12:38:19 dwmw2 Exp $
+ *
+ * arch/sh/kernel/pcibios.c
+ *
+ * Copyright (C) 2002 STMicroelectronics Limited
+ * Author : David J. McKay
+ *
+ * Copyright (C) 2004 Richard Curnow, SuperH UK Limited
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ * This is GPL'd.
+ *
+ * Provided here are generic versions of:
+ * pcibios_update_resource()
+ * pcibios_align_resource()
+ * pcibios_enable_device()
+ * pcibios_set_master()
+ * pcibios_update_irq()
+ *
+ * These functions are collected here to reduce duplication of common
+ * code amongst the many platform-specific PCI support code files.
+ *
+ * Platform-specific files are expected to provide:
+ * pcibios_fixup_bus()
+ * pcibios_init()
+ * pcibios_setup()
+ * pcibios_fixup_pbus_ranges()
+ */
+
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+
+void
+pcibios_update_resource(struct pci_dev *dev, struct resource *root,
+ struct resource *res, int resource)
+{
+ u32 new, check;
+ int reg;
+
+ new = res->start | (res->flags & PCI_REGION_FLAG_MASK);
+ if (resource < 6) {
+ reg = PCI_BASE_ADDRESS_0 + 4*resource;
+ } else if (resource == PCI_ROM_RESOURCE) {
+ res->flags |= PCI_ROM_ADDRESS_ENABLE;
+ new |= PCI_ROM_ADDRESS_ENABLE;
+ reg = dev->rom_base_reg;
+ } else {
+ /* Somebody might have requested allocation of a non-standard resource */
+ return;
+ }
+
+ pci_write_config_dword(dev, reg, new);
+ pci_read_config_dword(dev, reg, &check);
+ if ((new ^ check) & ((new & PCI_BASE_ADDRESS_SPACE_IO) ? PCI_BASE_ADDRESS_IO_MASK : PCI_BASE_ADDRESS_MEM_MASK)) {
+ printk(KERN_ERR "PCI: Error while updating region "
+ "%s/%d (%08x != %08x)\n", dev->slot_name, resource,
+ new, check);
+ }
+}
+
+/*
+ * We need to avoid collisions with `mirrored' VGA ports
+ * and other strange ISA hardware, so we always want the
+ * addresses to be allocated in the 0x000-0x0ff region
+ * modulo 0x400.
+ */
+void pcibios_align_resource(void *data, struct resource *res,
+ unsigned long size, unsigned long align)
+{
+ if (res->flags & IORESOURCE_IO) {
+ unsigned long start = res->start;
+
+ if (start & 0x300) {
+ start = (start + 0x3ff) & ~0x3ff;
+ res->start = start;
+ }
+ }
+}
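+
+/*
+ * Worked example: an I/O resource starting at 0x0330 falls in the
+ * 0x300-0x3ff band, so it is pushed up to (0x0330 + 0x3ff) & ~0x3ff
+ * == 0x0400; a start of 0x0400 is left alone.
+ */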
+
+static void pcibios_enable_bridge(struct pci_dev *dev)
+{
+ struct pci_bus *bus = dev->subordinate;
+ u16 cmd, old_cmd;
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ old_cmd = cmd;
+
+ if (bus->resource[0]->flags & IORESOURCE_IO) {
+ cmd |= PCI_COMMAND_IO;
+ }
+ if ((bus->resource[1]->flags & IORESOURCE_MEM) ||
+ (bus->resource[2]->flags & IORESOURCE_PREFETCH)) {
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+
+ if (cmd != old_cmd) {
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+ }
+
+ printk("PCI bridge %s, command register -> %04x\n",
+ pci_name(dev), cmd);
+
+}
+
+
+
+int pcibios_enable_device(struct pci_dev *dev, int mask)
+{
+ u16 cmd, old_cmd;
+ int idx;
+ struct resource *r;
+
+ if ((dev->class >> 8) == PCI_CLASS_BRIDGE_PCI) {
+ pcibios_enable_bridge(dev);
+ }
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ old_cmd = cmd;
+ for(idx=0; idx<6; idx++) {
+ if (!(mask & (1 << idx)))
+ continue;
+ r = &dev->resource[idx];
+ if (!r->start && r->end) {
+ printk(KERN_ERR "PCI: Device %s not available because of resource collisions\n", dev->slot_name);
+ return -EINVAL;
+ }
+ if (r->flags & IORESOURCE_IO)
+ cmd |= PCI_COMMAND_IO;
+ if (r->flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+ if (dev->resource[PCI_ROM_RESOURCE].start)
+ cmd |= PCI_COMMAND_MEMORY;
+ if (cmd != old_cmd) {
+ printk(KERN_INFO "PCI: Enabling device %s (%04x -> %04x)\n", pci_name(dev), old_cmd, cmd);
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+ }
+ return 0;
+}
+
+/*
+ * If we set up a device for bus mastering, we need to check and set
+ * the latency timer as it may not be properly set.
+ */
+unsigned int pcibios_max_latency = 255;
+
+void pcibios_set_master(struct pci_dev *dev)
+{
+ u8 lat;
+ pci_read_config_byte(dev, PCI_LATENCY_TIMER, &lat);
+ if (lat < 16)
+ lat = (64 <= pcibios_max_latency) ? 64 : pcibios_max_latency;
+ else if (lat > pcibios_max_latency)
+ lat = pcibios_max_latency;
+ else
+ return;
+ printk(KERN_INFO "PCI: Setting latency timer of device %s to %d\n", pci_name(dev), lat);
+ pci_write_config_byte(dev, PCI_LATENCY_TIMER, lat);
+}
+
+void __init pcibios_update_irq(struct pci_dev *dev, int irq)
+{
+ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, irq);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/process.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ * Started from SH3/4 version:
+ * Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima
+ *
+ * In turn started from i386 version:
+ * Copyright (C) 1995 Linus Torvalds
+ *
+ */
+
+/*
+ * This file handles the architecture-dependent parts of process handling..
+ */
+
+/* Temporary flags/tests. All to be removed/undefined. BEGIN */
+#define IDLE_TRACE
+#define VM_SHOW_TABLES
+#define VM_TEST_FAULT
+#define VM_TEST_RTLBMISS
+#define VM_TEST_WTLBMISS
+
+#undef VM_SHOW_TABLES
+#undef IDLE_TRACE
+/* Temporary flags/tests. All to be removed/undefined. END */
+
+#define __KERNEL_SYSCALLS__
+#include <stdarg.h>
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/interrupt.h>
+#include <linux/unistd.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/init.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/processor.h> /* includes also <asm/registers.h> */
+#include <asm/mmu_context.h>
+#include <asm/elf.h>
+#include <asm/page.h>
+
+#include <linux/irq.h>
+
+struct task_struct *last_task_used_math = NULL;
+
+#ifdef IDLE_TRACE
+#ifdef VM_SHOW_TABLES
+/* For testing */
+static void print_PTE(long base)
+{
+ int i, skip=0;
+ long long x, y, *p = (long long *) base;
+
+ for (i=0; i< 512; i++, p++){
+ if (*p == 0) {
+ if (!skip) {
+ skip++;
+ printk("(0s) ");
+ }
+ } else {
+ skip=0;
+ x = (*p) >> 32;
+ y = (*p) & 0xffffffff;
+ printk("%08Lx%08Lx ", x, y);
+ if (!((i+1)&0x3)) printk("\n");
+ }
+ }
+}
+
+/* For testing */
+static void print_DIR(long base)
+{
+ int i, skip=0;
+ long *p = (long *) base;
+
+ for (i=0; i< 512; i++, p++){
+ if (*p == 0) {
+ if (!skip) {
+ skip++;
+ printk("(0s) ");
+ }
+ } else {
+ skip=0;
+ printk("%08lx ", *p);
+ if (!((i+1)&0x7)) printk("\n");
+ }
+ }
+}
+
+/* For testing */
+static void print_vmalloc_first_tables(void)
+{
+
+#define PRESENT 0x800 /* Bit 11 */
+
+ /*
+ * Do it really dirty by looking at raw addresses,
+ * raw offsets, no types. If we used pgtable/pgalloc
+ * macros/definitions we could hide potential bugs.
+ *
+ * Note that pointers are 32-bit for CDC.
+ */
+ long pgdt, pmdt, ptet;
+
+ pgdt = (long) &swapper_pg_dir;
+ printk("-->PGD (0x%08lx):\n", pgdt);
+ print_DIR(pgdt);
+ printk("\n");
+
+ /* VMALLOC pool is mapped at 0xc0000000, second (pointer) entry in PGD */
+ pgdt += 4;
+ pmdt = (long) (* (long *) pgdt);
+ if (!(pmdt & PRESENT)) {
+ printk("No PMD\n");
+ return;
+ } else pmdt &= 0xfffff000;
+
+ printk("-->PMD (0x%08lx):\n", pmdt);
+ print_DIR(pmdt);
+ printk("\n");
+
+ /* Get the pmdt displacement for 0xc0000000 */
+ pmdt += 2048;
+
+ /* just look at first two address ranges ... */
+ /* ... 0xc0000000 ... */
+ ptet = (long) (* (long *) pmdt);
+ if (!(ptet & PRESENT)) {
+ printk("No PTE0\n");
+ return;
+ } else ptet &= 0xfffff000;
+
+ printk("-->PTE0 (0x%08lx):\n", ptet);
+ print_PTE(ptet);
+ printk("\n");
+
+ /* ... 0xc0001000 ... */
+ ptet += 4;
+ if (!(ptet & PRESENT)) {
+ printk("No PTE1\n");
+ return;
+ } else ptet &= 0xfffff000;
+ printk("-->PTE1 (0x%08lx):\n", ptet);
+ print_PTE(ptet);
+ printk("\n");
+}
+#else
+#define print_vmalloc_first_tables()
+#endif /* VM_SHOW_TABLES */
+
+static void test_VM(void)
+{
+ void *a, *b, *c;
+
+#ifdef VM_SHOW_TABLES
+ printk("Initial PGD/PMD/PTE\n");
+#endif
+ print_vmalloc_first_tables();
+
+ printk("Allocating 2 bytes\n");
+ a = vmalloc(2);
+ print_vmalloc_first_tables();
+
+ printk("Allocating 4100 bytes\n");
+ b = vmalloc(4100);
+ print_vmalloc_first_tables();
+
+ printk("Allocating 20234 bytes\n");
+ c = vmalloc(20234);
+ print_vmalloc_first_tables();
+
+#ifdef VM_TEST_FAULT
+ /* Here you may want to fault ! */
+
+#ifdef VM_TEST_RTLBMISS
+ printk("Ready to fault upon read.\n");
+ if (* (char *) a) {
+ printk("RTLBMISSed on area a !\n");
+ }
+ printk("RTLBMISSed on area a !\n");
+#endif
+
+#ifdef VM_TEST_WTLBMISS
+ printk("Ready to fault upon write.\n");
+ *((char *) b) = 'L';
+ printk("WTLBMISSed on area b !\n");
+#endif
+
+#endif /* VM_TEST_FAULT */
+
+ printk("Deallocating the 4100 byte chunk\n");
+ vfree(b);
+ print_vmalloc_first_tables();
+
+ printk("Deallocating the 2 byte chunk\n");
+ vfree(a);
+ print_vmalloc_first_tables();
+
+ printk("Deallocating the last chunk\n");
+ vfree(c);
+ print_vmalloc_first_tables();
+}
+
+extern unsigned long volatile jiffies;
+int once = 0;
+unsigned long old_jiffies;
+int pid = -1, pgid = -1;
+
+void idle_trace(void)
+{
+
+ _syscall0(int, getpid)
+ _syscall1(int, getpgid, int, pid)
+
+ if (!once) {
+ /* VM allocation/deallocation simple test */
+ test_VM();
+ pid = getpid();
+
+ printk("Got all through to Idle !!\n");
+ printk("I'm now going to loop forever ...\n");
+ printk("Any ! below is a timer tick.\n");
+ printk("Any . below is a getpgid system call from pid = %d.\n", pid);
+
+
+ old_jiffies = jiffies;
+ once++;
+ }
+
+ if (old_jiffies != jiffies) {
+ old_jiffies = jiffies - old_jiffies;
+ switch (old_jiffies) {
+ case 1:
+ printk("!");
+ break;
+ case 2:
+ printk("!!");
+ break;
+ case 3:
+ printk("!!!");
+ break;
+ case 4:
+ printk("!!!!");
+ break;
+ default:
+ printk("(%d!)", (int) old_jiffies);
+ }
+ old_jiffies = jiffies;
+ }
+ pgid = getpgid(pid);
+ printk(".");
+}
+#else
+#define idle_trace() do { } while (0)
+#endif /* IDLE_TRACE */
+
+static int hlt_counter = 1;
+
+#define HARD_IDLE_TIMEOUT (HZ / 3)
+
+void disable_hlt(void)
+{
+ hlt_counter++;
+}
+
+void enable_hlt(void)
+{
+ hlt_counter--;
+}
+
+static int __init nohlt_setup(char *__unused)
+{
+ hlt_counter = 1;
+ return 1;
+}
+
+static int __init hlt_setup(char *__unused)
+{
+ hlt_counter = 0;
+ return 1;
+}
+
+__setup("nohlt", nohlt_setup);
+__setup("hlt", hlt_setup);
+
+static inline void hlt(void)
+{
+ if (hlt_counter)
+ return;
+
+ __asm__ __volatile__ ("sleep" : : : "memory");
+}
+
+/*
+ * The idle loop on a uniprocessor SH..
+ */
+void default_idle(void)
+{
+ /* endless idle loop with no priority at all */
+ while (1) {
+ if (hlt_counter) {
+ while (1)
+ if (need_resched())
+ break;
+ } else {
+ local_irq_disable();
+ while (!need_resched()) {
+ local_irq_enable();
+ idle_trace();
+ hlt();
+ local_irq_disable();
+ }
+ local_irq_enable();
+ }
+ schedule();
+ }
+}
+
+void cpu_idle(void *unused)
+{
+ default_idle();
+}
+
+void machine_restart(char * __unused)
+{
+ extern void phys_stext(void);
+
+ phys_stext();
+}
+
+void machine_halt(void)
+{
+ for (;;);
+}
+
+void machine_power_off(void)
+{
+ extern void enter_deep_standby(void);
+
+ enter_deep_standby();
+}
+
+void show_regs(struct pt_regs * regs)
+{
+ unsigned long long ah, al, bh, bl, ch, cl;
+
+ printk("\n");
+
+ ah = (regs->pc) >> 32;
+ al = (regs->pc) & 0xffffffff;
+ bh = (regs->regs[18]) >> 32;
+ bl = (regs->regs[18]) & 0xffffffff;
+ ch = (regs->regs[15]) >> 32;
+ cl = (regs->regs[15]) & 0xffffffff;
+ printk("PC : %08Lx%08Lx LINK: %08Lx%08Lx SP : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->sr) >> 32;
+ al = (regs->sr) & 0xffffffff;
+ asm volatile ("getcon " __TEA ", %0" : "=r" (bh));
+ asm volatile ("getcon " __TEA ", %0" : "=r" (bl));
+ bh = (bh) >> 32;
+ bl = (bl) & 0xffffffff;
+ asm volatile ("getcon " __KCR0 ", %0" : "=r" (ch));
+ asm volatile ("getcon " __KCR0 ", %0" : "=r" (cl));
+ ch = (ch) >> 32;
+ cl = (cl) & 0xffffffff;
+ printk("SR : %08Lx%08Lx TEA : %08Lx%08Lx KCR0: %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[0]) >> 32;
+ al = (regs->regs[0]) & 0xffffffff;
+ bh = (regs->regs[1]) >> 32;
+ bl = (regs->regs[1]) & 0xffffffff;
+ ch = (regs->regs[2]) >> 32;
+ cl = (regs->regs[2]) & 0xffffffff;
+ printk("R0 : %08Lx%08Lx R1 : %08Lx%08Lx R2 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[3]) >> 32;
+ al = (regs->regs[3]) & 0xffffffff;
+ bh = (regs->regs[4]) >> 32;
+ bl = (regs->regs[4]) & 0xffffffff;
+ ch = (regs->regs[5]) >> 32;
+ cl = (regs->regs[5]) & 0xffffffff;
+ printk("R3 : %08Lx%08Lx R4 : %08Lx%08Lx R5 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[6]) >> 32;
+ al = (regs->regs[6]) & 0xffffffff;
+ bh = (regs->regs[7]) >> 32;
+ bl = (regs->regs[7]) & 0xffffffff;
+ ch = (regs->regs[8]) >> 32;
+ cl = (regs->regs[8]) & 0xffffffff;
+ printk("R6 : %08Lx%08Lx R7 : %08Lx%08Lx R8 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[9]) >> 32;
+ al = (regs->regs[9]) & 0xffffffff;
+ bh = (regs->regs[10]) >> 32;
+ bl = (regs->regs[10]) & 0xffffffff;
+ ch = (regs->regs[11]) >> 32;
+ cl = (regs->regs[11]) & 0xffffffff;
+ printk("R9 : %08Lx%08Lx R10 : %08Lx%08Lx R11 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[12]) >> 32;
+ al = (regs->regs[12]) & 0xffffffff;
+ bh = (regs->regs[13]) >> 32;
+ bl = (regs->regs[13]) & 0xffffffff;
+ ch = (regs->regs[14]) >> 32;
+ cl = (regs->regs[14]) & 0xffffffff;
+ printk("R12 : %08Lx%08Lx R13 : %08Lx%08Lx R14 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[16]) >> 32;
+ al = (regs->regs[16]) & 0xffffffff;
+ bh = (regs->regs[17]) >> 32;
+ bl = (regs->regs[17]) & 0xffffffff;
+ ch = (regs->regs[19]) >> 32;
+ cl = (regs->regs[19]) & 0xffffffff;
+ printk("R16 : %08Lx%08Lx R17 : %08Lx%08Lx R19 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[20]) >> 32;
+ al = (regs->regs[20]) & 0xffffffff;
+ bh = (regs->regs[21]) >> 32;
+ bl = (regs->regs[21]) & 0xffffffff;
+ ch = (regs->regs[22]) >> 32;
+ cl = (regs->regs[22]) & 0xffffffff;
+ printk("R20 : %08Lx%08Lx R21 : %08Lx%08Lx R22 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[23]) >> 32;
+ al = (regs->regs[23]) & 0xffffffff;
+ bh = (regs->regs[24]) >> 32;
+ bl = (regs->regs[24]) & 0xffffffff;
+ ch = (regs->regs[25]) >> 32;
+ cl = (regs->regs[25]) & 0xffffffff;
+ printk("R23 : %08Lx%08Lx R24 : %08Lx%08Lx R25 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[26]) >> 32;
+ al = (regs->regs[26]) & 0xffffffff;
+ bh = (regs->regs[27]) >> 32;
+ bl = (regs->regs[27]) & 0xffffffff;
+ ch = (regs->regs[28]) >> 32;
+ cl = (regs->regs[28]) & 0xffffffff;
+ printk("R26 : %08Lx%08Lx R27 : %08Lx%08Lx R28 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[29]) >> 32;
+ al = (regs->regs[29]) & 0xffffffff;
+ bh = (regs->regs[30]) >> 32;
+ bl = (regs->regs[30]) & 0xffffffff;
+ ch = (regs->regs[31]) >> 32;
+ cl = (regs->regs[31]) & 0xffffffff;
+ printk("R29 : %08Lx%08Lx R30 : %08Lx%08Lx R31 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[32]) >> 32;
+ al = (regs->regs[32]) & 0xffffffff;
+ bh = (regs->regs[33]) >> 32;
+ bl = (regs->regs[33]) & 0xffffffff;
+ ch = (regs->regs[34]) >> 32;
+ cl = (regs->regs[34]) & 0xffffffff;
+ printk("R32 : %08Lx%08Lx R33 : %08Lx%08Lx R34 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[35]) >> 32;
+ al = (regs->regs[35]) & 0xffffffff;
+ bh = (regs->regs[36]) >> 32;
+ bl = (regs->regs[36]) & 0xffffffff;
+ ch = (regs->regs[37]) >> 32;
+ cl = (regs->regs[37]) & 0xffffffff;
+ printk("R35 : %08Lx%08Lx R36 : %08Lx%08Lx R37 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[38]) >> 32;
+ al = (regs->regs[38]) & 0xffffffff;
+ bh = (regs->regs[39]) >> 32;
+ bl = (regs->regs[39]) & 0xffffffff;
+ ch = (regs->regs[40]) >> 32;
+ cl = (regs->regs[40]) & 0xffffffff;
+ printk("R38 : %08Lx%08Lx R39 : %08Lx%08Lx R40 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[41]) >> 32;
+ al = (regs->regs[41]) & 0xffffffff;
+ bh = (regs->regs[42]) >> 32;
+ bl = (regs->regs[42]) & 0xffffffff;
+ ch = (regs->regs[43]) >> 32;
+ cl = (regs->regs[43]) & 0xffffffff;
+ printk("R41 : %08Lx%08Lx R42 : %08Lx%08Lx R43 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[44]) >> 32;
+ al = (regs->regs[44]) & 0xffffffff;
+ bh = (regs->regs[45]) >> 32;
+ bl = (regs->regs[45]) & 0xffffffff;
+ ch = (regs->regs[46]) >> 32;
+ cl = (regs->regs[46]) & 0xffffffff;
+ printk("R44 : %08Lx%08Lx R45 : %08Lx%08Lx R46 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[47]) >> 32;
+ al = (regs->regs[47]) & 0xffffffff;
+ bh = (regs->regs[48]) >> 32;
+ bl = (regs->regs[48]) & 0xffffffff;
+ ch = (regs->regs[49]) >> 32;
+ cl = (regs->regs[49]) & 0xffffffff;
+ printk("R47 : %08Lx%08Lx R48 : %08Lx%08Lx R49 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[50]) >> 32;
+ al = (regs->regs[50]) & 0xffffffff;
+ bh = (regs->regs[51]) >> 32;
+ bl = (regs->regs[51]) & 0xffffffff;
+ ch = (regs->regs[52]) >> 32;
+ cl = (regs->regs[52]) & 0xffffffff;
+ printk("R50 : %08Lx%08Lx R51 : %08Lx%08Lx R52 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[53]) >> 32;
+ al = (regs->regs[53]) & 0xffffffff;
+ bh = (regs->regs[54]) >> 32;
+ bl = (regs->regs[54]) & 0xffffffff;
+ ch = (regs->regs[55]) >> 32;
+ cl = (regs->regs[55]) & 0xffffffff;
+ printk("R53 : %08Lx%08Lx R54 : %08Lx%08Lx R55 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[56]) >> 32;
+ al = (regs->regs[56]) & 0xffffffff;
+ bh = (regs->regs[57]) >> 32;
+ bl = (regs->regs[57]) & 0xffffffff;
+ ch = (regs->regs[58]) >> 32;
+ cl = (regs->regs[58]) & 0xffffffff;
+ printk("R56 : %08Lx%08Lx R57 : %08Lx%08Lx R58 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[59]) >> 32;
+ al = (regs->regs[59]) & 0xffffffff;
+ bh = (regs->regs[60]) >> 32;
+ bl = (regs->regs[60]) & 0xffffffff;
+ ch = (regs->regs[61]) >> 32;
+ cl = (regs->regs[61]) & 0xffffffff;
+ printk("R59 : %08Lx%08Lx R60 : %08Lx%08Lx R61 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[62]) >> 32;
+ al = (regs->regs[62]) & 0xffffffff;
+ bh = (regs->tregs[0]) >> 32;
+ bl = (regs->tregs[0]) & 0xffffffff;
+ ch = (regs->tregs[1]) >> 32;
+ cl = (regs->tregs[1]) & 0xffffffff;
+ printk("R62 : %08Lx%08Lx T0 : %08Lx%08Lx T1 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->tregs[2]) >> 32;
+ al = (regs->tregs[2]) & 0xffffffff;
+ bh = (regs->tregs[3]) >> 32;
+ bl = (regs->tregs[3]) & 0xffffffff;
+ ch = (regs->tregs[4]) >> 32;
+ cl = (regs->tregs[4]) & 0xffffffff;
+ printk("T2 : %08Lx%08Lx T3 : %08Lx%08Lx T4 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->tregs[5]) >> 32;
+ al = (regs->tregs[5]) & 0xffffffff;
+ bh = (regs->tregs[6]) >> 32;
+ bl = (regs->tregs[6]) & 0xffffffff;
+ ch = (regs->tregs[7]) >> 32;
+ cl = (regs->tregs[7]) & 0xffffffff;
+ printk("T5 : %08Lx%08Lx T6 : %08Lx%08Lx T7 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ /*
+ * If we're in kernel mode, dump the stack too..
+ */
+ if (!user_mode(regs)) {
+ void show_stack(struct task_struct *tsk, unsigned long *sp);
+ unsigned long sp = regs->regs[15] & 0xffffffff;
+ struct task_struct *tsk = get_current();
+
+ tsk->thread.kregs = regs;
+
+ show_stack(tsk, (unsigned long *)sp);
+ }
+}
+
+struct task_struct * alloc_task_struct(void)
+{
+ /* Get task descriptor pages */
+ return (struct task_struct *)
+ __get_free_pages(GFP_KERNEL, get_order(THREAD_SIZE));
+}
+
+void free_task_struct(struct task_struct *p)
+{
+ free_pages((unsigned long) p, get_order(THREAD_SIZE));
+}
+
+/*
+ * Create a kernel thread
+ */
+
+/*
+ * This is the mechanism for creating a new kernel thread.
+ *
+ * NOTE! Only a kernel-only process (i.e. the swapper or direct descendants
+ * that haven't done an "execve()") should use this: it will work within
+ * a system call from a "real" process, but the process memory space will
+ * not be freed until both the parent and the child have exited.
+ */
+int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+{
+ /* A bit less processor dependent than older sh ... */
+
+ unsigned int reply;
+
+static __inline__ _syscall2(int,clone,unsigned long,flags,unsigned long,newsp)
+static __inline__ _syscall1(int,exit,int,ret)
+
+ reply = clone(flags | CLONE_VM, 0);
+ if (!reply) {
+ /* Child */
+ reply = exit(fn(arg));
+ }
+
+ return reply;
+}
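+
+/*
+ * Typical use, as a sketch ('mydaemon' is hypothetical). Kernel
+ * daemons share the VM (CLONE_VM is OR'd in above) and never return
+ * to user space:
+ *
+ *	static int mydaemon(void *unused)
+ *	{
+ *		for (;;) {
+ *			... do periodic work, then sleep ...
+ *		}
+ *	}
+ *
+ *	kernel_thread(mydaemon, NULL, CLONE_FS | CLONE_FILES);
+ */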
+
+/*
+ * Free current thread data structures etc..
+ */
+void exit_thread(void)
+{
+ /* See arch/sparc/kernel/process.c for the precedent for doing this -- RPC.
+
+ The SH-5 FPU save/restore approach relies on last_task_used_math
+ pointing to a live task_struct. When another task tries to use the
+ FPU for the 1st time, the FPUDIS trap handling (see
+ arch/sh64/kernel/fpu.c) will save the existing FPU state to the
+ FP regs field within last_task_used_math before re-loading the new
+ task's FPU state (or initialising it if the FPU has been used
+ before). So if last_task_used_math is stale, and its page has already been
+ re-allocated for another use, the consequences are rather grim. Unless we
+ null it here, there is no other path through which it would get safely
+ nulled. */
+
+#ifndef CONFIG_NOFPU_SUPPORT
+ if (last_task_used_math == current) {
+ last_task_used_math = NULL;
+ }
+#endif
+}
+
+void flush_thread(void)
+{
+
+ /* Called by fs/exec.c (flush_old_exec) to remove traces of a
+ * previously running executable. */
+#ifndef CONFIG_NOFPU_SUPPORT
+ if (last_task_used_math == current) {
+ last_task_used_math = NULL;
+ }
+ /* Force FPU state to be reinitialised after exec */
+ current->used_math = 0;
+#endif
+
+ /* if we are a kernel thread, about to change to user thread,
+ * update kreg
+ */
+ if(current->thread.kregs==&fake_swapper_regs) {
+ current->thread.kregs =
+ ((struct pt_regs *)(THREAD_SIZE + (unsigned long) current) - 1);
+ current->thread.uregs = current->thread.kregs;
+ }
+}
+
+void release_thread(struct task_struct *dead_task)
+{
+ /* do nothing */
+}
+
+/* Fill in the fpu structure for a core dump.. */
+int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
+{
+#ifndef CONFIG_NOFPU_SUPPORT
+ int fpvalid;
+ struct task_struct *tsk = current;
+
+ fpvalid = tsk->used_math;
+ if (fpvalid) {
+ if (current == last_task_used_math) {
+ grab_fpu();
+ fpsave(&tsk->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = 0;
+ regs->sr |= SR_FD;
+ }
+
+ memcpy(fpu, &tsk->thread.fpu.hard, sizeof(*fpu));
+ }
+
+ return fpvalid;
+#else
+ return 0; /* Task didn't use the fpu at all. */
+#endif
+}
+
+asmlinkage void ret_from_fork(void);
+
+int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
+ unsigned long unused,
+ struct task_struct *p, struct pt_regs *regs)
+{
+ struct pt_regs *childregs;
+ unsigned long long se; /* Sign extension */
+
+#ifndef CONFIG_NOFPU_SUPPORT
+ if(last_task_used_math == current) {
+ grab_fpu();
+ fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+#endif
+ /* Copy from sh version */
+ childregs = ((struct pt_regs *)(THREAD_SIZE + (unsigned long) p->thread_info )) - 1;
+
+ *childregs = *regs;
+
+ if (user_mode(regs)) {
+ childregs->regs[15] = usp;
+ p->thread.uregs = childregs;
+ } else {
+ childregs->regs[15] = (unsigned long)p->thread_info + THREAD_SIZE;
+ }
+
+ childregs->regs[9] = 0; /* Set return value for child */
+ childregs->sr |= SR_FD; /* Invalidate FPU flag */
+
+ /* From sh */
+ p->set_child_tid = p->clear_child_tid = NULL;
+
+ p->thread.sp = (unsigned long) childregs;
+ p->thread.pc = (unsigned long) ret_from_fork;
+
+ /*
+ * Sign extend the edited stack.
+ * Note that thread.sp and thread.pc will stay
+ * 32-bit wide and context switch must take care
+ * of NEFF sign extension.
+ */
+
+ se = childregs->regs[15];
+ se = (se & NEFF_SIGN) ? (se | NEFF_MASK) : se;
+ childregs->regs[15] = se;
+
+ return 0;
+}
+
+/*
+ * fill in the user structure for a core dump..
+ */
+void dump_thread(struct pt_regs * regs, struct user * dump)
+{
+ dump->magic = CMAGIC;
+ dump->start_code = current->mm->start_code;
+ dump->start_data = current->mm->start_data;
+ dump->start_stack = regs->regs[15] & ~(PAGE_SIZE - 1);
+ dump->u_tsize = (current->mm->end_code - dump->start_code) >> PAGE_SHIFT;
+ dump->u_dsize = (current->mm->brk + (PAGE_SIZE-1) - dump->start_data) >> PAGE_SHIFT;
+ dump->u_ssize = (current->mm->start_stack - dump->start_stack +
+ PAGE_SIZE - 1) >> PAGE_SHIFT;
+ /* Debug registers will come here. */
+
+ dump->regs = *regs;
+
+ dump->u_fpvalid = dump_fpu(regs, &dump->fpu);
+}
+
+asmlinkage int sys_fork(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ return do_fork(SIGCHLD, pregs->regs[15], pregs, 0, 0, 0);
+}
+
+asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ if (!newsp)
+ newsp = pregs->regs[15];
+ return do_fork(clone_flags & ~CLONE_IDLETASK, newsp, pregs, 0, 0, 0);
+}
+
+/*
+ * This is trivial, and on the face of it looks like it
+ * could equally well be done in user mode.
+ *
+ * Not so, for quite unobvious reasons - register pressure.
+ * In user mode vfork() cannot have a stack frame, and if
+ * done by calling the "clone()" system call directly, you
+ * do not have enough call-clobbered registers to hold all
+ * the information you need.
+ */
+asmlinkage int sys_vfork(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, pregs->regs[15], pregs, 0, 0, 0);
+}
+
+/*
+ * sys_execve() executes a new program.
+ */
+asmlinkage int sys_execve(char *ufilename, char **uargv,
+ char **uenvp, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ int error;
+ char *filename;
+
+ lock_kernel();
+ filename = getname((char __user *)ufilename);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
+ goto out;
+
+ error = do_execve(filename,
+ (char __user * __user *)uargv,
+ (char __user * __user *)uenvp,
+ pregs);
+ if (error == 0)
+ current->ptrace &= ~PT_DTRACE;
+ putname(filename);
+out:
+ unlock_kernel();
+ return error;
+}
+
+/*
+ * These bracket the sleeping functions..
+ */
+extern void interruptible_sleep_on(wait_queue_head_t *q);
+
+#define mid_sched ((unsigned long) interruptible_sleep_on)
+
+static int in_sh64_switch_to(unsigned long pc)
+{
+ extern char __sh64_switch_to_end;
+ /* For a sleeping task, the PC is somewhere in the middle of the function,
+ so we don't have to worry about masking the LSB off */
+ return (pc >= (unsigned long) sh64_switch_to) &&
+ (pc < (unsigned long) &__sh64_switch_to_end);
+}
+
+unsigned long get_wchan(struct task_struct *p)
+{
+ unsigned long schedule_fp;
+ unsigned long sh64_switch_to_fp;
+ unsigned long schedule_caller_pc;
+ unsigned long pc;
+
+ if (!p || p == current || p->state == TASK_RUNNING)
+ return 0;
+
+ /*
+ * The same comment as on the Alpha applies here, too ...
+ */
+ pc = thread_saved_pc(p);
+
+#ifdef CONFIG_FRAME_POINTER
+ if (in_sh64_switch_to(pc)) {
+ sh64_switch_to_fp = (long) p->thread.sp;
+ /* r14 is saved at offset 4 in the sh64_switch_to frame */
+ schedule_fp = *(unsigned long *) (long)(sh64_switch_to_fp + 4);
+
+ /* and the caller of 'schedule' is (currently!) saved at offset 24
+ in the frame of schedule (from disasm) */
+ schedule_caller_pc = *(unsigned long *) (long)(schedule_fp + 24);
+ return schedule_caller_pc;
+ }
+#endif
+ return pc;
+}
+
+/* Provide a /proc/asids file that lists out the
+ ASIDs currently associated with the processes. (If the DM.PC register is
+ examined through the debug link, this shows ASID + PC. To make use of this,
+ the PID->ASID relationship needs to be known. This is primarily for
+ debugging.)
+ */
+
+#if defined(CONFIG_SH64_PROC_ASIDS)
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+
+static int
+asids_proc_info(char *buf, char **start, off_t fpos, int length, int *eof, void *data)
+{
+ int len = 0;
+ struct task_struct *p;
+ read_lock(&tasklist_lock);
+ for_each_process(p) {
+ int pid = p->pid;
+ struct mm_struct *mm;
+ if (!pid) continue;
+ mm = p->mm;
+ if (mm) {
+ unsigned long asid, context;
+ context = mm->context;
+ asid = (context & 0xff);
+ len += sprintf(buf+len, "%5d : %02lx\n", pid, asid);
+ } else {
+ len += sprintf(buf+len, "%5d : (none)\n", pid);
+ }
+ }
+ read_unlock(&tasklist_lock);
+ *eof = 1;
+ return len;
+}
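+
+/*
+ * Illustrative note (not from the original source): given the sprintf
+ * format above, a read of /proc/asids yields one line per task, e.g.
+ *
+ *     1 : 00
+ *   914 : (none)
+ *
+ * where the hex column is the low 8 bits of mm->context (the ASID) and
+ * "(none)" marks tasks, such as kernel threads, that have no mm.
+ */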
+
+static int __init register_proc_asids(void)
+{
+ create_proc_read_entry("asids", 0, NULL, asids_proc_info, NULL);
+ return 0;
+}
+
+__initcall(register_proc_asids);
+#endif
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/ptrace.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Started from SH3/4 version:
+ * SuperH version: Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka
+ *
+ * Original x86 implementation:
+ * By Ross Biro 1/23/92
+ * edited by Linus Torvalds
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/user.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/mmu_context.h>
+
+/* This mask defines the bits of the SR which the user is not allowed to
+ change, which are everything except S, Q, M, PR, SZ, FR. */
+#define SR_MASK (0xffff8cfd)
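+/* Its complement, 0x00007302, therefore covers exactly the six
+ user-visible flag bits (S, Q, M, PR, SZ, FR) named above. */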
+
+/*
+ * does not yet catch signals sent when the child dies.
+ * in exit.c or in signal.c.
+ */
+
+/*
+ * This routine will get a word from the user area in the process kernel stack.
+ */
+static inline int get_stack_long(struct task_struct *task, int offset)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)(task->thread.uregs);
+ stack += offset;
+ return (*((int *)stack));
+}
+
+static inline unsigned long
+get_fpu_long(struct task_struct *task, unsigned long addr)
+{
+ unsigned long tmp;
+ struct pt_regs *regs;
+ regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
+
+ if (!task->used_math) {
+ if (addr == offsetof(struct user_fpu_struct, fpscr)) {
+ tmp = FPSCR_INIT;
+ } else {
+ tmp = 0xffffffffUL; /* matches initial value in fpu.c */
+ }
+ return tmp;
+ }
+
+ if (last_task_used_math == task) {
+ grab_fpu();
+ fpsave(&task->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ tmp = ((long *)&task->thread.fpu)[addr / sizeof(unsigned long)];
+ return tmp;
+}
+
+/*
+ * This routine will put a word into the user area in the process kernel stack.
+ */
+static inline int put_stack_long(struct task_struct *task, int offset,
+ unsigned long data)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)(task->thread.uregs);
+ stack += offset;
+ *(unsigned long *) stack = data;
+ return 0;
+}
+
+static inline int
+put_fpu_long(struct task_struct *task, unsigned long addr, unsigned long data)
+{
+ struct pt_regs *regs;
+
+ regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
+
+ if (!task->used_math) {
+ fpinit(&task->thread.fpu.hard);
+ task->used_math = 1;
+ } else if (last_task_used_math == task) {
+ grab_fpu();
+ fpsave(&task->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ ((long *)&task->thread.fpu)[addr / sizeof(unsigned long)] = data;
+ return 0;
+}
+
+asmlinkage int sys_ptrace(long request, long pid, long addr, long data)
+{
+ struct task_struct *child;
+ int ret;
+
+ lock_kernel();
+ ret = -EPERM;
+ if (request == PTRACE_TRACEME) {
+ /* are we already being traced? */
+ if (current->ptrace & PT_PTRACED)
+ goto out;
+ /* set the ptrace bit in the process flags. */
+ current->ptrace |= PT_PTRACED;
+ ret = 0;
+ goto out;
+ }
+ ret = -ESRCH;
+ read_lock(&tasklist_lock);
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ read_unlock(&tasklist_lock);
+ if (!child)
+ goto out;
+
+ ret = -EPERM;
+ if (pid == 1) /* you may not mess with init */
+ goto out_tsk;
+
+ if (request == PTRACE_ATTACH) {
+ ret = ptrace_attach(child);
+ goto out_tsk;
+ }
+
+ ret = ptrace_check_attach(child, request == PTRACE_KILL);
+ if (ret < 0)
+ goto out_tsk;
+
+ switch (request) {
+ /* when I and D space are separate, these will need to be fixed. */
+ case PTRACE_PEEKTEXT: /* read word at location addr. */
+ case PTRACE_PEEKDATA: {
+ unsigned long tmp;
+ int copied;
+
+ copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
+ ret = -EIO;
+ if (copied != sizeof(tmp))
+ break;
+ ret = put_user(tmp,(unsigned long *) data);
+ break;
+ }
+
+ /* read the word at location addr in the USER area. */
+ case PTRACE_PEEKUSR: {
+ unsigned long tmp;
+
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ if (addr < sizeof(struct pt_regs))
+ tmp = get_stack_long(child, addr);
+ else if ((addr >= offsetof(struct user, fpu)) &&
+ (addr < offsetof(struct user, u_fpvalid))) {
+ tmp = get_fpu_long(child, addr - offsetof(struct user, fpu));
+ } else if (addr == offsetof(struct user, u_fpvalid)) {
+ tmp = child->used_math;
+ } else {
+ break;
+ }
+ ret = put_user(tmp, (unsigned long *)data);
+ break;
+ }
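+
+ /* For example (illustrative), a debugger can fetch the traced task's
+ PC with PTRACE_PEEKUSR at addr == offsetof(struct pt_regs, pc),
+ since offsets below sizeof(struct pt_regs) index thread.uregs. */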
+
+ /* when I and D space are separate, this will have to be fixed. */
+ case PTRACE_POKETEXT: /* write the word at location addr. */
+ case PTRACE_POKEDATA:
+ ret = 0;
+ if (access_process_vm(child, addr, &data, sizeof(data), 1) == sizeof(data))
+ break;
+ ret = -EIO;
+ break;
+
+ case PTRACE_POKEUSR:
+ /* write the word at location addr in the USER area. We must
+ disallow any changes to certain SR bits or u_fpvalid, since
+ this could crash the kernel or result in a security
+ loophole. */
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ if (addr < sizeof(struct pt_regs)) {
+ /* Ignore change of top 32 bits of SR */
+ if (addr == offsetof (struct pt_regs, sr)+4)
+ {
+ ret = 0;
+ break;
+ }
+ /* If lower 32 bits of SR, ignore non-user bits */
+ if (addr == offsetof (struct pt_regs, sr))
+ {
+ long cursr = get_stack_long(child, addr);
+ data &= ~(SR_MASK);
+ data |= (cursr & SR_MASK);
+ }
+ ret = put_stack_long(child, addr, data);
+ }
+ else if ((addr >= offsetof(struct user, fpu)) &&
+ (addr < offsetof(struct user, u_fpvalid))) {
+ ret = put_fpu_long(child, addr - offsetof(struct user, fpu), data);
+ }
+ break;
+
+ case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+ case PTRACE_CONT: { /* restart after signal. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ if (request == PTRACE_SYSCALL)
+ set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ else
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ child->exit_code = data;
+ wake_up_process(child);
+ ret = 0;
+ break;
+ }
+
+/*
+ * make the child exit. Best I can do is send it a sigkill.
+ * perhaps it should be put in the status that it wants to
+ * exit.
+ */
+ case PTRACE_KILL: {
+ ret = 0;
+ if (child->state == TASK_ZOMBIE) /* already dead */
+ break;
+ child->exit_code = SIGKILL;
+ wake_up_process(child);
+ break;
+ }
+
+ case PTRACE_SINGLESTEP: { /* set the trap flag. */
+ struct pt_regs *regs;
+
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ if ((child->ptrace & PT_DTRACE) == 0) {
+ /* Spurious delayed TF traps may occur */
+ child->ptrace |= PT_DTRACE;
+ }
+
+ regs = child->thread.uregs;
+
+ regs->sr |= SR_SSTEP; /* auto-resetting upon exception */
+
+ child->exit_code = data;
+ /* give it a chance to run. */
+ wake_up_process(child);
+ ret = 0;
+ break;
+ }
+
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = ptrace_detach(child, data);
+ break;
+
+ default:
+ ret = ptrace_request(child, request, addr, data);
+ break;
+ }
+out_tsk:
+ put_task_struct(child);
+out:
+ unlock_kernel();
+ return ret;
+}
+
+asmlinkage void syscall_trace(void)
+{
+ struct task_struct *tsk = current;
+
+ if (!test_thread_flag(TIF_SYSCALL_TRACE))
+ return;
+ if (!(tsk->ptrace & PT_PTRACED))
+ return;
+
+ tsk->exit_code = SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
+ ? 0x80 : 0);
+ tsk->state = TASK_STOPPED;
+ notify_parent(tsk, SIGCHLD);
+ schedule();
+ /*
+ * this isn't the same as continuing with a signal, but it will do
+ * for normal use. strace only continues with a signal if the
+ * stopping signal is not SIGTRAP. -brl
+ */
+ if (tsk->exit_code) {
+ send_sig(tsk->exit_code, tsk, 1);
+ tsk->exit_code = 0;
+ }
+}
+
+/* Called with interrupts disabled */
+asmlinkage void do_single_step(unsigned long long vec, struct pt_regs *regs)
+{
+ /* This is called after a single step exception (DEBUGSS).
+ There is no need to change the PC, as it is a post-execution
+ exception, as entry.S does not do anything to the PC for DEBUGSS.
+ We need to clear the Single Step setting in SR to avoid
+ continually stepping. */
+ local_irq_enable();
+ regs->sr &= ~SR_SSTEP;
+ force_sig(SIGTRAP, current);
+}
+
+/* Called with interrupts disabled */
+asmlinkage void do_software_break_point(unsigned long long vec,
+ struct pt_regs *regs)
+{
+ /* We need to forward step the PC, to counteract the backstep done
+ in signal.c. */
+ local_irq_enable();
+ force_sig(SIGTRAP, current);
+ regs->pc += 4;
+}
+
+/*
+ * Called by kernel/ptrace.c when detaching..
+ *
+ * Make sure single step bits etc are not set.
+ */
+void ptrace_disable(struct task_struct *child)
+{
+ /* nothing to do.. */
+}
--- /dev/null
+/*
+ * Taken straight from the Alpha implementation.
+ * This may well not work correctly here.
+ */
+/*
+ * Generic semaphore code. Buyer beware. Do your own
+ * specific changes in <asm/semaphore-helper.h>
+ */
+
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/init.h>
+#include <asm/semaphore.h>
+#include <asm/semaphore-helper.h>
+
+spinlock_t semaphore_wake_lock;
+
+/*
+ * Semaphores are implemented using a two-way counter:
+ * The "count" variable is decremented for each process
+ * that tries to sleep, while the "waking" variable is
+ * incremented when the "up()" code goes to wake up waiting
+ * processes.
+ *
+ * Notably, the inline "up()" and "down()" functions can
+ * efficiently test if they need to do any extra work (up
+ * needs to do something only if count was negative before
+ * the increment operation.
+ *
+ * waking_non_zero() (from asm/semaphore.h) must execute
+ * atomically.
+ *
+ * When __up() is called, the count was negative before
+ * incrementing it, and we need to wake up somebody.
+ *
+ * This routine adds one to the count of processes that need to
+ * wake up and exit. ALL waiting processes actually wake up but
+ * only the one that gets to the "waking" field first will gate
+ * through and acquire the semaphore. The others will go back
+ * to sleep.
+ *
+ * Note that these functions are only called when there is
+ * contention on the lock, and as such all this is the
+ * "non-critical" part of the whole semaphore business. The
+ * critical part is the inline stuff in <asm/semaphore.h>
+ * where we want to avoid any extra jumps and calls.
+ */
+void __up(struct semaphore *sem)
+{
+ wake_one_more(sem);
+ wake_up(&sem->wait);
+}
+
+/*
+ * Perform the "down" function. Return zero for semaphore acquired,
+ * return negative for signalled out of the function.
+ *
+ * If called from __down, the return is ignored and the wait loop is
+ * not interruptible. This means that a task waiting on a semaphore
+ * using "down()" cannot be killed until someone does an "up()" on
+ * the semaphore.
+ *
+ * If called from __down_interruptible, the return value gets checked
+ * upon return. If the return value is negative then the task continues
+ * with the negative value in the return register (it can be tested by
+ * the caller).
+ *
+ * Either form may be used in conjunction with "up()".
+ *
+ */
+
+#define DOWN_VAR \
+ struct task_struct *tsk = current; \
+ wait_queue_t wait; \
+ init_waitqueue_entry(&wait, tsk);
+
+#define DOWN_HEAD(task_state) \
+ tsk->state = (task_state); \
+ add_wait_queue(&sem->wait, &wait); \
+ \
+ /* \
+ * Ok, we're set up. sem->count is known to be less than zero \
+ * so we must wait. \
+ * \
+ * We can let go the lock for purposes of waiting. \
+ * We re-acquire it after awaking so as to protect \
+ * all semaphore operations. \
+ * \
+ * If "up()" is called before we call waking_non_zero() then \
+ * we will catch it right away. If it is called later then \
+ * we will have to go through a wakeup cycle to catch it. \
+ * \
+ * Multiple waiters contend for the semaphore lock to see \
+ * who gets to gate through and who has to wait some more. \
+ */ \
+ for (;;) {
+
+#define DOWN_TAIL(task_state) \
+ tsk->state = (task_state); \
+ } \
+ tsk->state = TASK_RUNNING; \
+ remove_wait_queue(&sem->wait, &wait);
+
+void __sched __down(struct semaphore * sem)
+{
+ DOWN_VAR
+ DOWN_HEAD(TASK_UNINTERRUPTIBLE)
+ if (waking_non_zero(sem))
+ break;
+ schedule();
+ DOWN_TAIL(TASK_UNINTERRUPTIBLE)
+}
+
+int __sched __down_interruptible(struct semaphore * sem)
+{
+ int ret = 0;
+ DOWN_VAR
+ DOWN_HEAD(TASK_INTERRUPTIBLE)
+
+ ret = waking_non_zero_interruptible(sem, tsk);
+ if (ret)
+ {
+ if (ret == 1)
+ /* ret != 0 only if we get interrupted -arca */
+ ret = 0;
+ break;
+ }
+ schedule();
+ DOWN_TAIL(TASK_INTERRUPTIBLE)
+ return ret;
+}
+
+int __down_trylock(struct semaphore * sem)
+{
+ return waking_non_zero_trylock(sem);
+}
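+
+#if 0
+/*
+ * Usage sketch, illustrative only (not part of the original file):
+ * how code of this era consumes these primitives. The names
+ * example_sem/example_critical_section are made up for the sketch;
+ * DECLARE_MUTEX defines a semaphore with an initial count of 1.
+ */
+static DECLARE_MUTEX(example_sem);
+
+static void example_critical_section(void)
+{
+ down(&example_sem); /* may sleep, uninterruptibly */
+ /* ... touch shared state ... */
+ up(&example_sem);
+
+ if (down_interruptible(&example_sem) == 0) {
+ /* got it; -EINTR would mean a signal arrived first */
+ up(&example_sem);
+ }
+
+ if (down_trylock(&example_sem) == 0) {
+ /* acquired without sleeping; non-zero means it was contended */
+ up(&example_sem);
+ }
+}
+#endif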
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/setup.c
+ *
+ * sh64 Arch Support
+ *
+ * This file handles the architecture-dependent parts of initialization
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * benedict.gaster@superh.com: 2nd May 2002
+ * Modified to use the empty_zero_page to pass command line arguments.
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 15th May 2003
+ * Added generic procfs cpuinfo reporting. Make boards just export their name.
+ *
+ * lethal@linux-sh.org: 25th May 2003
+ * Added generic get_cpu_subtype() for subtype reporting from cpu_data->type.
+ *
+ */
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/blkdev.h>
+#include <linux/bootmem.h>
+#include <linux/console.h>
+#include <linux/root_dev.h>
+#include <linux/cpu.h>
+#include <linux/initrd.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/platform.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+#include <asm/smp.h>
+
+#ifdef CONFIG_VT
+#include <linux/console.h>
+#endif
+
+struct screen_info screen_info;
+
+/* On a PC this would be initialised as a result of the BIOS detecting the
+ * mouse. */
+unsigned char aux_device_present = 0xaa;
+
+#ifdef CONFIG_BLK_DEV_RAM
+extern int rd_doload; /* 1 = load ramdisk, 0 = don't load */
+extern int rd_prompt; /* 1 = prompt for ramdisk, 0 = don't prompt */
+extern int rd_image_start; /* starting block # of image */
+#endif
+
+extern int root_mountflags;
+extern char *get_system_type(void);
+extern void platform_setup(void);
+extern void platform_monitor(void);
+extern void platform_reserve(void);
+extern int sh64_cache_init(void);
+extern int sh64_tlb_init(void);
+
+#define RAMDISK_IMAGE_START_MASK 0x07FF
+#define RAMDISK_PROMPT_FLAG 0x8000
+#define RAMDISK_LOAD_FLAG 0x4000
+
+static char command_line[COMMAND_LINE_SIZE] = { 0, };
+unsigned long long memory_start = CONFIG_MEMORY_START;
+unsigned long long memory_end = CONFIG_MEMORY_START + (CONFIG_MEMORY_SIZE_IN_MB * 1024 * 1024);
+
+struct sh_cpuinfo boot_cpu_data;
+
+static inline void parse_mem_cmdline (char ** cmdline_p)
+{
+ char c = ' ', *to = command_line, *from = COMMAND_LINE;
+ int len = 0;
+
+ /* Save unparsed command line copy for /proc/cmdline */
+ memcpy(saved_command_line, COMMAND_LINE, COMMAND_LINE_SIZE);
+ saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
+
+ for (;;) {
+ /*
+ * "mem=XXX[kKmM]" defines a size of memory.
+ */
+ if (c == ' ' && !memcmp(from, "mem=", 4)) {
+ if (to != command_line)
+ to--;
+ {
+ unsigned long mem_size;
+
+ mem_size = memparse(from+4, &from);
+ memory_end = memory_start + mem_size;
+ }
+ }
+ c = *(from++);
+ if (!c)
+ break;
+ if (COMMAND_LINE_SIZE <= ++len)
+ break;
+ *(to++) = c;
+ }
+ *to = '\0';
+
+ *cmdline_p = command_line;
+}
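+
+/*
+ * Worked example (illustrative): booting with "mem=64M console=ttySC0"
+ * makes memparse() return 0x4000000, so memory_end becomes
+ * memory_start + 64MB; the "mem=64M" token itself is stripped from the
+ * command_line copy handed back, while saved_command_line keeps the
+ * unparsed original for /proc/cmdline.
+ */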
+
+static void __init sh64_cpu_type_detect(void)
+{
+ extern unsigned long long peek_real_address_q(unsigned long long addr);
+ unsigned long long cir;
+ /* Do peeks in real mode to avoid having to set up a mapping for the
+ WPC registers. On SH5-101 cut2, such a mapping would be exposed to
+ an address translation erratum which would make it hard to set up
+ correctly. */
+ cir = peek_real_address_q(0x0d000008);
+
+ if ((cir & 0xffff) == 0x5103) {
+ boot_cpu_data.type = CPU_SH5_103;
+ } else if (((cir >> 32) & 0xffff) == 0x51e2) {
+ /* CPU.VCR aliased at CIR address on SH5-101 */
+ boot_cpu_data.type = CPU_SH5_101;
+ } else {
+ boot_cpu_data.type = CPU_SH_NONE;
+ }
+}
+
+void __init setup_arch(char **cmdline_p)
+{
+ unsigned long bootmap_size, i;
+ unsigned long first_pfn, start_pfn, last_pfn, pages;
+
+#ifdef CONFIG_EARLY_PRINTK
+ extern void enable_early_printk(void);
+
+ /*
+ * Setup Early SCIF console
+ */
+ enable_early_printk();
+#endif
+
+ /*
+ * Setup TLB mappings
+ */
+ sh64_tlb_init();
+
+ /*
+ * Caches are already initialized by the time we get here, so we just
+ * fill in cpu_data info for the caches.
+ */
+ sh64_cache_init();
+
+ platform_setup();
+ platform_monitor();
+
+ sh64_cpu_type_detect();
+
+ ROOT_DEV = old_decode_dev(ORIG_ROOT_DEV);
+
+#ifdef CONFIG_BLK_DEV_RAM
+ rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK;
+ rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0);
+ rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0);
+#endif
+
+ if (!MOUNT_ROOT_RDONLY)
+ root_mountflags &= ~MS_RDONLY;
+ init_mm.start_code = (unsigned long) _text;
+ init_mm.end_code = (unsigned long) _etext;
+ init_mm.end_data = (unsigned long) _edata;
+ init_mm.brk = (unsigned long) _end;
+
+ code_resource.start = __pa(_text);
+ code_resource.end = __pa(_etext)-1;
+ data_resource.start = __pa(_etext);
+ data_resource.end = __pa(_edata)-1;
+
+ parse_mem_cmdline(cmdline_p);
+
+ /*
+ * Find the lowest and highest page frame numbers we have available
+ */
+ first_pfn = PFN_DOWN(memory_start);
+ last_pfn = PFN_DOWN(memory_end);
+ pages = last_pfn - first_pfn;
+
+ /*
+ * Partially used pages are not usable - thus
+ * we are rounding upwards:
+ */
+ start_pfn = PFN_UP(__pa(_end));
+
+ /*
+ * Find a proper area for the bootmem bitmap. After this
+ * bootstrap step all allocations (until the page allocator
+ * is intact) must be done via bootmem_alloc().
+ */
+ bootmap_size = init_bootmem_node(NODE_DATA(0), start_pfn,
+ first_pfn,
+ last_pfn);
+ /*
+ * Round it up.
+ */
+ bootmap_size = PFN_PHYS(PFN_UP(bootmap_size));
+
+ /*
+ * Register fully available RAM pages with the bootmem allocator.
+ */
+ free_bootmem_node(NODE_DATA(0), PFN_PHYS(first_pfn), PFN_PHYS(pages));
+
+ /*
+ * Reserve all kernel sections + bootmem bitmap + a guard page.
+ */
+ reserve_bootmem_node(NODE_DATA(0), PFN_PHYS(first_pfn),
+ (PFN_PHYS(start_pfn) + bootmap_size + PAGE_SIZE) - PFN_PHYS(first_pfn));
+
+ /*
+ * Reserve platform dependent sections
+ */
+ platform_reserve();
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (LOADER_TYPE && INITRD_START) {
+ if (INITRD_START + INITRD_SIZE <= (PFN_PHYS(last_pfn))) {
+ reserve_bootmem_node(NODE_DATA(0), INITRD_START + __MEMORY_START, INITRD_SIZE);
+
+ initrd_start =
+ (long) INITRD_START ? INITRD_START + PAGE_OFFSET + __MEMORY_START : 0;
+
+ initrd_end = initrd_start + INITRD_SIZE;
+ } else {
+ printk("initrd extends beyond end of memory "
+ "(0x%08lx > 0x%08lx)\ndisabling initrd\n",
+ (long) INITRD_START + INITRD_SIZE,
+ PFN_PHYS(last_pfn));
+ initrd_start = 0;
+ }
+ }
+#endif
+
+ /*
+ * Claim all RAM, ROM, and I/O resources.
+ */
+
+ /* Kernel RAM */
+ request_resource(&iomem_resource, &code_resource);
+ request_resource(&iomem_resource, &data_resource);
+
+ /* Other KRAM space */
+ for (i = 0; i < STANDARD_KRAM_RESOURCES - 2; i++)
+ request_resource(&iomem_resource,
+ &platform_parms.kram_res_p[i]);
+
+ /* XRAM space */
+ for (i = 0; i < STANDARD_XRAM_RESOURCES; i++)
+ request_resource(&iomem_resource,
+ &platform_parms.xram_res_p[i]);
+
+ /* ROM space */
+ for (i = 0; i < STANDARD_ROM_RESOURCES; i++)
+ request_resource(&iomem_resource,
+ &platform_parms.rom_res_p[i]);
+
+ /* I/O space */
+ for (i = 0; i < STANDARD_IO_RESOURCES; i++)
+ request_resource(&ioport_resource,
+ &platform_parms.io_res_p[i]);
+
+
+#ifdef CONFIG_VT
+#if defined(CONFIG_VGA_CONSOLE)
+ conswitchp = &vga_con;
+#elif defined(CONFIG_DUMMY_CONSOLE)
+ conswitchp = &dummy_con;
+#endif
+#endif
+
+ printk("Hardware FPU: %s\n", fpu_in_use ? "enabled" : "disabled");
+
+ paging_init();
+}
+
+void __xchg_called_with_bad_pointer(void)
+{
+ printk(KERN_EMERG "xchg() called with bad pointer !\n");
+}
+
+static struct cpu cpu[1];
+
+static int __init topology_init(void)
+{
+ return register_cpu(cpu, 0, NULL);
+}
+
+subsys_initcall(topology_init);
+
+/*
+ * Get CPU information
+ */
+static const char *cpu_name[] = {
+ [CPU_SH5_101] = "SH5-101",
+ [CPU_SH5_103] = "SH5-103",
+ [CPU_SH_NONE] = "Unknown",
+};
+
+const char *get_cpu_subtype(void)
+{
+ return cpu_name[boot_cpu_data.type];
+}
+
+#ifdef CONFIG_PROC_FS
+static int show_cpuinfo(struct seq_file *m, void *v)
+{
+ unsigned int cpu = smp_processor_id();
+
+ if (!cpu)
+ seq_printf(m, "machine\t\t: %s\n", get_system_type());
+
+ seq_printf(m, "processor\t: %d\n", cpu);
+ seq_printf(m, "cpu family\t: SH-5\n");
+ seq_printf(m, "cpu type\t: %s\n", get_cpu_subtype());
+
+ seq_printf(m, "icache size\t: %dK-bytes\n",
+ (boot_cpu_data.icache.ways *
+ boot_cpu_data.icache.sets *
+ boot_cpu_data.icache.linesz) >> 10);
+ seq_printf(m, "dcache size\t: %dK-bytes\n",
+ (boot_cpu_data.dcache.ways *
+ boot_cpu_data.dcache.sets *
+ boot_cpu_data.dcache.linesz) >> 10);
+ seq_printf(m, "itlb entries\t: %d\n", boot_cpu_data.itlb.entries);
+ seq_printf(m, "dtlb entries\t: %d\n", boot_cpu_data.dtlb.entries);
+
+#define PRINT_CLOCK(name, value) \
+ seq_printf(m, name " clock\t: %d.%02dMHz\n", \
+ ((value) / 1000000), ((value) % 1000000)/10000)
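+
+/* e.g. PRINT_CLOCK("cpu", 50000000) emits "cpu clock\t: 50.00MHz". */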
+
+ PRINT_CLOCK("cpu", boot_cpu_data.cpu_clock);
+ PRINT_CLOCK("bus", boot_cpu_data.bus_clock);
+ PRINT_CLOCK("module", boot_cpu_data.module_clock);
+
+ seq_printf(m, "bogomips\t: %lu.%02lu\n\n",
+ (loops_per_jiffy*HZ+2500)/500000,
+ ((loops_per_jiffy*HZ+2500)/5000) % 100);
+
+ return 0;
+}
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+ return (void*)(*pos == 0);
+}
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ return NULL;
+}
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+struct seq_operations cpuinfo_op = {
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = show_cpuinfo,
+};
+#endif /* CONFIG_PROC_FS */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/sh_ksyms.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/rwsem.h>
+#include <linux/module.h>
+#include <linux/smp.h>
+#include <linux/user.h>
+#include <linux/elfcore.h>
+#include <linux/sched.h>
+#include <linux/in6.h>
+#include <linux/interrupt.h>
+#include <linux/smp_lock.h>
+
+#include <asm/semaphore.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+#include <asm/io.h>
+#include <asm/hardirq.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern int dump_fpu(struct pt_regs *, elf_fpregset_t *);
+
+#if 0
+/* Not yet - there's no declaration of drive_info anywhere. */
+#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_HD) || defined(CONFIG_BLK_DEV_IDE_MODULE) || defined(CONFIG_BLK_DEV_HD_MODULE)
+extern struct drive_info_struct drive_info;
+EXPORT_SYMBOL(drive_info);
+#endif
+#endif
+
+/* platform dependent support */
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(dump_fpu);
+EXPORT_SYMBOL(iounmap);
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(kernel_thread);
+
+/* Networking helper routines. */
+EXPORT_SYMBOL(csum_partial_copy);
+
+EXPORT_SYMBOL(strtok);
+EXPORT_SYMBOL(strpbrk);
+EXPORT_SYMBOL(strstr);
+
+#ifdef CONFIG_VT
+EXPORT_SYMBOL(screen_info);
+#endif
+
+EXPORT_SYMBOL_NOVERS(__down);
+EXPORT_SYMBOL_NOVERS(__down_trylock);
+EXPORT_SYMBOL_NOVERS(__up);
+EXPORT_SYMBOL_NOVERS(__put_user_asm_l);
+EXPORT_SYMBOL_NOVERS(__get_user_asm_l);
+EXPORT_SYMBOL_NOVERS(memcmp);
+EXPORT_SYMBOL_NOVERS(memcpy);
+EXPORT_SYMBOL_NOVERS(memset);
+EXPORT_SYMBOL_NOVERS(memscan);
+EXPORT_SYMBOL_NOVERS(strchr);
+EXPORT_SYMBOL_NOVERS(strlen);
+
+EXPORT_SYMBOL(flush_dcache_page);
+
+/* Ugh. These come in from libgcc.a at link time. */
+
+extern void __sdivsi3(void);
+extern void __muldi3(void);
+extern void __udivsi3(void);
+EXPORT_SYMBOL_NOVERS(__sdivsi3);
+EXPORT_SYMBOL_NOVERS(__muldi3);
+EXPORT_SYMBOL_NOVERS(__udivsi3);
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/signal.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * Started from sh version.
+ *
+ */
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/wait.h>
+#include <linux/personality.h>
+#include <linux/suspend.h>
+#include <linux/ptrace.h>
+#include <linux/unistd.h>
+#include <linux/stddef.h>
+#include <linux/personality.h>
+#include <asm/ucontext.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+
+#define REG_RET 9
+#define REG_ARG1 2
+#define REG_ARG2 3
+#define REG_ARG3 4
+#define REG_SP 15
+#define REG_PR 18
+#define REF_REG_RET regs->regs[REG_RET]
+#define REF_REG_SP regs->regs[REG_SP]
+#define DEREF_REG_PR regs->regs[REG_PR]
+
+#define DEBUG_SIG 0
+
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
+
+/*
+ * Atomically swap in the new signal mask, and wait for a signal.
+ */
+
+asmlinkage int
+sys_sigsuspend(old_sigset_t mask,
+ unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ sigset_t saveset;
+
+ mask &= _BLOCKABLE;
+ spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+ siginitset(&current->blocked, mask);
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ REF_REG_RET = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ regs->pc += 4; /* because sys_sigreturn decrements the pc */
+ if (do_signal(regs, &saveset)) {
+ /* pc now points at signal handler. Need to decrement
+ it because entry.S will increment it. */
+ regs->pc -= 4;
+ return -EINTR;
+ }
+ }
+}
+
+asmlinkage int
+sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize,
+ unsigned long r4, unsigned long r5, unsigned long r6,
+ unsigned long r7,
+ struct pt_regs * regs)
+{
+ sigset_t saveset, newset;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&newset, unewset, sizeof(newset)))
+ return -EFAULT;
+ sigdelsetmask(&newset, ~_BLOCKABLE);
+ spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+ current->blocked = newset;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ REF_REG_RET = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ regs->pc += 4; /* because sys_sigreturn decrements the pc */
+ if (do_signal(regs, &saveset)) {
+ /* pc now points at signal handler. Need to decrement
+ it because entry.S will increment it. */
+ regs->pc -= 4;
+ return -EINTR;
+ }
+ }
+}
+
+asmlinkage int
+sys_sigaction(int sig, const struct old_sigaction __user *act,
+ struct old_sigaction __user *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset_t mask;
+ if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
+ return -EFAULT;
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ __get_user(mask, &act->sa_mask);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
+ __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
+ __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
+ return -EFAULT;
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+asmlinkage int
+sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
+ unsigned long r4, unsigned long r5, unsigned long r6,
+ unsigned long r7,
+ struct pt_regs * regs)
+{
+ return do_sigaltstack(uss, uoss, REF_REG_SP);
+}
+
+
+/*
+ * Do a signal return; undo the signal stack.
+ */
+
+struct sigframe
+{
+ struct sigcontext sc;
+ unsigned long extramask[_NSIG_WORDS-1];
+ long long retcode[2];
+};
+
+struct rt_sigframe
+{
+ struct siginfo __user *pinfo;
+ void *puc;
+ struct siginfo info;
+ struct ucontext uc;
+ long long retcode[2];
+};
+
+#ifndef CONFIG_NOFPU_SUPPORT
+static inline int
+restore_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ int err = 0;
+ int fpvalid;
+
+ err |= __get_user (fpvalid, &sc->sc_fpvalid);
+ current->used_math = fpvalid;
+ if (! fpvalid)
+ return err;
+
+ if (current == last_task_used_math) {
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ err |= __copy_from_user(&current->thread.fpu.hard, &sc->sc_fpregs[0],
+ (sizeof(long long) * 32) + (sizeof(int) * 1));
+
+ return err;
+}
+
+static inline int
+setup_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ int err = 0;
+ int fpvalid;
+
+ fpvalid = current->used_math;
+ err |= __put_user(fpvalid, &sc->sc_fpvalid);
+ if (! fpvalid)
+ return err;
+
+ if (current == last_task_used_math) {
+ grab_fpu();
+ fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ err |= __copy_to_user(&sc->sc_fpregs[0], &current->thread.fpu.hard,
+ (sizeof(long long) * 32) + (sizeof(int) * 1));
+ current->used_math = 0;
+
+ return err;
+}
+#else
+static inline int
+restore_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ return 0;
+}
+static inline int
+setup_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ return 0;
+}
+#endif
+
+static int
+restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, long long *r2_p)
+{
+ unsigned int err = 0;
+ unsigned long long current_sr, new_sr;
+#define SR_MASK 0xffff8cfd
+
+#define COPY(x) err |= __get_user(regs->x, &sc->sc_##x)
+
+ COPY(regs[0]); COPY(regs[1]); COPY(regs[2]); COPY(regs[3]);
+ COPY(regs[4]); COPY(regs[5]); COPY(regs[6]); COPY(regs[7]);
+ COPY(regs[8]); COPY(regs[9]); COPY(regs[10]); COPY(regs[11]);
+ COPY(regs[12]); COPY(regs[13]); COPY(regs[14]); COPY(regs[15]);
+ COPY(regs[16]); COPY(regs[17]); COPY(regs[18]); COPY(regs[19]);
+ COPY(regs[20]); COPY(regs[21]); COPY(regs[22]); COPY(regs[23]);
+ COPY(regs[24]); COPY(regs[25]); COPY(regs[26]); COPY(regs[27]);
+ COPY(regs[28]); COPY(regs[29]); COPY(regs[30]); COPY(regs[31]);
+ COPY(regs[32]); COPY(regs[33]); COPY(regs[34]); COPY(regs[35]);
+ COPY(regs[36]); COPY(regs[37]); COPY(regs[38]); COPY(regs[39]);
+ COPY(regs[40]); COPY(regs[41]); COPY(regs[42]); COPY(regs[43]);
+ COPY(regs[44]); COPY(regs[45]); COPY(regs[46]); COPY(regs[47]);
+ COPY(regs[48]); COPY(regs[49]); COPY(regs[50]); COPY(regs[51]);
+ COPY(regs[52]); COPY(regs[53]); COPY(regs[54]); COPY(regs[55]);
+ COPY(regs[56]); COPY(regs[57]); COPY(regs[58]); COPY(regs[59]);
+ COPY(regs[60]); COPY(regs[61]); COPY(regs[62]);
+ COPY(tregs[0]); COPY(tregs[1]); COPY(tregs[2]); COPY(tregs[3]);
+ COPY(tregs[4]); COPY(tregs[5]); COPY(tregs[6]); COPY(tregs[7]);
+
+ /* Prevent the signal handler manipulating SR in a way that can
+ crash the kernel. i.e. only allow S, Q, M, PR, SZ, FR to be
+ modified */
+ current_sr = regs->sr;
+ err |= __get_user(new_sr, &sc->sc_sr);
+ regs->sr &= SR_MASK;
+ regs->sr |= (new_sr & ~SR_MASK);
+
+ COPY(pc);
+
+#undef COPY
+
+ /* Must do this last in case it sets regs->sr.fd (i.e. after rest of sr
+ * has been restored above.) */
+ err |= restore_sigcontext_fpu(regs, sc);
+
+ regs->syscall_nr = -1; /* disable syscall checks */
+ err |= __get_user(*r2_p, &sc->sc_regs[REG_RET]);
+ return err;
+}
+
+asmlinkage int sys_sigreturn(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ struct sigframe __user *frame = (struct sigframe __user *) (long) REF_REG_SP;
+ sigset_t set;
+ long long ret;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_NSIG_WORDS > 1
+ && __copy_from_user(&set.sig[1], &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+ spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(regs, &frame->sc, &ret))
+ goto badframe;
+ regs->pc -= 4;
+
+ return (int) ret;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+asmlinkage int sys_rt_sigreturn(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ struct rt_sigframe __user *frame = (struct rt_sigframe __user *) (long) REF_REG_SP;
+ sigset_t set;
+ stack_t st;
+ long long ret;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ret))
+ goto badframe;
+ regs->pc -= 4;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, REF_REG_SP);
+
+ return (int) ret;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+/*
+ * Set up a signal frame.
+ */
+
+static int
+setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
+ unsigned long mask)
+{
+ int err = 0;
+
+ /* Do this first; otherwise, if this sets sr->fd, that value isn't preserved. */
+ err |= setup_sigcontext_fpu(regs, sc);
+
+#define COPY(x) err |= __put_user(regs->x, &sc->sc_##x)
+
+ COPY(regs[0]); COPY(regs[1]); COPY(regs[2]); COPY(regs[3]);
+ COPY(regs[4]); COPY(regs[5]); COPY(regs[6]); COPY(regs[7]);
+ COPY(regs[8]); COPY(regs[9]); COPY(regs[10]); COPY(regs[11]);
+ COPY(regs[12]); COPY(regs[13]); COPY(regs[14]); COPY(regs[15]);
+ COPY(regs[16]); COPY(regs[17]); COPY(regs[18]); COPY(regs[19]);
+ COPY(regs[20]); COPY(regs[21]); COPY(regs[22]); COPY(regs[23]);
+ COPY(regs[24]); COPY(regs[25]); COPY(regs[26]); COPY(regs[27]);
+ COPY(regs[28]); COPY(regs[29]); COPY(regs[30]); COPY(regs[31]);
+ COPY(regs[32]); COPY(regs[33]); COPY(regs[34]); COPY(regs[35]);
+ COPY(regs[36]); COPY(regs[37]); COPY(regs[38]); COPY(regs[39]);
+ COPY(regs[40]); COPY(regs[41]); COPY(regs[42]); COPY(regs[43]);
+ COPY(regs[44]); COPY(regs[45]); COPY(regs[46]); COPY(regs[47]);
+ COPY(regs[48]); COPY(regs[49]); COPY(regs[50]); COPY(regs[51]);
+ COPY(regs[52]); COPY(regs[53]); COPY(regs[54]); COPY(regs[55]);
+ COPY(regs[56]); COPY(regs[57]); COPY(regs[58]); COPY(regs[59]);
+ COPY(regs[60]); COPY(regs[61]); COPY(regs[62]);
+ COPY(tregs[0]); COPY(tregs[1]); COPY(tregs[2]); COPY(tregs[3]);
+ COPY(tregs[4]); COPY(tregs[5]); COPY(tregs[6]); COPY(tregs[7]);
+ COPY(sr); COPY(pc);
+
+#undef COPY
+
+ err |= __put_user(mask, &sc->oldmask);
+
+ return err;
+}
+
+/*
+ * Determine which stack to use..
+ */
+static inline void __user *
+get_sigframe(struct k_sigaction *ka, unsigned long sp, size_t frame_size)
+{
+ if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! on_sig_stack(sp))
+ sp = current->sas_ss_sp + current->sas_ss_size;
+
+ return (void __user *)((sp - frame_size) & -8ul);
+}
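+
+/*
+ * Worked example (illustrative): with sp = 0x7fff1234 and frame_size =
+ * 0x200, (0x7fff1234 - 0x200) & ~7ul = 0x7fff1030, i.e. the frame sits
+ * below the stack pointer, rounded down to 8-byte alignment.
+ */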
+
+void sa_default_restorer(void); /* See comments below */
+void sa_default_rt_restorer(void); /* See comments below */
+
+static void setup_frame(int sig, struct k_sigaction *ka,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct sigframe __user *frame;
+ int err = 0;
+ int signal;
+
+ frame = get_sigframe(ka, regs->regs[REG_SP], sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ signal = current_thread_info()->exec_domain
+ && current_thread_info()->exec_domain->signal_invmap
+ && sig < 32
+ ? current_thread_info()->exec_domain->signal_invmap[sig]
+ : sig;
+
+ err |= setup_sigcontext(&frame->sc, regs, set->sig[0]);
+
+ /* Give up early, as i386 does, if anything failed */
+ if (err)
+ goto give_sigsegv;
+
+ if (_NSIG_WORDS > 1) {
+ err |= __copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask));
+ }
+
+ /* Give up early, as i386 does, if anything failed */
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ DEREF_REG_PR = (unsigned long) ka->sa.sa_restorer | 0x1;
+
+ /*
+ * On SH5 all edited pointers are subject to NEFF
+ */
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+ } else {
+ /*
+ * Different approach on SH5.
+ * . Endianness independent asm code gets placed in entry.S .
+ * This is limited to four ASM instructions corresponding
+ * to two long longs in size.
+ * . err checking is done on the else branch only
+ * . flush_icache_range() is called upon __put_user() only
+ * . all edited pointers are subject to NEFF
+ * . since this is code, the linker turns the ShMedia bit on, so
+ * always dereference at index -1 (i.e. with the LSB masked off).
+ */
+ DEREF_REG_PR = (unsigned long) frame->retcode | 0x01;
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+
+ if (__copy_to_user(frame->retcode,
+ (void *)((unsigned long)sa_default_restorer & (~1)), 16) != 0)
+ goto give_sigsegv;
+
+ /* Cohere the trampoline with the I-cache. */
+ flush_cache_sigtramp(DEREF_REG_PR-1, DEREF_REG_PR-1+16);
+ }
+
+ /*
+ * Set up registers for signal handler.
+ * All edited pointers are subject to NEFF.
+ */
+ regs->regs[REG_SP] = (unsigned long) frame;
+ regs->regs[REG_SP] = (regs->regs[REG_SP] & NEFF_SIGN) ?
+ (regs->regs[REG_SP] | NEFF_MASK) : regs->regs[REG_SP];
+ regs->regs[REG_ARG1] = signal; /* Arg for signal handler */
+
+ /* FIXME:
+ The glibc profiling support for SH-5 needs to be passed a sigcontext
+ so it can retrieve the PC. At some point during 2003 the glibc
+ support was changed to receive the sigcontext through the 2nd
+ argument, but there are still versions of libc.so in use that use
+ the 3rd argument. Until libc.so is stabilised, pass the sigcontext
+ through both 2nd and 3rd arguments.
+ */
+
+ regs->regs[REG_ARG2] = (unsigned long long)(unsigned long)(signed long)&frame->sc;
+ regs->regs[REG_ARG3] = (unsigned long long)(unsigned long)(signed long)&frame->sc;
+
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->pc = (regs->pc & NEFF_SIGN) ? (regs->pc | NEFF_MASK) : regs->pc;
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ /* Broken %016Lx */
+ printk("SIG deliver (#%d,%s:%d): sp=%p pc=%08Lx%08Lx link=%08Lx%08Lx\n",
+ signal,
+ current->comm, current->pid, frame,
+ regs->pc >> 32, regs->pc & 0xffffffff,
+ DEREF_REG_PR >> 32, DEREF_REG_PR & 0xffffffff);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+}
+
+static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct rt_sigframe __user *frame;
+ int err = 0;
+ int signal;
+
+ frame = get_sigframe(ka, regs->regs[REG_SP], sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ signal = current_thread_info()->exec_domain
+ && current_thread_info()->exec_domain->signal_invmap
+ && sig < 32
+ ? current_thread_info()->exec_domain->signal_invmap[sig]
+ : sig;
+
+ err |= __put_user(&frame->info, &frame->pinfo);
+ err |= __put_user(&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user(&frame->info, info);
+
+ /* Give up early, as i386 does, if anything failed */
+ if (err)
+ goto give_sigsegv;
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user((void *)current->sas_ss_sp,
+ &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->regs[REG_SP]),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext(&frame->uc.uc_mcontext,
+ regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+
+ /* Give up early, as i386 does, if anything failed */
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ DEREF_REG_PR = (unsigned long) ka->sa.sa_restorer | 0x1;
+
+ /*
+ * On SH5 all edited pointers are subject to NEFF
+ */
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+ } else {
+ /*
+ * Different approach on SH5.
+ * . Endianness independent asm code gets placed in entry.S .
+ * This is limited to four ASM instructions corresponding
+ * to two long longs in size.
+ * . err checking is done on the else branch only
+ * . flush_icache_range() is called upon __put_user() only
+ * . all edited pointers are subject to NEFF
+ * . since this is code, the linker turns the ShMedia bit on, so
+ * always dereference at index -1 (i.e. with the LSB masked off).
+ */
+
+ DEREF_REG_PR = (unsigned long) frame->retcode | 0x01;
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+
+ if (__copy_to_user(frame->retcode,
+ (void *)((unsigned long)sa_default_rt_restorer & (~1)), 16) != 0)
+ goto give_sigsegv;
+
+ flush_icache_range(DEREF_REG_PR-1, DEREF_REG_PR-1+16);
+ }
+
+ /*
+ * Set up registers for signal handler.
+ * All edited pointers are subject to NEFF.
+ */
+ regs->regs[REG_SP] = (unsigned long) frame;
+ regs->regs[REG_SP] = (regs->regs[REG_SP] & NEFF_SIGN) ?
+ (regs->regs[REG_SP] | NEFF_MASK) : regs->regs[REG_SP];
+ regs->regs[REG_ARG1] = signal; /* Arg for signal handler */
+ regs->regs[REG_ARG2] = (unsigned long long)(unsigned long)(signed long)&frame->info;
+ regs->regs[REG_ARG3] = (unsigned long long)(unsigned long)(signed long)&frame->uc.uc_mcontext;
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->pc = (regs->pc & NEFF_SIGN) ? (regs->pc | NEFF_MASK) : regs->pc;
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ /* Broken %016Lx */
+ printk("SIG deliver (#%d,%s:%d): sp=%p pc=%08Lx%08Lx link=%08Lx%08Lx\n",
+ signal,
+ current->comm, current->pid, frame,
+ regs->pc >> 32, regs->pc & 0xffffffff,
+ DEREF_REG_PR >> 32, DEREF_REG_PR & 0xffffffff);
+#endif
+
+ return;
+
+give_sigsegv:
+ if (sig == SIGSEGV)
+ ka->sa.sa_handler = SIG_DFL;
+ force_sig(SIGSEGV, current);
+}
+
+/*
+ * OK, we're invoking a handler
+ */
+
+static void
+handle_signal(unsigned long sig, siginfo_t *info, sigset_t *oldset,
+ struct pt_regs * regs)
+{
+ struct k_sigaction *ka = &current->sighand->action[sig-1];
+
+ /* Are we from a system call? */
+ if (regs->syscall_nr >= 0) {
+ /* If so, check system call restarting.. */
+ switch (regs->regs[REG_RET]) {
+ case -ERESTARTNOHAND:
+ regs->regs[REG_RET] = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+ regs->regs[REG_RET] = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+ /* Decode syscall # */
+ regs->regs[REG_RET] = regs->syscall_nr;
+ regs->pc -= 4;
+ }
+ }
+
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+ setup_rt_frame(sig, ka, info, oldset, regs);
+ else
+ setup_frame(sig, ka, oldset, regs);
+
+ if (ka->sa.sa_flags & SA_ONESHOT)
+ ka->sa.sa_handler = SIG_DFL;
+
+ if (!(ka->sa.sa_flags & SA_NODEFER)) {
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ sigaddset(&current->blocked, sig);
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+ }
+}
+
+/*
+ * Note that 'init' is a special process: it doesn't get signals it doesn't
+ * want to handle. Thus you cannot kill init even with a SIGKILL even by
+ * mistake.
+ *
+ * Note that we go through the signals twice: once to check the signals that
+ * the kernel can handle, and then we build all the user-level signal handling
+ * stack-frames in one go after that.
+ */
+int do_signal(struct pt_regs *regs, sigset_t *oldset)
+{
+ siginfo_t info;
+ int signr;
+
+ /*
+ * We want the common case to go fast, which
+ * is why we may in certain cases get here from
+ * kernel mode. Just return without doing anything
+ * if so.
+ */
+ if (!user_mode(regs))
+ return 1;
+
+ if (current->flags & PF_FREEZE) {
+ refrigerator(0);
+ goto no_signal;
+ }
+
+ if (!oldset)
+ oldset = &current->blocked;
+
+ signr = get_signal_to_deliver(&info, regs, 0);
+
+ if (signr > 0) {
+ /* Whee! Actually deliver the signal. */
+ handle_signal(signr, &info, oldset, regs);
+ return 1;
+ }
+
+no_signal:
+ /* Did we come from a system call? */
+ if (regs->syscall_nr >= 0) {
+ /* Restart the system call - no handlers present */
+ if (regs->regs[REG_RET] == -ERESTARTNOHAND ||
+ regs->regs[REG_RET] == -ERESTARTSYS ||
+ regs->regs[REG_RET] == -ERESTARTNOINTR) {
+ /* Decode Syscall # */
+ regs->regs[REG_RET] = regs->syscall_nr;
+ regs->pc -= 4;
+ }
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * arch/sh64/kernel/switchto.S
+ *
+ * sh64 context switch
+ *
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+*/
+
+ .section .text..SHmedia32,"ax"
+ .little
+
+ .balign 32
+
+ .type sh64_switch_to,@function
+ .global sh64_switch_to
+ .global __sh64_switch_to_end
+sh64_switch_to:
+
+/* Incoming args
+ r2 - prev
+ r3 - &prev->thread
+ r4 - next
+ r5 - &next->thread
+
+ Outgoing results
+ r2 - last (=prev)
+
+ Want to create a full (struct pt_regs) on the stack to allow backtracing
+ functions to work. However, we only need to populate the callee-save
+ register slots in this structure; since we're a function our ancestors must
+ have themselves preserved all caller saved state in the stack. This saves
+ some wasted effort since we won't need to look at the values.
+
+ In particular, all caller-save registers are immediately available for
+ scratch use.
+
+*/
+
+#define FRAME_SIZE (76*8 + 8)
+
+ movi FRAME_SIZE, r0
+ sub.l r15, r0, r15
+ ! Do normal-style register save to support backtrace
+
+ st.l r15, 0, r18 ! save link reg
+ st.l r15, 4, r14 ! save fp
+ add.l r15, r63, r14 ! setup frame pointer
+
+ ! hopefully this looks normal to the backtrace now.
+
+ addi.l r15, 8, r1 ! base of pt_regs
+ addi.l r1, 24, r0 ! base of pt_regs.regs
+ addi.l r0, (63*8), r8 ! base of pt_regs.trregs
+
+ /* Note : to be fixed?
+ struct pt_regs is really designed for holding the state on entry
+ to an exception, i.e. pc,sr,regs etc. However, for the context
+ switch state, some of this is not required. But the unwinder takes
+ struct pt_regs * as an arg so we have to build this structure
+ to allow unwinding switched tasks in show_state() */
+
+ st.q r0, ( 9*8), r9
+ st.q r0, (10*8), r10
+ st.q r0, (11*8), r11
+ st.q r0, (12*8), r12
+ st.q r0, (13*8), r13
+ st.q r0, (14*8), r14 ! for unwind, want to look as though we took a trap at
+ ! the point where the process is left in suspended animation, i.e. current
+ ! fp here, not the saved one.
+ st.q r0, (16*8), r16
+
+ st.q r0, (24*8), r24
+ st.q r0, (25*8), r25
+ st.q r0, (26*8), r26
+ st.q r0, (27*8), r27
+ st.q r0, (28*8), r28
+ st.q r0, (29*8), r29
+ st.q r0, (30*8), r30
+ st.q r0, (31*8), r31
+ st.q r0, (32*8), r32
+ st.q r0, (33*8), r33
+ st.q r0, (34*8), r34
+ st.q r0, (35*8), r35
+
+ st.q r0, (44*8), r44
+ st.q r0, (45*8), r45
+ st.q r0, (46*8), r46
+ st.q r0, (47*8), r47
+ st.q r0, (48*8), r48
+ st.q r0, (49*8), r49
+ st.q r0, (50*8), r50
+ st.q r0, (51*8), r51
+ st.q r0, (52*8), r52
+ st.q r0, (53*8), r53
+ st.q r0, (54*8), r54
+ st.q r0, (55*8), r55
+ st.q r0, (56*8), r56
+ st.q r0, (57*8), r57
+ st.q r0, (58*8), r58
+ st.q r0, (59*8), r59
+
+ ! do this early as pta->gettr has no pipeline forwarding (=> 5 cycle latency)
+ ! Use a local label to avoid creating a symbol that will confuse the
+ ! backtrace
+ pta .Lsave_pc, tr0
+
+ gettr tr5, r45
+ gettr tr6, r46
+ gettr tr7, r47
+ st.q r8, (5*8), r45
+ st.q r8, (6*8), r46
+ st.q r8, (7*8), r47
+
+ ! Now switch context
+ gettr tr0, r9
+ st.l r3, 0, r15 ! prev->thread.sp
+ st.l r3, 8, r1 ! prev->thread.kregs
+ st.l r3, 4, r9 ! prev->thread.pc
+ st.q r1, 0, r9 ! save prev->thread.pc into pt_regs->pc
+
+ ! Load PC for next task (init value or save_pc later)
+ ld.l r5, 4, r18 ! next->thread.pc
+ ! Switch stacks
+ ld.l r5, 0, r15 ! next->thread.sp
+ ptabs r18, tr0
+
+ ! Update current
+ ld.l r4, 4, r9 ! next->thread_info (2nd element of next task_struct)
+ putcon r9, kcr0 ! current = next->thread_info
+
+ ! go to save_pc for a reschedule, or the initial thread.pc for a new process
+ blink tr0, r63
+
+ ! Restore (when we come back to a previously saved task)
+.Lsave_pc:
+ addi.l r15, 32, r0 ! r0 = next's regs
+ addi.l r0, (63*8), r8 ! r8 = next's tr_regs
+
+ ld.q r8, (5*8), r45
+ ld.q r8, (6*8), r46
+ ld.q r8, (7*8), r47
+ ptabs r45, tr5
+ ptabs r46, tr6
+ ptabs r47, tr7
+
+ ld.q r0, ( 9*8), r9
+ ld.q r0, (10*8), r10
+ ld.q r0, (11*8), r11
+ ld.q r0, (12*8), r12
+ ld.q r0, (13*8), r13
+ ld.q r0, (14*8), r14
+ ld.q r0, (16*8), r16
+
+ ld.q r0, (24*8), r24
+ ld.q r0, (25*8), r25
+ ld.q r0, (26*8), r26
+ ld.q r0, (27*8), r27
+ ld.q r0, (28*8), r28
+ ld.q r0, (29*8), r29
+ ld.q r0, (30*8), r30
+ ld.q r0, (31*8), r31
+ ld.q r0, (32*8), r32
+ ld.q r0, (33*8), r33
+ ld.q r0, (34*8), r34
+ ld.q r0, (35*8), r35
+
+ ld.q r0, (44*8), r44
+ ld.q r0, (45*8), r45
+ ld.q r0, (46*8), r46
+ ld.q r0, (47*8), r47
+ ld.q r0, (48*8), r48
+ ld.q r0, (49*8), r49
+ ld.q r0, (50*8), r50
+ ld.q r0, (51*8), r51
+ ld.q r0, (52*8), r52
+ ld.q r0, (53*8), r53
+ ld.q r0, (54*8), r54
+ ld.q r0, (55*8), r55
+ ld.q r0, (56*8), r56
+ ld.q r0, (57*8), r57
+ ld.q r0, (58*8), r58
+ ld.q r0, (59*8), r59
+
+ ! epilogue
+ ld.l r15, 0, r18
+ ld.l r15, 4, r14
+ ori r4, 0, r2 ! last = prev
+ ptabs r18, tr0
+ movi FRAME_SIZE, r0
+ add r15, r0, r15
+ blink tr0, r63
+__sh64_switch_to_end:
+.LFE1:
+ .size sh64_switch_to,.LFE1-sh64_switch_to
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/sys_sh64.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * This file contains various random system calls that
+ * have a non-standard calling sequence on the Linux/SH5
+ * platform.
+ *
+ * Mostly taken from i386 version.
+ *
+ */
+
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/sem.h>
+#include <linux/msg.h>
+#include <linux/shm.h>
+#include <linux/stat.h>
+#include <linux/mman.h>
+#include <linux/file.h>
+#include <linux/utsname.h>
+#include <linux/syscalls.h>
+#include <asm/uaccess.h>
+#include <asm/ipc.h>
+#include <asm/ptrace.h>
+
+#define REG_3 3
+
+/*
+ * sys_pipe() is the normal C calling standard for creating
+ * a pipe. It's not the way Unix traditionally does this, though.
+ */
+#ifdef NEW_PIPE_IMPLEMENTATION
+asmlinkage int sys_pipe(unsigned long * fildes,
+ unsigned long dummy_r3,
+ unsigned long dummy_r4,
+ unsigned long dummy_r5,
+ unsigned long dummy_r6,
+ unsigned long dummy_r7,
+ struct pt_regs * regs) /* r8 = pt_regs forced by entry.S */
+{
+ int fd[2];
+ int ret;
+
+ ret = do_pipe(fd);
+ if (ret == 0)
+ /*
+ ***********************************************************************
+ * To avoid the copy_to_user we prefer to break the ABI's convention, *
+ * packing the valid pair of file descriptors into a single register (r3); *
+ * while r2 is the return code as defined by the sh5-ABI. *
+ * BE CAREFUL: pipe stub, into glibc, must be aware of this solution *
+ ***********************************************************************
+
+#ifdef __LITTLE_ENDIAN__
+ regs->regs[REG_3] = (((unsigned long long) fd[1]) << 32) | ((unsigned long long) fd[0]);
+#else
+ regs->regs[REG_3] = (((unsigned long long) fd[0]) << 32) | ((unsigned long long) fd[1]);
+#endif
+
+ */
+ /* although not very clever, this is endianness-independent */
+ regs->regs[REG_3] = (unsigned long long) *((unsigned long long *) fd);
+
+ return ret;
+}
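+
+/*
+ * Worked example (illustrative): if do_pipe() returns fd[0] = 3 and
+ * fd[1] = 4, the 64-bit load above hands the pair to the glibc stub in
+ * r3 exactly as it sits in memory, so a little-endian reader sees
+ * r3 == 0x0000000400000003 and recovers fd[0] from the low word.
+ */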
+
+#else
+asmlinkage int sys_pipe(unsigned long * fildes)
+{
+ int fd[2];
+ int error;
+
+ error = do_pipe(fd);
+ if (!error) {
+ if (copy_to_user(fildes, fd, 2*sizeof(int)))
+ error = -EFAULT;
+ }
+ return error;
+}
+
+#endif
+
+/*
+ * To avoid cache aliasing, we map the shared page with the same colour.
+ */
+#define COLOUR_ALIGN(addr) (((addr)+SHMLBA-1)&~(SHMLBA-1))
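+
+/*
+ * Worked example (illustrative, assuming SHMLBA were 0x4000): then
+ * COLOUR_ALIGN(0x12345) = (0x12345 + 0x3fff) & ~0x3fff = 0x14000, i.e.
+ * the address is rounded up to the next cache-colour boundary.
+ */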
+
+unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+ struct vm_area_struct *vma;
+
+ if (flags & MAP_FIXED) {
+ /* We do not accept a shared mapping if it would violate
+ * cache aliasing constraints.
+ */
+ if ((flags & MAP_SHARED) && (addr & (SHMLBA - 1)))
+ return -EINVAL;
+ return addr;
+ }
+
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+ if (!addr)
+ addr = TASK_UNMAPPED_BASE;
+
+ if (flags & MAP_PRIVATE)
+ addr = PAGE_ALIGN(addr);
+ else
+ addr = COLOUR_ALIGN(addr);
+
+ for (vma = find_vma(current->mm, addr); ; vma = vma->vm_next) {
+ /* At this point: (!vma || addr < vma->vm_end). */
+ if (TASK_SIZE - len < addr)
+ return -ENOMEM;
+ if (!vma || addr + len <= vma->vm_start)
+ return addr;
+ addr = vma->vm_end;
+ if (!(flags & MAP_PRIVATE))
+ addr = COLOUR_ALIGN(addr);
+ }
+}
+
+/* common code for old and new mmaps */
+static inline long do_mmap2(
+ unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+ unsigned long fd, unsigned long pgoff)
+{
+ int error = -EBADF;
+ struct file * file = NULL;
+
+ flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
+ if (!(flags & MAP_ANONYMOUS)) {
+ file = fget(fd);
+ if (!file)
+ goto out;
+ }
+
+	down_write(&current->mm->mmap_sem);
+	error = do_mmap_pgoff(file, addr, len, prot, flags, pgoff);
+	up_write(&current->mm->mmap_sem);
+
+ if (file)
+ fput(file);
+out:
+ return error;
+}
+
+asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+ unsigned long fd, unsigned long pgoff)
+{
+ return do_mmap2(addr, len, prot, flags, fd, pgoff);
+}
+
+asmlinkage int old_mmap(unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+ int fd, unsigned long off)
+{
+ if (off & ~PAGE_MASK)
+ return -EINVAL;
+ return do_mmap2(addr, len, prot, flags, fd, off>>PAGE_SHIFT);
+}
+
+/*
+ * sys_ipc() is the de-multiplexer for the SysV IPC calls..
+ *
+ * This is really horribly ugly.
+ */
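+/*
+ * Example (illustrative, not an exhaustive map): glibc's
+ * semop(semid, sops, nsops) arrives here as
+ * sys_ipc(SEMOP, semid, nsops, 0, sops, 0), so first/second/ptr carry
+ * the real arguments; version == call >> 16 is zero for modern callers
+ * and non-zero only for old binary-compat entry points.
+ */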
+asmlinkage int sys_ipc(uint call, int first, int second,
+ int third, void __user *ptr, long fifth)
+{
+ int version, ret;
+
+ version = call >> 16; /* hack for backward compatibility */
+ call &= 0xffff;
+
+ if (call <= SEMCTL)
+ switch (call) {
+ case SEMOP:
+ return sys_semtimedop(first, (struct sembuf __user *)ptr,
+ second, NULL);
+ case SEMTIMEDOP:
+ return sys_semtimedop(first, (struct sembuf __user *)ptr,
+ second,
+ (const struct timespec __user *)fifth);
+ case SEMGET:
+ return sys_semget (first, second, third);
+ case SEMCTL: {
+ union semun fourth;
+ if (!ptr)
+ return -EINVAL;
+ if (get_user(fourth.__pad, (void * __user *) ptr))
+ return -EFAULT;
+ return sys_semctl (first, second, third, fourth);
+ }
+ default:
+ return -EINVAL;
+ }
+
+ if (call <= MSGCTL)
+ switch (call) {
+ case MSGSND:
+ return sys_msgsnd (first, (struct msgbuf __user *) ptr,
+ second, third);
+ case MSGRCV:
+ switch (version) {
+ case 0: {
+ struct ipc_kludge tmp;
+ if (!ptr)
+ return -EINVAL;
+
+ if (copy_from_user(&tmp,
+ (struct ipc_kludge __user *) ptr,
+ sizeof (tmp)))
+ return -EFAULT;
+ return sys_msgrcv (first, tmp.msgp, second,
+ tmp.msgtyp, third);
+ }
+ default:
+ return sys_msgrcv (first,
+ (struct msgbuf __user *) ptr,
+ second, fifth, third);
+ }
+ case MSGGET:
+ return sys_msgget ((key_t) first, second);
+ case MSGCTL:
+ return sys_msgctl (first, second,
+ (struct msqid_ds __user *) ptr);
+ default:
+ return -EINVAL;
+ }
+ if (call <= SHMCTL)
+ switch (call) {
+ case SHMAT:
+ switch (version) {
+ default: {
+ ulong raddr;
+ ret = do_shmat (first, (char __user *) ptr,
+ second, &raddr);
+ if (ret)
+ return ret;
+ return put_user (raddr, (ulong __user *) third);
+ }
+ case 1: /* iBCS2 emulator entry point */
+ if (!segment_eq(get_fs(), get_ds()))
+ return -EINVAL;
+ return do_shmat (first, (char __user *) ptr,
+ second, (ulong *) third);
+ }
+ case SHMDT:
+ return sys_shmdt ((char __user *)ptr);
+ case SHMGET:
+ return sys_shmget (first, second, third);
+ case SHMCTL:
+ return sys_shmctl (first, second,
+ (struct shmid_ds __user *) ptr);
+ default:
+ return -EINVAL;
+ }
+
+ return -EINVAL;
+}
+
+asmlinkage int sys_uname(struct old_utsname * name)
+{
+ int err;
+ if (!name)
+ return -EFAULT;
+ down_read(&uts_sem);
+	err = copy_to_user(name, &system_utsname, sizeof(*name));
+	up_read(&uts_sem);
+	return err ? -EFAULT : 0;
+}
+
--- /dev/null
+/*
+ * arch/sh64/kernel/syscalls.S
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/sys.h>
+
+ .section .data, "aw"
+ .balign 32
+
+/*
+ * System calls jump table
+ */
+ .globl sys_call_table
+sys_call_table:
+ .long sys_ni_syscall /* 0 - old "setup()" system call */
+ .long sys_exit
+ .long sys_fork
+ .long sys_read
+ .long sys_write
+ .long sys_open /* 5 */
+ .long sys_close
+ .long sys_waitpid
+ .long sys_creat
+ .long sys_link
+ .long sys_unlink /* 10 */
+ .long sys_execve
+ .long sys_chdir
+ .long sys_time
+ .long sys_mknod
+ .long sys_chmod /* 15 */
+ .long sys_lchown16
+ .long sys_ni_syscall /* old break syscall holder */
+ .long sys_stat
+ .long sys_lseek
+ .long sys_getpid /* 20 */
+ .long sys_mount
+ .long sys_oldumount
+ .long sys_setuid16
+ .long sys_getuid16
+ .long sys_stime /* 25 */
+ .long sys_ptrace
+ .long sys_alarm
+ .long sys_fstat
+ .long sys_pause
+ .long sys_utime /* 30 */
+ .long sys_ni_syscall /* old stty syscall holder */
+ .long sys_ni_syscall /* old gtty syscall holder */
+ .long sys_access
+ .long sys_nice
+ .long sys_ni_syscall /* 35 */ /* old ftime syscall holder */
+ .long sys_sync
+ .long sys_kill
+ .long sys_rename
+ .long sys_mkdir
+ .long sys_rmdir /* 40 */
+ .long sys_dup
+ .long sys_pipe
+ .long sys_times
+ .long sys_ni_syscall /* old prof syscall holder */
+ .long sys_brk /* 45 */
+ .long sys_setgid16
+ .long sys_getgid16
+ .long sys_signal
+ .long sys_geteuid16
+ .long sys_getegid16 /* 50 */
+ .long sys_acct
+	.long sys_umount	/* recycled never used phys() */
+ .long sys_ni_syscall /* old lock syscall holder */
+ .long sys_ioctl
+ .long sys_fcntl /* 55 */
+ .long sys_ni_syscall /* old mpx syscall holder */
+ .long sys_setpgid
+ .long sys_ni_syscall /* old ulimit syscall holder */
+ .long sys_ni_syscall /* sys_olduname */
+ .long sys_umask /* 60 */
+ .long sys_chroot
+ .long sys_ustat
+ .long sys_dup2
+ .long sys_getppid
+ .long sys_getpgrp /* 65 */
+ .long sys_setsid
+ .long sys_sigaction
+ .long sys_sgetmask
+ .long sys_ssetmask
+ .long sys_setreuid16 /* 70 */
+ .long sys_setregid16
+ .long sys_sigsuspend
+ .long sys_sigpending
+ .long sys_sethostname
+ .long sys_setrlimit /* 75 */
+ .long sys_old_getrlimit
+ .long sys_getrusage
+ .long sys_gettimeofday
+ .long sys_settimeofday
+ .long sys_getgroups16 /* 80 */
+ .long sys_setgroups16
+ .long sys_ni_syscall /* sys_oldselect */
+ .long sys_symlink
+ .long sys_lstat
+ .long sys_readlink /* 85 */
+ .long sys_uselib
+ .long sys_swapon
+ .long sys_reboot
+ .long old_readdir
+ .long old_mmap /* 90 */
+ .long sys_munmap
+ .long sys_truncate
+ .long sys_ftruncate
+ .long sys_fchmod
+ .long sys_fchown16 /* 95 */
+ .long sys_getpriority
+ .long sys_setpriority
+ .long sys_ni_syscall /* old profil syscall holder */
+ .long sys_statfs
+ .long sys_fstatfs /* 100 */
+ .long sys_ni_syscall /* ioperm */
+ .long sys_socketcall /* Obsolete implementation of socket syscall */
+ .long sys_syslog
+ .long sys_setitimer
+ .long sys_getitimer /* 105 */
+ .long sys_newstat
+ .long sys_newlstat
+ .long sys_newfstat
+ .long sys_uname
+ .long sys_ni_syscall /* 110 */ /* iopl */
+ .long sys_vhangup
+ .long sys_ni_syscall /* idle */
+ .long sys_ni_syscall /* vm86old */
+ .long sys_wait4
+ .long sys_swapoff /* 115 */
+ .long sys_sysinfo
+ .long sys_ipc /* Obsolete ipc syscall implementation */
+ .long sys_fsync
+ .long sys_sigreturn
+ .long sys_clone /* 120 */
+ .long sys_setdomainname
+ .long sys_newuname
+ .long sys_ni_syscall /* sys_modify_ldt */
+ .long sys_adjtimex
+ .long sys_mprotect /* 125 */
+ .long sys_sigprocmask
+ .long sys_ni_syscall /* old "create_module" */
+ .long sys_init_module
+ .long sys_delete_module
+ .long sys_ni_syscall /* 130: old "get_kernel_syms" */
+ .long sys_quotactl
+ .long sys_getpgid
+ .long sys_fchdir
+ .long sys_bdflush
+ .long sys_sysfs /* 135 */
+ .long sys_personality
+ .long sys_ni_syscall /* for afs_syscall */
+ .long sys_setfsuid16
+ .long sys_setfsgid16
+ .long sys_llseek /* 140 */
+ .long sys_getdents
+ .long sys_select
+ .long sys_flock
+ .long sys_msync
+ .long sys_readv /* 145 */
+ .long sys_writev
+ .long sys_getsid
+ .long sys_fdatasync
+ .long sys_sysctl
+ .long sys_mlock /* 150 */
+ .long sys_munlock
+ .long sys_mlockall
+ .long sys_munlockall
+ .long sys_sched_setparam
+ .long sys_sched_getparam /* 155 */
+ .long sys_sched_setscheduler
+ .long sys_sched_getscheduler
+ .long sys_sched_yield
+ .long sys_sched_get_priority_max
+ .long sys_sched_get_priority_min /* 160 */
+ .long sys_sched_rr_get_interval
+ .long sys_nanosleep
+ .long sys_mremap
+ .long sys_setresuid16
+ .long sys_getresuid16 /* 165 */
+ .long sys_ni_syscall /* vm86 */
+ .long sys_ni_syscall /* old "query_module" */
+ .long sys_poll
+ .long sys_nfsservctl
+ .long sys_setresgid16 /* 170 */
+ .long sys_getresgid16
+ .long sys_prctl
+ .long sys_rt_sigreturn
+ .long sys_rt_sigaction
+ .long sys_rt_sigprocmask /* 175 */
+ .long sys_rt_sigpending
+ .long sys_rt_sigtimedwait
+ .long sys_rt_sigqueueinfo
+ .long sys_rt_sigsuspend
+ .long sys_pread64 /* 180 */
+ .long sys_pwrite64
+ .long sys_chown16
+ .long sys_getcwd
+ .long sys_capget
+ .long sys_capset /* 185 */
+ .long sys_sigaltstack
+ .long sys_sendfile
+ .long sys_ni_syscall /* streams1 */
+ .long sys_ni_syscall /* streams2 */
+ .long sys_vfork /* 190 */
+ .long sys_getrlimit
+ .long sys_mmap2
+ .long sys_truncate64
+ .long sys_ftruncate64
+ .long sys_stat64 /* 195 */
+ .long sys_lstat64
+ .long sys_fstat64
+ .long sys_lchown
+ .long sys_getuid
+ .long sys_getgid /* 200 */
+ .long sys_geteuid
+ .long sys_getegid
+ .long sys_setreuid
+ .long sys_setregid
+ .long sys_getgroups /* 205 */
+ .long sys_setgroups
+ .long sys_fchown
+ .long sys_setresuid
+ .long sys_getresuid
+ .long sys_setresgid /* 210 */
+ .long sys_getresgid
+ .long sys_chown
+ .long sys_setuid
+ .long sys_setgid
+ .long sys_setfsuid /* 215 */
+ .long sys_setfsgid
+ .long sys_pivot_root
+ .long sys_mincore
+ .long sys_madvise
+ /* Broken-out socket family (maintain backwards compatibility in syscall
+ numbering with 2.4) */
+ .long sys_socket /* 220 */
+ .long sys_bind
+ .long sys_connect
+ .long sys_listen
+ .long sys_accept
+ .long sys_getsockname /* 225 */
+ .long sys_getpeername
+ .long sys_socketpair
+ .long sys_send
+ .long sys_sendto
+	.long sys_recv /* 230 */
+ .long sys_recvfrom
+ .long sys_shutdown
+ .long sys_setsockopt
+ .long sys_getsockopt
+ .long sys_sendmsg /* 235 */
+ .long sys_recvmsg
+ /* Broken-out IPC family (maintain backwards compatibility in syscall
+ numbering with 2.4) */
+ .long sys_semop
+ .long sys_semget
+ .long sys_semctl
+ .long sys_msgsnd /* 240 */
+ .long sys_msgrcv
+ .long sys_msgget
+ .long sys_msgctl
+ .long sys_ni_syscall /* sys_shmatcall */
+ .long sys_shmdt /* 245 */
+ .long sys_shmget
+ .long sys_shmctl
+ /* Rest of syscalls listed in 2.4 i386 unistd.h */
+ .long sys_getdents64
+ .long sys_fcntl64
+ .long sys_ni_syscall /* 250 reserved for TUX */
+ .long sys_ni_syscall /* Reserved for Security */
+ .long sys_gettid
+ .long sys_readahead
+ .long sys_setxattr
+ .long sys_lsetxattr /* 255 */
+ .long sys_fsetxattr
+ .long sys_getxattr
+ .long sys_lgetxattr
+ .long sys_fgetxattr
+ .long sys_listxattr /* 260 */
+ .long sys_llistxattr
+ .long sys_flistxattr
+ .long sys_removexattr
+ .long sys_lremovexattr
+ .long sys_fremovexattr /* 265 */
+ .long sys_tkill
+ .long sys_sendfile64
+ .long sys_futex
+ .long sys_sched_setaffinity
+ .long sys_sched_getaffinity /* 270 */
+ .long sys_ni_syscall
+ .long sys_ni_syscall
+ .long sys_io_setup
+ .long sys_io_destroy
+ .long sys_io_getevents /* 275 */
+ .long sys_io_submit
+ .long sys_io_cancel
+ .long sys_fadvise64
+ .long sys_ni_syscall
+ .long sys_exit_group /* 280 */
+ /* Rest of new 2.6 syscalls */
+ .long sys_lookup_dcookie
+ .long sys_epoll_create
+ .long sys_epoll_ctl
+ .long sys_epoll_wait
+ .long sys_remap_file_pages /* 285 */
+ .long sys_set_tid_address
+ .long sys_timer_create
+ .long sys_timer_settime
+ .long sys_timer_gettime
+ .long sys_timer_getoverrun /* 290 */
+ .long sys_timer_delete
+ .long sys_clock_settime
+ .long sys_clock_gettime
+ .long sys_clock_getres
+ .long sys_clock_nanosleep /* 295 */
+ .long sys_statfs64
+ .long sys_fstatfs64
+ .long sys_tgkill
+ .long sys_utimes
+ .long sys_fadvise64_64 /* 300 */
+ .long sys_ni_syscall /* Reserved for vserver */
+ .long sys_ni_syscall /* Reserved for mbind */
+ .long sys_ni_syscall /* get_mempolicy */
+ .long sys_ni_syscall /* set_mempolicy */
+ .long sys_mq_open /* 305 */
+ .long sys_mq_unlink
+ .long sys_mq_timedsend
+ .long sys_mq_timedreceive
+ .long sys_mq_notify
+ .long sys_mq_getsetattr /* 310 */
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/time.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003 Richard Curnow
+ *
+ * Original TMU/RTC code taken from sh version.
+ * Copyright (C) 1999 Tetsuya Okada & Niibe Yutaka
+ * Some code taken from i386 version.
+ * Copyright (C) 1991, 1992, 1995 Linus Torvalds
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/profile.h>
+#include <linux/smp.h>
+
+#include <asm/registers.h> /* required by inline __asm__ stmt. */
+
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/delay.h>
+
+#include <linux/timex.h>
+#include <linux/irq.h>
+#include <asm/hardware.h>
+
+#define TMU_TOCR_INIT 0x00
+#define TMU0_TCR_INIT 0x0020
+#define TMU_TSTR_INIT 1
+
+/* RCR1 Bits */
+#define RCR1_CF 0x80 /* Carry Flag */
+#define RCR1_CIE 0x10 /* Carry Interrupt Enable */
+#define RCR1_AIE 0x08 /* Alarm Interrupt Enable */
+#define RCR1_AF 0x01 /* Alarm Flag */
+
+/* RCR2 Bits */
+#define RCR2_PEF 0x80 /* PEriodic interrupt Flag */
+#define RCR2_PESMASK 0x70 /* Periodic interrupt Set */
+#define RCR2_RTCEN 0x08 /* ENable RTC */
+#define RCR2_ADJ 0x04 /* ADJustment (30-second) */
+#define RCR2_RESET 0x02 /* Reset bit */
+#define RCR2_START 0x01 /* Start bit */
+
+/* Clock, Power and Reset Controller */
+#define CPRC_BLOCK_OFF 0x01010000
+#define CPRC_BASE PHYS_PERIPHERAL_BLOCK + CPRC_BLOCK_OFF
+
+#define FRQCR (cprc_base+0x0)
+#define WTCSR (cprc_base+0x0018)
+#define STBCR (cprc_base+0x0030)
+
+/* Time Management Unit */
+#define TMU_BLOCK_OFF 0x01020000
+#define TMU_BASE PHYS_PERIPHERAL_BLOCK + TMU_BLOCK_OFF
+#define TMU0_BASE tmu_base + 0x8 + (0xc * 0x0)
+#define TMU1_BASE tmu_base + 0x8 + (0xc * 0x1)
+#define TMU2_BASE tmu_base + 0x8 + (0xc * 0x2)
+
+#define TMU_TOCR tmu_base+0x0 /* Byte access */
+#define TMU_TSTR tmu_base+0x4 /* Byte access */
+
+#define TMU0_TCOR TMU0_BASE+0x0 /* Long access */
+#define TMU0_TCNT TMU0_BASE+0x4 /* Long access */
+#define TMU0_TCR TMU0_BASE+0x8 /* Word access */
+
+/* Real Time Clock */
+#define RTC_BLOCK_OFF 0x01040000
+#define RTC_BASE PHYS_PERIPHERAL_BLOCK + RTC_BLOCK_OFF
+
+#define R64CNT rtc_base+0x00
+#define RSECCNT rtc_base+0x04
+#define RMINCNT rtc_base+0x08
+#define RHRCNT rtc_base+0x0c
+#define RWKCNT rtc_base+0x10
+#define RDAYCNT rtc_base+0x14
+#define RMONCNT rtc_base+0x18
+#define RYRCNT rtc_base+0x1c /* 16bit */
+#define RSECAR rtc_base+0x20
+#define RMINAR rtc_base+0x24
+#define RHRAR rtc_base+0x28
+#define RWKAR rtc_base+0x2c
+#define RDAYAR rtc_base+0x30
+#define RMONAR rtc_base+0x34
+#define RCR1 rtc_base+0x38
+#define RCR2 rtc_base+0x3c
+
+#ifndef BCD_TO_BIN
+#define BCD_TO_BIN(val) ((val)=((val)&15) + ((val)>>4)*10)
+#endif
+
+#ifndef BIN_TO_BCD
+#define BIN_TO_BCD(val) ((val)=(((val)/10)<<4) + (val)%10)
+#endif
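+
+/*
+ * Example: the RTC holds 59 seconds as BCD 0x59; BCD_TO_BIN(0x59)
+ * gives 5*10 + 9 == 59, and BIN_TO_BCD reverses the conversion.
+ */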
+
+#define TICK_SIZE (tick_nsec / 1000)
+
+extern unsigned long wall_jiffies;
+
+u64 jiffies_64 = INITIAL_JIFFIES;
+
+static unsigned long tmu_base, rtc_base;
+unsigned long cprc_base;
+
+/* Variables to allow interpolation of time of day to resolution better than a
+ * jiffy. */
+
+/* This is effectively protected by xtime_lock */
+static unsigned long ctc_last_interrupt;
+static unsigned long long usecs_per_jiffy = 1000000/HZ; /* Approximation */
+
+#define CTC_JIFFY_SCALE_SHIFT 40
+
+/* 2**CTC_JIFFY_SCALE_SHIFT / ctc_ticks_per_jiffy */
+static unsigned long long scaled_recip_ctc_ticks_per_jiffy;
+
+/* Estimate number of microseconds that have elapsed since the last timer tick,
+   by scaling the delta that has occurred in the CTC register.
+
+ WARNING WARNING WARNING : This algorithm relies on the CTC decrementing at
+ the CPU clock rate. If the CPU sleeps, the CTC stops counting. Bear this
+ in mind if enabling SLEEP_WORKS in process.c. In that case, this algorithm
+ probably needs to use TMU.TCNT0 instead. This will work even if the CPU is
+ sleeping, though will be coarser.
+
+ FIXME : What if usecs_per_tick is moving around too much, e.g. if an adjtime
+ is running or if the freq or tick arguments of adjtimex are modified after
+ we have calibrated the scaling factor? This will result in either a jump at
+ the end of a tick period, or a wrap backwards at the start of the next one,
+ if the application is reading the time of day often enough. I think we
+ ought to do better than this. For this reason, usecs_per_jiffy is left
+ separated out in the calculation below. This allows some future hook into
+ the adjtime-related stuff in kernel/timer.c to remove this hazard.
+
+*/
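+
+/*
+ * Worked example of the scaling (illustrative numbers, not calibrated
+ * values): with HZ == 100 and a 400 MHz CPU, ctc_ticks_per_jiffy is
+ * 4,000,000 and scaled_recip_ctc_ticks_per_jiffy is
+ * 2^40 / 4,000,000 ~= 274877.  A CTC delta of 2,000,000 ticks (half a
+ * jiffy) then yields
+ *	(2,000,000 * usecs_per_jiffy(=10000) * 274877) >> 40 ~= 5000 usecs,
+ * i.e. half of a 10 ms tick, as expected.
+ */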
+
+static unsigned long usecs_since_tick(void)
+{
+ unsigned long long current_ctc;
+ long ctc_ticks_since_interrupt;
+ unsigned long long ull_ctc_ticks_since_interrupt;
+ unsigned long result;
+
+ unsigned long long mul1_out;
+ unsigned long long mul1_out_high;
+ unsigned long long mul2_out_low, mul2_out_high;
+
+ /* Read CTC register */
+ asm ("getcon cr62, %0" : "=r" (current_ctc));
+ /* Note, the CTC counts down on each CPU clock, not up.
+ Note(2), use long type to get correct wraparound arithmetic when
+ the counter crosses zero. */
+ ctc_ticks_since_interrupt = (long) ctc_last_interrupt - (long) current_ctc;
+ ull_ctc_ticks_since_interrupt = (unsigned long long) ctc_ticks_since_interrupt;
+
+ /* Inline assembly to do 32x32x32->64 multiplier */
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul1_out) :
+ "r" (ull_ctc_ticks_since_interrupt), "r" (usecs_per_jiffy));
+
+ mul1_out_high = mul1_out >> 32;
+
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul2_out_low) :
+ "r" (mul1_out), "r" (scaled_recip_ctc_ticks_per_jiffy));
+
+#if 1
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul2_out_high) :
+ "r" (mul1_out_high), "r" (scaled_recip_ctc_ticks_per_jiffy));
+#endif
+
+ result = (unsigned long) (((mul2_out_high << 32) + mul2_out_low) >> CTC_JIFFY_SCALE_SHIFT);
+
+ return result;
+}
+
+void do_gettimeofday(struct timeval *tv)
+{
+ unsigned long flags;
+ unsigned long seq;
+ unsigned long usec, sec;
+
+ do {
+ seq = read_seqbegin_irqsave(&xtime_lock, flags);
+ usec = usecs_since_tick();
+ {
+ unsigned long lost = jiffies - wall_jiffies;
+
+ if (lost)
+ usec += lost * (1000000 / HZ);
+ }
+
+ sec = xtime.tv_sec;
+ usec += xtime.tv_nsec / 1000;
+ } while (read_seqretry_irqrestore(&xtime_lock, seq, flags));
+
+ while (usec >= 1000000) {
+ usec -= 1000000;
+ sec++;
+ }
+
+ tv->tv_sec = sec;
+ tv->tv_usec = usec;
+}
+
+int do_settimeofday(struct timespec *tv)
+{
+ time_t wtm_sec, sec = tv->tv_sec;
+ long wtm_nsec, nsec = tv->tv_nsec;
+
+ if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
+ return -EINVAL;
+
+ write_seqlock_irq(&xtime_lock);
+ /*
+ * This is revolting. We need to set "xtime" correctly. However, the
+ * value in this location is the value at the most recent update of
+ * wall time. Discover what correction gettimeofday() would have
+ * made, and then undo it!
+ */
+ nsec -= 1000 * (usecs_since_tick() +
+ (jiffies - wall_jiffies) * (1000000 / HZ));
+
+ wtm_sec = wall_to_monotonic.tv_sec + (xtime.tv_sec - sec);
+ wtm_nsec = wall_to_monotonic.tv_nsec + (xtime.tv_nsec - nsec);
+
+ set_normalized_timespec(&xtime, sec, nsec);
+ set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
+
+ time_adjust = 0; /* stop active adjtime() */
+ time_status |= STA_UNSYNC;
+ time_maxerror = NTP_PHASE_LIMIT;
+ time_esterror = NTP_PHASE_LIMIT;
+ write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
+
+ return 0;
+}
+
+static int set_rtc_time(unsigned long nowtime)
+{
+ int retval = 0;
+ int real_seconds, real_minutes, cmos_minutes;
+
+ ctrl_outb(RCR2_RESET, RCR2); /* Reset pre-scaler & stop RTC */
+
+ cmos_minutes = ctrl_inb(RMINCNT);
+ BCD_TO_BIN(cmos_minutes);
+
+ /*
+ * since we're only adjusting minutes and seconds,
+ * don't interfere with hour overflow. This avoids
+ * messing with unknown time zones but requires your
+ * RTC not to be off by more than 15 minutes
+ */
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
+ real_minutes += 30; /* correct for half hour time zone */
+ real_minutes %= 60;
+
+ if (abs(real_minutes - cmos_minutes) < 30) {
+ BIN_TO_BCD(real_seconds);
+ BIN_TO_BCD(real_minutes);
+ ctrl_outb(real_seconds, RSECCNT);
+ ctrl_outb(real_minutes, RMINCNT);
+ } else {
+ printk(KERN_WARNING
+ "set_rtc_time: can't update from %d to %d\n",
+ cmos_minutes, real_minutes);
+ retval = -1;
+ }
+
+ ctrl_outb(RCR2_RTCEN|RCR2_START, RCR2); /* Start RTC */
+
+ return retval;
+}
+
+/* last time the RTC clock got updated */
+static long last_rtc_update = 0;
+
+static inline void sh64_do_profile(struct pt_regs *regs)
+{
+ extern int _stext;
+ unsigned long pc;
+
+ profile_hook(regs);
+
+ if (user_mode(regs))
+ return;
+
+ /* Don't profile cpu_idle.. */
+ if (!prof_buffer || !current->pid)
+ return;
+
+ pc = instruction_pointer(regs);
+ pc -= (unsigned long) &_stext;
+ pc >>= prof_shift;
+
+ /*
+ * Don't ignore out-of-bounds PC values silently, put them into the
+ * last histogram slot, so if present, they will show up as a sharp
+ * peak.
+ */
+ if (pc > prof_len - 1)
+ pc = prof_len - 1;
+
+ /* We could just be sloppy and not lock against a re-entry on this
+ increment, but the profiling code won't always be linked in anyway. */
+ atomic_inc((atomic_t *)&prof_buffer[pc]);
+}
+
+/*
+ * timer_interrupt() needs to keep up the real-time clock,
+ * as well as call the "do_timer()" routine every clocktick
+ */
+static inline void do_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long long current_ctc;
+ asm ("getcon cr62, %0" : "=r" (current_ctc));
+ ctc_last_interrupt = (unsigned long) current_ctc;
+
+ do_timer(regs);
+
+ sh64_do_profile(regs);
+
+#ifdef CONFIG_HEARTBEAT
+ {
+ extern void heartbeat(void);
+
+ heartbeat();
+ }
+#endif
+
+	/*
+	 * If we have an externally synchronized Linux clock, then update
+	 * the RTC clock accordingly every ~11 minutes.  set_rtc_time() has
+	 * to be called as close as possible to 500 ms before the new
+	 * second starts.
+	 */
+ if ((time_status & STA_UNSYNC) == 0 &&
+ xtime.tv_sec > last_rtc_update + 660 &&
+ (xtime.tv_nsec / 1000) >= 500000 - ((unsigned) TICK_SIZE) / 2 &&
+ (xtime.tv_nsec / 1000) <= 500000 + ((unsigned) TICK_SIZE) / 2) {
+ if (set_rtc_time(xtime.tv_sec) == 0)
+ last_rtc_update = xtime.tv_sec;
+ else
+ last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
+ }
+}
+
+/*
+ * This is the same as the above, except we _also_ save the current
+ * Time Stamp Counter value at the time of the timer interrupt, so that
+ * we later on can estimate the time of day more exactly.
+ */
+static irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long timer_status;
+
+ /* Clear UNF bit */
+ timer_status = ctrl_inw(TMU0_TCR);
+ timer_status &= ~0x100;
+ ctrl_outw(timer_status, TMU0_TCR);
+
+	/*
+	 * Here we are in the timer irq handler.  We have irqs locally
+	 * disabled, but we don't know if the timer bottom half is running
+	 * on another CPU, so we must avoid an SMP race with it.  NOTE: we
+	 * don't need the irq-disabling form of the write lock because, as
+	 * just said, irqs are already locally disabled.  -arca
+	 */
+	write_seqlock(&xtime_lock);
+	do_timer_interrupt(irq, NULL, regs);
+	write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static unsigned long get_rtc_time(void)
+{
+ unsigned int sec, min, hr, wk, day, mon, yr, yr100;
+
+ again:
+ do {
+ ctrl_outb(0, RCR1); /* Clear CF-bit */
+ sec = ctrl_inb(RSECCNT);
+ min = ctrl_inb(RMINCNT);
+ hr = ctrl_inb(RHRCNT);
+ wk = ctrl_inb(RWKCNT);
+ day = ctrl_inb(RDAYCNT);
+ mon = ctrl_inb(RMONCNT);
+ yr = ctrl_inw(RYRCNT);
+ yr100 = (yr >> 8);
+ yr &= 0xff;
+ } while ((ctrl_inb(RCR1) & RCR1_CF) != 0);
+
+ BCD_TO_BIN(yr100);
+ BCD_TO_BIN(yr);
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(hr);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(sec);
+
+ if (yr > 99 || mon < 1 || mon > 12 || day > 31 || day < 1 ||
+ hr > 23 || min > 59 || sec > 59) {
+ printk(KERN_ERR
+ "SH RTC: invalid value, resetting to 1 Jan 2000\n");
+ ctrl_outb(RCR2_RESET, RCR2); /* Reset & Stop */
+ ctrl_outb(0, RSECCNT);
+ ctrl_outb(0, RMINCNT);
+ ctrl_outb(0, RHRCNT);
+ ctrl_outb(6, RWKCNT);
+ ctrl_outb(1, RDAYCNT);
+ ctrl_outb(1, RMONCNT);
+ ctrl_outw(0x2000, RYRCNT);
+ ctrl_outb(RCR2_RTCEN|RCR2_START, RCR2); /* Start */
+ goto again;
+ }
+
+ return mktime(yr100 * 100 + yr, mon, day, hr, min, sec);
+}
+
+static __init unsigned int get_cpu_hz(void)
+{
+ unsigned int count;
+ unsigned long __dummy;
+ unsigned long ctc_val_init, ctc_val;
+
+	/*
+	 * Regardless of the toolchain, force the compiler to use the
+	 * (arbitrarily chosen) register r3 as the clock tick counter.
+	 * NOTE: r3 must match the register used in rtc_interrupt().
+	 */
+ register unsigned long long __rtc_irq_flag __asm__ ("r3");
+
+ sti();
+ do {} while (ctrl_inb(R64CNT) != 0);
+ ctrl_outb(RCR1_CIE, RCR1); /* Enable carry interrupt */
+
+ /*
+ * r3 is arbitrary. CDC does not support "=z".
+ */
+ ctc_val_init = 0xffffffff;
+ ctc_val = ctc_val_init;
+
+ asm volatile("gettr tr0, %1\n\t"
+ "putcon %0, " __CTC "\n\t"
+ "and %2, r63, %2\n\t"
+ "pta $+4, tr0\n\t"
+ "beq/l %2, r63, tr0\n\t"
+ "ptabs %1, tr0\n\t"
+ "getcon " __CTC ", %0\n\t"
+ : "=r"(ctc_val), "=r" (__dummy), "=r" (__rtc_irq_flag)
+ : "0" (0));
+ cli();
+ /*
+ * SH-3:
+ * CPU clock = 4 stages * loop
+ * tst rm,rm if id ex
+ * bt/s 1b if id ex
+ * add #1,rd if id ex
+ * (if) pipe line stole
+ * tst rm,rm if id ex
+ * ....
+ *
+ *
+ * SH-4:
+ * CPU clock = 6 stages * loop
+ * I don't know why.
+ * ....
+ *
+ * SH-5:
+ * Use CTC register to count. This approach returns the right value
+ * even if the I-cache is disabled (e.g. whilst debugging.)
+ *
+ */
+
+ count = ctc_val_init - ctc_val; /* CTC counts down */
+
+#if defined (CONFIG_SH_SIMULATOR)
+	/*
+	 * Let's pretend we are a 5MHz SH-5 to avoid too small a timer
+	 * interval, and to keep the delay calibration within a
+	 * reasonable time.
+	 */
+ return 5000000;
+#else
+	/*
+	 * count holds the number of CPU clock cycles elapsed between
+	 * R64CNT reading zero and the carry interrupt being raised;
+	 * that is 64 of the 128 steps of a full R64CNT wrap-around,
+	 * i.e. half a second, hence the clock rate is count*2.
+	 */
+ return count*2;
+#endif
+}
+
+static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ ctrl_outb(0, RCR1); /* Disable Carry Interrupts */
+ regs->regs[3] = 1; /* Using r3 */
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq1 = { rtc_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "rtc", NULL, NULL};
+
+void __init time_init(void)
+{
+ unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+ unsigned long interval;
+ unsigned long frqcr, ifc, pfc;
+ static int ifc_table[] = { 2, 4, 6, 8, 10, 12, 16, 24 };
+#define bfc_table ifc_table /* Same */
+#define pfc_table ifc_table /* Same */
+
+ tmu_base = onchip_remap(TMU_BASE, 1024, "TMU");
+ if (!tmu_base) {
+ panic("Unable to remap TMU\n");
+ }
+
+ rtc_base = onchip_remap(RTC_BASE, 1024, "RTC");
+ if (!rtc_base) {
+ panic("Unable to remap RTC\n");
+ }
+
+ cprc_base = onchip_remap(CPRC_BASE, 1024, "CPRC");
+ if (!cprc_base) {
+ panic("Unable to remap CPRC\n");
+ }
+
+ xtime.tv_sec = get_rtc_time();
+ xtime.tv_nsec = 0;
+
+ setup_irq(TIMER_IRQ, &irq0);
+ setup_irq(RTC_IRQ, &irq1);
+
+ /* Check how fast it is.. */
+ cpu_clock = get_cpu_hz();
+
+ /* Note careful order of operations to maintain reasonable precision and avoid overflow. */
+ scaled_recip_ctc_ticks_per_jiffy = ((1ULL << CTC_JIFFY_SCALE_SHIFT) / (unsigned long long)(cpu_clock / HZ));
+
+ disable_irq(RTC_IRQ);
+
+ printk("CPU clock: %d.%02dMHz\n",
+ (cpu_clock / 1000000), (cpu_clock % 1000000)/10000);
+ {
+ unsigned short bfc;
+ frqcr = ctrl_inl(FRQCR);
+ ifc = ifc_table[(frqcr>> 6) & 0x0007];
+ bfc = bfc_table[(frqcr>> 3) & 0x0007];
+ pfc = pfc_table[(frqcr>> 12) & 0x0007];
+ master_clock = cpu_clock * ifc;
+ bus_clock = master_clock/bfc;
+ }
+
+ printk("Bus clock: %d.%02dMHz\n",
+ (bus_clock/1000000), (bus_clock % 1000000)/10000);
+ module_clock = master_clock/pfc;
+ printk("Module clock: %d.%02dMHz\n",
+ (module_clock/1000000), (module_clock % 1000000)/10000);
+ interval = (module_clock/(HZ*4));
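+	/* The HZ*4 factor reflects TMU0_TCR_INIT selecting the peripheral
+	   clock / 4 prescaler (our reading of TCR value 0x0020): the timer
+	   underflows once per tick after `interval' counts. */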
+
+ printk("Interval = %ld\n", interval);
+
+ current_cpu_data.cpu_clock = cpu_clock;
+ current_cpu_data.master_clock = master_clock;
+ current_cpu_data.bus_clock = bus_clock;
+ current_cpu_data.module_clock = module_clock;
+
+ /* Start TMU0 */
+ ctrl_outb(TMU_TOCR_INIT, TMU_TOCR);
+ ctrl_outw(TMU0_TCR_INIT, TMU0_TCR);
+ ctrl_outl(interval, TMU0_TCOR);
+ ctrl_outl(interval, TMU0_TCNT);
+ ctrl_outb(TMU_TSTR_INIT, TMU_TSTR);
+}
+
+void enter_deep_standby(void)
+{
+ /* Disable watchdog timer */
+ ctrl_outl(0xa5000000, WTCSR);
+ /* Configure deep standby on sleep */
+ ctrl_outl(0x03, STBCR);
+
+#ifdef CONFIG_SH_ALPHANUMERIC
+ {
+ extern void mach_alphanum(int position, unsigned char value);
+ extern void mach_alphanum_brightness(int setting);
+ char halted[] = "Halted. ";
+ int i;
+ mach_alphanum_brightness(6); /* dimmest setting above off */
+ for (i=0; i<8; i++) {
+ mach_alphanum(i, halted[i]);
+ }
+ asm __volatile__ ("synco");
+ }
+#endif
+
+ asm __volatile__ ("sleep");
+ asm __volatile__ ("synci");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ panic("Unexpected wakeup!\n");
+}
+
+/*
+ * Scheduler clock - returns current time in nanosec units.
+ */
+unsigned long long sched_clock(void)
+{
+ return (unsigned long long)jiffies * (1000000000 / HZ);
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/traps.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ */
+
+/*
+ * 'Traps.c' handles hardware traps and faults after we have saved some
+ * state in 'entry.S'.
+ */
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/timer.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/spinlock.h>
+#include <linux/kallsyms.h>
+#include <linux/interrupt.h>
+#include <linux/sysctl.h>
+
+#include <asm/system.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/atomic.h>
+#include <asm/processor.h>
+#include <asm/pgtable.h>
+
+#undef DEBUG_EXCEPTION
+#ifdef DEBUG_EXCEPTION
+/* implemented in ../lib/dbg.c */
+extern void show_excp_regs(char *fname, int trapnr, int signr,
+ struct pt_regs *regs);
+#else
+#define show_excp_regs(a, b, c, d)
+#endif
+
+static void do_unhandled_exception(int trapnr, int signr, char *str, char *fn_name,
+ unsigned long error_code, struct pt_regs *regs, struct task_struct *tsk);
+
+#define DO_ERROR(trapnr, signr, str, name, tsk) \
+asmlinkage void do_##name(unsigned long error_code, struct pt_regs *regs) \
+{ \
+ do_unhandled_exception(trapnr, signr, str, __stringify(name), error_code, regs, current); \
+}
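+
+/*
+ * For instance, the DO_ERROR(13, SIGILL, ...) line below expands to a
+ * do_illegal_slot_inst() handler that funnels into
+ * do_unhandled_exception() with trapnr 13 and signal SIGILL.
+ */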
+
+spinlock_t die_lock = SPIN_LOCK_UNLOCKED;
+
+void die(const char * str, struct pt_regs * regs, long err)
+{
+ console_verbose();
+ spin_lock_irq(&die_lock);
+ printk("%s: %lx\n", str, (err & 0xffffff));
+ show_regs(regs);
+ spin_unlock_irq(&die_lock);
+ do_exit(SIGSEGV);
+}
+
+static inline void die_if_kernel(const char * str, struct pt_regs * regs, long err)
+{
+ if (!user_mode(regs))
+ die(str, regs, err);
+}
+
+static void die_if_no_fixup(const char * str, struct pt_regs * regs, long err)
+{
+ if (!user_mode(regs)) {
+ const struct exception_table_entry *fixup;
+ fixup = search_exception_tables(regs->pc);
+ if (fixup) {
+ regs->pc = fixup->fixup;
+ return;
+ }
+ die(str, regs, err);
+ }
+}
+
+DO_ERROR(13, SIGILL, "illegal slot instruction", illegal_slot_inst, current)
+DO_ERROR(87, SIGSEGV, "address error (exec)", address_error_exec, current)
+
+
+/* Implement misaligned load/store handling for kernel (and optionally for user
+ mode too). Limitation : only SHmedia mode code is handled - there is no
+ handling at all for misaligned accesses occurring in SHcompact code yet. */
+
+static int misaligned_fixup(struct pt_regs *regs);
+
+asmlinkage void do_address_error_load(unsigned long error_code, struct pt_regs *regs)
+{
+ if (misaligned_fixup(regs) < 0) {
+ do_unhandled_exception(7, SIGSEGV, "address error(load)",
+ "do_address_error_load",
+ error_code, regs, current);
+ }
+ return;
+}
+
+asmlinkage void do_address_error_store(unsigned long error_code, struct pt_regs *regs)
+{
+ if (misaligned_fixup(regs) < 0) {
+ do_unhandled_exception(8, SIGSEGV, "address error(store)",
+ "do_address_error_store",
+ error_code, regs, current);
+ }
+ return;
+}
+
+#if defined(CONFIG_SH64_ID2815_WORKAROUND)
+
+#define OPCODE_INVALID 0
+#define OPCODE_USER_VALID 1
+#define OPCODE_PRIV_VALID 2
+
+/* getcon/putcon - requires checking which control register is referenced. */
+#define OPCODE_CTRL_REG 3
+
+/* Table of valid opcodes for SHmedia mode.
+ Form a 10-bit value by concatenating the major/minor opcodes i.e.
+ opcode[31:26,20:16]. The 6 MSBs of this value index into the following
+ array. The 4 LSBs select the bit-pair in the entry (bits 1:0 correspond to
+ LSBs==4'b0000 etc). */
+static unsigned long shmedia_opcode_table[64] = {
+ 0x55554044,0x54445055,0x15141514,0x14541414,0x00000000,0x10001000,0x01110055,0x04050015,
+ 0x00000444,0xc0000000,0x44545515,0x40405555,0x55550015,0x10005555,0x55555505,0x04050000,
+ 0x00000555,0x00000404,0x00040445,0x15151414,0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000055,0x40404444,0x00000404,0xc0009495,0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,
+ 0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,
+ 0x80005050,0x04005055,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,0x55555555,
+ 0x81055554,0x00000404,0x55555555,0x55555555,0x00000000,0x00000000,0x00000000,0x00000000
+};
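+
+/*
+ * Worked example of the lookup (illustrative): an opcode with
+ * major == 0x31 and minor == 0x5 gives index = 0x31 and
+ * shift = 0x5 << 1 = 10, so (shmedia_opcode_table[0x31] >> 10) & 0x3
+ * yields one of the OPCODE_* states above.
+ */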
+
+void do_reserved_inst(unsigned long error_code, struct pt_regs *regs)
+{
+ /* Workaround SH5-101 cut2 silicon defect #2815 :
+ in some situations, inter-mode branches from SHcompact -> SHmedia
+ which should take ITLBMISS or EXECPROT exceptions at the target
+ falsely take RESINST at the target instead. */
+
+ unsigned long opcode = 0x6ff4fff0; /* guaranteed reserved opcode */
+ unsigned long pc, aligned_pc;
+ int get_user_error;
+ int trapnr = 12;
+ int signr = SIGILL;
+ char *exception_name = "reserved_instruction";
+
+ pc = regs->pc;
+ if ((pc & 3) == 1) {
+ /* SHmedia : check for defect. This requires executable vmas
+ to be readable too. */
+ aligned_pc = pc & ~3;
+ if (!access_ok(VERIFY_READ, aligned_pc, sizeof(unsigned long))) {
+ get_user_error = -EFAULT;
+ } else {
+ get_user_error = __get_user(opcode, (unsigned long *)aligned_pc);
+ }
+ if (get_user_error >= 0) {
+ unsigned long index, shift;
+ unsigned long major, minor, combined;
+ unsigned long reserved_field;
+ reserved_field = opcode & 0xf; /* These bits are currently reserved as zero in all valid opcodes */
+ major = (opcode >> 26) & 0x3f;
+ minor = (opcode >> 16) & 0xf;
+ combined = (major << 4) | minor;
+ index = major;
+ shift = minor << 1;
+ if (reserved_field == 0) {
+ int opcode_state = (shmedia_opcode_table[index] >> shift) & 0x3;
+ switch (opcode_state) {
+ case OPCODE_INVALID:
+ /* Trap. */
+ break;
+ case OPCODE_USER_VALID:
+ /* Restart the instruction : the branch to the instruction will now be from an RTE
+ not from SHcompact so the silicon defect won't be triggered. */
+ return;
+ case OPCODE_PRIV_VALID:
+ if (!user_mode(regs)) {
+ /* Should only ever get here if a module has
+ SHcompact code inside it. If so, the same fix up is needed. */
+ return; /* same reason */
+ }
+ /* Otherwise, user mode trying to execute a privileged instruction -
+ fall through to trap. */
+ break;
+ case OPCODE_CTRL_REG:
+ /* If in privileged mode, return as above. */
+ if (!user_mode(regs)) return;
+ /* In user mode ... */
+ if (combined == 0x9f) { /* GETCON */
+ unsigned long regno = (opcode >> 20) & 0x3f;
+ if (regno >= 62) {
+ return;
+ }
+ /* Otherwise, reserved or privileged control register, => trap */
+ } else if (combined == 0x1bf) { /* PUTCON */
+ unsigned long regno = (opcode >> 4) & 0x3f;
+ if (regno >= 62) {
+ return;
+ }
+ /* Otherwise, reserved or privileged control register, => trap */
+ } else {
+ /* Trap */
+ }
+ break;
+ default:
+ /* Fall through to trap. */
+ break;
+ }
+ }
+ /* fall through to normal resinst processing */
+ } else {
+ /* Error trying to read opcode. This typically means a
+ real fault, not a RESINST any more. So change the
+ codes. */
+ trapnr = 87;
+ exception_name = "address error (exec)";
+ signr = SIGSEGV;
+ }
+ }
+
+ do_unhandled_exception(trapnr, signr, exception_name, "do_reserved_inst", error_code, regs, current);
+}
+
+#else /* CONFIG_SH64_ID2815_WORKAROUND */
+
+/* If the workaround isn't needed, this is just a straightforward reserved
+ instruction */
+DO_ERROR(12, SIGILL, "reserved instruction", reserved_inst, current)
+
+#endif /* CONFIG_SH64_ID2815_WORKAROUND */
+
+
+#include <asm/system.h>
+
+/* Called with interrupts disabled */
+asmlinkage void do_exception_error(unsigned long ex, struct pt_regs *regs)
+{
+ PLS();
+ show_excp_regs(__FUNCTION__, -1, -1, regs);
+ die_if_kernel("exception", regs, ex);
+}
+
+int do_unknown_trapa(unsigned long scId, struct pt_regs *regs)
+{
+ /* Syscall debug */
+ printk("System call ID error: [0x1#args:8 #syscall:16 0x%lx]\n", scId);
+
+ die_if_kernel("unknown trapa", regs, scId);
+
+ return -ENOSYS;
+}
+
+void show_stack(struct task_struct *tsk, unsigned long *sp)
+{
+#ifdef CONFIG_KALLSYMS
+ extern void sh64_unwind(struct pt_regs *regs);
+ struct pt_regs *regs;
+
+ regs = tsk ? tsk->thread.kregs : NULL;
+
+ sh64_unwind(regs);
+#else
+ printk(KERN_ERR "Can't backtrace on sh64 without CONFIG_KALLSYMS\n");
+#endif
+}
+
+void show_task(unsigned long *sp)
+{
+ show_stack(NULL, sp);
+}
+
+void dump_stack(void)
+{
+ show_task(NULL);
+}
+
+static void do_unhandled_exception(int trapnr, int signr, char *str, char *fn_name,
+ unsigned long error_code, struct pt_regs *regs, struct task_struct *tsk)
+{
+ show_excp_regs(fn_name, trapnr, signr, regs);
+ tsk->thread.error_code = error_code;
+ tsk->thread.trap_no = trapnr;
+
+ if (user_mode(regs))
+ force_sig(signr, tsk);
+
+ die_if_no_fixup(str, regs, error_code);
+}
+
+static int read_opcode(unsigned long long pc, unsigned long *result_opcode, int from_user_mode)
+{
+ int get_user_error;
+ unsigned long aligned_pc;
+ unsigned long opcode;
+
+ if ((pc & 3) == 1) {
+ /* SHmedia */
+ aligned_pc = pc & ~3;
+ if (from_user_mode) {
+ if (!access_ok(VERIFY_READ, aligned_pc, sizeof(unsigned long))) {
+ get_user_error = -EFAULT;
+ } else {
+ get_user_error = __get_user(opcode, (unsigned long *)aligned_pc);
+ *result_opcode = opcode;
+ }
+ return get_user_error;
+ } else {
+ /* If the fault was in the kernel, we can either read
+ * this directly, or if not, we fault.
+ */
+ *result_opcode = *(unsigned long *) aligned_pc;
+ return 0;
+ }
+ } else if ((pc & 1) == 0) {
+ /* SHcompact */
+ /* TODO : provide handling for this. We don't really support
+ user-mode SHcompact yet, and for a kernel fault, this would
+ have to come from a module built for SHcompact. */
+ return -EFAULT;
+ } else {
+ /* misaligned */
+ return -EFAULT;
+ }
+}
+
+static int address_is_sign_extended(__u64 a)
+{
+ __u64 b;
+#if (NEFF == 32)
+ b = (__u64)(__s64)(__s32)(a & 0xffffffffUL);
+ return (b == a) ? 1 : 0;
+#else
+#error "Sign extend check only works for NEFF==32"
+#endif
+}
+
+static int generate_and_check_address(struct pt_regs *regs,
+ __u32 opcode,
+ int displacement_not_indexed,
+ int width_shift,
+ __u64 *address)
+{
+ /* return -1 for fault, 0 for OK */
+
+ __u64 base_address, addr;
+ int basereg;
+
+ basereg = (opcode >> 20) & 0x3f;
+ base_address = regs->regs[basereg];
+ if (displacement_not_indexed) {
+ __s64 displacement;
+ displacement = (opcode >> 10) & 0x3ff;
+ displacement = ((displacement << 54) >> 54); /* sign extend */
+ addr = (__u64)((__s64)base_address + (displacement << width_shift));
+ } else {
+ __u64 offset;
+ int offsetreg;
+ offsetreg = (opcode >> 10) & 0x3f;
+ offset = regs->regs[offsetreg];
+ addr = base_address + offset;
+ }
+
+ /* Check sign extended */
+ if (!address_is_sign_extended(addr)) {
+ return -1;
+ }
+
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ /* Check accessible. For misaligned access in the kernel, assume the
+ address is always accessible (and if not, just fault when the
+ load/store gets done.) */
+ if (user_mode(regs)) {
+ if (addr >= TASK_SIZE) {
+ return -1;
+ }
+ /* Do access_ok check later - it depends on whether it's a load or a store. */
+ }
+#endif
+
+ *address = addr;
+ return 0;
+}
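+
+/*
+ * Displacement example (illustrative): a 10-bit field of 0x3ff
+ * sign-extends via (d << 54) >> 54 to -1, so with width_shift == 2 the
+ * effective address is base_address - 4.
+ */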
+
+/* Default value as for sh */
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+static int user_mode_unaligned_fixup_count = 10;
+static int user_mode_unaligned_fixup_enable = 1;
+#endif
+
+static int kernel_mode_unaligned_fixup_count = 32;
+
+static void misaligned_kernel_word_load(__u64 address, int do_sign_extend, __u64 *result)
+{
+ unsigned short x;
+ unsigned char *p, *q;
+ p = (unsigned char *) (int) address;
+ q = (unsigned char *) &x;
+ q[0] = p[0];
+ q[1] = p[1];
+
+ if (do_sign_extend) {
+ *result = (__u64)(__s64) *(short *) &x;
+ } else {
+ *result = (__u64) x;
+ }
+}
+
+static void misaligned_kernel_word_store(__u64 address, __u64 value)
+{
+ unsigned short x;
+ unsigned char *p, *q;
+ p = (unsigned char *) (int) address;
+ q = (unsigned char *) &x;
+
+ x = (__u16) value;
+ p[0] = q[0];
+ p[1] = q[1];
+}
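+
+/*
+ * Note that both helpers above move the halfword one byte at a time:
+ * byte accesses can never be misaligned, so the fixup cannot re-trigger
+ * the exception it is handling.
+ */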
+
+static int misaligned_load(struct pt_regs *regs,
+ __u32 opcode,
+ int displacement_not_indexed,
+ int width_shift,
+ int do_sign_extend)
+{
+ /* Return -1 for a fault, 0 for OK */
+ int error;
+ int destreg;
+ __u64 address;
+
+ error = generate_and_check_address(regs, opcode,
+ displacement_not_indexed, width_shift, &address);
+ if (error < 0) {
+ return error;
+ }
+
+ destreg = (opcode >> 4) & 0x3f;
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ if (user_mode(regs)) {
+ __u64 buffer;
+
+ if (!access_ok(VERIFY_READ, (unsigned long) address, 1UL<<width_shift)) {
+ return -1;
+ }
+
+ if (__copy_user(&buffer, (const void *)(int)address, (1 << width_shift)) > 0) {
+ return -1; /* fault */
+ }
+ switch (width_shift) {
+ case 1:
+ if (do_sign_extend) {
+ regs->regs[destreg] = (__u64)(__s64) *(__s16 *) &buffer;
+ } else {
+ regs->regs[destreg] = (__u64) *(__u16 *) &buffer;
+ }
+ break;
+ case 2:
+ regs->regs[destreg] = (__u64)(__s64) *(__s32 *) &buffer;
+ break;
+ case 3:
+ regs->regs[destreg] = buffer;
+ break;
+ default:
+ printk("Unexpected width_shift %d in misaligned_load, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+ } else
+#endif
+ {
+ /* kernel mode - we can take short cuts since if we fault, it's a genuine bug */
+ __u64 lo, hi;
+
+ switch (width_shift) {
+ case 1:
+			misaligned_kernel_word_load(address, do_sign_extend, &regs->regs[destreg]);
+ break;
+ case 2:
+ asm ("ldlo.l %1, 0, %0" : "=r" (lo) : "r" (address));
+ asm ("ldhi.l %1, 3, %0" : "=r" (hi) : "r" (address));
+ regs->regs[destreg] = lo | hi;
+ break;
+ case 3:
+ asm ("ldlo.q %1, 0, %0" : "=r" (lo) : "r" (address));
+ asm ("ldhi.q %1, 7, %0" : "=r" (hi) : "r" (address));
+ regs->regs[destreg] = lo | hi;
+ break;
+
+ default:
+ printk("Unexpected width_shift %d in misaligned_load, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+ }
+
+ return 0;
+
+}
+
+static int misaligned_store(struct pt_regs *regs,
+ __u32 opcode,
+ int displacement_not_indexed,
+ int width_shift)
+{
+ /* Return -1 for a fault, 0 for OK */
+ int error;
+ int srcreg;
+ __u64 address;
+
+ error = generate_and_check_address(regs, opcode,
+ displacement_not_indexed, width_shift, &address);
+ if (error < 0) {
+ return error;
+ }
+
+ srcreg = (opcode >> 4) & 0x3f;
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ if (user_mode(regs)) {
+ __u64 buffer;
+
+ if (!access_ok(VERIFY_WRITE, (unsigned long) address, 1UL<<width_shift)) {
+ return -1;
+ }
+
+ switch (width_shift) {
+ case 1:
+ *(__u16 *) &buffer = (__u16) regs->regs[srcreg];
+ break;
+ case 2:
+ *(__u32 *) &buffer = (__u32) regs->regs[srcreg];
+ break;
+ case 3:
+ buffer = regs->regs[srcreg];
+ break;
+ default:
+ printk("Unexpected width_shift %d in misaligned_store, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+
+ if (__copy_user((void *)(int)address, &buffer, (1 << width_shift)) > 0) {
+ return -1; /* fault */
+ }
+ } else
+#endif
+ {
+ /* kernel mode - we can take short cuts since if we fault, it's a genuine bug */
+ __u64 val = regs->regs[srcreg];
+
+ switch (width_shift) {
+ case 1:
+ misaligned_kernel_word_store(address, val);
+ break;
+ case 2:
+ asm ("stlo.l %1, 0, %0" : : "r" (val), "r" (address));
+ asm ("sthi.l %1, 3, %0" : : "r" (val), "r" (address));
+ break;
+ case 3:
+ asm ("stlo.q %1, 0, %0" : : "r" (val), "r" (address));
+ asm ("sthi.q %1, 7, %0" : : "r" (val), "r" (address));
+ break;
+
+ default:
+ printk("Unexpected width_shift %d in misaligned_store, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+ }
+
+ return 0;
+
+}
+
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+/* Never need to fix up misaligned FPU accesses within the kernel since that's a real
+ error. */
+static int misaligned_fpu_load(struct pt_regs *regs,
+ __u32 opcode,
+ int displacement_not_indexed,
+ int width_shift,
+ int do_paired_load)
+{
+ /* Return -1 for a fault, 0 for OK */
+ int error;
+ int destreg;
+ __u64 address;
+
+ error = generate_and_check_address(regs, opcode,
+ displacement_not_indexed, width_shift, &address);
+ if (error < 0) {
+ return error;
+ }
+
+ destreg = (opcode >> 4) & 0x3f;
+ if (user_mode(regs)) {
+ __u64 buffer;
+ __u32 buflo, bufhi;
+
+ if (!access_ok(VERIFY_READ, (unsigned long) address, 1UL<<width_shift)) {
+ return -1;
+ }
+
+ if (__copy_user(&buffer, (const void *)(int)address, (1 << width_shift)) > 0) {
+ return -1; /* fault */
+ }
+ /* 'current' may be the current owner of the FPU state, so
+ context switch the registers into memory so they can be
+ indexed by register number. */
+ if (last_task_used_math == current) {
+ grab_fpu();
+			fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ buflo = *(__u32*) &buffer;
+ bufhi = *(1 + (__u32*) &buffer);
+
+ switch (width_shift) {
+ case 2:
+ current->thread.fpu.hard.fp_regs[destreg] = buflo;
+ break;
+ case 3:
+ if (do_paired_load) {
+ current->thread.fpu.hard.fp_regs[destreg] = buflo;
+ current->thread.fpu.hard.fp_regs[destreg+1] = bufhi;
+ } else {
+#if defined(CONFIG_LITTLE_ENDIAN)
+ current->thread.fpu.hard.fp_regs[destreg] = bufhi;
+ current->thread.fpu.hard.fp_regs[destreg+1] = buflo;
+#else
+ current->thread.fpu.hard.fp_regs[destreg] = buflo;
+ current->thread.fpu.hard.fp_regs[destreg+1] = bufhi;
+#endif
+ }
+ break;
+ default:
+ printk("Unexpected width_shift %d in misaligned_fpu_load, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+ return 0;
+ } else {
+ die ("Misaligned FPU load inside kernel", regs, 0);
+ return -1;
+ }
+
+
+}
+
+static int misaligned_fpu_store(struct pt_regs *regs,
+ __u32 opcode,
+ int displacement_not_indexed,
+ int width_shift,
+ int do_paired_load)
+{
+ /* Return -1 for a fault, 0 for OK */
+ int error;
+ int srcreg;
+ __u64 address;
+
+ error = generate_and_check_address(regs, opcode,
+ displacement_not_indexed, width_shift, &address);
+ if (error < 0) {
+ return error;
+ }
+
+ srcreg = (opcode >> 4) & 0x3f;
+ if (user_mode(regs)) {
+ __u64 buffer;
+ /* Initialise these to NaNs. */
+ __u32 buflo=0xffffffffUL, bufhi=0xffffffffUL;
+
+ if (!access_ok(VERIFY_WRITE, (unsigned long) address, 1UL<<width_shift)) {
+ return -1;
+ }
+
+ /* 'current' may be the current owner of the FPU state, so
+ context switch the registers into memory so they can be
+ indexed by register number. */
+ if (last_task_used_math == current) {
+ grab_fpu();
+			fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ switch (width_shift) {
+ case 2:
+ buflo = current->thread.fpu.hard.fp_regs[srcreg];
+ break;
+ case 3:
+ if (do_paired_load) {
+ buflo = current->thread.fpu.hard.fp_regs[srcreg];
+ bufhi = current->thread.fpu.hard.fp_regs[srcreg+1];
+ } else {
+#if defined(CONFIG_LITTLE_ENDIAN)
+ bufhi = current->thread.fpu.hard.fp_regs[srcreg];
+ buflo = current->thread.fpu.hard.fp_regs[srcreg+1];
+#else
+ buflo = current->thread.fpu.hard.fp_regs[srcreg];
+ bufhi = current->thread.fpu.hard.fp_regs[srcreg+1];
+#endif
+ }
+ break;
+ default:
+ printk("Unexpected width_shift %d in misaligned_fpu_store, PC=%08lx\n",
+ width_shift, (unsigned long) regs->pc);
+ break;
+ }
+
+ *(__u32*) &buffer = buflo;
+ *(1 + (__u32*) &buffer) = bufhi;
+ if (__copy_user((void *)(int)address, &buffer, (1 << width_shift)) > 0) {
+ return -1; /* fault */
+ }
+ return 0;
+ } else {
+ die ("Misaligned FPU load inside kernel", regs, 0);
+ return -1;
+ }
+}
+#endif
+
+static int misaligned_fixup(struct pt_regs *regs)
+{
+ unsigned long opcode;
+ int error;
+ int major, minor;
+
+#if !defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ /* Never fixup user mode misaligned accesses without this option enabled. */
+ return -1;
+#else
+ if (!user_mode_unaligned_fixup_enable) return -1;
+#endif
+
+ error = read_opcode(regs->pc, &opcode, user_mode(regs));
+ if (error < 0) {
+ return error;
+ }
+ major = (opcode >> 26) & 0x3f;
+ minor = (opcode >> 16) & 0xf;
+
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ if (user_mode(regs) && (user_mode_unaligned_fixup_count > 0)) {
+ --user_mode_unaligned_fixup_count;
+ /* Only do 'count' worth of these reports, to remove a potential DoS against syslog */
+ printk("Fixing up unaligned userspace access in \"%s\" pid=%d pc=0x%08x ins=0x%08lx\n",
+ current->comm, current->pid, (__u32)regs->pc, opcode);
+ } else
+#endif
+ if (!user_mode(regs) && (kernel_mode_unaligned_fixup_count > 0)) {
+ --kernel_mode_unaligned_fixup_count;
+ if (in_interrupt()) {
+ printk("Fixing up unaligned kernelspace access in interrupt pc=0x%08x ins=0x%08lx\n",
+ (__u32)regs->pc, opcode);
+ } else {
+ printk("Fixing up unaligned kernelspace access in \"%s\" pid=%d pc=0x%08x ins=0x%08lx\n",
+ current->comm, current->pid, (__u32)regs->pc, opcode);
+ }
+ }
+
+
+ switch (major) {
+ case (0x84>>2): /* LD.W */
+ error = misaligned_load(regs, opcode, 1, 1, 1);
+ break;
+ case (0xb0>>2): /* LD.UW */
+ error = misaligned_load(regs, opcode, 1, 1, 0);
+ break;
+ case (0x88>>2): /* LD.L */
+ error = misaligned_load(regs, opcode, 1, 2, 1);
+ break;
+ case (0x8c>>2): /* LD.Q */
+ error = misaligned_load(regs, opcode, 1, 3, 0);
+ break;
+
+ case (0xa4>>2): /* ST.W */
+ error = misaligned_store(regs, opcode, 1, 1);
+ break;
+ case (0xa8>>2): /* ST.L */
+ error = misaligned_store(regs, opcode, 1, 2);
+ break;
+ case (0xac>>2): /* ST.Q */
+ error = misaligned_store(regs, opcode, 1, 3);
+ break;
+
+ case (0x40>>2): /* indexed loads */
+ switch (minor) {
+ case 0x1: /* LDX.W */
+ error = misaligned_load(regs, opcode, 0, 1, 1);
+ break;
+ case 0x5: /* LDX.UW */
+ error = misaligned_load(regs, opcode, 0, 1, 0);
+ break;
+ case 0x2: /* LDX.L */
+ error = misaligned_load(regs, opcode, 0, 2, 1);
+ break;
+ case 0x3: /* LDX.Q */
+ error = misaligned_load(regs, opcode, 0, 3, 0);
+ break;
+ default:
+ error = -1;
+ break;
+ }
+ break;
+
+ case (0x60>>2): /* indexed stores */
+ switch (minor) {
+ case 0x1: /* STX.W */
+ error = misaligned_store(regs, opcode, 0, 1);
+ break;
+ case 0x2: /* STX.L */
+ error = misaligned_store(regs, opcode, 0, 2);
+ break;
+ case 0x3: /* STX.Q */
+ error = misaligned_store(regs, opcode, 0, 3);
+ break;
+ default:
+ error = -1;
+ break;
+ }
+ break;
+
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ case (0x94>>2): /* FLD.S */
+ error = misaligned_fpu_load(regs, opcode, 1, 2, 0);
+ break;
+ case (0x98>>2): /* FLD.P */
+ error = misaligned_fpu_load(regs, opcode, 1, 3, 1);
+ break;
+ case (0x9c>>2): /* FLD.D */
+ error = misaligned_fpu_load(regs, opcode, 1, 3, 0);
+ break;
+ case (0x1c>>2): /* floating indexed loads */
+ switch (minor) {
+ case 0x8: /* FLDX.S */
+ error = misaligned_fpu_load(regs, opcode, 0, 2, 0);
+ break;
+ case 0xd: /* FLDX.P */
+ error = misaligned_fpu_load(regs, opcode, 0, 3, 1);
+ break;
+ case 0x9: /* FLDX.D */
+ error = misaligned_fpu_load(regs, opcode, 0, 3, 0);
+ break;
+ default:
+ error = -1;
+ break;
+ }
+ break;
+	case (0xb4>>2): /* FST.S */
+		error = misaligned_fpu_store(regs, opcode, 1, 2, 0);
+		break;
+	case (0xb8>>2): /* FST.P */
+		error = misaligned_fpu_store(regs, opcode, 1, 3, 1);
+		break;
+	case (0xbc>>2): /* FST.D */
+		error = misaligned_fpu_store(regs, opcode, 1, 3, 0);
+		break;
+ case (0x3c>>2): /* floating indexed stores */
+ switch (minor) {
+ case 0x8: /* FSTX.S */
+ error = misaligned_fpu_store(regs, opcode, 0, 2, 0);
+ break;
+ case 0xd: /* FSTX.P */
+ error = misaligned_fpu_store(regs, opcode, 0, 3, 1);
+ break;
+ case 0x9: /* FSTX.D */
+ error = misaligned_fpu_store(regs, opcode, 0, 3, 0);
+ break;
+ default:
+ error = -1;
+ break;
+ }
+ break;
+#endif
+
+ default:
+ /* Fault */
+ error = -1;
+ break;
+ }
+
+ if (error < 0) {
+ return error;
+ } else {
+ regs->pc += 4; /* Skip the instruction that's just been emulated */
+ return 0;
+ }
+
+}
+
+static ctl_table unaligned_table[] = {
+ {1, "kernel_reports", &kernel_mode_unaligned_fixup_count,
+ sizeof(int), 0644, NULL, &proc_dointvec},
+#if defined(CONFIG_SH64_USER_MISALIGNED_FIXUP)
+ {2, "user_reports", &user_mode_unaligned_fixup_count,
+ sizeof(int), 0644, NULL, &proc_dointvec},
+ {3, "user_enable", &user_mode_unaligned_fixup_enable,
+ sizeof(int), 0644, NULL, &proc_dointvec},
+#endif
+ {0}
+};
+
+static ctl_table unaligned_root[] = {
+ {1, "unaligned_fixup", NULL, 0, 0555, unaligned_table},
+ {0}
+};
+
+static ctl_table sh64_root[] = {
+ {1, "sh64", NULL, 0, 0555, unaligned_root},
+ {0}
+};
+static struct ctl_table_header *sysctl_header;
+static int __init init_sysctl(void)
+{
+ sysctl_header = register_sysctl_table(sh64_root, 0);
+ return 0;
+}
+
+__initcall(init_sysctl);
+
+
+asmlinkage void do_debug_interrupt(unsigned long code, struct pt_regs *regs)
+{
+ u64 peek_real_address_q(u64 addr);
+ u64 poke_real_address_q(u64 addr, u64 val);
+ unsigned long long DM_EXP_CAUSE_PHY = 0x0c100010;
+ unsigned long long exp_cause;
+ /* It's not worth ioremapping the debug module registers for the amount
+ of access we make to them - just go direct to their physical
+ addresses. */
+ exp_cause = peek_real_address_q(DM_EXP_CAUSE_PHY);
+ if (exp_cause & ~4) {
+ printk("DM.EXP_CAUSE had unexpected bits set (=%08lx)\n",
+ (unsigned long)(exp_cause & 0xffffffff));
+ }
+ show_state();
+ /* Clear all DEBUGINT causes */
+ poke_real_address_q(DM_EXP_CAUSE_PHY, 0x0);
+}
+
--- /dev/null
+/*
+ * arch/sh64/kernel/unwind.c
+ *
+ * Copyright (C) 2004 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/kallsyms.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <asm/page.h>
+#include <asm/ptrace.h>
+#include <asm/processor.h>
+#include <asm/io.h>
+
+static u8 regcache[63];
+
+/*
+ * Finding the previous stack frame isn't as straightforward as it is on
+ * some other platforms.  In the sh64 case, we don't have "linked" stack
+ * frames, so we need to do a bit of work to determine the previous frame,
+ * and in turn, the previous r14/r18 pair.
+ *
+ * There are generally a few cases which determine where we can find out
+ * the r14/r18 values. In the general case, this can be determined by poking
+ * around the prologue of the symbol PC is in (note that we absolutely must
+ * have frame pointer support as well as the kernel symbol table mapped,
+ * otherwise we can't even get this far).
+ *
+ * In other cases, such as the interrupt/exception path, we can poke around
+ * the sp/fp.
+ *
+ * Notably, this entire approach is somewhat error prone, and in the event
+ * that the previous frame cannot be determined, that's all we can do.
+ * Either way, this still leaves us with a more correct backtrace than what
+ * we would be able to come up with by walking the stack (which is garbage
+ * for anything beyond the first frame).
+ * -- PFM.
+ */
+static int lookup_prev_stack_frame(unsigned long fp, unsigned long pc,
+ unsigned long *pprev_fp, unsigned long *pprev_pc,
+ struct pt_regs *regs)
+{
+ const char *sym;
+ char *modname, namebuf[128];
+ unsigned long offset, size;
+ unsigned long prologue = 0;
+ unsigned long fp_displacement = 0;
+ unsigned long fp_prev = 0;
+ unsigned long offset_r14 = 0, offset_r18 = 0;
+ int i, found_prologue_end = 0;
+
+ sym = kallsyms_lookup(pc, &size, &offset, &modname, namebuf);
+ if (!sym)
+ return -EINVAL;
+
+ prologue = pc - offset;
+ if (!prologue)
+ return -EINVAL;
+
+ /* Validate fp, to avoid the risk of dereferencing a bad pointer later.
+ Assume 128MB since that's the amount of RAM on a Cayman. Modify
+ when there is an SH-5 board with more. */
+ if ((fp < (unsigned long) phys_to_virt(__MEMORY_START)) ||
+ (fp >= (unsigned long)(phys_to_virt(__MEMORY_START)) + 128*1024*1024) ||
+ ((fp & 7) != 0)) {
+ return -EINVAL;
+ }
+
+ /*
+ * Depth to walk, depth is completely arbitrary.
+ */
+ for (i = 0; i < 100; i++, prologue += sizeof(unsigned long)) {
+ unsigned long op;
+ u8 major, minor;
+ u8 src, dest, disp;
+
+ op = *(unsigned long *)prologue;
+
+ major = (op >> 26) & 0x3f;
+ src = (op >> 20) & 0x3f;
+ minor = (op >> 16) & 0xf;
+ disp = (op >> 10) & 0x3f;
+ dest = (op >> 4) & 0x3f;
+
+ /*
+ * Stack frame creation happens in a number of ways.. in the
+ * general case when the stack frame is less than 511 bytes,
+ * it's generally created by an addi or addi.l:
+ *
+ * addi/addi.l r15, -FRAME_SIZE, r15
+ *
+ * in the event that the frame size is bigger than this, it's
+ * typically created using a movi/sub pair as follows:
+ *
+ * movi FRAME_SIZE, rX
+ * sub r15, rX, r15
+ */
+
+ switch (major) {
+ case (0x00 >> 2):
+ switch (minor) {
+ case 0x8: /* add.l */
+ case 0x9: /* add */
+ /* Look for r15, r63, r14 */
+ if (src == 15 && disp == 63 && dest == 14)
+ found_prologue_end = 1;
+
+ break;
+ case 0xa: /* sub.l */
+ case 0xb: /* sub */
+ if (src != 15 || dest != 15)
+ continue;
+
+ fp_displacement -= regcache[disp];
+ fp_prev = fp - fp_displacement;
+ break;
+ }
+ break;
+ case (0xa8 >> 2): /* st.l */
+ if (src != 15)
+ continue;
+
+ switch (dest) {
+ case 14:
+ if (offset_r14 || fp_displacement == 0)
+ continue;
+
+ offset_r14 = (u64)(((((s64)op >> 10) & 0x3ff) << 54) >> 54);
+ offset_r14 *= sizeof(unsigned long);
+ offset_r14 += fp_displacement;
+ break;
+ case 18:
+ if (offset_r18 || fp_displacement == 0)
+ continue;
+
+ offset_r18 = (u64)(((((s64)op >> 10) & 0x3ff) << 54) >> 54);
+ offset_r18 *= sizeof(unsigned long);
+ offset_r18 += fp_displacement;
+ break;
+ }
+
+ break;
+ case (0xcc >> 2): /* movi */
+ if (dest >= 63) {
+ printk(KERN_NOTICE "%s: Invalid dest reg %d "
+ "specified in movi handler. Failed "
+ "opcode was 0x%lx: ", __FUNCTION__,
+ dest, op);
+
+ continue;
+ }
+
+ /* Sign extend */
+ regcache[dest] =
+ ((((s64)(u64)op >> 10) & 0xffff) << 54) >> 54;
+ break;
+ case (0xd0 >> 2): /* addi */
+ case (0xd4 >> 2): /* addi.l */
+ /* Look for r15, -FRAME_SIZE, r15 */
+ if (src != 15 || dest != 15)
+ continue;
+
+ /* Sign extended frame size.. */
+ fp_displacement +=
+ (u64)(((((s64)op >> 10) & 0x3ff) << 54) >> 54);
+ fp_prev = fp - fp_displacement;
+ break;
+ }
+
+ if (found_prologue_end && offset_r14 && (offset_r18 || *pprev_pc) && fp_prev)
+ break;
+ }
+
+ if (offset_r14 == 0 || fp_prev == 0) {
+ if (!offset_r14)
+ pr_debug("Unable to find r14 offset\n");
+ if (!fp_prev)
+ pr_debug("Unable to find previous fp\n");
+
+ return -EINVAL;
+ }
+
+ /* For the innermost leaf function, there might not be an offset_r18 */
+ if (!*pprev_pc && (offset_r18 == 0))
+ return -EINVAL;
+
+ *pprev_fp = *(unsigned long *)(fp_prev + offset_r14);
+
+ if (offset_r18)
+ *pprev_pc = *(unsigned long *)(fp_prev + offset_r18);
+
+ *pprev_pc &= ~1;
+
+ return 0;
+}
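
The `(... << 54) >> 54` construction used for the st.l and addi displacements above is a 10-bit sign extension. A standalone C model of the idiom (illustrative; it relies on GCC's arithmetic right shift of signed values, just as the code above does):

	#include <assert.h>

	typedef long long s64;
	typedef unsigned long long u64;

	/* Extract the 10-bit field at bits 10..19 of an opcode and
	 * sign-extend it: shift the field's top bit up to bit 63, then
	 * arithmetic-shift back down. */
	static s64 sext10(u64 op)
	{
		s64 field = (op >> 10) & 0x3ff;
		return (field << 54) >> 54;
	}

	int main(void)
	{
		assert(sext10(0x3ffULL << 10) == -1);   /* all ones  -> -1   */
		assert(sext10(0x200ULL << 10) == -512); /* sign bit  -> -512 */
		assert(sext10(0x1ffULL << 10) == 511);  /* max positive      */
		return 0;
	}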
+
+/* Don't put this on the stack since we'll want to call sh64_unwind
+ * when we're close to underflowing the stack anyway. */
+static struct pt_regs here_regs;
+
+extern const char syscall_ret;
+extern const char ret_from_syscall;
+extern const char ret_from_exception;
+extern const char ret_from_irq;
+
+static void sh64_unwind_inner(struct pt_regs *regs);
+
+static void unwind_nested (unsigned long pc, unsigned long fp)
+{
+ if ((fp >= __MEMORY_START) &&
+ ((fp & 7) == 0)) {
+ sh64_unwind_inner((struct pt_regs *) fp);
+ }
+}
+
+static void sh64_unwind_inner(struct pt_regs *regs)
+{
+ unsigned long pc, fp;
+ int ofs = 0;
+ int first_pass;
+
+ pc = regs->pc & ~1;
+ fp = regs->regs[14];
+
+ first_pass = 1;
+ for (;;) {
+ int cond;
+ unsigned long next_fp, next_pc;
+
+ if (pc == ((unsigned long) &syscall_ret & ~1)) {
+ printk("SYSCALL\n");
+ unwind_nested(pc,fp);
+ return;
+ }
+
+ if (pc == ((unsigned long) &ret_from_syscall & ~1)) {
+ printk("SYSCALL (PREEMPTED)\n");
+ unwind_nested(pc,fp);
+ return;
+ }
+
+ /* In this case, the PC is discovered by lookup_prev_stack_frame but
+ it has 4 taken off it to look like the 'caller' */
+ if (pc == ((unsigned long) &ret_from_exception & ~1)) {
+ printk("EXCEPTION\n");
+ unwind_nested(pc,fp);
+ return;
+ }
+
+ if (pc == ((unsigned long) &ret_from_irq & ~1)) {
+ printk("IRQ\n");
+ unwind_nested(pc,fp);
+ return;
+ }
+
+ cond = ((pc >= __MEMORY_START) && (fp >= __MEMORY_START) &&
+ ((pc & 3) == 0) && ((fp & 7) == 0));
+
+ pc -= ofs;
+
+ printk("[<%08lx>] ", pc);
+ print_symbol("%s\n", pc);
+
+ if (first_pass) {
+ /* If the innermost frame is a leaf function, it's
+ * possible that r18 is never saved out to the stack.
+ */
+ next_pc = regs->regs[18];
+ } else {
+ next_pc = 0;
+ }
+
+ if (lookup_prev_stack_frame(fp, pc, &next_fp, &next_pc, regs) == 0) {
+ ofs = sizeof(unsigned long);
+ pc = next_pc & ~1;
+ fp = next_fp;
+ } else {
+ printk("Unable to lookup previous stack frame\n");
+ break;
+ }
+ first_pass = 0;
+ }
+
+ printk("\n");
+
+}
+
+void sh64_unwind(struct pt_regs *regs)
+{
+ if (!regs) {
+ /*
+ * Fetch current regs if we have no other saved state to back
+ * trace from.
+ */
+ regs = &here_regs;
+
+ __asm__ __volatile__ ("ori r14, 0, %0" : "=r" (regs->regs[14]));
+ __asm__ __volatile__ ("ori r15, 0, %0" : "=r" (regs->regs[15]));
+ __asm__ __volatile__ ("ori r18, 0, %0" : "=r" (regs->regs[18]));
+
+ __asm__ __volatile__ ("gettr tr0, %0" : "=r" (regs->tregs[0]));
+ __asm__ __volatile__ ("gettr tr1, %0" : "=r" (regs->tregs[1]));
+ __asm__ __volatile__ ("gettr tr2, %0" : "=r" (regs->tregs[2]));
+ __asm__ __volatile__ ("gettr tr3, %0" : "=r" (regs->tregs[3]));
+ __asm__ __volatile__ ("gettr tr4, %0" : "=r" (regs->tregs[4]));
+ __asm__ __volatile__ ("gettr tr5, %0" : "=r" (regs->tregs[5]));
+ __asm__ __volatile__ ("gettr tr6, %0" : "=r" (regs->tregs[6]));
+ __asm__ __volatile__ ("gettr tr7, %0" : "=r" (regs->tregs[7]));
+
+ __asm__ __volatile__ (
+ "pta 0f, tr0\n\t"
+ "blink tr0, %0\n\t"
+ "0: nop"
+ : "=r" (regs->pc)
+ );
+ }
+
+ printk("\nCall Trace:\n");
+ sh64_unwind_inner(regs);
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/vmlinux.lds.S
+ *
+ * ld script to make ST50 Linux kernel
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * benedict.gaster@superh.com: 2nd May 2002
+ * Add definition of empty_zero_page to be the first page of kernel image.
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 9th May 2003
+ * Kill off GLOBAL_NAME() usage and other CDC-isms.
+ *
+ * lethal@linux-sh.org: 19th May 2003
+ * Remove support for ancient toolchains.
+ */
+
+#include <linux/config.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/processor.h>
+#include <asm/thread_info.h>
+
+#define LOAD_OFFSET CONFIG_CACHED_MEMORY_OFFSET
+#include <asm-generic/vmlinux.lds.h>
+
+#ifdef NOTDEF
+#ifdef CONFIG_LITTLE_ENDIAN
+OUTPUT_FORMAT("elf32-sh64l-linux", "elf32-sh64l-linux", "elf32-sh64l-linux")
+#else
+OUTPUT_FORMAT("elf32-sh64", "elf32-sh64", "elf32-sh64")
+#endif
+#endif
+
+OUTPUT_ARCH(sh:sh5)
+
+#define C_PHYS(x) AT (ADDR(x) - LOAD_OFFSET)
+
+ENTRY(__start)
+SECTIONS
+{
+ . = CONFIG_CACHED_MEMORY_OFFSET + CONFIG_MEMORY_START + PAGE_SIZE;
+ _text = .; /* Text and read-only data */
+ text = .; /* Text and read-only data */
+
+ .empty_zero_page : C_PHYS(.empty_zero_page) {
+ *(.empty_zero_page)
+ } = 0
+
+ .text : C_PHYS(.text) {
+ *(.text)
+ *(.text64)
+ *(.text..SHmedia32)
+ SCHED_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+#ifdef CONFIG_LITTLE_ENDIAN
+ } = 0x6ff0fff0
+#else
+ } = 0xf0fff06f
+#endif
+
+ /* We likely want __ex_table to be Cache Line aligned */
+ . = ALIGN(L1_CACHE_BYTES); /* Exception table */
+ __start___ex_table = .;
+ __ex_table : C_PHYS(__ex_table) { *(__ex_table) }
+ __stop___ex_table = .;
+
+ RODATA
+
+ _etext = .; /* End of text section */
+
+ .data : C_PHYS(.data) { /* Data */
+ *(.data)
+ CONSTRUCTORS
+ }
+
+ . = ALIGN(PAGE_SIZE);
+ .data.page_aligned : C_PHYS(.data.page_aligned) { *(.data.page_aligned) }
+
+ . = ALIGN(L1_CACHE_BYTES);
+ __per_cpu_start = .;
+ .data.percpu : C_PHYS(.data.percpu) { *(.data.percpu) }
+ __per_cpu_end = . ;
+ .data.cacheline_aligned : C_PHYS(.data.cacheline_aligned) { *(.data.cacheline_aligned) }
+
+ _edata = .; /* End of data section */
+
+ . = ALIGN(THREAD_SIZE); /* init_task: structure size aligned */
+ .data.init_task : C_PHYS(.data.init_task) { *(.data.init_task) }
+
+ . = ALIGN(PAGE_SIZE); /* Init code and data */
+ __init_begin = .;
+ _sinittext = .;
+ .init.text : C_PHYS(.init.text) { *(.init.text) }
+ _einittext = .;
+ .init.data : C_PHYS(.init.data) { *(.init.data) }
+ . = ALIGN(L1_CACHE_BYTES); /* Better if Cache Line aligned */
+ __setup_start = .;
+ .init.setup : C_PHYS(.init.setup) { *(.init.setup) }
+ __setup_end = .;
+ __start___param = .;
+ __param : C_PHYS(__param) { *(__param) }
+ __stop___param = .;
+ __initcall_start = .;
+ .initcall.init : C_PHYS(.initcall.init) {
+ *(.initcall1.init)
+ *(.initcall2.init)
+ *(.initcall3.init)
+ *(.initcall4.init)
+ *(.initcall5.init)
+ *(.initcall6.init)
+ *(.initcall7.init)
+ }
+ __initcall_end = .;
+ __con_initcall_start = .;
+ .con_initcall.init : C_PHYS(.con_initcall.init) { *(.con_initcall.init) }
+ __con_initcall_end = .;
+ SECURITY_INIT
+ __initramfs_start = .;
+ .init.ramfs : C_PHYS(.init.ramfs) { *(.init.ramfs) }
+ __initramfs_end = .;
+ . = ALIGN(PAGE_SIZE);
+ __init_end = .;
+
+ /* Align to the biggest single data representation, head and tail */
+ . = ALIGN(8);
+ __bss_start = .; /* BSS */
+ .bss : C_PHYS(.bss) {
+ *(.bss)
+ }
+ . = ALIGN(8);
+ _end = . ;
+
+ /* Sections to be discarded */
+ /DISCARD/ : {
+ *(.exit.text)
+ *(.exit.data)
+ *(.exitcall.exit)
+ }
+
+ /* Stabs debugging sections. */
+ .stab 0 : C_PHYS(.stab) { *(.stab) }
+ .stabstr 0 : C_PHYS(.stabstr) { *(.stabstr) }
+ .stab.excl 0 : C_PHYS(.stab.excl) { *(.stab.excl) }
+ .stab.exclstr 0 : C_PHYS(.stab.exclstr) { *(.stab.exclstr) }
+ .stab.index 0 : C_PHYS(.stab.index) { *(.stab.index) }
+ .stab.indexstr 0 : C_PHYS(.stab.indexstr) { *(.stab.indexstr) }
+ .comment 0 : C_PHYS(.comment) { *(.comment) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging section are relative to the beginning
+ of the section so we begin .debug at 0. */
+ /* DWARF 1 */
+ .debug 0 : C_PHYS(.debug) { *(.debug) }
+ .line 0 : C_PHYS(.line) { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : C_PHYS(.debug_srcinfo) { *(.debug_srcinfo) }
+ .debug_sfnames 0 : C_PHYS(.debug_sfnames) { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : C_PHYS(.debug_aranges) { *(.debug_aranges) }
+ .debug_pubnames 0 : C_PHYS(.debug_pubnames) { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : C_PHYS(.debug_info) { *(.debug_info) }
+ .debug_abbrev 0 : C_PHYS(.debug_abbrev) { *(.debug_abbrev) }
+ .debug_line 0 : C_PHYS(.debug_line) { *(.debug_line) }
+ .debug_frame 0 : C_PHYS(.debug_frame) { *(.debug_frame) }
+ .debug_str 0 : C_PHYS(.debug_str) { *(.debug_str) }
+ .debug_loc 0 : C_PHYS(.debug_loc) { *(.debug_loc) }
+ .debug_macinfo 0 : C_PHYS(.debug_macinfo) { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : C_PHYS(.debug_weaknames) { *(.debug_weaknames) }
+ .debug_funcnames 0 : C_PHYS(.debug_funcnames) { *(.debug_funcnames) }
+ .debug_typenames 0 : C_PHYS(.debug_typenames) { *(.debug_typenames) }
+ .debug_varnames 0 : C_PHYS(.debug_varnames) { *(.debug_varnames) }
+ /* These must appear regardless of . */
+}
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000, 2001 Paolo Alberelli
+# Copyright (C) 2003 Paul Mundt
+#
+# Makefile for the SH-5 specific library files..
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+# Panic should really be compiled as PIC
+lib-y := udelay.o c-checksum.o dbg.o io.o panic.o memcpy.o copy_user_memcpy.o \
+ page_copy.o page_clear.o
+
--- /dev/null
+/*
+ * arch/sh64/lib/c-checksum.c
+ *
+ * This file contains network checksum routines that are better done
+ * in an architecture-specific manner due to speed.
+ */
+
+#undef DEBUG
+
+#include <linux/config.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <asm/byteorder.h>
+#include <asm/uaccess.h>
+
+static inline unsigned short from64to16(unsigned long long x)
+{
+ /* add up 32-bit words for 33 bits */
+ x = (x & 0xffffffff) + (x >> 32);
+ /* add up 16-bit and 17-bit words for 17+c bits */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up 16-bit and 2-bit for 16+c bit */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
+
+static inline unsigned short foldto16(unsigned long x)
+{
+ /* add up 16-bit for 17 bits */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
+
+static inline unsigned short myfoldto16(unsigned long long x)
+{
+ /* Fold down to 32 bits so we don't lose bits in the typedef-less
+ network stack. */
+ /* 64 to 33 */
+ x = (x & 0xffffffff) + (x >> 32);
+ /* 33 to 32 */
+ x = (x & 0xffffffff) + (x >> 32);
+
+ /* add up 16-bit for 17 bits */
+ x = (x & 0xffff) + (x >> 16);
+ /* add up carry.. */
+ x = (x & 0xffff) + (x >> 16);
+ return x;
+}
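
All three folding helpers rest on the same property: in a ones'-complement sum, adding the high half of an accumulator into its low half (with end-around carry) preserves the final 16-bit result. A small self-contained check of that property (a sketch, independent of the kernel build):

	#include <assert.h>
	#include <stdint.h>

	static uint16_t fold32(uint32_t x)
	{
		x = (x & 0xffff) + (x >> 16);	/* at most 17 bits now */
		x = (x & 0xffff) + (x >> 16);	/* absorb final carry  */
		return x;
	}

	int main(void)
	{
		uint16_t w[4] = { 0xffff, 0x0001, 0x8000, 0x8001 };
		uint32_t acc = 0, ref = 0;
		int i;

		for (i = 0; i < 4; i++) {
			acc += w[i];		/* wide accumulate, fold once at end */
			ref += w[i];		/* reference: end-around carry each step */
			if (ref > 0xffff)
				ref = (ref & 0xffff) + 1;
		}
		assert(fold32(acc) == ref);
		return 0;
	}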
+
+#define odd(x) ((x)&1)
+#define U16(x) ntohs(x)
+
+static unsigned long do_csum(const unsigned char *buff, int len)
+{
+ int odd, count;
+ unsigned long result = 0;
+
+ pr_debug("do_csum buff %p, len %d (0x%x)\n", buff, len, len);
+#ifdef DEBUG
+ {
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if ((i % 26) == 0)
+ printk("\n");
+ printk("%02X ", buff[i]);
+ }
+ }
+#endif
+
+ if (len <= 0)
+ goto out;
+
+ odd = 1 & (unsigned long) buff;
+ if (odd) {
+ result = *buff << 8;
+ len--;
+ buff++;
+ }
+ count = len >> 1; /* nr of 16-bit words.. */
+ if (count) {
+ if (2 & (unsigned long) buff) {
+ result += *(unsigned short *) buff;
+ count--;
+ len -= 2;
+ buff += 2;
+ }
+ count >>= 1; /* nr of 32-bit words.. */
+ if (count) {
+ unsigned long carry = 0;
+ do {
+ unsigned long w = *(unsigned long *) buff;
+ buff += 4;
+ count--;
+ result += carry;
+ result += w;
+ carry = (w > result);
+ } while (count);
+ result += carry;
+ result = (result & 0xffff) + (result >> 16);
+ }
+ if (len & 2) {
+ result += *(unsigned short *) buff;
+ buff += 2;
+ }
+ }
+ if (len & 1)
+ result += *buff;
+ result = foldto16(result);
+ if (odd)
+ result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
+
+ pr_debug("\nCHECKSUM is 0x%x\n", result);
+
+ out:
+ return result;
+}
+
+/* computes the checksum of a memory block at buff, length len,
+ and adds in "sum" (32-bit) */
+unsigned int csum_partial(const unsigned char *buff, int len, unsigned int sum)
+{
+ unsigned long long result = do_csum(buff, len);
+
+ /* add in old sum, and carry.. */
+ result += sum;
+ /* 32+c bits -> 32 bits */
+ result = (result & 0xffffffff) + (result >> 32);
+
+ pr_debug("csum_partial, buff %p len %d sum 0x%x result=0x%016Lx\n",
+ buff, len, sum, result);
+
+ return result;
+}
+
+/* Copy while checksumming, otherwise like csum_partial. */
+unsigned int
+csum_partial_copy(const char *src, char *dst, int len, unsigned int sum)
+{
+ sum = csum_partial(src, len, sum);
+ memcpy(dst, src, len);
+
+ return sum;
+}
+
+/* Copy from userspace and compute checksum. If we catch an exception
+ then zero the rest of the buffer. */
+unsigned int
+csum_partial_copy_from_user(const char *src, char *dst, int len,
+ unsigned int sum, int *err_ptr)
+{
+ int missing;
+
+ pr_debug("csum_partial_copy_from_user src %p, dest %p, len %d, sum %08x, err_ptr %p\n",
+ src, dst, len, sum, err_ptr);
+ missing = copy_from_user(dst, src, len);
+ pr_debug(" access_ok %d\n", __access_ok((unsigned long) src, len));
+ pr_debug(" missing %d\n", missing);
+ if (missing) {
+ memset(dst + len - missing, 0, missing);
+ *err_ptr = -EFAULT;
+ }
+
+ return csum_partial(dst, len, sum);
+}
+
+/* Copy to userspace and compute checksum. */
+unsigned int
+csum_partial_copy_to_user(const char *src, char *dst, int len,
+ unsigned int sum, int *err_ptr)
+{
+ sum = csum_partial(src, len, sum);
+
+ if (copy_to_user(dst, src, len))
+ *err_ptr = -EFAULT;
+
+ return sum;
+}
+
+/*
+ * This is a version of ip_compute_csum() optimized for IP headers,
+ * which always checksum on 4 octet boundaries.
+ */
+unsigned short ip_fast_csum(unsigned char *iph, unsigned int ihl)
+{
+ pr_debug("ip_fast_csum %p,%d\n", iph, ihl);
+
+ return ~do_csum(iph, ihl * 4);
+}
+
+unsigned int csum_tcpudp_nofold(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto, unsigned int sum)
+{
+ unsigned long long result;
+
+ pr_debug("ntohs(0x%x)=0x%x\n", 0xdead, ntohs(0xdead));
+ pr_debug("htons(0x%x)=0x%x\n", 0xdead, htons(0xdead));
+
+ result = ((unsigned long long) saddr +
+ (unsigned long long) daddr +
+ (unsigned long long) sum +
+ ((unsigned long long) ntohs(len) << 16) +
+ ((unsigned long long) proto << 8));
+
+ /* Fold down to 32 bits so we don't lose bits in the typedef-less
+ network stack. */
+ /* 64 to 33 */
+ result = (result & 0xffffffff) + (result >> 32);
+ /* 33 to 32 */
+ result = (result & 0xffffffff) + (result >> 32);
+
+ pr_debug("%s saddr %x daddr %x len %x proto %x sum %x result %08Lx\n",
+ __FUNCTION__, saddr, daddr, len, proto, sum, result);
+
+ return result;
+}
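
The "nofold" in the name means the caller still folds and inverts the 32-bit result (csum_fold() in asm/checksum.h does this for the generic code). A hedged sketch of that final step, with an illustrative helper name:

	/* Fold a 32-bit partial checksum to 16 bits and invert it, as the
	 * eventual consumer of csum_tcpudp_nofold() does. */
	static inline unsigned short fold_and_invert(unsigned int sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);
		sum = (sum & 0xffff) + (sum >> 16);
		return (unsigned short)~sum;
	}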
+
+// Post SIM:
+unsigned int
+csum_partial_copy_nocheck(const char *src, char *dst, int len, unsigned int sum)
+{
+ pr_debug("csum_partial_copy_nocheck src %p dst %p len %d\n", src, dst,
+ len);
+
+ return csum_partial_copy(src, dst, len, sum);
+}
--- /dev/null
+!
+! Fast SH memcpy
+!
+! by Toshiyasu Morita (tm@netcom.com)
+! hacked by J"orn Rennecke (joern.rennecke@superh.com) ("o for o-umlaut)
+! SH5 code Copyright 2002 SuperH Ltd.
+!
+! Entry: ARG0: destination pointer
+! ARG1: source pointer
+! ARG2: byte count
+!
+! Exit: RESULT: destination pointer
+! any other registers in the range r0-r7: trashed
+!
+! Notes: Usually one wants to do small reads and write a longword, but
+! unfortunately it is difficult in some cases to concatenate bytes
+! into a longword on the SH, so this does a longword read and small
+! writes.
+!
+! This implementation makes two assumptions about how it is called:
+!
+! 1.: If the byte count is nonzero, the address of the last byte to be
+! copied is unsigned greater than the address of the first byte to
+! be copied. This could be easily swapped for a signed comparison,
+! but the algorithm used needs some comparison.
+!
+! 2.: When there are two or three bytes in the last word of an 11-or-more
+! byte memory chunk to be copied, the rest of the word can be read
+! without side effects.
+! This could be easily changed by increasing the minimum size of
+! a fast memcpy and the amount subtracted from r7 before L_2l_loop to 2;
+! however, this would cost a few extra cycles on average.
+! For SHmedia, the assumption is that any quadword can be read in its
+! entirety if at least one byte is included in the copy.
+
+/* Imported into Linux kernel by Richard Curnow. This is used to implement the
+ __copy_user function in the general case, so it has to be a distinct
+ function from intra-kernel memcpy to allow for exception fix-ups in the
+ event that the user pointer is bad somewhere in the copy (e.g. due to
+ running off the end of the vma).
+
+ Note, this algorithm will be slightly wasteful in the case where the source
+ and destination pointers are equally aligned, because the stlo/sthi pairs
+ could then be merged back into single stores. If there are a lot of cache
+ misses, this is probably offset by the stall lengths on the preloads.
+
+*/
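
For readers new to SHmedia: each LDUAQ/STUAQ pair below performs one misaligned access as two aligned accesses whose results are merged with an `or`. The net effect of a quadword unaligned load, modelled in plain C (illustrative; assumes little-endian and that both containing quadwords may be read, per the SHmedia note above):

	#include <stdint.h>
	#include <string.h>

	static uint64_t load_unaligned_q(const unsigned char *p)
	{
		uintptr_t base = (uintptr_t)p & ~(uintptr_t)7;
		unsigned shift = ((uintptr_t)p & 7) * 8;
		uint64_t lo, hi;

		memcpy(&lo, (const void *)base, 8);		/* ldlo.q analogue */
		if (shift == 0)
			return lo;
		memcpy(&hi, (const void *)(base + 8), 8);	/* ldhi.q analogue */
		return (lo >> shift) | (hi << (64 - shift));
	}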
+
+ .section .text..SHmedia32,"ax"
+ .little
+ .balign 32
+ .global copy_user_memcpy
+ .global copy_user_memcpy_end
+copy_user_memcpy:
+
+#define LDUAQ(P,O,D0,D1) ldlo.q P,O,D0; ldhi.q P,O+7,D1
+#define STUAQ(P,O,D0,D1) stlo.q P,O,D0; sthi.q P,O+7,D1
+#define LDUAL(P,O,D0,D1) ldlo.l P,O,D0; ldhi.l P,O+3,D1
+#define STUAL(P,O,D0,D1) stlo.l P,O,D0; sthi.l P,O+3,D1
+
+ ld.b r3,0,r63
+ pta/l Large,tr0
+ movi 25,r0
+ bgeu/u r4,r0,tr0
+ nsb r4,r0
+ shlli r0,5,r0
+ movi (L1-L0+63*32 + 1) & 0xffff,r1
+ sub r1, r0, r0
+L0: ptrel r0,tr0
+ add r2,r4,r5
+ ptabs r18,tr1
+ add r3,r4,r6
+ blink tr0,r63
+
+/* Rearranged to make cut2 safe */
+ .balign 8
+L4_7: /* 4..7 byte memcpy cntd. */
+ stlo.l r2, 0, r0
+ or r6, r7, r6
+ sthi.l r5, -1, r6
+ stlo.l r5, -4, r6
+ blink tr1,r63
+
+ .balign 8
+L1: /* 0 byte memcpy */
+ nop
+ blink tr1,r63
+ nop
+ nop
+ nop
+ nop
+
+L2_3: /* 2 or 3 byte memcpy cntd. */
+ st.b r5,-1,r6
+ blink tr1,r63
+
+ /* 1 byte memcpy */
+ ld.b r3,0,r0
+ st.b r2,0,r0
+ blink tr1,r63
+
+L8_15: /* 8..15 byte memcpy cntd. */
+ stlo.q r2, 0, r0
+ or r6, r7, r6
+ sthi.q r5, -1, r6
+ stlo.q r5, -8, r6
+ blink tr1,r63
+
+ /* 2 or 3 byte memcpy */
+ ld.b r3,0,r0
+ ld.b r2,0,r63
+ ld.b r3,1,r1
+ st.b r2,0,r0
+ pta/l L2_3,tr0
+ ld.b r6,-1,r6
+ st.b r2,1,r1
+ blink tr0, r63
+
+ /* 4 .. 7 byte memcpy */
+ LDUAL (r3, 0, r0, r1)
+ pta L4_7, tr0
+ ldlo.l r6, -4, r7
+ or r0, r1, r0
+ sthi.l r2, 3, r0
+ ldhi.l r6, -1, r6
+ blink tr0, r63
+
+ /* 8 .. 15 byte memcpy */
+ LDUAQ (r3, 0, r0, r1)
+ pta L8_15, tr0
+ ldlo.q r6, -8, r7
+ or r0, r1, r0
+ sthi.q r2, 7, r0
+ ldhi.q r6, -1, r6
+ blink tr0, r63
+
+ /* 16 .. 24 byte memcpy */
+ LDUAQ (r3, 0, r0, r1)
+ LDUAQ (r3, 8, r8, r9)
+ or r0, r1, r0
+ sthi.q r2, 7, r0
+ or r8, r9, r8
+ sthi.q r2, 15, r8
+ ldlo.q r6, -8, r7
+ ldhi.q r6, -1, r6
+ stlo.q r2, 8, r8
+ stlo.q r2, 0, r0
+ or r6, r7, r6
+ sthi.q r5, -1, r6
+ stlo.q r5, -8, r6
+ blink tr1,r63
+
+Large:
+ ld.b r2, 0, r63
+ pta/l Loop_ua, tr1
+ ori r3, -8, r7
+ sub r2, r7, r22
+ sub r3, r2, r6
+ add r2, r4, r5
+ ldlo.q r3, 0, r0
+ addi r5, -16, r5
+ movi 64+8, r27 ! could subtract r7 from that.
+ stlo.q r2, 0, r0
+ sthi.q r2, 7, r0
+ ldx.q r22, r6, r0
+ bgtu/l r27, r4, tr1
+
+ addi r5, -48, r27
+ pta/l Loop_line, tr0
+ addi r6, 64, r36
+ addi r6, -24, r19
+ addi r6, -16, r20
+ addi r6, -8, r21
+
+Loop_line:
+ ldx.q r22, r36, r63
+ synco
+ alloco r22, 32
+ synco
+ addi r22, 32, r22
+ ldx.q r22, r19, r23
+ sthi.q r22, -25, r0
+ ldx.q r22, r20, r24
+ ldx.q r22, r21, r25
+ stlo.q r22, -32, r0
+ ldx.q r22, r6, r0
+ sthi.q r22, -17, r23
+ sthi.q r22, -9, r24
+ sthi.q r22, -1, r25
+ stlo.q r22, -24, r23
+ stlo.q r22, -16, r24
+ stlo.q r22, -8, r25
+ bgeu r27, r22, tr0
+
+Loop_ua:
+ addi r22, 8, r22
+ sthi.q r22, -1, r0
+ stlo.q r22, -8, r0
+ ldx.q r22, r6, r0
+ bgtu/l r5, r22, tr1
+
+ add r3, r4, r7
+ ldlo.q r7, -8, r1
+ sthi.q r22, 7, r0
+ ldhi.q r7, -1, r7
+ ptabs r18,tr1
+ stlo.q r22, 0, r0
+ or r1, r7, r1
+ sthi.q r5, 15, r1
+ stlo.q r5, 8, r1
+ blink tr1, r63
+copy_user_memcpy_end:
+ nop
--- /dev/null
+/*--------------------------------------------------------------------------
+--
+-- Identity : Linux50 Debug Functions
+--
+-- File : arch/sh64/lib/dbg.c
+--
+-- Copyright 2000, 2001 STMicroelectronics Limited.
+-- Copyright 2004 Richard Curnow (evt_debug etc)
+--
+--------------------------------------------------------------------------*/
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <asm/mmu_context.h>
+
+typedef u64 regType_t;
+
+static regType_t getConfigReg(u64 id)
+{
+ register u64 reg __asm__("r2");
+ asm volatile ("getcfg %1, 0, %0":"=r" (reg):"r"(id));
+ return (reg);
+}
+
+/* ======================================================================= */
+
+static char *szTab[] = { "4k", "64k", "1M", "512M" };
+static char *protTab[] = { "----",
+ "---R",
+ "--X-",
+ "--XR",
+ "-W--",
+ "-W-R",
+ "-WX-",
+ "-WXR",
+ "U---",
+ "U--R",
+ "U-X-",
+ "U-XR",
+ "UW--",
+ "UW-R",
+ "UWX-",
+ "UWXR"
+};
+#define ITLB_BASE 0x00000000
+#define DTLB_BASE 0x00800000
+#define MAX_TLBs 64
+/* PTE High */
+#define GET_VALID(pte) ((pte) & 0x1)
+#define GET_SHARED(pte) ((pte) & 0x2)
+#define GET_ASID(pte) ((pte >> 2) & 0x0ff)
+#define GET_EPN(pte) ((pte) & 0xfffff000)
+
+/* PTE Low */
+#define GET_CBEHAVIOR(pte) ((pte) & 0x3)
+#define GET_PAGE_SIZE(pte) szTab[((pte >> 3) & 0x3)]
+#define GET_PROTECTION(pte) protTab[((pte >> 6) & 0xf)]
+#define GET_PPN(pte) ((pte) & 0xfffff000)
+
+#define PAGE_1K_MASK 0x00000000
+#define PAGE_4K_MASK 0x00000010
+#define PAGE_64K_MASK 0x00000080
+#define MMU_PAGESIZE_MASK (PAGE_64K_MASK | PAGE_4K_MASK)
+#define PAGE_1MB_MASK MMU_PAGESIZE_MASK
+#define PAGE_1K (1024)
+#define PAGE_4K (1024 * 4)
+#define PAGE_64K (1024 * 64)
+#define PAGE_1MB (1024 * 1024)
+
+#define HOW_TO_READ_TLB_CONTENT \
+ "[ ID] PPN EPN ASID Share CB P.Size PROT.\n"
+
+void print_single_tlb(unsigned long tlb, int single_print)
+{
+ regType_t pteH;
+ regType_t pteL;
+ unsigned int valid, shared, asid, epn, cb, ppn;
+ char *pSize;
+ char *pProt;
+
+ /*
+ ** In the single-print case <single_print> is true, which implies:
+ ** 1) print the TLB entry even if it is NOT VALID
+ ** 2) print out the header
+ */
+
+ pteH = getConfigReg(tlb);
+ valid = GET_VALID(pteH);
+ if (single_print)
+ printk(HOW_TO_READ_TLB_CONTENT);
+ else if (!valid)
+ return;
+
+ pteL = getConfigReg(tlb + 1);
+
+ shared = GET_SHARED(pteH);
+ asid = GET_ASID(pteH);
+ epn = GET_EPN(pteH);
+ cb = GET_CBEHAVIOR(pteL);
+ pSize = GET_PAGE_SIZE(pteL);
+ pProt = GET_PROTECTION(pteL);
+ ppn = GET_PPN(pteL);
+ printk("[%c%2ld] 0x%08x 0x%08x %03d %02x %02x %4s %s\n",
+ ((valid) ? ' ' : 'u'), ((tlb & 0x0ffff) / TLB_STEP),
+ ppn, epn, asid, shared, cb, pSize, pProt);
+}
+
+void print_dtlb(void)
+{
+ int count;
+ unsigned long tlb;
+
+ printk(" ================= SH-5 D-TLBs Status ===================\n");
+ printk(HOW_TO_READ_TLB_CONTENT);
+ tlb = DTLB_BASE;
+ for (count = 0; count < MAX_TLBs; count++, tlb += TLB_STEP)
+ print_single_tlb(tlb, 0);
+ printk(" =============================================================\n");
+}
+
+void print_itlb(void)
+{
+ int count;
+ unsigned long tlb;
+
+ printk(" ================= SH-5 I-TLBs Status ===================\n");
+ printk(HOW_TO_READ_TLB_CONTENT);
+ tlb = ITLB_BASE;
+ for (count = 0; count < MAX_TLBs; count++, tlb += TLB_STEP)
+ print_single_tlb(tlb, 0);
+ printk(" =============================================================\n");
+}
+
+/* ======================================================================= */
+
+#include "syscalltab.h"
+
+struct ring_node {
+ int evt;
+ int ret_addr;
+ int event;
+ int tra;
+ int pid;
+ unsigned long sp;
+ unsigned long pc;
+};
+
+static struct ring_node event_ring[16];
+static int event_ptr = 0;
+
+void evt_debug(int evt, int ret_addr, int event, int tra, struct pt_regs *regs)
+{
+ int syscallno = tra & 0xff;
+ unsigned long sp;
+ unsigned long stack_bottom;
+ int pid;
+ struct ring_node *rr;
+
+ pid = current->pid;
+ stack_bottom = (unsigned long) current->thread_info;
+ asm volatile("ori r15, 0, %0" : "=r" (sp));
+ rr = event_ring + event_ptr;
+ rr->evt = evt;
+ rr->ret_addr = ret_addr;
+ rr->event = event;
+ rr->tra = tra;
+ rr->pid = pid;
+ rr->sp = sp;
+ rr->pc = regs->pc;
+
+ if (sp < stack_bottom + 3092) {
+ int i, j;
+
+ printk("evt_debug : stack underflow report\n");
+ for (j = 0, i = event_ptr; j < 16; j++) {
+ rr = event_ring + i;
+ printk("evt=%08x event=%08x tra=%08x pid=%5d sp=%08lx pc=%08lx\n",
+ rr->evt, rr->event, rr->tra, rr->pid, rr->sp, rr->pc);
+ i--;
+ i &= 15;
+ }
+ panic("STACK UNDERFLOW\n");
+ }
+
+ event_ptr = (event_ptr + 1) & 15;
+
+ if ((event == 2) && (evt == 0x160)) {
+ if (syscallno < NUM_SYSCALL_INFO_ENTRIES)
+ printk("Task %d: %s()\n",
+ current->pid,
+ syscall_info_table[syscallno].name);
+ }
+}
+
+void evt_debug2(unsigned int ret)
+{
+ printk("Task %d: syscall returns %08x\n", current->pid, ret);
+}
+
+void evt_debug_ret_from_irq(struct pt_regs *regs)
+{
+ int pid;
+ struct ring_node *rr;
+
+ pid = current->pid;
+ rr = event_ring + event_ptr;
+ rr->evt = 0xffff;
+ rr->ret_addr = 0;
+ rr->event = 0;
+ rr->tra = 0;
+ rr->pid = pid;
+ rr->pc = regs->pc;
+ event_ptr = (event_ptr + 1) & 15;
+}
+
+void evt_debug_ret_from_exc(struct pt_regs *regs)
+{
+ int pid;
+ struct ring_node *rr;
+
+ pid = current->pid;
+ rr = event_ring + event_ptr;
+ rr->evt = 0xfffe;
+ rr->ret_addr = 0;
+ rr->event = 0;
+ rr->tra = 0;
+ rr->pid = pid;
+ rr->pc = regs->pc;
+ event_ptr = (event_ptr + 1) & 15;
+}
+
+/* ======================================================================= */
+
+void show_excp_regs(char *from, int trapnr, int signr, struct pt_regs *regs)
+{
+
+ unsigned long long ah, al, bh, bl, ch, cl;
+
+ printk("\n");
+ printk("EXCEPTION - %s: task %d; Linux trap # %d; signal = %d\n",
+ ((from) ? from : "???"), current->pid, trapnr, signr);
+
+ asm volatile ("getcon " __EXPEVT ", %0":"=r"(ah));
+ asm volatile ("getcon " __EXPEVT ", %0":"=r"(al));
+ ah = (ah) >> 32;
+ al = (al) & 0xffffffff;
+ asm volatile ("getcon " __KCR1 ", %0":"=r"(bh));
+ asm volatile ("getcon " __KCR1 ", %0":"=r"(bl));
+ bh = (bh) >> 32;
+ bl = (bl) & 0xffffffff;
+ asm volatile ("getcon " __INTEVT ", %0":"=r"(ch));
+ asm volatile ("getcon " __INTEVT ", %0":"=r"(cl));
+ ch = (ch) >> 32;
+ cl = (cl) & 0xffffffff;
+ printk("EXPE: %08Lx%08Lx KCR1: %08Lx%08Lx INTE: %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ asm volatile ("getcon " __PEXPEVT ", %0":"=r"(ah));
+ asm volatile ("getcon " __PEXPEVT ", %0":"=r"(al));
+ ah = (ah) >> 32;
+ al = (al) & 0xffffffff;
+ asm volatile ("getcon " __PSPC ", %0":"=r"(bh));
+ asm volatile ("getcon " __PSPC ", %0":"=r"(bl));
+ bh = (bh) >> 32;
+ bl = (bl) & 0xffffffff;
+ asm volatile ("getcon " __PSSR ", %0":"=r"(ch));
+ asm volatile ("getcon " __PSSR ", %0":"=r"(cl));
+ ch = (ch) >> 32;
+ cl = (cl) & 0xffffffff;
+ printk("PEXP: %08Lx%08Lx PSPC: %08Lx%08Lx PSSR: %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->pc) >> 32;
+ al = (regs->pc) & 0xffffffff;
+ bh = (regs->regs[18]) >> 32;
+ bl = (regs->regs[18]) & 0xffffffff;
+ ch = (regs->regs[15]) >> 32;
+ cl = (regs->regs[15]) & 0xffffffff;
+ printk("PC : %08Lx%08Lx LINK: %08Lx%08Lx SP : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->sr) >> 32;
+ al = (regs->sr) & 0xffffffff;
+ asm volatile ("getcon " __TEA ", %0":"=r"(bh));
+ asm volatile ("getcon " __TEA ", %0":"=r"(bl));
+ bh = (bh) >> 32;
+ bl = (bl) & 0xffffffff;
+ asm volatile ("getcon " __KCR0 ", %0":"=r"(ch));
+ asm volatile ("getcon " __KCR0 ", %0":"=r"(cl));
+ ch = (ch) >> 32;
+ cl = (cl) & 0xffffffff;
+ printk("SR : %08Lx%08Lx TEA : %08Lx%08Lx KCR0: %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[0]) >> 32;
+ al = (regs->regs[0]) & 0xffffffff;
+ bh = (regs->regs[1]) >> 32;
+ bl = (regs->regs[1]) & 0xffffffff;
+ ch = (regs->regs[2]) >> 32;
+ cl = (regs->regs[2]) & 0xffffffff;
+ printk("R0 : %08Lx%08Lx R1 : %08Lx%08Lx R2 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[3]) >> 32;
+ al = (regs->regs[3]) & 0xffffffff;
+ bh = (regs->regs[4]) >> 32;
+ bl = (regs->regs[4]) & 0xffffffff;
+ ch = (regs->regs[5]) >> 32;
+ cl = (regs->regs[5]) & 0xffffffff;
+ printk("R3 : %08Lx%08Lx R4 : %08Lx%08Lx R5 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[6]) >> 32;
+ al = (regs->regs[6]) & 0xffffffff;
+ bh = (regs->regs[7]) >> 32;
+ bl = (regs->regs[7]) & 0xffffffff;
+ ch = (regs->regs[8]) >> 32;
+ cl = (regs->regs[8]) & 0xffffffff;
+ printk("R6 : %08Lx%08Lx R7 : %08Lx%08Lx R8 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[9]) >> 32;
+ al = (regs->regs[9]) & 0xffffffff;
+ bh = (regs->regs[10]) >> 32;
+ bl = (regs->regs[10]) & 0xffffffff;
+ ch = (regs->regs[11]) >> 32;
+ cl = (regs->regs[11]) & 0xffffffff;
+ printk("R9 : %08Lx%08Lx R10 : %08Lx%08Lx R11 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+ printk("....\n");
+
+ ah = (regs->tregs[0]) >> 32;
+ al = (regs->tregs[0]) & 0xffffffff;
+ bh = (regs->tregs[1]) >> 32;
+ bl = (regs->tregs[1]) & 0xffffffff;
+ ch = (regs->tregs[2]) >> 32;
+ cl = (regs->tregs[2]) & 0xffffffff;
+ printk("T0 : %08Lx%08Lx T1 : %08Lx%08Lx T2 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+ printk("....\n");
+
+ print_dtlb();
+ print_itlb();
+}
+
+/* ======================================================================= */
+
+/*
+** Depending on <base>, scan the Data or Instruction side of the MMU,
+** looking for a valid mapping matching Eaddr & asid.
+** Return -1 if not found, or the TLB entry id otherwise.
+** Note: it works only for 4k pages!
+*/
+static unsigned long
+lookup_mmu_side(unsigned long base, unsigned long Eaddr, unsigned long asid)
+{
+ regType_t pteH;
+ unsigned long epn;
+ int count;
+
+ epn = Eaddr & 0xfffff000;
+
+ for (count = 0; count < MAX_TLBs; count++, base += TLB_STEP) {
+ pteH = getConfigReg(base);
+ if (GET_VALID(pteH))
+ if ((unsigned long) GET_EPN(pteH) == epn)
+ if ((unsigned long) GET_ASID(pteH) == asid)
+ break;
+ }
+ return ((unsigned long) ((count < MAX_TLBs) ? base : -1));
+}
+
+unsigned long lookup_dtlb(unsigned long Eaddr)
+{
+ unsigned long asid = get_asid();
+ return (lookup_mmu_side((u64) DTLB_BASE, Eaddr, asid));
+}
+
+unsigned long lookup_itlb(unsigned long Eaddr)
+{
+ unsigned long asid = get_asid();
+ return (lookup_mmu_side((u64) ITLB_BASE, Eaddr, asid));
+}
+
+void print_page(struct page *page)
+{
+ printk(" page[%p] -> index 0x%lx, count 0x%x, flags 0x%lx\n",
+ page, page->index, page_count(page), page->flags);
+ printk(" address_space = %p, pages =%ld\n", page->mapping,
+ page->mapping->nrpages);
+
+}
--- /dev/null
+/*
+ * Copyright (C) 2000 David J. Mckay (david.mckay@st.com)
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * This file contains the I/O routines for use on the overdrive board
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/io.h>
+#ifdef CONFIG_SH_CAYMAN
+#include <asm/cayman.h>
+#endif
+
+/*
+ * readX/writeX() are used to access memory mapped devices. On some
+ * architectures the memory mapped IO stuff needs to be accessed
+ * differently. On the SuperH architecture, we just read/write the
+ * memory location directly.
+ */
+
+#define dprintk(x...)
+
+static int io_addr(int x) {
+ if (x < 0x400) {
+#ifdef CONFIG_SH_CAYMAN
+ return (x << 2) | smsc_superio_virt;
+#else
+ panic ("Illegal access to I/O port 0x%04x\n", x);
+ return 0;
+#endif
+ } else {
+#ifdef CONFIG_PCI
+ return (x + pciio_virt);
+#else
+ panic ("Illegal access to I/O port 0x%04x\n", x);
+ return 0;
+#endif
+ }
+}
+
+unsigned long inb(unsigned long port)
+{
+ unsigned long r;
+
+ r = ctrl_inb(io_addr(port));
+ dprintk("inb(0x%x)=0x%x (0x%x)\n", port, r, io_addr(port));
+ return r;
+}
+
+unsigned long inw(unsigned long port)
+{
+ unsigned long r;
+
+ r = ctrl_inw(io_addr(port));
+ dprintk("inw(0x%x)=0x%x (0x%x)\n", port, r, io_addr(port));
+ return r;
+}
+
+unsigned long inl(unsigned long port)
+{
+ unsigned long r;
+
+ r = ctrl_inl(io_addr(port));
+ dprintk("inl(0x%x)=0x%x (0x%x)\n", port, r, io_addr(port));
+ return r;
+}
+
+void outb(unsigned long value, unsigned long port)
+{
+ dprintk("outb(0x%x,0x%x) (0x%x)\n", value, port, io_addr(port));
+ ctrl_outb(value, io_addr(port));
+}
+
+void outw(unsigned long value, unsigned long port)
+{
+ dprintk("outw(0x%x,0x%x) (0x%x)\n", value, port, io_addr(port));
+ ctrl_outw(value, io_addr(port));
+}
+
+void outl(unsigned long value, unsigned long port)
+{
+ dprintk("outw(0x%x,0x%x) (0x%x)\n", value, port, io_addr(port));
+ ctrl_outl(value, io_addr(port));
+}
+
+/* This is horrible at the moment - needs more work to do something sensible */
+#define IO_DELAY()
+
+#define OUT_DELAY(x,type) \
+void out##x##_p(unsigned type value,unsigned long port){out##x(value,port);IO_DELAY();}
+
+#define IN_DELAY(x,type) \
+unsigned type in##x##_p(unsigned long port) {unsigned type tmp=in##x(port);IO_DELAY();return tmp;}
+
+#if 1
+OUT_DELAY(b, long)
+OUT_DELAY(w, long)
+OUT_DELAY(l, long)
+IN_DELAY(b, long)
+IN_DELAY(w, long)
+IN_DELAY(l, long)
+#endif
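
Spelled out, each instantiation above expands to an ordinary function; for example, OUT_DELAY(b, long) produces (reformatted for readability):

	void outb_p(unsigned long value, unsigned long port)
	{
		outb(value, port);
		IO_DELAY();
	}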
+/* Now for the string version of these functions */
+void outsb(unsigned long port, const void *addr, unsigned long count)
+{
+ int i;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p++) {
+ outb(*p, port);
+ }
+}
+
+void insb(unsigned long port, void *addr, unsigned long count)
+{
+ int i;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p++) {
+ *p = inb(port);
+ }
+}
+
+/* For the 16 and 32 bit string functions, we have to worry about alignment.
+ * The SH does not do unaligned accesses, so we have to read as bytes and
+ * then write as a word or dword.
+ * This can be optimised a lot more, especially in the case where the data
+ * is aligned
+ */
+
+void outsw(unsigned long port, const void *addr, unsigned long count)
+{
+ int i;
+ unsigned short tmp;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p += 2) {
+ tmp = (*p) | ((*(p + 1)) << 8);
+ outw(tmp, port);
+ }
+}
+
+void insw(unsigned long port, void *addr, unsigned long count)
+{
+ int i;
+ unsigned short tmp;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p += 2) {
+ tmp = inw(port);
+ p[0] = tmp & 0xff;
+ p[1] = (tmp >> 8) & 0xff;
+ }
+}
+
+void outsl(unsigned long port, const void *addr, unsigned long count)
+{
+ int i;
+ unsigned tmp;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p += 4) {
+ tmp = (*p) | ((*(p + 1)) << 8) | ((*(p + 2)) << 16) |
+ ((*(p + 3)) << 24);
+ outl(tmp, port);
+ }
+}
+
+void insl(unsigned long port, void *addr, unsigned long count)
+{
+ int i;
+ unsigned tmp;
+ unsigned char *p = (unsigned char *) addr;
+
+ for (i = 0; i < count; i++, p += 4) {
+ tmp = inl(port);
+ p[0] = tmp & 0xff;
+ p[1] = (tmp >> 8) & 0xff;
+ p[2] = (tmp >> 16) & 0xff;
+ p[3] = (tmp >> 24) & 0xff;
+
+ }
+}
+
+void memcpy_toio(unsigned long to, const void *from, long count)
+{
+ unsigned char *p = (unsigned char *) from;
+
+ while (count) {
+ count--;
+ writeb(*p++, to++);
+ }
+}
+
+void memcpy_fromio(void *to, unsigned long from, long count)
+{
+ int i;
+ unsigned char *p = (unsigned char *) to;
+
+ for (i = 0; i < count; i++) {
+ p[i] = readb(from);
+ from++;
+ }
+}
--- /dev/null
+/*
+ * Copyright (C) 2002 Mark Debbage (Mark.Debbage@superh.com)
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/string.h>
+
+// This is a simplistic optimization of memcpy to increase the
+// granularity of access beyond one byte using aligned
+// loads and stores. This is not an optimal implementation
+// for SH-5 (especially with regard to prefetching and the cache),
+// and a better version should be provided later ...
+
+void *memcpy(void *dest, const void *src, size_t count)
+{
+ char *d = (char *) dest, *s = (char *) src;
+
+ if (count >= 32) {
+ int i = 8 - (((unsigned long) d) & 0x7);
+
+ if (i != 8)
+ while (i-- && count--) {
+ *d++ = *s++;
+ }
+
+ if (((((unsigned long) d) & 0x7) == 0) &&
+ ((((unsigned long) s) & 0x7) == 0)) {
+ while (count >= 32) {
+ unsigned long long t1, t2, t3, t4;
+ t1 = *(unsigned long long *) (s);
+ t2 = *(unsigned long long *) (s + 8);
+ t3 = *(unsigned long long *) (s + 16);
+ t4 = *(unsigned long long *) (s + 24);
+ *(unsigned long long *) (d) = t1;
+ *(unsigned long long *) (d + 8) = t2;
+ *(unsigned long long *) (d + 16) = t3;
+ *(unsigned long long *) (d + 24) = t4;
+ d += 32;
+ s += 32;
+ count -= 32;
+ }
+ while (count >= 8) {
+ *(unsigned long long *) d =
+ *(unsigned long long *) s;
+ d += 8;
+ s += 8;
+ count -= 8;
+ }
+ }
+
+ if (((((unsigned long) d) & 0x3) == 0) &&
+ ((((unsigned long) s) & 0x3) == 0)) {
+ while (count >= 4) {
+ *(unsigned long *) d = *(unsigned long *) s;
+ d += 4;
+ s += 4;
+ count -= 4;
+ }
+ }
+
+ if (((((unsigned long) d) & 0x1) == 0) &&
+ ((((unsigned long) s) & 0x1) == 0)) {
+ while (count >= 2) {
+ *(unsigned short *) d = *(unsigned short *) s;
+ d += 2;
+ s += 2;
+ count -= 2;
+ }
+ }
+ }
+
+ while (count--) {
+ *d++ = *s++;
+ }
+
+ return dest; /* memcpy() must return the original destination */
+}
--- /dev/null
+/*
+ * FIXME: old compatibility stuff, will be removed soon.
+ */
+
+#include <net/checksum.h>
+
+unsigned int csum_partial_copy(const char *src, char *dst, int len, int sum)
+{
+ int src_err = 0, dst_err = 0;
+
+ sum = csum_partial_copy_generic(src, dst, len, sum, &src_err, &dst_err);
+
+ if (src_err || dst_err)
+ printk("old csum_partial_copy_fromuser(), tell mingo to convert me.\n");
+
+ return sum;
+}
--- /dev/null
+/*
+ Copyright 2003 Richard Curnow, SuperH (UK) Ltd.
+
+ This file is subject to the terms and conditions of the GNU General Public
+ License. See the file "COPYING" in the main directory of this archive
+ for more details.
+
+ Tight version of memset for the case of just clearing a page. It turns out
+ that having the allocos spaced out slightly due to the increment/branch
+ pair causes them to contend less for access to the cache. Similarly,
+ keeping the stores apart from the allocos causes less contention. => Do two
+ separate loops. Do multiple stores per loop to amortise the
+ increment/branch cost a little.
+
+ Parameters:
+ r2 : effective address (start of page to clear)
+
+ Always clears 4096 bytes.
+
+*/
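
A C-level model of the store pass (a sketch only, assuming 32-byte cache lines and a 4 KiB page; the alloco pass has no visible effect in C and is omitted):

	/* Pass 1 in the assembly walks the 128 lines (4096/32) doing
	 * alloco; pass 2 then issues four 8-byte stores per line. */
	void page_clear_model(unsigned long long *page)
	{
		int i;

		for (i = 0; i < 4096 / 8; i += 4) {
			page[i + 0] = 0;
			page[i + 1] = 0;
			page[i + 2] = 0;
			page[i + 3] = 0;
		}
	}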
+
+ .section .text..SHmedia32,"ax"
+ .little
+
+ .balign 8
+ .global sh64_page_clear
+sh64_page_clear:
+ pta/l 1f, tr1
+ pta/l 2f, tr2
+ ptabs/l r18, tr0
+
+ movi 4096, r7
+ add r2, r7, r7
+ add r2, r63, r6
+1:
+ alloco r6, 0
+ addi r6, 32, r6
+ bgt/l r7, r6, tr1
+
+ add r2, r63, r6
+2:
+ st.q r6, 0, r63
+ st.q r6, 8, r63
+ st.q r6, 16, r63
+ st.q r6, 24, r63
+ addi r6, 32, r6
+ bgt/l r7, r6, tr2
+
+ blink tr0, r63
+
+
--- /dev/null
+/*
+ Copyright 2003 Richard Curnow, SuperH (UK) Ltd.
+
+ This file is subject to the terms and conditions of the GNU General Public
+ License. See the file "COPYING" in the main directory of this archive
+ for more details.
+
+ Tight version of memcpy for the case of just copying a page.
+ Prefetch strategy empirically optimised against RTL simulations
+ of SH5-101 cut2 eval chip with Cayman board DDR memory.
+
+ Parameters:
+ r2 : source effective address (start of page)
+ r3 : destination effective address (start of page)
+
+ Always copies 4096 bytes.
+
+ Points to review.
+ * Currently the prefetch is 4 lines ahead and the alloco is 2 lines ahead.
+ It seems like the prefetch needs to be at least 4 lines ahead to get
+ the data into the cache in time, and the allocos contend with outstanding
+ prefetches for the same cache set, so it's better to have the numbers
+ different.
+ */
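
The same schedule can be modelled in C with GCC's __builtin_prefetch (a sketch under the distances stated above: prefetch four lines ahead; alloco has no portable C equivalent and is omitted):

	#include <string.h>

	void page_copy_model(char *dst, const char *src)
	{
		int off;

		for (off = 0; off < 4096; off += 32) {
			if (off + 4 * 32 < 4096)
				__builtin_prefetch(src + off + 4 * 32, 0);
			memcpy(dst + off, src + off, 32);	/* one cache line */
		}
	}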
+
+ .section .text..SHmedia32,"ax"
+ .little
+
+ .balign 8
+ .global sh64_page_copy
+sh64_page_copy:
+
+ /* Copy 4096 bytes worth of data from r2 to r3.
+ Do prefetches 4 lines ahead.
+ Do alloco 2 lines ahead */
+
+ pta 1f, tr1
+ pta 2f, tr2
+ pta 3f, tr3
+ ptabs r18, tr0
+
+ ld.q r2, 0x00, r63
+ ld.q r2, 0x20, r63
+ ld.q r2, 0x40, r63
+ ld.q r2, 0x60, r63
+ alloco r3, 0x00
+ alloco r3, 0x20
+
+ movi 3968, r6
+ add r3, r6, r6
+ addi r6, 64, r7
+ addi r7, 64, r8
+ sub r2, r3, r60
+ addi r60, 8, r61
+ addi r61, 8, r62
+ addi r62, 8, r23
+ addi r60, 0x80, r22
+
+/* Minimal code size. The extra branches inside the loop don't cost much
+ because they overlap with the time spent waiting for prefetches to
+ complete. */
+1:
+ bge/u r3, r6, tr2 ! skip prefetch for last 4 lines
+ ldx.q r3, r22, r63 ! prefetch 4 lines hence
+2:
+ bge/u r3, r7, tr3 ! skip alloco for last 2 lines
+ alloco r3, 0x40 ! alloc destination line 2 lines ahead
+3:
+ ldx.q r3, r60, r36
+ ldx.q r3, r61, r37
+ ldx.q r3, r62, r38
+ ldx.q r3, r23, r39
+ st.q r3, 0, r36
+ st.q r3, 8, r37
+ st.q r3, 16, r38
+ st.q r3, 24, r39
+ addi r3, 32, r3
+ bgt/l r8, r3, tr1
+
+ blink tr0, r63 ! return
+
+
--- /dev/null
+/*
+ * Copyright (C) 2003 Richard Curnow, SuperH UK Limited
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <asm/registers.h>
+
+/* THIS IS A PHYSICAL ADDRESS */
+#define HDSP2534_ADDR (0x04002100)
+
+#ifdef CONFIG_SH_CAYMAN
+
+static void poor_mans_delay(void)
+{
+ volatile int i;
+
+ /* poor man's delay; i is volatile so the empty loop is not
+ optimised away */
+ for (i = 0; i < 2500000; i++)
+ ;
+}
+
+static void show_value(unsigned long x)
+{
+ int i;
+ unsigned nibble;
+ for (i = 0; i < 8; i++) {
+ nibble = ((x >> (i * 4)) & 0xf);
+
+ ctrl_outb(nibble + ((nibble > 9) ? 55 : 48),
+ HDSP2534_ADDR + 0xe0 + ((7 - i) << 2));
+ }
+}
+
+#endif
+
+void
+panic_handler(unsigned long panicPC, unsigned long panicSSR,
+ unsigned long panicEXPEVT)
+{
+#ifdef CONFIG_SH_CAYMAN
+ while (1) {
+ /* This piece of code displays the PC on the LED display */
+ show_value(panicPC);
+ poor_mans_delay();
+ show_value(panicSSR);
+ poor_mans_delay();
+ show_value(panicEXPEVT);
+ poor_mans_delay();
+ }
+#endif
+
+ /* Never return from the panic handler */
+ for (;;) ;
+
+}
--- /dev/null
+/*
+ * arch/sh64/lib/udelay.c
+ *
+ * Delay routines, using a pre-computed "loops_per_jiffy" value.
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <asm/param.h>
+
+extern unsigned long loops_per_jiffy;
+
+/*
+ * Use only for very small delays (< 1 msec).
+ *
+ * The active part of our cycle counter is only 32-bits wide, and
+ * we're treating the difference between two marks as signed. On
+ * a 1GHz box, that's about 2 seconds.
+ */
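
The scaling in __udelay() below is 32.32 fixed point: (HZ << 32) / 1000000 converts microseconds to jiffies, lpj converts jiffies to loops, and the final >> 32 drops the fraction. A worked example with illustrative numbers (HZ = 100, loops_per_jiffy = 50000):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long usecs = 100;		/* requested delay */
		unsigned long long hz = 100, lpj = 50000;

		usecs *= ((hz << 32) / 1000000) * lpj;
		/* Ideal: 100us * 100Hz / 1e6 * 50000 = 500 loops; the
		 * truncating division makes this come out as 499. */
		printf("loops = %llu\n", usecs >> 32);
		return 0;
	}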
+
+void __delay(int loops)
+{
+ long long dummy;
+ __asm__ __volatile__("gettr tr0, %1\n\t"
+ "pta $+4, tr0\n\t"
+ "addi %0, -1, %0\n\t"
+ "bne %0, r63, tr0\n\t"
+ "ptabs %1, tr0\n\t":"=r"(loops),
+ "=r"(dummy)
+ :"0"(loops));
+}
+
+void __udelay(unsigned long long usecs, unsigned long lpj)
+{
+ usecs *= (((unsigned long long) HZ << 32) / 1000000) * lpj;
+ __delay((long long) usecs >> 32);
+}
+
+void __ndelay(unsigned long long nsecs, unsigned long lpj)
+{
+ nsecs *= (((unsigned long long) HZ << 32) / 1000000000) * lpj;
+ __delay((long long) nsecs >> 32);
+}
+
+void udelay(unsigned long usecs)
+{
+ __udelay(usecs, loops_per_jiffy);
+}
+
+void ndelay(unsigned long nsecs)
+{
+ __ndelay(nsecs, loops_per_jiffy);
+}
+
--- /dev/null
+#
+# Makefile for the Hitachi Cayman specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+obj-y := setup.o irq.o
+obj-$(CONFIG_HEARTBEAT) += led.o
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mach-cayman/irq.c
+ *
+ * SH-5 Cayman Interrupt Support
+ *
+ * This file handles the board specific parts of the Cayman interrupt system
+ *
+ * Copyright (C) 2002 Stuart Menefy
+ */
+
+#include <linux/config.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+#include <asm/io.h>
+#include <linux/irq.h>
+#include <linux/interrupt.h>
+#include <linux/signal.h>
+#include <asm/cayman.h>
+
+unsigned long epld_virt;
+
+#define EPLD_BASE 0x04002000
+#define EPLD_STATUS_BASE (epld_virt + 0x10)
+#define EPLD_MASK_BASE (epld_virt + 0x20)
+
+/* Note the SMSC SuperIO chip and SMSC LAN chip interrupts are all muxed onto
+ the same SH-5 interrupt */
+
+static irqreturn_t cayman_interrupt_smsc(int irq, void *dev_id, struct pt_regs *regs)
+{
+ printk(KERN_INFO "CAYMAN: spurious SMSC interrupt\n");
+ return IRQ_NONE;
+}
+
+static irqreturn_t cayman_interrupt_pci2(int irq, void *dev_id, struct pt_regs *regs)
+{
+ printk(KERN_INFO "CAYMAN: spurious PCI interrupt, IRQ %d\n", irq);
+ return IRQ_NONE;
+}
+
+static struct irqaction cayman_action_smsc = {
+ .name = "Cayman SMSC Mux",
+ .handler = cayman_interrupt_smsc,
+ .flags = SA_INTERRUPT,
+};
+
+static struct irqaction cayman_action_pci2 = {
+ .name = "Cayman PCI2 Mux",
+ .handler = cayman_interrupt_pci2,
+ .flags = SA_INTERRUPT,
+};
+
+static void enable_cayman_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned long mask;
+ unsigned int reg;
+ unsigned char bit;
+
+ irq -= START_EXT_IRQS;
+ reg = EPLD_MASK_BASE + ((irq / 8) << 2);
+ bit = 1<<(irq % 8);
+ save_and_cli(flags);
+ mask = ctrl_inl(reg);
+ mask |= bit;
+ ctrl_outl(mask, reg);
+ restore_flags(flags);
+}
+
+void disable_cayman_irq(unsigned int irq)
+{
+ unsigned long flags;
+ unsigned long mask;
+ unsigned int reg;
+ unsigned char bit;
+
+ irq -= START_EXT_IRQS;
+ reg = EPLD_MASK_BASE + ((irq / 8) << 2);
+ bit = 1<<(irq % 8);
+ save_and_cli(flags);
+ mask = ctrl_inl(reg);
+ mask &= ~bit;
+ ctrl_outl(mask, reg);
+ restore_flags(flags);
+}
+
+static void ack_cayman_irq(unsigned int irq)
+{
+ disable_cayman_irq(irq);
+}
+
+static void end_cayman_irq(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED|IRQ_INPROGRESS)))
+ enable_cayman_irq(irq);
+}
+
+static unsigned int startup_cayman_irq(unsigned int irq)
+{
+ enable_cayman_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void shutdown_cayman_irq(unsigned int irq)
+{
+ disable_cayman_irq(irq);
+}
+
+struct hw_interrupt_type cayman_irq_type = {
+ .typename = "Cayman-IRQ",
+ .startup = startup_cayman_irq,
+ .shutdown = shutdown_cayman_irq,
+ .enable = enable_cayman_irq,
+ .disable = disable_cayman_irq,
+ .ack = ack_cayman_irq,
+ .end = end_cayman_irq,
+};
+
+int cayman_irq_demux(int evt)
+{
+ int irq = intc_evt_to_irq[evt];
+
+ if (irq == SMSC_IRQ) {
+ unsigned long status;
+ int i;
+
+ status = ctrl_inl(EPLD_STATUS_BASE) &
+ ctrl_inl(EPLD_MASK_BASE) & 0xff;
+ if (status == 0) {
+ irq = -1;
+ } else {
+ for (i=0; i<8; i++) {
+ if (status & (1<<i))
+ break;
+ }
+ irq = START_EXT_IRQS + i;
+ }
+ }
+
+ if (irq == PCI2_IRQ) {
+ unsigned long status;
+ int i;
+
+ status = ctrl_inl(EPLD_STATUS_BASE + 3 * sizeof(u32)) &
+ ctrl_inl(EPLD_MASK_BASE + 3 * sizeof(u32)) & 0xff;
+ if (status == 0) {
+ irq = -1;
+ } else {
+ for (i=0; i<8; i++) {
+ if (status & (1<<i))
+ break;
+ }
+ irq = START_EXT_IRQS + (3 * 8) + i;
+ }
+ }
+
+ return irq;
+}
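
Both scans above simply locate the lowest set bit of status. With the kernel's 1-based ffs() from asm/bitops.h, each loop collapses to a single expression (an equivalent sketch, shown for the SMSC case):

	/* status is known to be non-zero on this path */
	irq = START_EXT_IRQS + ffs(status) - 1;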
+
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_SYSCTL)
+int cayman_irq_describe(char* p, int irq)
+{
+ if (irq < NR_INTC_IRQS) {
+ return intc_irq_describe(p, irq);
+ } else if (irq < NR_INTC_IRQS + 8) {
+ return sprintf(p, "(SMSC %d)", irq - NR_INTC_IRQS);
+ } else if ((irq >= NR_INTC_IRQS + 24) && (irq < NR_INTC_IRQS + 32)) {
+ return sprintf(p, "(PCI2 %d)", irq - (NR_INTC_IRQS + 24));
+ }
+
+ return 0;
+}
+#endif
+
+void init_cayman_irq(void)
+{
+ int i;
+
+ epld_virt = onchip_remap(EPLD_BASE, 1024, "EPLD");
+ if (!epld_virt) {
+ printk(KERN_ERR "Cayman IRQ: Unable to remap EPLD\n");
+ return;
+ }
+
+ for (i=0; i<NR_EXT_IRQS; i++) {
+ irq_desc[START_EXT_IRQS + i].handler = &cayman_irq_type;
+ }
+
+ /* Set up the SMSC and PCI2 mux interrupts */
+ setup_irq(SMSC_IRQ, &cayman_action_smsc);
+ setup_irq(PCI2_IRQ, &cayman_action_pci2);
+}
--- /dev/null
+/*
+ * arch/sh64/mach-cayman/led.c
+ *
+ * Copyright (C) 2002 Stuart Menefy <stuart.menefy@st.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Flash the LEDs
+ */
+#include <asm/io.h>
+
+/*
+** These functions are meant for low-level debugging (via the
+** Cayman LEDs), and hence should be available as soon as
+** possible.
+** Unfortunately the Cayman LEDs rely on the Cayman EPLD being
+** mapped (this happens when IRQs are initialized... quite late).
+** These tricky dependencies should be removed. For now, it
+** may be enough to NOP until the EPLD is mapped.
+*/
+
+extern unsigned long epld_virt;
+
+#define LED_ADDR (epld_virt + 0x008)
+#define HDSP2534_ADDR (epld_virt + 0x100)
+
+void mach_led(int position, int value)
+{
+ if (!epld_virt)
+ return;
+
+ if (value)
+ ctrl_outl(0, LED_ADDR);
+ else
+ ctrl_outl(1, LED_ADDR);
+
+}
+
+void mach_alphanum(int position, unsigned char value)
+{
+ if (!epld_virt)
+ return;
+
+ ctrl_outb(value, HDSP2534_ADDR + 0xe0 + (position << 2));
+}
+
+void mach_alphanum_brightness(int setting)
+{
+ ctrl_outb(setting & 7, HDSP2534_ADDR + 0xc0);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mach-cayman/setup.c
+ *
+ * SH5 Cayman support
+ *
+ * This file handles the architecture-dependent parts of initialization
+ *
+ * Copyright David J. Mckay.
+ * Needs major work!
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 15th May 2003
+ * Use the generic procfs cpuinfo interface, just return a valid board name.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+#include <asm/processor.h>
+#include <asm/platform.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+
+#define RES_COUNT(res) ((sizeof((res))/sizeof(struct resource)))
+
+/*
+ * Platform Dependent Interrupt Priorities.
+ */
+
+/* Using defaults defined in irq.h */
+#define RES NO_PRIORITY /* Disabled */
+#define IR0 IRL0_PRIORITY /* IRLs */
+#define IR1 IRL1_PRIORITY
+#define IR2 IRL2_PRIORITY
+#define IR3 IRL3_PRIORITY
+#define PCA INTA_PRIORITY /* PCI Ints */
+#define PCB INTB_PRIORITY
+#define PCC INTC_PRIORITY
+#define PCD INTD_PRIORITY
+#define SER TOP_PRIORITY
+#define ERR TOP_PRIORITY
+#define PW0 TOP_PRIORITY
+#define PW1 TOP_PRIORITY
+#define PW2 TOP_PRIORITY
+#define PW3 TOP_PRIORITY
+#define DM0 NO_PRIORITY /* DMA Ints */
+#define DM1 NO_PRIORITY
+#define DM2 NO_PRIORITY
+#define DM3 NO_PRIORITY
+#define DAE NO_PRIORITY
+#define TU0 TIMER_PRIORITY /* TMU Ints */
+#define TU1 NO_PRIORITY
+#define TU2 NO_PRIORITY
+#define TI2 NO_PRIORITY
+#define ATI NO_PRIORITY /* RTC Ints */
+#define PRI NO_PRIORITY
+#define CUI RTC_PRIORITY
+#define ERI SCIF_PRIORITY /* SCIF Ints */
+#define RXI SCIF_PRIORITY
+#define BRI SCIF_PRIORITY
+#define TXI SCIF_PRIORITY
+#define ITI TOP_PRIORITY /* WDT Ints */
+
+/* Setup for the SMSC FDC37C935 */
+#define SMSC_SUPERIO_BASE 0x04000000
+#define SMSC_CONFIG_PORT_ADDR 0x3f0
+#define SMSC_INDEX_PORT_ADDR SMSC_CONFIG_PORT_ADDR
+#define SMSC_DATA_PORT_ADDR 0x3f1
+
+#define SMSC_ENTER_CONFIG_KEY 0x55
+#define SMSC_EXIT_CONFIG_KEY 0xaa
+
+#define SMSC_LOGICAL_DEV_INDEX 0x07
+#define SMSC_DEVICE_ID_INDEX 0x20
+#define SMSC_DEVICE_REV_INDEX 0x21
+#define SMSC_ACTIVATE_INDEX 0x30
+#define SMSC_PRIMARY_INT_INDEX 0x70
+#define SMSC_SECONDARY_INT_INDEX 0x72
+
+#define SMSC_KEYBOARD_DEVICE 7
+
+#define SMSC_SUPERIO_READ_INDEXED(index) ({ \
+ outb((index), SMSC_INDEX_PORT_ADDR); \
+ inb(SMSC_DATA_PORT_ADDR); })
+#define SMSC_SUPERIO_WRITE_INDEXED(val, index) ({ \
+ outb((index), SMSC_INDEX_PORT_ADDR); \
+ outb((val), SMSC_DATA_PORT_ADDR); })
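+
+/*
+ * Usage sketch: the SuperIO is programmed through an index/data port
+ * pair. For example, reading the device ID register while in the
+ * configuration state:
+ *
+ *	outb(SMSC_DEVICE_ID_INDEX, SMSC_INDEX_PORT_ADDR);
+ *	devid = inb(SMSC_DATA_PORT_ADDR);
+ *
+ * which is exactly what SMSC_SUPERIO_READ_INDEXED() expands to; see
+ * smsc_superio_setup() below for the full enter/program/exit sequence.
+ */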
+
+unsigned long smsc_superio_virt;
+
+/*
+ * Platform dependent structures: maps and parms block.
+ */
+struct resource io_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource kram_resources[] = {
+ { "Kernel code", 0, 0 }, /* These must be last in the array */
+ { "Kernel data", 0, 0 } /* These must be last in the array */
+};
+
+struct resource xram_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource rom_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct sh64_platform platform_parms = {
+ .readonly_rootfs = 1,
+ .initial_root_dev = 0x0100,
+ .loader_type = 1,
+ .io_res_p = io_resources,
+ .io_res_count = RES_COUNT(io_resources),
+ .kram_res_p = kram_resources,
+ .kram_res_count = RES_COUNT(kram_resources),
+ .xram_res_p = xram_resources,
+ .xram_res_count = RES_COUNT(xram_resources),
+ .rom_res_p = rom_resources,
+ .rom_res_count = RES_COUNT(rom_resources),
+};
+
+int platform_int_priority[NR_INTC_IRQS] = {
+ IR0, IR1, IR2, IR3, PCA, PCB, PCC, PCD, /* IRQ 0- 7 */
+ RES, RES, RES, RES, SER, ERR, PW3, PW2, /* IRQ 8-15 */
+ PW1, PW0, DM0, DM1, DM2, DM3, DAE, RES, /* IRQ 16-23 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 24-31 */
+ TU0, TU1, TU2, TI2, ATI, PRI, CUI, ERI, /* IRQ 32-39 */
+ RXI, BRI, TXI, RES, RES, RES, RES, RES, /* IRQ 40-47 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 48-55 */
+ RES, RES, RES, RES, RES, RES, RES, ITI, /* IRQ 56-63 */
+};
+
+static int __init smsc_superio_setup(void)
+{
+ unsigned char devid, devrev;
+
+ smsc_superio_virt = onchip_remap(SMSC_SUPERIO_BASE, 1024, "SMSC SuperIO");
+ if (!smsc_superio_virt) {
+ panic("Unable to remap SMSC SuperIO\n");
+ }
+
+ /* Initially the chip is in run state */
+ /* Put it into configuration state */
+ outb(SMSC_ENTER_CONFIG_KEY, SMSC_CONFIG_PORT_ADDR);
+ outb(SMSC_ENTER_CONFIG_KEY, SMSC_CONFIG_PORT_ADDR);
+
+ /* Read device ID info */
+ devid = SMSC_SUPERIO_READ_INDEXED(SMSC_DEVICE_ID_INDEX);
+ devrev = SMSC_SUPERIO_READ_INDEXED(SMSC_DEVICE_REV_INDEX);
+	printk(KERN_INFO "SMSC SuperIO devid %02x rev %02x\n", devid, devrev);
+
+ /* Select the keyboard device */
+	SMSC_SUPERIO_WRITE_INDEXED(SMSC_KEYBOARD_DEVICE, SMSC_LOGICAL_DEV_INDEX);
+
+ /* enable it */
+ SMSC_SUPERIO_WRITE_INDEXED(1, SMSC_ACTIVATE_INDEX);
+
+ /* Select the interrupts */
+ /* On a PC keyboard is IRQ1, mouse is IRQ12 */
+ SMSC_SUPERIO_WRITE_INDEXED(1, SMSC_PRIMARY_INT_INDEX);
+ SMSC_SUPERIO_WRITE_INDEXED(12, SMSC_SECONDARY_INT_INDEX);
+
+	/* Exit the configuration state */
+ outb(SMSC_EXIT_CONFIG_KEY, SMSC_CONFIG_PORT_ADDR);
+
+ return 0;
+}
+
+/* This is grotty, but, because the kernel is always referenced on the link
+ * line before any devices, this is safe.
+ */
+__initcall(smsc_superio_setup);
+
+void __init platform_setup(void)
+{
+ /* Cayman platform leaves the decision to head.S, for now */
+ platform_parms.fpu_flags = fpu_in_use;
+}
+
+void __init platform_monitor(void)
+{
+ /* Nothing yet .. */
+}
+
+void __init platform_reserve(void)
+{
+ /* Nothing yet .. */
+}
+
+const char *get_system_type(void)
+{
+ return "Hitachi Cayman";
+}
+
--- /dev/null
+#
+# Makefile for the ST50 Harp specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+O_TARGET := harp.o
+
+obj-y := setup.o
+
+include $(TOPDIR)/Rules.make
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mach-harp/setup.c
+ *
+ * SH-5 Simulator Platform Support
+ *
+ * This file handles the architecture-dependent parts of initialization
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 15th May 2003
+ * Use the generic procfs cpuinfo interface, just return a valid board name.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <asm/processor.h>
+#include <asm/platform.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+
+#define RES_COUNT(res) ((sizeof((res))/sizeof(struct resource)))
+
+/*
+ * Platform Dependent Interrupt Priorities.
+ */
+
+/* Using defaults defined in irq.h */
+#define RES NO_PRIORITY /* Disabled */
+#define IR0 IRL0_PRIORITY /* IRLs */
+#define IR1 IRL1_PRIORITY
+#define IR2 IRL2_PRIORITY
+#define IR3 IRL3_PRIORITY
+#define PCA INTA_PRIORITY /* PCI Ints */
+#define PCB INTB_PRIORITY
+#define PCC INTC_PRIORITY
+#define PCD INTD_PRIORITY
+#define SER TOP_PRIORITY
+#define ERR TOP_PRIORITY
+#define PW0 TOP_PRIORITY
+#define PW1 TOP_PRIORITY
+#define PW2 TOP_PRIORITY
+#define PW3 TOP_PRIORITY
+#define DM0 NO_PRIORITY /* DMA Ints */
+#define DM1 NO_PRIORITY
+#define DM2 NO_PRIORITY
+#define DM3 NO_PRIORITY
+#define DAE NO_PRIORITY
+#define TU0 TIMER_PRIORITY /* TMU Ints */
+#define TU1 NO_PRIORITY
+#define TU2 NO_PRIORITY
+#define TI2 NO_PRIORITY
+#define ATI NO_PRIORITY /* RTC Ints */
+#define PRI NO_PRIORITY
+#define CUI RTC_PRIORITY
+#define ERI SCIF_PRIORITY /* SCIF Ints */
+#define RXI SCIF_PRIORITY
+#define BRI SCIF_PRIORITY
+#define TXI SCIF_PRIORITY
+#define ITI TOP_PRIORITY /* WDT Ints */
+
+/*
+ * Platform dependent structures: maps and parms block.
+ */
+struct resource io_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource kram_resources[] = {
+ { "Kernel code", 0, 0 }, /* These must be last in the array */
+ { "Kernel data", 0, 0 } /* These must be last in the array */
+};
+
+struct resource xram_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource rom_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct sh64_platform platform_parms = {
+ .readonly_rootfs = 1,
+ .initial_root_dev = 0x0100,
+ .loader_type = 1,
+ .io_res_p = io_resources,
+ .io_res_count = RES_COUNT(io_resources),
+ .kram_res_p = kram_resources,
+ .kram_res_count = RES_COUNT(kram_resources),
+ .xram_res_p = xram_resources,
+ .xram_res_count = RES_COUNT(xram_resources),
+ .rom_res_p = rom_resources,
+ .rom_res_count = RES_COUNT(rom_resources),
+};
+
+int platform_int_priority[NR_INTC_IRQS] = {
+ IR0, IR1, IR2, IR3, PCA, PCB, PCC, PCD, /* IRQ 0- 7 */
+ RES, RES, RES, RES, SER, ERR, PW3, PW2, /* IRQ 8-15 */
+ PW1, PW0, DM0, DM1, DM2, DM3, DAE, RES, /* IRQ 16-23 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 24-31 */
+ TU0, TU1, TU2, TI2, ATI, PRI, CUI, ERI, /* IRQ 32-39 */
+ RXI, BRI, TXI, RES, RES, RES, RES, RES, /* IRQ 40-47 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 48-55 */
+ RES, RES, RES, RES, RES, RES, RES, ITI, /* IRQ 56-63 */
+};
+
+void __init platform_setup(void)
+{
+ /* Harp platform leaves the decision to head.S, for now */
+ platform_parms.fpu_flags = fpu_in_use;
+}
+
+void __init platform_monitor(void)
+{
+ /* Nothing yet .. */
+}
+
+void __init platform_reserve(void)
+{
+ /* Nothing yet .. */
+}
+
+const char *get_system_type(void)
+{
+ return "ST50 Harp";
+}
+
--- /dev/null
+#
+# Makefile for the SH-5 ROM/RAM specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+O_TARGET := romram.o
+
+obj-y := setup.o
+
+include $(TOPDIR)/Rules.make
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mach-romram/setup.c
+ *
+ * SH-5 ROM/RAM Platform Support
+ *
+ * This file handles the architecture-dependent parts of initialization
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 15th May 2003
+ * Use the generic procfs cpuinfo interface, just return a valid board name.
+ *
+ * Sean.McGoogan@superh.com 17th Feb 2004
+ * copied from arch/sh64/mach-harp/setup.c
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <asm/processor.h>
+#include <asm/platform.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+
+#define RES_COUNT(res) ((sizeof((res))/sizeof(struct resource)))
+
+/*
+ * Platform Dependent Interrupt Priorities.
+ */
+
+/* Using defaults defined in irq.h */
+#define RES NO_PRIORITY /* Disabled */
+#define IR0 IRL0_PRIORITY /* IRLs */
+#define IR1 IRL1_PRIORITY
+#define IR2 IRL2_PRIORITY
+#define IR3 IRL3_PRIORITY
+#define PCA INTA_PRIORITY /* PCI Ints */
+#define PCB INTB_PRIORITY
+#define PCC INTC_PRIORITY
+#define PCD INTD_PRIORITY
+#define SER TOP_PRIORITY
+#define ERR TOP_PRIORITY
+#define PW0 TOP_PRIORITY
+#define PW1 TOP_PRIORITY
+#define PW2 TOP_PRIORITY
+#define PW3 TOP_PRIORITY
+#define DM0 NO_PRIORITY /* DMA Ints */
+#define DM1 NO_PRIORITY
+#define DM2 NO_PRIORITY
+#define DM3 NO_PRIORITY
+#define DAE NO_PRIORITY
+#define TU0 TIMER_PRIORITY /* TMU Ints */
+#define TU1 NO_PRIORITY
+#define TU2 NO_PRIORITY
+#define TI2 NO_PRIORITY
+#define ATI NO_PRIORITY /* RTC Ints */
+#define PRI NO_PRIORITY
+#define CUI RTC_PRIORITY
+#define ERI SCIF_PRIORITY /* SCIF Ints */
+#define RXI SCIF_PRIORITY
+#define BRI SCIF_PRIORITY
+#define TXI SCIF_PRIORITY
+#define ITI TOP_PRIORITY /* WDT Ints */
+
+/*
+ * Platform dependent structures: maps and parms block.
+ */
+struct resource io_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource kram_resources[] = {
+ { "Kernel code", 0, 0 }, /* These must be last in the array */
+ { "Kernel data", 0, 0 } /* These must be last in the array */
+};
+
+struct resource xram_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct resource rom_resources[] = {
+ /* To be updated with external devices */
+};
+
+struct sh64_platform platform_parms = {
+ .readonly_rootfs = 1,
+ .initial_root_dev = 0x0100,
+ .loader_type = 1,
+ .io_res_p = io_resources,
+ .io_res_count = RES_COUNT(io_resources),
+ .kram_res_p = kram_resources,
+ .kram_res_count = RES_COUNT(kram_resources),
+ .xram_res_p = xram_resources,
+ .xram_res_count = RES_COUNT(xram_resources),
+ .rom_res_p = rom_resources,
+ .rom_res_count = RES_COUNT(rom_resources),
+};
+
+int platform_int_priority[NR_INTC_IRQS] = {
+ IR0, IR1, IR2, IR3, PCA, PCB, PCC, PCD, /* IRQ 0- 7 */
+ RES, RES, RES, RES, SER, ERR, PW3, PW2, /* IRQ 8-15 */
+ PW1, PW0, DM0, DM1, DM2, DM3, DAE, RES, /* IRQ 16-23 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 24-31 */
+ TU0, TU1, TU2, TI2, ATI, PRI, CUI, ERI, /* IRQ 32-39 */
+ RXI, BRI, TXI, RES, RES, RES, RES, RES, /* IRQ 40-47 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 48-55 */
+ RES, RES, RES, RES, RES, RES, RES, ITI, /* IRQ 56-63 */
+};
+
+void __init platform_setup(void)
+{
+ /* ROM/RAM platform leaves the decision to head.S, for now */
+ platform_parms.fpu_flags = fpu_in_use;
+}
+
+void __init platform_monitor(void)
+{
+ /* Nothing yet .. */
+}
+
+void __init platform_reserve(void)
+{
+ /* Nothing yet .. */
+}
+
+const char *get_system_type(void)
+{
+ return "ROM/RAM";
+}
+
--- /dev/null
+#
+# Makefile for the SH-5 Simulator specific parts of the kernel
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+O_TARGET := sim.o
+
+obj-y := setup.o
+
+include $(TOPDIR)/Rules.make
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mach-sim/setup.c
+ *
+ * ST50 Simulator Platform Support
+ *
+ * This file handles the architecture-dependent parts of initialization
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * lethal@linux-sh.org: 15th May 2003
+ * Use the generic procfs cpuinfo interface, just return a valid board name.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/bootmem.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <asm/addrspace.h>
+#include <asm/processor.h>
+#include <asm/platform.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+
+#ifdef CONFIG_BLK_DEV_INITRD
+#include "../rootfs/rootfs.h"
+#endif
+
+void __init platform_monitor(void);
+void __init platform_setup(void);
+void __init platform_reserve(void);
+
+
+#define PHYS_MEMORY	(CONFIG_MEMORY_SIZE_IN_MB*1024*1024)
+
+#if (PHYS_MEMORY < P1SEG_FOOTPRINT_RAM)
+#error "Invalid kernel configuration. Physical memory below footprint requirements."
+#endif
+
+#define RAM_DISK_START	(CONFIG_MEMORY_START+P1SEG_INITRD_BLOCK)	/* Top of 4MB */
+#ifdef PLATFORM_ROMFS_SIZE
+#define RAM_DISK_SIZE (PAGE_ALIGN(PLATFORM_ROMFS_SIZE)) /* Variable Top */
+#if ((RAM_DISK_START + RAM_DISK_SIZE) > (CONFIG_MEMORY_START + PHYS_MEMORY))
+#error "Invalid kernel configuration. ROM RootFS exceeding physical memory."
+#endif
+#else
+#define RAM_DISK_SIZE P1SEG_INITRD_BLOCK_SIZE /* Top of 4MB */
+#endif
+
+#define RES_COUNT(res) ((sizeof((res))/sizeof(struct resource)))
+
+/*
+ * Platform Dependent Interrupt Priorities.
+ */
+
+/* Using defaults defined in irq.h */
+#define RES NO_PRIORITY /* Disabled */
+#define IR0 IRL0_PRIORITY /* IRLs */
+#define IR1 IRL1_PRIORITY
+#define IR2 IRL2_PRIORITY
+#define IR3 IRL3_PRIORITY
+#define PCA INTA_PRIORITY /* PCI Ints */
+#define PCB INTB_PRIORITY
+#define PCC INTC_PRIORITY
+#define PCD INTD_PRIORITY
+#define SER TOP_PRIORITY
+#define ERR TOP_PRIORITY
+#define PW0 TOP_PRIORITY
+#define PW1 TOP_PRIORITY
+#define PW2 TOP_PRIORITY
+#define PW3 TOP_PRIORITY
+#define DM0 NO_PRIORITY /* DMA Ints */
+#define DM1 NO_PRIORITY
+#define DM2 NO_PRIORITY
+#define DM3 NO_PRIORITY
+#define DAE NO_PRIORITY
+#define TU0 TIMER_PRIORITY /* TMU Ints */
+#define TU1 NO_PRIORITY
+#define TU2 NO_PRIORITY
+#define TI2 NO_PRIORITY
+#define ATI NO_PRIORITY /* RTC Ints */
+#define PRI NO_PRIORITY
+#define CUI RTC_PRIORITY
+#define ERI SCIF_PRIORITY /* SCIF Ints */
+#define RXI SCIF_PRIORITY
+#define BRI SCIF_PRIORITY
+#define TXI SCIF_PRIORITY
+#define ITI TOP_PRIORITY /* WDT Ints */
+
+/*
+ * Platform dependent structures: maps and parms block.
+ */
+struct resource io_resources[] = {
+ /* Nothing yet .. */
+};
+
+struct resource kram_resources[] = {
+ { "Kernel code", 0, 0 }, /* These must be last in the array */
+ { "Kernel data", 0, 0 } /* These must be last in the array */
+};
+
+struct resource xram_resources[] = {
+ /* Nothing yet .. */
+};
+
+struct resource rom_resources[] = {
+ /* Nothing yet .. */
+};
+
+struct sh64_platform platform_parms = {
+ .readonly_rootfs = 1,
+ .initial_root_dev = 0x0100,
+ .loader_type = 1,
+ .initrd_start = RAM_DISK_START,
+ .initrd_size = RAM_DISK_SIZE,
+ .io_res_p = io_resources,
+ .io_res_count = RES_COUNT(io_resources),
+ .kram_res_p = kram_resources,
+ .kram_res_count = RES_COUNT(kram_resources),
+ .xram_res_p = xram_resources,
+ .xram_res_count = RES_COUNT(xram_resources),
+ .rom_res_p = rom_resources,
+ .rom_res_count = RES_COUNT(rom_resources),
+};
+
+int platform_int_priority[NR_INTC_IRQS] = {
+ IR0, IR1, IR2, IR3, PCA, PCB, PCC, PCD, /* IRQ 0- 7 */
+ RES, RES, RES, RES, SER, ERR, PW3, PW2, /* IRQ 8-15 */
+ PW1, PW0, DM0, DM1, DM2, DM3, DAE, RES, /* IRQ 16-23 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 24-31 */
+ TU0, TU1, TU2, TI2, ATI, PRI, CUI, ERI, /* IRQ 32-39 */
+ RXI, BRI, TXI, RES, RES, RES, RES, RES, /* IRQ 40-47 */
+ RES, RES, RES, RES, RES, RES, RES, RES, /* IRQ 48-55 */
+ RES, RES, RES, RES, RES, RES, RES, ITI, /* IRQ 56-63 */
+};
+
+void __init platform_setup(void)
+{
+ /* Simulator platform leaves the decision to head.S */
+ platform_parms.fpu_flags = fpu_in_use;
+}
+
+void __init platform_monitor(void)
+{
+ /* Nothing yet .. */
+}
+
+void __init platform_reserve(void)
+{
+ /* Nothing yet .. */
+}
+
+const char *get_system_type(void)
+{
+ return "SH-5 Simulator";
+}
+
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000, 2001 Paolo Alberelli
+# Copyright (C) 2003, 2004 Paul Mundt
+#
+# Makefile for the sh64-specific parts of the Linux memory manager.
+#
+# Note! Dependencies are done automagically by 'make dep', which also
+# removes any old dependencies. DON'T put your own dependencies here
+# unless it's something special (ie not a .c file).
+#
+
+obj-y := init.o fault.o ioremap.o extable.o cache.o tlbmiss.o tlb.o
+
+obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
+
+# Special flags for tlbmiss.o. This puts restrictions on the number of
+# caller-save registers that the compiler can target when building this file.
+# This is required because the code is called from a context in entry.S where
+# very few registers have been saved in the exception handler (for speed
+# reasons).
+# The caller save registers that have been saved and which can be used are
+# r2,r3,r4,r5 : argument passing
+# r15, r18 : SP and LINK
+# tr0-4 : allow all caller-save TR's. The compiler seems to be able to make
+# use of them, so it's probably beneficial to performance to save them
+# and have them available for it.
+#
+# The resources not listed below are callee save, i.e. the compiler is free to
+# use any of them and will spill them to the stack itself.
+
+CFLAGS_tlbmiss.o += -ffixed-r7 \
+ -ffixed-r8 -ffixed-r9 -ffixed-r10 -ffixed-r11 -ffixed-r12 \
+ -ffixed-r13 -ffixed-r14 -ffixed-r16 -ffixed-r17 -ffixed-r19 \
+ -ffixed-r20 -ffixed-r21 -ffixed-r22 -ffixed-r23 \
+ -ffixed-r24 -ffixed-r25 -ffixed-r26 -ffixed-r27 \
+ -ffixed-r36 -ffixed-r37 -ffixed-r38 -ffixed-r39 -ffixed-r40 \
+ -ffixed-r41 -ffixed-r42 -ffixed-r43 \
+ -ffixed-r60 -ffixed-r61 -ffixed-r62 \
+ -fomit-frame-pointer
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/cache.c
+ *
+ * Original version Copyright (C) 2000, 2001 Paolo Alberelli
+ * Second version Copyright (C) benedict.gaster@superh.com 2002
+ * Third version Copyright Richard.Curnow@superh.com 2003
+ * Hacks to third version Copyright (C) 2003 Paul Mundt
+ */
+
+/****************************************************************************/
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/threads.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/cache.h>
+#include <asm/tlb.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h> /* for flush_itlb_range */
+
+#include <linux/proc_fs.h>
+
+/* This function is in entry.S */
+extern unsigned long switch_and_save_asid(unsigned long new_asid);
+
+/* Wired TLB entry for the D-cache */
+static unsigned long long dtlb_cache_slot;
+
+/**
+ * sh64_cache_init()
+ *
+ * This is pretty much just a straightforward clone of the SH
+ * detect_cpu_and_cache_system().
+ *
+ * This function is responsible for setting up all of the cache
+ * info dynamically as well as taking care of CPU probing and
+ * setting up the relevant subtype data.
+ *
+ * FIXME: For the time being, we only really support the SH5-101
+ * out of the box, and don't support dynamic probing for things
+ * like the SH5-103 or even cut2 of the SH5-101. Implement this
+ * later!
+ */
+int __init sh64_cache_init(void)
+{
+ /*
+ * First, setup some sane values for the I-cache.
+ */
+ cpu_data->icache.ways = 4;
+ cpu_data->icache.sets = 256;
+ cpu_data->icache.linesz = L1_CACHE_BYTES;
+
+ /*
+ * FIXME: This can probably be cleaned up a bit as well.. for example,
+ * do we really need the way shift _and_ the way_step_shift ?? Judging
+ * by the existing code, I would guess no.. is there any valid reason
+ * why we need to be tracking this around?
+ */
+ cpu_data->icache.way_shift = 13;
+ cpu_data->icache.entry_shift = 5;
+ cpu_data->icache.set_shift = 4;
+ cpu_data->icache.way_step_shift = 16;
+ cpu_data->icache.asid_shift = 2;
+
+ /*
+ * way offset = cache size / associativity, so just don't factor in
+ * associativity in the first place..
+ */
+ cpu_data->icache.way_ofs = cpu_data->icache.sets *
+ cpu_data->icache.linesz;
+
+ cpu_data->icache.asid_mask = 0x3fc;
+ cpu_data->icache.idx_mask = 0x1fe0;
+ cpu_data->icache.epn_mask = 0xffffe000;
+ cpu_data->icache.flags = 0;
+
+ /*
+ * Next, setup some sane values for the D-cache.
+ *
+ * On the SH5, these are pretty consistent with the I-cache settings,
+ * so we just copy over the existing definitions.. these can be fixed
+ * up later, especially if we add runtime CPU probing.
+ *
+ * Though in the meantime it saves us from having to duplicate all of
+ * the above definitions..
+ */
+ cpu_data->dcache = cpu_data->icache;
+
+ /*
+ * Setup any cache-related flags here
+ */
+#if defined(CONFIG_DCACHE_WRITE_THROUGH)
+ set_bit(SH_CACHE_MODE_WT, &(cpu_data->dcache.flags));
+#elif defined(CONFIG_DCACHE_WRITE_BACK)
+ set_bit(SH_CACHE_MODE_WB, &(cpu_data->dcache.flags));
+#endif
+
+ /*
+ * We also need to reserve a slot for the D-cache in the DTLB, so we
+ * do this now ..
+ */
+ dtlb_cache_slot = sh64_get_wired_dtlb_entry();
+
+ return 0;
+}
+
+/*##########################################################################*/
+
+/* From here onwards, a rewrite of the implementation,
+ by Richard.Curnow@superh.com.
+
+ The major changes in this compared to the old version are;
+ 1. use more selective purging through OCBP instead of using ALLOCO to purge
+ by natural replacement. This avoids purging out unrelated cache lines
+ that happen to be in the same set.
+ 2. exploit the APIs copy_user_page and clear_user_page better
+ 3. be more selective about I-cache purging, in particular use invalidate_all
+ more sparingly.
+
+ */
+
+/*##########################################################################
+ SUPPORT FUNCTIONS
+ ##########################################################################*/
+
+/****************************************************************************/
+/* The following group of functions deal with mapping and unmapping a temporary
+ page into the DTLB slot that have been set aside for our exclusive use. */
+/* In order to accomplish this, we use the generic interface for adding and
+ removing a wired slot entry as defined in arch/sh64/mm/tlb.c */
+/****************************************************************************/
+
+static unsigned long slot_own_flags;
+
+static inline void sh64_setup_dtlb_cache_slot(unsigned long eaddr, unsigned long asid, unsigned long paddr)
+{
+ local_irq_save(slot_own_flags);
+ sh64_setup_tlb_slot(dtlb_cache_slot, eaddr, asid, paddr);
+}
+
+static inline void sh64_teardown_dtlb_cache_slot(void)
+{
+ sh64_teardown_tlb_slot(dtlb_cache_slot);
+ local_irq_restore(slot_own_flags);
+}
+
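+/* Typical usage pattern (cf. sh64_dcache_purge_coloured_phy_page() below):
+ *
+ *	sh64_setup_dtlb_cache_slot(eaddr, get_asid(), paddr);
+ *	... purge/copy through the [eaddr, eaddr + PAGE_SIZE) alias ...
+ *	sh64_teardown_dtlb_cache_slot();
+ */
+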
+/****************************************************************************/
+
+#ifndef CONFIG_ICACHE_DISABLED
+
+static void __inline__ sh64_icache_inv_all(void)
+{
+ unsigned long long addr, flag, data;
+ unsigned int flags;
+
+ addr=ICCR0;
+ flag=ICCR0_ICI;
+ data=0;
+
+ /* Make this a critical section for safety (probably not strictly necessary.) */
+ local_irq_save(flags);
+
+	/* Without %1 it gets inexplicably wrong */
+ asm volatile("getcfg %3, 0, %0\n\t"
+ "or %0, %2, %0\n\t"
+ "putcfg %3, 0, %0\n\t"
+ "synci"
+ : "=&r" (data)
+ : "0" (data), "r" (flag), "r" (addr));
+
+ local_irq_restore(flags);
+}
+
+static void sh64_icache_inv_kernel_range(unsigned long start, unsigned long end)
+{
+ /* Invalidate range of addresses [start,end] from the I-cache, where
+ * the addresses lie in the kernel superpage. */
+
+ unsigned long long ullend, addr, aligned_start;
+#if (NEFF == 32)
+ aligned_start = (unsigned long long)(signed long long)(signed long) start;
+#else
+#error "NEFF != 32"
+#endif
+ aligned_start &= L1_CACHE_ALIGN_MASK;
+ addr = aligned_start;
+#if (NEFF == 32)
+ ullend = (unsigned long long) (signed long long) (signed long) end;
+#else
+#error "NEFF != 32"
+#endif
+ while (addr <= ullend) {
+ asm __volatile__ ("icbi %0, 0" : : "r" (addr));
+ addr += L1_CACHE_BYTES;
+ }
+}
+
+static void sh64_icache_inv_user_page(struct vm_area_struct *vma, unsigned long eaddr)
+{
+ /* If we get called, we know that vma->vm_flags contains VM_EXEC.
+ Also, eaddr is page-aligned. */
+
+ unsigned long long addr, end_addr;
+ unsigned long flags = 0;
+ unsigned long running_asid, vma_asid;
+ addr = eaddr;
+ end_addr = addr + PAGE_SIZE;
+
+ /* Check whether we can use the current ASID for the I-cache
+ invalidation. For example, if we're called via
+ access_process_vm->flush_cache_page->here, (e.g. when reading from
+ /proc), 'running_asid' will be that of the reader, not of the
+ victim.
+
+ Also, note the risk that we might get pre-empted between the ASID
+ compare and blocking IRQs, and before we regain control, the
+ pid->ASID mapping changes. However, the whole cache will get
+ invalidated when the mapping is renewed, so the worst that can
+ happen is that the loop below ends up invalidating somebody else's
+ cache entries.
+ */
+
+ running_asid = get_asid();
+ vma_asid = (vma->vm_mm->context & MMU_CONTEXT_ASID_MASK);
+ if (running_asid != vma_asid) {
+ local_irq_save(flags);
+ switch_and_save_asid(vma_asid);
+ }
+ while (addr < end_addr) {
+ /* Worth unrolling a little */
+ asm __volatile__("icbi %0, 0" : : "r" (addr));
+ asm __volatile__("icbi %0, 32" : : "r" (addr));
+ asm __volatile__("icbi %0, 64" : : "r" (addr));
+ asm __volatile__("icbi %0, 96" : : "r" (addr));
+ addr += 128;
+ }
+ if (running_asid != vma_asid) {
+ switch_and_save_asid(running_asid);
+ local_irq_restore(flags);
+ }
+}
+
+/****************************************************************************/
+
+static void sh64_icache_inv_user_page_range(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ /* Used for invalidating big chunks of I-cache, i.e. assume the range
+ is whole pages. If 'start' or 'end' is not page aligned, the code
+ is conservative and invalidates to the ends of the enclosing pages.
+ This is functionally OK, just a performance loss. */
+
+ /* See the comments below in sh64_dcache_purge_user_range() regarding
+ the choice of algorithm. However, for the I-cache option (2) isn't
+ available because there are no physical tags so aliases can't be
+ resolved. The icbi instruction has to be used through the user
+	   mapping.  Because icbi is cheaper than ocbp on a cache hit, the
+	   selective loop stays cheaper than a full invalidate up to a larger
+	   range than it does for the D-cache.  Just assume 64 pages for now
+	   as a working figure.
+ */
+
+ int n_pages;
+
+ if (!mm) return;
+
+ n_pages = ((end - start) >> PAGE_SHIFT);
+ if (n_pages >= 64) {
+ sh64_icache_inv_all();
+ } else {
+ unsigned long aligned_start;
+ unsigned long eaddr;
+ unsigned long after_last_page_start;
+ unsigned long mm_asid, current_asid;
+ unsigned long long flags = 0ULL;
+
+ mm_asid = mm->context & MMU_CONTEXT_ASID_MASK;
+ current_asid = get_asid();
+
+ if (mm_asid != current_asid) {
+ /* Switch ASID and run the invalidate loop under cli */
+ local_irq_save(flags);
+ switch_and_save_asid(mm_asid);
+ }
+
+ aligned_start = start & PAGE_MASK;
+ after_last_page_start = PAGE_SIZE + ((end - 1) & PAGE_MASK);
+
+ while (aligned_start < after_last_page_start) {
+ struct vm_area_struct *vma;
+ unsigned long vma_end;
+ vma = find_vma(mm, aligned_start);
+			if (!vma || (aligned_start < vma->vm_start)) {
+				/* In a hole, or no mapping left: step
+				   forward a page and retry */
+ aligned_start += PAGE_SIZE;
+ continue;
+ }
+ vma_end = vma->vm_end;
+ if (vma->vm_flags & VM_EXEC) {
+ /* Executable */
+ eaddr = aligned_start;
+ while (eaddr < vma_end) {
+ sh64_icache_inv_user_page(vma, eaddr);
+ eaddr += PAGE_SIZE;
+ }
+ }
+ aligned_start = vma->vm_end; /* Skip to start of next region */
+ }
+ if (mm_asid != current_asid) {
+ switch_and_save_asid(current_asid);
+ local_irq_restore(flags);
+ }
+ }
+}
+
+static void sh64_icache_inv_user_small_range(struct mm_struct *mm,
+ unsigned long start, int len)
+{
+
+ /* Invalidate a small range of user context I-cache, not necessarily
+ page (or even cache-line) aligned. */
+
+ unsigned long long eaddr = start;
+ unsigned long long eaddr_end = start + len;
+ unsigned long current_asid, mm_asid;
+ unsigned long long flags;
+
+ /* Since this is used inside ptrace, the ASID in the mm context
+ typically won't match current_asid. We'll have to switch ASID to do
+ this. For safety, and given that the range will be small, do all
+ this under cli.
+
+ Note, there is a hazard that the ASID in mm->context is no longer
+ actually associated with mm, i.e. if the mm->context has started a
+ new cycle since mm was last active. However, this is just a
+ performance issue: all that happens is that we invalidate lines
+ belonging to another mm, so the owning process has to refill them
+ when that mm goes live again. mm itself can't have any cache
+ entries because there will have been a flush_cache_all when the new
+ mm->context cycle started. */
+
+ /* Align to start of cache line. Otherwise, suppose len==8 and start
+ was at 32N+28 : the last 4 bytes wouldn't get invalidated. */
+ eaddr = start & L1_CACHE_ALIGN_MASK;
+ eaddr_end = start + len;
+
+ local_irq_save(flags);
+ mm_asid = mm->context & MMU_CONTEXT_ASID_MASK;
+ current_asid = switch_and_save_asid(mm_asid);
+
+ while (eaddr < eaddr_end)
+ {
+ asm __volatile__("icbi %0, 0" : : "r" (eaddr));
+ eaddr += L1_CACHE_BYTES;
+ }
+ switch_and_save_asid(current_asid);
+ local_irq_restore(flags);
+}
+
+static void sh64_icache_inv_current_user_range(unsigned long start, unsigned long end)
+{
+ /* The icbi instruction never raises ITLBMISS. i.e. if there's not a
+ cache hit on the virtual tag the instruction ends there, without a
+ TLB lookup. */
+
+ unsigned long long aligned_start;
+ unsigned long long ull_end;
+ unsigned long long addr;
+
+ ull_end = end;
+
+ /* Just invalidate over the range using the natural addresses. TLB
+ miss handling will be OK (TBC). Since it's for the current process,
+ either we're already in the right ASID context, or the ASIDs have
+ been recycled since we were last active in which case we might just
+ invalidate another processes I-cache entries : no worries, just a
+ performance drop for him. */
+ aligned_start = start & L1_CACHE_ALIGN_MASK;
+ addr = aligned_start;
+ while (addr < ull_end) {
+ asm __volatile__ ("icbi %0, 0" : : "r" (addr));
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ addr += L1_CACHE_BYTES;
+ }
+}
+
+#endif /* !CONFIG_ICACHE_DISABLED */
+
+/****************************************************************************/
+
+#ifndef CONFIG_DCACHE_DISABLED
+
+/* Buffer used as the target of alloco instructions to purge data from cache
+ sets by natural eviction. -- RPC */
+#define DUMMY_ALLOCO_AREA_SIZE (L1_CACHE_SIZE_BYTES + (1024 * 4))
+static unsigned char dummy_alloco_area[DUMMY_ALLOCO_AREA_SIZE] __cacheline_aligned = { 0, };
+
+/****************************************************************************/
+
+static void __inline__ sh64_dcache_purge_sets(int sets_to_purge_base, int n_sets)
+{
+ /* Purge all ways in a particular block of sets, specified by the base
+ set number and number of sets. Can handle wrap-around, if that's
+ needed. */
+
+ int dummy_buffer_base_set;
+ unsigned long long eaddr, eaddr0, eaddr1;
+ int j;
+ int set_offset;
+
+ dummy_buffer_base_set = ((int)&dummy_alloco_area & cpu_data->dcache.idx_mask) >> cpu_data->dcache.entry_shift;
+ set_offset = sets_to_purge_base - dummy_buffer_base_set;
+
+ for (j=0; j<n_sets; j++, set_offset++) {
+ set_offset &= (cpu_data->dcache.sets - 1);
+ eaddr0 = (unsigned long long)dummy_alloco_area + (set_offset << cpu_data->dcache.entry_shift);
+
+ /* Do one alloco which hits the required set per cache way. For
+ write-back mode, this will purge the #ways resident lines. There's
+ little point unrolling this loop because the allocos stall more if
+ they're too close together. */
+ eaddr1 = eaddr0 + cpu_data->dcache.way_ofs * cpu_data->dcache.ways;
+ for (eaddr=eaddr0; eaddr<eaddr1; eaddr+=cpu_data->dcache.way_ofs) {
+ asm __volatile__ ("alloco %0, 0" : : "r" (eaddr));
+ }
+
+ eaddr1 = eaddr0 + cpu_data->dcache.way_ofs * cpu_data->dcache.ways;
+ for (eaddr=eaddr0; eaddr<eaddr1; eaddr+=cpu_data->dcache.way_ofs) {
+ /* Load from each address. Required because alloco is a NOP if
+ the cache is write-through. Write-through is a config option. */
+ if (test_bit(SH_CACHE_MODE_WT, &(cpu_data->dcache.flags)))
+ *(volatile unsigned char *)(int)eaddr;
+ }
+ }
+
+ /* Don't use OCBI to invalidate the lines. That costs cycles directly.
+ If the dummy block is just left resident, it will naturally get
+ evicted as required. */
+}
+
+/****************************************************************************/
+
+static void sh64_dcache_purge_all(void)
+{
+ /* Purge the entire contents of the dcache. The most efficient way to
+ achieve this is to use alloco instructions on a region of unused
+ memory equal in size to the cache, thereby causing the current
+ contents to be discarded by natural eviction. The alternative,
+ namely reading every tag, setting up a mapping for the corresponding
+ page and doing an OCBP for the line, would be much more expensive.
+ */
+
+ sh64_dcache_purge_sets(0, cpu_data->dcache.sets);
+}
+
+/****************************************************************************/
+
+static void sh64_dcache_purge_kernel_range(unsigned long start, unsigned long end)
+{
+ /* Purge the range of addresses [start,end] from the D-cache. The
+ addresses lie in the superpage mapping. There's no harm if we
+ overpurge at either end - just a small performance loss. */
+ unsigned long long ullend, addr, aligned_start;
+#if (NEFF == 32)
+ aligned_start = (unsigned long long)(signed long long)(signed long) start;
+#else
+#error "NEFF != 32"
+#endif
+ aligned_start &= L1_CACHE_ALIGN_MASK;
+ addr = aligned_start;
+#if (NEFF == 32)
+ ullend = (unsigned long long) (signed long long) (signed long) end;
+#else
+#error "NEFF != 32"
+#endif
+ while (addr <= ullend) {
+ asm __volatile__ ("ocbp %0, 0" : : "r" (addr));
+ addr += L1_CACHE_BYTES;
+ }
+ return;
+}
+
+/* Assumes this address and the (2**n_synbits) pages above it aren't used for
+   anything else in the kernel */
+#define MAGIC_PAGE0_START 0xffffffffec000000ULL
+
+static void sh64_dcache_purge_coloured_phy_page(unsigned long paddr, unsigned long eaddr)
+{
+ /* Purge the physical page 'paddr' from the cache. It's known that any
+	   cache lines requiring attention have the same page colour as the
+ address 'eaddr'.
+
+ This relies on the fact that the D-cache matches on physical tags
+ when no virtual tag matches. So we create an alias for the original
+ page and purge through that. (Alternatively, we could have done
+ this by switching ASID to match the original mapping and purged
+ through that, but that involves ASID switching cost + probably a
+ TLBMISS + refill anyway.)
+ */
+
+ unsigned long long magic_page_start;
+ unsigned long long magic_eaddr, magic_eaddr_end;
+
+ magic_page_start = MAGIC_PAGE0_START + (eaddr & CACHE_OC_SYN_MASK);
+
+ /* As long as the kernel is not pre-emptible, this doesn't need to be
+ under cli/sti. */
+
+ sh64_setup_dtlb_cache_slot(magic_page_start, get_asid(), paddr);
+
+ magic_eaddr = magic_page_start;
+ magic_eaddr_end = magic_eaddr + PAGE_SIZE;
+ while (magic_eaddr < magic_eaddr_end) {
+ /* Little point in unrolling this loop - the OCBPs are blocking
+ and won't go any quicker (i.e. the loop overhead is parallel
+ to part of the OCBP execution.) */
+ asm __volatile__ ("ocbp %0, 0" : : "r" (magic_eaddr));
+ magic_eaddr += L1_CACHE_BYTES;
+ }
+
+ sh64_teardown_dtlb_cache_slot();
+}
+
+/****************************************************************************/
+
+static void sh64_dcache_purge_phy_page(unsigned long paddr)
+{
+	/* Purge a page given its physical start address, by creating a
+ temporary 1 page mapping and purging across that. Even if we know
+ the virtual address (& vma or mm) of the page, the method here is
+ more elegant because it avoids issues of coping with page faults on
+ the purge instructions (i.e. no special-case code required in the
+ critical path in the TLB miss handling). */
+
+ unsigned long long eaddr_start, eaddr, eaddr_end;
+ int i;
+
+ /* As long as the kernel is not pre-emptible, this doesn't need to be
+ under cli/sti. */
+
+ eaddr_start = MAGIC_PAGE0_START;
+ for (i=0; i < (1 << CACHE_OC_N_SYNBITS); i++) {
+ sh64_setup_dtlb_cache_slot(eaddr_start, get_asid(), paddr);
+
+ eaddr = eaddr_start;
+ eaddr_end = eaddr + PAGE_SIZE;
+ while (eaddr < eaddr_end) {
+ asm __volatile__ ("ocbp %0, 0" : : "r" (eaddr));
+ eaddr += L1_CACHE_BYTES;
+ }
+
+ sh64_teardown_dtlb_cache_slot();
+ eaddr_start += PAGE_SIZE;
+ }
+}
+
+static void sh64_dcache_purge_virt_page(struct mm_struct *mm, unsigned long eaddr)
+{
+ unsigned long phys;
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ pgd = pgd_offset(mm, eaddr);
+ pmd = pmd_offset(pgd, eaddr);
+
+ if (pmd_none(*pmd) || pmd_bad(*pmd))
+ return;
+
+ pte = pte_offset_kernel(pmd, eaddr);
+ entry = *pte;
+
+ if (pte_none(entry) || !pte_present(entry))
+ return;
+
+ phys = pte_val(entry) & PAGE_MASK;
+
+ sh64_dcache_purge_phy_page(phys);
+}
+
+static void sh64_dcache_purge_user_page(struct mm_struct *mm, unsigned long eaddr)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+ unsigned long paddr;
+
+ /* NOTE : all the callers of this have mm->page_table_lock held, so the
+ following page table traversal is safe even on SMP/pre-emptible. */
+
+ if (!mm) return; /* No way to find physical address of page */
+ pgd = pgd_offset(mm, eaddr);
+ if (pgd_bad(*pgd)) return;
+
+ pmd = pmd_offset(pgd, eaddr);
+ if (pmd_none(*pmd) || pmd_bad(*pmd)) return;
+
+ pte = pte_offset_kernel(pmd, eaddr);
+ entry = *pte;
+ if (pte_none(entry) || !pte_present(entry)) return;
+
+ paddr = pte_val(entry) & PAGE_MASK;
+
+ sh64_dcache_purge_coloured_phy_page(paddr, eaddr);
+}
+
+/****************************************************************************/
+
+static void sh64_dcache_purge_user_range(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+ /* There are at least 5 choices for the implementation of this, with
+ pros (+), cons(-), comments(*):
+
+ 1. ocbp each line in the range through the original user's ASID
+ + no lines spuriously evicted
+ - tlbmiss handling (must either handle faults on demand => extra
+ special-case code in tlbmiss critical path), or map the page in
+ advance (=> flush_tlb_range in advance to avoid multiple hits)
+ - ASID switching
+ - expensive for large ranges
+
+ 2. temporarily map each page in the range to a special effective
+ address and ocbp through the temporary mapping; relies on the
+ fact that SH-5 OCB* always do TLB lookup and match on ptags (they
+ never look at the etags)
+ + no spurious evictions
+ - expensive for large ranges
+ * surely cheaper than (1)
+
+ 3. walk all the lines in the cache, check the tags, if a match
+ occurs create a page mapping to ocbp the line through
+ + no spurious evictions
+ - tag inspection overhead
+ - (especially for small ranges)
+ - potential cost of setting up/tearing down page mapping for
+ every line that matches the range
+ * cost partly independent of range size
+
+ 4. walk all the lines in the cache, check the tags, if a match
+ occurs use 4 * alloco to purge the line (+3 other probably
+ innocent victims) by natural eviction
+ + no tlb mapping overheads
+ - spurious evictions
+ - tag inspection overhead
+
+ 5. implement like flush_cache_all
+ + no tag inspection overhead
+ - spurious evictions
+ - bad for small ranges
+
+ (1) can be ruled out as more expensive than (2). (2) appears best
+ for small ranges. The choice between (3), (4) and (5) for large
+ ranges and the range size for the large/small boundary need
+ benchmarking to determine.
+
+ For now use approach (2) for small ranges and (5) for large ones.
+
+ */
+
+ int n_pages;
+
+ n_pages = ((end - start) >> PAGE_SHIFT);
+ if (n_pages >= 64) {
+#if 1
+ sh64_dcache_purge_all();
+#else
+ unsigned long long set, way;
+ unsigned long mm_asid = mm->context & MMU_CONTEXT_ASID_MASK;
+ for (set = 0; set < cpu_data->dcache.sets; set++) {
+ unsigned long long set_base_config_addr = CACHE_OC_ADDRESS_ARRAY + (set << cpu_data->dcache.set_shift);
+ for (way = 0; way < cpu_data->dcache.ways; way++) {
+ unsigned long long config_addr = set_base_config_addr + (way << cpu_data->dcache.way_step_shift);
+ unsigned long long tag0;
+ unsigned long line_valid;
+
+ asm __volatile__("getcfg %1, 0, %0" : "=r" (tag0) : "r" (config_addr));
+ line_valid = tag0 & SH_CACHE_VALID;
+ if (line_valid) {
+ unsigned long cache_asid;
+ unsigned long epn;
+
+ cache_asid = (tag0 & cpu_data->dcache.asid_mask) >> cpu_data->dcache.asid_shift;
+ /* The next line needs some
+ explanation. The virtual tags
+ encode bits [31:13] of the virtual
+ address, bit [12] of the 'tag' being
+ implied by the cache set index. */
+ epn = (tag0 & cpu_data->dcache.epn_mask) | ((set & 0x80) << cpu_data->dcache.entry_shift);
+
+ if ((cache_asid == mm_asid) && (start <= epn) && (epn < end)) {
+ /* TODO : could optimise this
+ call by batching multiple
+ adjacent sets together. */
+ sh64_dcache_purge_sets(set, 1);
+ break; /* Don't waste time inspecting other ways for this set */
+ }
+ }
+ }
+ }
+#endif
+ } else {
+ /* 'Small' range */
+ unsigned long aligned_start;
+ unsigned long eaddr;
+ unsigned long last_page_start;
+
+ aligned_start = start & PAGE_MASK;
+ /* 'end' is 1 byte beyond the end of the range */
+ last_page_start = (end - 1) & PAGE_MASK;
+
+ eaddr = aligned_start;
+ while (eaddr <= last_page_start) {
+ sh64_dcache_purge_user_page(mm, eaddr);
+ eaddr += PAGE_SIZE;
+ }
+ }
+ return;
+}
+
+static void sh64_dcache_wback_current_user_range(unsigned long start, unsigned long end)
+{
+ unsigned long long aligned_start;
+ unsigned long long ull_end;
+ unsigned long long addr;
+
+ ull_end = end;
+
+ /* Just wback over the range using the natural addresses. TLB miss
+ handling will be OK (TBC) : the range has just been written to by
+ the signal frame setup code, so the PTEs must exist.
+
+ Note, if we have CONFIG_PREEMPT and get preempted inside this loop,
+ it doesn't matter, even if the pid->ASID mapping changes whilst
+ we're away. In that case the cache will have been flushed when the
+ mapping was renewed. So the writebacks below will be nugatory (and
+ we'll doubtless have to fault the TLB entry/ies in again with the
+ new ASID), but it's a rare case.
+ */
+ aligned_start = start & L1_CACHE_ALIGN_MASK;
+ addr = aligned_start;
+ while (addr < ull_end) {
+ asm __volatile__ ("ocbwb %0, 0" : : "r" (addr));
+ addr += L1_CACHE_BYTES;
+ }
+}
+
+#endif /* !CONFIG_DCACHE_DISABLED */
+
+/****************************************************************************/
+
+/* These *MUST* lie in an area of virtual address space that's otherwise unused. */
+#define UNIQUE_EADDR_START 0xe0000000UL
+#define UNIQUE_EADDR_END 0xe8000000UL
+
+static unsigned long sh64_make_unique_eaddr(unsigned long user_eaddr, unsigned long paddr)
+{
+ /* Given a physical address paddr, and a user virtual address
+ user_eaddr which will eventually be mapped to it, create a one-off
+ kernel-private eaddr mapped to the same paddr. This is used for
+ creating special destination pages for copy_user_page and
+ clear_user_page */
+
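+	/* Worked example (mask value assumed for illustration only): if
+	   CACHE_OC_SYN_MASK covered just bit 12, then for a user_eaddr with
+	   bit 12 set the code below yields current_pointer with bit 12
+	   forced to 1, so the kernel alias lands in the same cache sets as
+	   the eventual user mapping. */
+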
+ static unsigned long current_pointer = UNIQUE_EADDR_START;
+ unsigned long coloured_pointer;
+
+ if (current_pointer == UNIQUE_EADDR_END) {
+ sh64_dcache_purge_all();
+ current_pointer = UNIQUE_EADDR_START;
+ }
+
+ coloured_pointer = (current_pointer & ~CACHE_OC_SYN_MASK) | (user_eaddr & CACHE_OC_SYN_MASK);
+ sh64_setup_dtlb_cache_slot(coloured_pointer, get_asid(), paddr);
+
+ current_pointer += (PAGE_SIZE << CACHE_OC_N_SYNBITS);
+
+ return coloured_pointer;
+}
+
+/****************************************************************************/
+
+static void sh64_copy_user_page_coloured(void *to, void *from, unsigned long address)
+{
+ void *coloured_to;
+
+ /* Discard any existing cache entries of the wrong colour. These are
+ present quite often, if the kernel has recently used the page
+ internally, then given it up, then it's been allocated to the user.
+ */
+ sh64_dcache_purge_coloured_phy_page(__pa(to), (unsigned long) to);
+
+ coloured_to = (void *) sh64_make_unique_eaddr(address, __pa(to));
+ sh64_page_copy(from, coloured_to);
+
+ sh64_teardown_dtlb_cache_slot();
+}
+
+static void sh64_clear_user_page_coloured(void *to, unsigned long address)
+{
+ void *coloured_to;
+
+ /* Discard any existing kernel-originated lines of the wrong colour (as
+ above) */
+ sh64_dcache_purge_coloured_phy_page(__pa(to), (unsigned long) to);
+
+ coloured_to = (void *) sh64_make_unique_eaddr(address, __pa(to));
+ sh64_page_clear(coloured_to);
+
+ sh64_teardown_dtlb_cache_slot();
+}
+
+/****************************************************************************/
+
+/*##########################################################################
+ EXTERNALLY CALLABLE API.
+ ##########################################################################*/
+
+/* These functions are described in Documentation/cachetlb.txt.
+ Each one of these functions varies in behaviour depending on whether the
+ I-cache and/or D-cache are configured out.
+
+ Note that the Linux term 'flush' corresponds to what is termed 'purge' in
+ the sh/sh64 jargon for the D-cache, i.e. write back dirty data then
+ invalidate the cache lines, and 'invalidate' for the I-cache.
+ */
+
+#undef FLUSH_TRACE
+
+void flush_cache_all(void)
+{
+ /* Invalidate the entire contents of both caches, after writing back to
+ memory any dirty data from the D-cache. */
+ sh64_dcache_purge_all();
+ sh64_icache_inv_all();
+}
+
+/****************************************************************************/
+
+void flush_cache_mm(struct mm_struct *mm)
+{
+ /* Invalidate an entire user-address space from both caches, after
+ writing back dirty data (e.g. for shared mmap etc). */
+
+ /* This could be coded selectively by inspecting all the tags then
+ doing 4*alloco on any set containing a match (as for
+ flush_cache_range), but fork/exit/execve (where this is called from)
+ are expensive anyway. */
+
+ /* Have to do a purge here, despite the comments re I-cache below.
+ There could be odd-coloured dirty data associated with the mm still
+ in the cache - if this gets written out through natural eviction
+ after the kernel has reused the page there will be chaos.
+ */
+
+ sh64_dcache_purge_all();
+
+ /* The mm being torn down won't ever be active again, so any Icache
+ lines tagged with its ASID won't be visible for the rest of the
+ lifetime of this ASID cycle. Before the ASID gets reused, there
+ will be a flush_cache_all. Hence we don't need to touch the
+ I-cache. This is similar to the lack of action needed in
+ flush_tlb_mm - see fault.c. */
+}
+
+/****************************************************************************/
+
+void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+
+ /* Invalidate (from both caches) the range [start,end) of virtual
+ addresses from the user address space specified by mm, after writing
+ back any dirty data.
+
+ Note(1), 'end' is 1 byte beyond the end of the range to flush.
+
+ Note(2), this is called with mm->page_table_lock held.*/
+
+ sh64_dcache_purge_user_range(mm, start, end);
+ sh64_icache_inv_user_page_range(mm, start, end);
+}
+
+/****************************************************************************/
+
+void flush_cache_page(struct vm_area_struct *vma, unsigned long eaddr)
+{
+ /* Invalidate any entries in either cache for the vma within the user
+ address space vma->vm_mm for the page starting at virtual address
+ 'eaddr'. This seems to be used primarily in breaking COW. Note,
+ the I-cache must be searched too in case the page in question is
+ both writable and being executed from (e.g. stack trampolines.)
+
+ Note(1), this is called with mm->page_table_lock held.
+ */
+
+ sh64_dcache_purge_virt_page(vma->vm_mm, eaddr);
+
+ if (vma->vm_flags & VM_EXEC) {
+ sh64_icache_inv_user_page(vma, eaddr);
+ }
+}
+
+/****************************************************************************/
+
+#ifndef CONFIG_DCACHE_DISABLED
+
+void copy_user_page(void *to, void *from, unsigned long address, struct page *page)
+{
+ /* 'from' and 'to' are kernel virtual addresses (within the superpage
+ mapping of the physical RAM). 'address' is the user virtual address
+ where the copy 'to' will be mapped after. This allows a custom
+ mapping to be used to ensure that the new copy is placed in the
+ right cache sets for the user to see it without having to bounce it
+ out via memory. Note however : the call to flush_page_to_ram in
+ (generic)/mm/memory.c:(break_cow) undoes all this good work in that one
+ very important case!
+
+ TBD : can we guarantee that on every call, any cache entries for
+ 'from' are in the same colour sets as 'address' also? i.e. is this
+ always used just to deal with COW? (I suspect not). */
+
+ /* There are two possibilities here for when the page 'from' was last accessed:
+ * by the kernel : this is OK, no purge required.
+ * by the/a user (e.g. for break_COW) : need to purge.
+
+ If the potential user mapping at 'address' is the same colour as
+ 'from' there is no need to purge any cache lines from the 'from'
+ page mapped into cache sets of colour 'address'. (The copy will be
+ accessing the page through 'from').
+ */
+
+ if (((address ^ (unsigned long) from) & CACHE_OC_SYN_MASK) != 0) {
+ sh64_dcache_purge_coloured_phy_page(__pa(from), address);
+ }
+
+ if (((address ^ (unsigned long) to) & CACHE_OC_SYN_MASK) == 0) {
+ /* No synonym problem on destination */
+ sh64_page_copy(from, to);
+ } else {
+ sh64_copy_user_page_coloured(to, from, address);
+ }
+
+ /* Note, don't need to flush 'from' page from the cache again - it's
+ done anyway by the generic code */
+}
+
+void clear_user_page(void *to, unsigned long address, struct page *page)
+{
+ /* 'to' is a kernel virtual address (within the superpage
+ mapping of the physical RAM). 'address' is the user virtual address
+ where the 'to' page will be mapped after. This allows a custom
+ mapping to be used to ensure that the new copy is placed in the
+ right cache sets for the user to see it without having to bounce it
+ out via memory.
+ */
+
+ if (((address ^ (unsigned long) to) & CACHE_OC_SYN_MASK) == 0) {
+ /* No synonym problem on destination */
+ sh64_page_clear(to);
+ } else {
+ sh64_clear_user_page_coloured(to, address);
+ }
+}
+
+#endif /* !CONFIG_DCACHE_DISABLED */
+
+/****************************************************************************/
+
+void flush_dcache_page(struct page *page)
+{
+ sh64_dcache_purge_phy_page(page_to_phys(page));
+ wmb();
+}
+
+/****************************************************************************/
+
+void flush_icache_range(unsigned long start, unsigned long end)
+{
+	/* Flush the range [start,end] of kernel virtual address space from
+ the I-cache. The corresponding range must be purged from the
+ D-cache also because the SH-5 doesn't have cache snooping between
+ the caches. The addresses will be visible through the superpage
+	   mapping, therefore it's guaranteed that there are no cache entries for
+ the range in cache sets of the wrong colour.
+
+ Primarily used for cohering the I-cache after a module has
+ been loaded. */
+
+ /* We also make sure to purge the same range from the D-cache since
+ flush_page_to_ram() won't be doing this for us! */
+
+ sh64_dcache_purge_kernel_range(start, end);
+ wmb();
+ sh64_icache_inv_kernel_range(start, end);
+}
+
+/****************************************************************************/
+
+void flush_icache_user_range(struct vm_area_struct *vma,
+ struct page *page, unsigned long addr, int len)
+{
+ /* Flush the range of user (defined by vma->vm_mm) address space
+ starting at 'addr' for 'len' bytes from the cache. The range does
+ not straddle a page boundary, the unique physical page containing
+ the range is 'page'. This seems to be used mainly for invalidating
+ an address range following a poke into the program text through the
+ ptrace() call from another process (e.g. for BRK instruction
+ insertion). */
+
+ sh64_dcache_purge_coloured_phy_page(page_to_phys(page), addr);
+ mb();
+
+ if (vma->vm_flags & VM_EXEC) {
+ sh64_icache_inv_user_small_range(vma->vm_mm, addr, len);
+ }
+}
+
+/*##########################################################################
+ ARCH/SH64 PRIVATE CALLABLE API.
+ ##########################################################################*/
+
+void flush_cache_sigtramp(unsigned long start, unsigned long end)
+{
+ /* For the address range [start,end), write back the data from the
+ D-cache and invalidate the corresponding region of the I-cache for
+ the current process. Used to flush signal trampolines on the stack
+ to make them executable. */
+
+ sh64_dcache_wback_current_user_range(start, end);
+ wmb();
+ sh64_icache_inv_current_user_range(start, end);
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/extable.c
+ *
+ * Copyright (C) 2003 Richard Curnow
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * Cloned from the 2.5 SH version..
+ */
+#include <linux/config.h>
+#include <linux/rwsem.h>
+#include <linux/module.h>
+#include <asm/uaccess.h>
+
+extern unsigned long copy_user_memcpy, copy_user_memcpy_end, __copy_user_fixup;
+
+static const struct exception_table_entry __copy_user_fixup_ex = {
+ .fixup = (unsigned long)&__copy_user_fixup,
+};
+
+/* Some functions that may trap due to a bad user-mode address have too many loads
+ and stores in them to make it at all practical to label each one and put them all in
+ the main exception table.
+
+   In particular, the fast memcpy routine is like this. Its fix-up is just to fall back
+ to a slow byte-at-a-time copy, which is handled the conventional way. So it's functionally
+ OK to just handle any trap occurring in the fast memcpy with that fixup. */
+static const struct exception_table_entry *check_exception_ranges(unsigned long addr)
+{
+	if ((addr >= (unsigned long)&copy_user_memcpy) &&
+	    (addr <= (unsigned long)&copy_user_memcpy_end))
+ return &__copy_user_fixup_ex;
+
+ return NULL;
+}
+
+/* Simple binary search */
+const struct exception_table_entry *
+search_extable(const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ unsigned long value)
+{
+ const struct exception_table_entry *mid;
+
+ mid = check_exception_ranges(value);
+ if (mid)
+ return mid;
+
+ while (first <= last) {
+ long diff;
+
+ mid = (last - first) / 2 + first;
+ diff = mid->insn - value;
+ if (diff == 0)
+ return mid;
+ else if (diff < 0)
+ first = mid+1;
+ else
+ last = mid-1;
+ }
+
+ return NULL;
+}
+
+int fixup_exception(struct pt_regs *regs)
+{
+ const struct exception_table_entry *fixup;
+
+ fixup = search_exception_tables(regs->pc);
+ if (fixup) {
+ regs->pc = fixup->fixup;
+ return 1;
+ }
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/fault.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Richard Curnow (/proc/tlb, bug fixes)
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+#include <linux/signal.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/tlb.h>
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/hardirq.h>
+#include <asm/mmu_context.h>
+#include <asm/registers.h> /* required by inline asm statements */
+
+#if defined(CONFIG_SH64_PROC_TLB)
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+/* Count numbers of tlb refills in each region */
+static unsigned long long calls_to_update_mmu_cache = 0ULL;
+static unsigned long long calls_to_flush_tlb_page = 0ULL;
+static unsigned long long calls_to_flush_tlb_range = 0ULL;
+static unsigned long long calls_to_flush_tlb_mm = 0ULL;
+static unsigned long long calls_to_flush_tlb_all = 0ULL;
+unsigned long long calls_to_do_slow_page_fault = 0ULL;
+unsigned long long calls_to_do_fast_page_fault = 0ULL;
+
+/* Count size of ranges for flush_tlb_range */
+static unsigned long long flush_tlb_range_1 = 0ULL;
+static unsigned long long flush_tlb_range_2 = 0ULL;
+static unsigned long long flush_tlb_range_3_4 = 0ULL;
+static unsigned long long flush_tlb_range_5_7 = 0ULL;
+static unsigned long long flush_tlb_range_8_11 = 0ULL;
+static unsigned long long flush_tlb_range_12_15 = 0ULL;
+static unsigned long long flush_tlb_range_16_up = 0ULL;
+
+static unsigned long long page_not_present = 0ULL;
+
+#endif
+
+extern void die(const char *,struct pt_regs *,long);
+
+#define PFLAG(val,flag) (( (val) & (flag) ) ? #flag : "" )
+#define PPROT(flag) PFLAG(pgprot_val(prot),flag)
+
+static inline void print_prots(pgprot_t prot)
+{
+ printk("prot is 0x%08lx\n",pgprot_val(prot));
+
+ printk("%s %s %s %s %s\n",PPROT(_PAGE_SHARED),PPROT(_PAGE_READ),
+ PPROT(_PAGE_EXECUTE),PPROT(_PAGE_WRITE),PPROT(_PAGE_USER));
+}
+
+static inline void print_vma(struct vm_area_struct *vma)
+{
+ printk("vma start 0x%08lx\n", vma->vm_start);
+ printk("vma end 0x%08lx\n", vma->vm_end);
+
+ print_prots(vma->vm_page_prot);
+ printk("vm_flags 0x%08lx\n", vma->vm_flags);
+}
+
+static inline void print_task(struct task_struct *tsk)
+{
+ printk("Task pid %d\n", tsk->pid);
+}
+
+static pte_t *lookup_pte(struct mm_struct *mm, unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ dir = pgd_offset(mm, address);
+ if (pgd_none(*dir)) {
+ return NULL;
+ }
+
+ pmd = pmd_offset(dir, address);
+ if (pmd_none(*pmd)) {
+ return NULL;
+ }
+
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+
+ if (pte_none(entry)) {
+ return NULL;
+ }
+ if (!pte_present(entry)) {
+ return NULL;
+ }
+
+ return pte;
+}
+
+/*
+ * This routine handles page faults. It determines the address,
+ * and the problem, and then passes it off to one of the appropriate
+ * routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
+ unsigned long textaccess, unsigned long address)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ struct vm_area_struct * vma;
+ const struct exception_table_entry *fixup;
+ pte_t *pte;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_do_slow_page_fault;
+#endif
+
+ /* SIM
+ * Note this is now called with interrupts still disabled
+ * This is to cope with being called for a missing IO port
+ * address with interrupts disabled. This should be fixed as
+ * soon as we have a better 'fast path' miss handler.
+ *
+ * Plus take care how you try and debug this stuff.
+ * For example, writing debug data to a port which you
+ * have just faulted on is not going to work.
+ */
+
+ tsk = current;
+ mm = tsk->mm;
+
+ /* Not an IO address, so reenable interrupts */
+ sti();
+
+ /*
+ * If we're in an interrupt or have no user
+ * context, we must not take the fault..
+ */
+ if (in_interrupt() || !mm)
+ goto no_context;
+
+ /* TLB misses upon some cache flushes get done under cli() */
+ down_read(&mm->mmap_sem);
+
+ vma = find_vma(mm, address);
+
+ if (!vma) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+#endif
+ goto bad_area;
+ }
+ if (vma->vm_start <= address) {
+ goto good_area;
+ }
+
+ if (!(vma->vm_flags & VM_GROWSDOWN)) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+
+ print_vma(vma);
+#endif
+ goto bad_area;
+ }
+ if (expand_stack(vma, address)) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+#endif
+ goto bad_area;
+ }
+/*
+ * Ok, we have a good vm_area for this memory access, so
+ * we can handle it..
+ */
+good_area:
+ if (writeaccess) {
+ if (!(vma->vm_flags & VM_WRITE))
+ goto bad_area;
+ } else {
+ if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
+ goto bad_area;
+ }
+
+ if (textaccess) {
+ if (!(vma->vm_flags & VM_EXEC))
+ goto bad_area;
+ }
+
+ /*
+ * If for any reason at all we couldn't handle the fault,
+ * make sure we exit gracefully rather than endlessly redo
+ * the fault.
+ */
+survive:
+ switch (handle_mm_fault(mm, vma, address, writeaccess)) {
+ case 1:
+ tsk->min_flt++;
+ break;
+ case 2:
+ tsk->maj_flt++;
+ break;
+ case 0:
+ goto do_sigbus;
+ default:
+ goto out_of_memory;
+ }
+ /* If we get here, the page fault has been handled. Do the TLB refill
+ now from the newly-setup PTE, to avoid having to fault again right
+ away on the same instruction. */
+ pte = lookup_pte (mm, address);
+ if (!pte) {
+ /* From empirical evidence, we can get here, due to
+ !pte_present(pte). (e.g. if a swap-in occurs, and the page
+ is swapped back out again before the process that wanted it
+ gets rescheduled?) */
+ goto no_pte;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+no_pte:
+
+ up_read(&mm->mmap_sem);
+ return;
+
+/*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+bad_area:
+#ifdef DEBUG_FAULT
+ printk("fault:bad area\n");
+#endif
+ up_read(&mm->mmap_sem);
+
+ if (user_mode(regs)) {
+ printk("user mode bad_area address=%08lx pid=%d (%s) pc=%08lx opcode=%08lx\n",
+ address, current->pid, current->comm,
+ (unsigned long) regs->pc,
+ *(unsigned long *)(u32)(regs->pc & ~3));
+ show_regs(regs);
+ if (tsk->pid == 1) {
+ panic("INIT had user mode bad_area\n");
+ }
+ tsk->thread.address = address;
+ tsk->thread.error_code = writeaccess;
+ force_sig(SIGSEGV, tsk);
+ return;
+ }
+
+no_context:
+#ifdef DEBUG_FAULT
+ printk("fault:No context\n");
+#endif
+ /* Are we prepared to handle this kernel fault? */
+ fixup = search_exception_tables(regs->pc);
+ if (fixup) {
+ regs->pc = fixup->fixup;
+ return;
+ }
+
+/*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ *
+ */
+ if (address < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request");
+ printk(" at virtual address %08lx\n", address);
+ printk(KERN_ALERT "pc = %08Lx%08Lx\n", regs->pc >> 32, regs->pc & 0xffffffff);
+ die("Oops", regs, writeaccess);
+ do_exit(SIGKILL);
+
+/*
+ * We ran out of memory, or some other thing happened to us that made
+ * us unable to handle the page fault gracefully.
+ */
+out_of_memory:
+ printk("fault:Out of memory\n");
+ up_read(&mm->mmap_sem);
+ if (current->pid == 1) {
+ yield();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
+ printk("VM: killing process %s\n", tsk->comm);
+ if (user_mode(regs))
+ do_exit(SIGKILL);
+ goto no_context;
+
+do_sigbus:
+ printk("fault:Do sigbus\n");
+ up_read(&mm->mmap_sem);
+
+ /*
+ * Send a sigbus, regardless of whether we were in kernel
+ * or user mode.
+ */
+ tsk->thread.address = address;
+ tsk->thread.error_code = writeaccess;
+ tsk->thread.trap_no = 14;
+ force_sig(SIGBUS, tsk);
+
+ /* Kernel mode? Handle exceptions or die */
+ if (!user_mode(regs))
+ goto no_context;
+}
+
+
+void flush_tlb_all(void);
+
+void update_mmu_cache(struct vm_area_struct * vma,
+ unsigned long address, pte_t pte)
+{
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_update_mmu_cache;
+#endif
+
+ /*
+ * This appears to get called once for every pte entry that gets
+ * established => I don't think it's efficient to try refilling the
+ * TLBs with the pages - some may not get accessed even. Also, for
+ * executable pages, it is impossible to determine reliably here which
+ * TLB they should be mapped into (or both even).
+ *
+ * So, just do nothing here and handle faults on demand. In the
+ * TLBMISS handling case, the refill is now done anyway after the pte
+ * has been fixed up, so that deals with most useful cases.
+ */
+}
+
+static void __flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ unsigned long long match, pteh=0, lpage;
+ unsigned long tlb;
+ struct mm_struct *mm;
+
+ mm = vma->vm_mm;
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ /*
+ * Sign-extend based on neff.
+ */
+ lpage = (page & NEFF_SIGN) ? (page | NEFF_MASK) : page;
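+ /* Worked example, assuming NEFF == 32: a page address of 0x80000000
+    has NEFF_SIGN set, so it becomes 0xffffffff80000000 here, matching
+    the sign-extended EPNs that getcfg reads back from the TLB. */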
+ match = ((mm->context & MMU_CONTEXT_ASID_MASK) << PTEH_ASID_SHIFT) | PTEH_VALID;
+ match |= lpage;
+
+ /* Do ITLB : don't bother for pages in non-executable VMAs */
+ if (vma->vm_flags & VM_EXEC) {
+ for_each_itlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ if (pteh == match) {
+ __flush_tlb_slot(tlb);
+ break;
+ }
+
+ }
+ }
+
+ /* Do DTLB : any page could potentially be in here. */
+ for_each_dtlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ if (pteh == match) {
+ __flush_tlb_slot(tlb);
+ break;
+ }
+
+ }
+}
+
+void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ unsigned long flags;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_page;
+#endif
+
+ if (vma->vm_mm) {
+ page &= PAGE_MASK;
+ local_irq_save(flags);
+ __flush_tlb_page(vma, page);
+ local_irq_restore(flags);
+ }
+}
+
+void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ unsigned long flags;
+ unsigned long long match, pteh=0, pteh_epn, pteh_low;
+ unsigned long tlb;
+ struct mm_struct *mm;
+
+ mm = vma->vm_mm;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_range;
+
+ {
+ unsigned long size = (end - 1) - start;
+ size >>= 12; /* divide by PAGE_SIZE */
+ size++; /* end=start+4096 => 1 page */
+ switch (size) {
+ case 1 : flush_tlb_range_1++; break;
+ case 2 : flush_tlb_range_2++; break;
+ case 3 ... 4 : flush_tlb_range_3_4++; break;
+ case 5 ... 7 : flush_tlb_range_5_7++; break;
+ case 8 ... 11 : flush_tlb_range_8_11++; break;
+ case 12 ... 15 : flush_tlb_range_12_15++; break;
+ default : flush_tlb_range_16_up++; break;
+ }
+ }
+#endif
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ local_irq_save(flags);
+
+ start &= PAGE_MASK;
+ end &= PAGE_MASK;
+
+ match = ((mm->context & MMU_CONTEXT_ASID_MASK) << PTEH_ASID_SHIFT) | PTEH_VALID;
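+ /* Each PTEH value read back below splits into an EPN (the page-aligned
+    high bits) and a low part holding the ASID and valid bit; 'match'
+    carries only the low part, so the two halves are compared separately. */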
+
+ /* Flush ITLB */
+ for_each_itlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ pteh_epn = pteh & PAGE_MASK;
+ pteh_low = pteh & ~PAGE_MASK;
+
+ if (pteh_low == match && pteh_epn >= start && pteh_epn <= end)
+ __flush_tlb_slot(tlb);
+ }
+
+ /* Flush DTLB */
+ for_each_dtlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ pteh_epn = pteh & PAGE_MASK;
+ pteh_low = pteh & ~PAGE_MASK;
+
+ if (pteh_low == match && pteh_epn >= start && pteh_epn <= end)
+ __flush_tlb_slot(tlb);
+ }
+
+ local_irq_restore(flags);
+}
+
+void flush_tlb_mm(struct mm_struct *mm)
+{
+ unsigned long flags;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_mm;
+#endif
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ local_irq_save(flags);
+
+ mm->context = NO_CONTEXT;
+ if (mm == current->mm)
+ activate_context(mm);
+
+ local_irq_restore(flags);
+
+}
+
+void flush_tlb_all(void)
+{
+ /* Invalidate all, including shared pages, excluding fixed TLBs */
+
+ unsigned long flags, tlb;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_all;
+#endif
+
+ local_irq_save(flags);
+
+ /* Flush each ITLB entry */
+ for_each_itlb_entry(tlb) {
+ __flush_tlb_slot(tlb);
+ }
+
+ /* Flush each DTLB entry */
+ for_each_dtlb_entry(tlb) {
+ __flush_tlb_slot(tlb);
+ }
+
+ local_irq_restore(flags);
+}
+
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+ /* FIXME: Optimize this later.. */
+ flush_tlb_all();
+}
+
+#if defined(CONFIG_SH64_PROC_TLB)
+/* Procfs interface to read the performance information */
+
+static int
+tlb_proc_info(char *buf, char **start, off_t fpos, int length, int *eof, void *data)
+{
+ int len=0;
+ len += sprintf(buf+len, "do_fast_page_fault called %12lld times\n", calls_to_do_fast_page_fault);
+ len += sprintf(buf+len, "do_slow_page_fault called %12lld times\n", calls_to_do_slow_page_fault);
+ len += sprintf(buf+len, "update_mmu_cache called %12lld times\n", calls_to_update_mmu_cache);
+ len += sprintf(buf+len, "flush_tlb_page called %12lld times\n", calls_to_flush_tlb_page);
+ len += sprintf(buf+len, "flush_tlb_range called %12lld times\n", calls_to_flush_tlb_range);
+ len += sprintf(buf+len, "flush_tlb_mm called %12lld times\n", calls_to_flush_tlb_mm);
+ len += sprintf(buf+len, "flush_tlb_all called %12lld times\n", calls_to_flush_tlb_all);
+ len += sprintf(buf+len, "flush_tlb_range_sizes\n"
+ " 1 : %12lld\n"
+ " 2 : %12lld\n"
+ " 3 - 4 : %12lld\n"
+ " 5 - 7 : %12lld\n"
+ " 8 - 11 : %12lld\n"
+ "12 - 15 : %12lld\n"
+ "16+ : %12lld\n",
+ flush_tlb_range_1, flush_tlb_range_2, flush_tlb_range_3_4,
+ flush_tlb_range_5_7, flush_tlb_range_8_11, flush_tlb_range_12_15,
+ flush_tlb_range_16_up);
+ len += sprintf(buf+len, "page not present %12lld times\n", page_not_present);
+ *eof = 1;
+ return len;
+}
+
+static int __init register_proc_tlb(void)
+{
+ create_proc_read_entry("tlb", 0, NULL, tlb_proc_info, NULL);
+ return 0;
+}
+
+__initcall(register_proc_tlb);
+
+#endif
--- /dev/null
+/*
+ * arch/sh64/mm/hugetlbpage.c
+ *
+ * SuperH HugeTLB page support.
+ *
+ * Cloned from sparc64 by Paul Mundt.
+ *
+ * Copyright (C) 2002, 2003 David S. Miller (davem@redhat.com)
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <linux/pagemap.h>
+#include <linux/smp_lock.h>
+#include <linux/slab.h>
+#include <linux/sysctl.h>
+
+#include <asm/mman.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+
+static pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte = NULL;
+
+ pgd = pgd_offset(mm, addr);
+ if (pgd) {
+ pmd = pmd_alloc(mm, pgd, addr);
+ if (pmd)
+ pte = pte_alloc_map(mm, pmd, addr);
+ }
+ return pte;
+}
+
+static pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte = NULL;
+
+ pgd = pgd_offset(mm, addr);
+ if (pgd) {
+ pmd = pmd_offset(pgd, addr);
+ if (pmd)
+ pte = pte_offset_map(pmd, addr);
+ }
+ return pte;
+}
+
+#define mk_pte_huge(entry) do { pte_val(entry) |= _PAGE_SZHUGE; } while (0)
+
+static void set_huge_pte(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page *page, pte_t * page_table, int write_access)
+{
+ unsigned long i;
+ pte_t entry;
+
+ mm->rss += (HPAGE_SIZE / PAGE_SIZE);
+
+ if (write_access)
+ entry = pte_mkwrite(pte_mkdirty(mk_pte(page,
+ vma->vm_page_prot)));
+ else
+ entry = pte_wrprotect(mk_pte(page, vma->vm_page_prot));
+ entry = pte_mkyoung(entry);
+ mk_pte_huge(entry);
+
+ for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
+ set_pte(page_table, entry);
+ page_table++;
+
+ pte_val(entry) += PAGE_SIZE;
+ }
+}
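+
+/* Note (derived from the loops above and below): a huge page is represented
+   as (1 << HUGETLB_PAGE_ORDER) contiguous normal-sized PTEs, each tagged
+   with _PAGE_SZHUGE and stepped by PAGE_SIZE, rather than as one special
+   entry. The copy and unmap paths walk the same stride. */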
+
+/*
+ * This function checks for proper alignment of input addr and len parameters.
+ */
+int is_aligned_hugepage_range(unsigned long addr, unsigned long len)
+{
+ if (len & ~HPAGE_MASK)
+ return -EINVAL;
+ if (addr & ~HPAGE_MASK)
+ return -EINVAL;
+ return 0;
+}
+
+int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ struct vm_area_struct *vma)
+{
+ pte_t *src_pte, *dst_pte, entry;
+ struct page *ptepage;
+ unsigned long addr = vma->vm_start;
+ unsigned long end = vma->vm_end;
+ int i;
+
+ while (addr < end) {
+ dst_pte = huge_pte_alloc(dst, addr);
+ if (!dst_pte)
+ goto nomem;
+ src_pte = huge_pte_offset(src, addr);
+ BUG_ON(!src_pte || pte_none(*src_pte));
+ entry = *src_pte;
+ ptepage = pte_page(entry);
+ get_page(ptepage);
+ for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
+ set_pte(dst_pte, entry);
+ pte_val(entry) += PAGE_SIZE;
+ dst_pte++;
+ }
+ dst->rss += (HPAGE_SIZE / PAGE_SIZE);
+ addr += HPAGE_SIZE;
+ }
+ return 0;
+
+nomem:
+ return -ENOMEM;
+}
+
+int follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct page **pages, struct vm_area_struct **vmas,
+ unsigned long *position, int *length, int i)
+{
+ unsigned long vaddr = *position;
+ int remainder = *length;
+
+ WARN_ON(!is_vm_hugetlb_page(vma));
+
+ while (vaddr < vma->vm_end && remainder) {
+ if (pages) {
+ pte_t *pte;
+ struct page *page;
+
+ pte = huge_pte_offset(mm, vaddr);
+
+ /* hugetlb should be locked, and hence, prefaulted */
+ BUG_ON(!pte || pte_none(*pte));
+
+ page = pte_page(*pte);
+
+ WARN_ON(!PageCompound(page));
+
+ get_page(page);
+ pages[i] = page;
+ }
+
+ if (vmas)
+ vmas[i] = vma;
+
+ vaddr += PAGE_SIZE;
+ --remainder;
+ ++i;
+ }
+
+ *length = remainder;
+ *position = vaddr;
+
+ return i;
+}
+
+struct page *follow_huge_addr(struct mm_struct *mm,
+ unsigned long address, int write)
+{
+ return ERR_PTR(-EINVAL);
+}
+
+int pmd_huge(pmd_t pmd)
+{
+ return 0;
+}
+
+struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+ pmd_t *pmd, int write)
+{
+ return NULL;
+}
+
+void unmap_hugepage_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long address;
+ pte_t *pte;
+ struct page *page;
+ int i;
+
+ BUG_ON(start & (HPAGE_SIZE - 1));
+ BUG_ON(end & (HPAGE_SIZE - 1));
+
+ for (address = start; address < end; address += HPAGE_SIZE) {
+ pte = huge_pte_offset(mm, address);
+ BUG_ON(!pte);
+ if (pte_none(*pte))
+ continue;
+ page = pte_page(*pte);
+ put_page(page);
+ for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
+ pte_clear(pte);
+ pte++;
+ }
+ }
+ mm->rss -= (end - start) >> PAGE_SHIFT;
+ flush_tlb_range(vma, start, end);
+}
+
+int hugetlb_prefault(struct address_space *mapping, struct vm_area_struct *vma)
+{
+ struct mm_struct *mm = current->mm;
+ unsigned long addr;
+ int ret = 0;
+
+ BUG_ON(vma->vm_start & ~HPAGE_MASK);
+ BUG_ON(vma->vm_end & ~HPAGE_MASK);
+
+ spin_lock(&mm->page_table_lock);
+ for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
+ unsigned long idx;
+ pte_t *pte = huge_pte_alloc(mm, addr);
+ struct page *page;
+
+ if (!pte) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ if (!pte_none(*pte))
+ continue;
+
+ idx = ((addr - vma->vm_start) >> HPAGE_SHIFT)
+ + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));
+ page = find_get_page(mapping, idx);
+ if (!page) {
+ /* charge the fs quota first */
+ if (hugetlb_get_quota(mapping)) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ page = alloc_huge_page();
+ if (!page) {
+ hugetlb_put_quota(mapping);
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC);
+ if (!ret) {
+ unlock_page(page);
+ } else {
+ hugetlb_put_quota(mapping);
+ free_huge_page(page);
+ goto out;
+ }
+ }
+ set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE);
+ }
+out:
+ spin_unlock(&mm->page_table_lock);
+ return ret;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/init.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/rwsem.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/bootmem.h>
+
+#include <asm/mmu_context.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlb.h>
+
+#ifdef CONFIG_BLK_DEV_INITRD
+#include <linux/blk.h>
+#endif
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
+
+/*
+ * Cache of MMU context last used.
+ */
+unsigned long mmu_context_cache;
+pgd_t * mmu_pdtp_cache;
+int after_bootmem = 0;
+
+/*
+ * BAD_PAGE is the page that is used for page faults when linux
+ * is out-of-memory. Older versions of linux just did a
+ * do_exit(), but using this instead means there is less risk
+ * for a process dying in kernel mode, possibly leaving an inode
+ * unused etc..
+ *
+ * BAD_PAGETABLE is the accompanying page-table: it is initialized
+ * to point to BAD_PAGE entries.
+ *
+ * ZERO_PAGE is a special page that is used for zero-initialized
+ * data and COW.
+ */
+
+extern unsigned char empty_zero_page[PAGE_SIZE];
+extern unsigned char empty_bad_page[PAGE_SIZE];
+extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+extern char _text, _etext, _edata, __bss_start, _end;
+extern char __init_begin, __init_end;
+
+/* It'd be good if these lines were in the standard header file. */
+#define START_PFN (NODE_DATA(0)->bdata->node_boot_start >> PAGE_SHIFT)
+#define MAX_LOW_PFN (NODE_DATA(0)->bdata->node_low_pfn)
+
+
+void show_mem(void)
+{
+ int i, total = 0, reserved = 0;
+ int shared = 0, cached = 0;
+
+ printk("Mem-info:\n");
+ show_free_areas();
+ printk("Free swap: %6ldkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ i = max_mapnr;
+ while (i-- > 0) {
+ total++;
+ if (PageReserved(mem_map+i))
+ reserved++;
+ else if (PageSwapCache(mem_map+i))
+ cached++;
+ else if (page_count(mem_map+i))
+ shared += page_count(mem_map+i) - 1;
+ }
+ printk("%d pages of RAM\n",total);
+ printk("%d reserved pages\n",reserved);
+ printk("%d pages shared\n",shared);
+ printk("%d pages swap cached\n",cached);
+ printk("%ld pages in page table cache\n",pgtable_cache_size);
+}
+
+/*
+ * paging_init() sets up the page tables.
+ *
+ * head.S already did a lot to set up address translation for the kernel.
+ * Here we arrive with:
+ * . MMU enabled
+ * . ASID set (SR)
+ * . some 512MB regions being mapped of which the most relevant here is:
+ * . CACHED segment (ASID 0 [irrelevant], shared AND NOT user)
+ * . possible variable length regions being mapped as:
+ * . UNCACHED segment (ASID 0 [irrelevant], shared AND NOT user)
+ * . All of the memory regions are placed, independently of the platform,
+ *   at high addresses, above 0x80000000.
+ * . swapper_pg_dir is already cleared out by the .space directive
+ * in any case swapper does not require a real page directory since
+ * it's all kernel contained.
+ *
+ * Those pesky NULL-reference errors in the kernel are then
+ * dealt with by not mapping address 0x00000000 at all.
+ *
+ */
+void __init paging_init(void)
+{
+ unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
+
+ pgd_init((unsigned long)swapper_pg_dir);
+ pgd_init((unsigned long)swapper_pg_dir +
+ sizeof(pgd_t) * USER_PTRS_PER_PGD);
+
+ mmu_context_cache = MMU_CONTEXT_FIRST_VERSION;
+
+ /*
+ * All memory is good as ZONE_NORMAL (fall-through) and ZONE_DMA.
+ */
+ zones_size[ZONE_DMA] = MAX_LOW_PFN - START_PFN;
+
+ free_area_init_node(0, NODE_DATA(0), 0, zones_size, __MEMORY_START >> PAGE_SHIFT, 0);
+
+ /* XXX: MRB-remove - this doesn't seem sane, should this be done somewhere else? */
+ mem_map = NODE_DATA(0)->node_mem_map;
+}
+
+void __init mem_init(void)
+{
+ int codesize, reservedpages, datasize, initsize;
+ int tmp;
+
+ max_mapnr = num_physpages = MAX_LOW_PFN - START_PFN;
+ high_memory = (void *)__va(MAX_LOW_PFN * PAGE_SIZE);
+
+ /*
+ * Clear the zero-page.
+ * This is not required but we might want to re-use
+ * this very page to pass boot parameters, one day.
+ */
+ memset(empty_zero_page, 0, PAGE_SIZE);
+
+ /* this will put all low memory onto the freelists */
+ totalram_pages += free_all_bootmem_node(NODE_DATA(0));
+ reservedpages = 0;
+ for (tmp = 0; tmp < num_physpages; tmp++)
+ /*
+ * Only count reserved RAM pages
+ */
+ if (PageReserved(mem_map+tmp))
+ reservedpages++;
+
+ after_bootmem = 1;
+
+ codesize = (unsigned long) &_etext - (unsigned long) &_text;
+ datasize = (unsigned long) &_edata - (unsigned long) &_etext;
+ initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+
+ printk("Memory: %luk/%luk available (%dk kernel code, %dk reserved, %dk data, %dk init)\n",
+ (unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
+ max_mapnr << (PAGE_SHIFT-10),
+ codesize >> 10,
+ reservedpages << (PAGE_SHIFT-10),
+ datasize >> 10,
+ initsize >> 10);
+}
+
+void free_initmem(void)
+{
+ unsigned long addr;
+
+ addr = (unsigned long)(&__init_begin);
+ for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ set_page_count(virt_to_page(addr), 1);
+ free_page(addr);
+ totalram_pages++;
+ }
+ printk ("Freeing unused kernel memory: %ldk freed\n", (&__init_end - &__init_begin) >> 10);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+ unsigned long p;
+ for (p = start; p < end; p += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(p));
+ set_page_count(virt_to_page(p), 1);
+ free_page(p);
+ totalram_pages++;
+ }
+ printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
+}
+#endif
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/ioremap.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * Mostly derived from arch/sh/mm/ioremap.c which, in turn is mostly
+ * derived from arch/i386/mm/ioremap.c .
+ *
+ * (C) Copyright 1995 1996 Linus Torvalds
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <asm/io.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <linux/ioport.h>
+#include <linux/bootmem.h>
+#include <linux/proc_fs.h>
+
+static void shmedia_mapioaddr(unsigned long, unsigned long);
+static unsigned long shmedia_ioremap(struct resource *, u32, int);
+
+static inline void remap_area_pte(pte_t * pte, unsigned long address, unsigned long size,
+ unsigned long phys_addr, unsigned long flags)
+{
+ unsigned long end;
+ unsigned long pfn;
+ pgprot_t pgprot = __pgprot(_PAGE_PRESENT | _PAGE_READ |
+ _PAGE_WRITE | _PAGE_DIRTY |
+ _PAGE_ACCESSED | _PAGE_SHARED | flags);
+
+ address &= ~PMD_MASK;
+ end = address + size;
+ if (end > PMD_SIZE)
+ end = PMD_SIZE;
+ if (address >= end)
+ BUG();
+
+ pfn = phys_addr >> PAGE_SHIFT;
+
+ pr_debug(" %s: pte %p address %lx size %lx phys_addr %lx\n",
+ __FUNCTION__,pte,address,size,phys_addr);
+
+ do {
+ if (!pte_none(*pte)) {
+ printk("remap_area_pte: page already exists\n");
+ BUG();
+ }
+
+ set_pte(pte, pfn_pte(pfn, pgprot));
+ address += PAGE_SIZE;
+ pfn++;
+ pte++;
+ } while (address && (address < end));
+}
+
+static inline int remap_area_pmd(pmd_t * pmd, unsigned long address, unsigned long size,
+ unsigned long phys_addr, unsigned long flags)
+{
+ unsigned long end;
+
+ address &= ~PGDIR_MASK;
+ end = address + size;
+
+ if (end > PGDIR_SIZE)
+ end = PGDIR_SIZE;
+
+ phys_addr -= address;
+
+ if (address >= end)
+ BUG();
+
+ do {
+ pte_t * pte = pte_alloc_kernel(&init_mm, pmd, address);
+ if (!pte)
+ return -ENOMEM;
+ remap_area_pte(pte, address, end - address, address + phys_addr, flags);
+ address = (address + PMD_SIZE) & PMD_MASK;
+ pmd++;
+ } while (address && (address < end));
+ return 0;
+}
+
+static int remap_area_pages(unsigned long address, unsigned long phys_addr,
+ unsigned long size, unsigned long flags)
+{
+ int error;
+ pgd_t * dir;
+ unsigned long end = address + size;
+
+ phys_addr -= address;
+ dir = pgd_offset_k(address);
+ flush_cache_all();
+ if (address >= end)
+ BUG();
+ spin_lock(&init_mm.page_table_lock);
+ do {
+ pmd_t *pmd = pmd_alloc(&init_mm, dir, address);
+ error = -ENOMEM;
+ if (!pmd)
+ break;
+ if (remap_area_pmd(pmd, address, end - address,
+ phys_addr + address, flags)) {
+ break;
+ }
+ error = 0;
+ address = (address + PGDIR_SIZE) & PGDIR_MASK;
+ dir++;
+ } while (address && (address < end));
+ spin_unlock(&init_mm.page_table_lock);
+ flush_tlb_all();
+ return error;
+}
+
+/*
+ * Generic mapping function (not visible outside):
+ */
+
+/*
+ * Remap an arbitrary physical address space into the kernel virtual
+ * address space. Needed when the kernel wants to access high addresses
+ * directly.
+ *
+ * NOTE! We need to allow non-page-aligned mappings too: we will obviously
+ * have to convert them into an offset in a page-aligned mapping, but the
+ * caller shouldn't need to know that small detail.
+ */
+void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags)
+{
+ void * addr;
+ struct vm_struct * area;
+ unsigned long offset, last_addr;
+
+ /* Don't allow wraparound or zero size */
+ last_addr = phys_addr + size - 1;
+ if (!size || last_addr < phys_addr)
+ return NULL;
+
+ /*
+ * Mappings have to be page-aligned
+ */
+ offset = phys_addr & ~PAGE_MASK;
+ phys_addr &= PAGE_MASK;
+ size = PAGE_ALIGN(last_addr + 1) - phys_addr;
+
+ /*
+ * Ok, go for it..
+ */
+ area = get_vm_area(size, VM_IOREMAP);
+ pr_debug("Get vm_area returns %p addr %p\n",area,area->addr);
+ if (!area)
+ return NULL;
+ area->phys_addr = phys_addr;
+ addr = area->addr;
+ if (remap_area_pages((unsigned long)addr, phys_addr, size, flags)) {
+ vunmap(addr);
+ return NULL;
+ }
+ return (void *) (offset + (char *)addr);
+}
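+
+/* Typical use, with a made-up device address purely for illustration:
+
+	void *regs = __ioremap(0x1d000000, 0x100, 0);
+	if (regs) {
+		... access the device through regs ...
+		iounmap(regs);
+	}
+
+   Note that the returned pointer keeps any sub-page offset of the
+   physical address, per the offset handling above. */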
+
+void iounmap(void *addr)
+{
+ struct vm_struct *area;
+
+ vfree((void *) (PAGE_MASK & (unsigned long) addr));
+ area = remove_vm_area((void *) (PAGE_MASK & (unsigned long) addr));
+ if (!area) {
+ printk(KERN_ERR "iounmap: bad address %p\n", addr);
+ return;
+ }
+
+ kfree(area);
+}
+
+static struct resource shmedia_iomap = {
+ .name = "shmedia_iomap",
+ .start = IOBASE_VADDR,
+ .end = IOBASE_END - 1,
+};
+
+static void shmedia_mapioaddr(unsigned long pa, unsigned long va);
+static void shmedia_unmapioaddr(unsigned long vaddr);
+static unsigned long shmedia_ioremap(struct resource *res, u32 pa, int sz);
+
+/*
+ * We have the same problem as the SPARC, so let's have the same comment:
+ * Our mini-allocator...
+ * Boy this is gross! We need it because we must map I/O for
+ * timers and interrupt controller before the kmalloc is available.
+ */
+
+#define XNMLN 15
+#define XNRES 10
+
+struct xresource {
+ struct resource xres; /* Must be first */
+ int xflag; /* 1 == used */
+ char xname[XNMLN+1];
+};
+
+static struct xresource xresv[XNRES];
+
+static struct xresource *xres_alloc(void)
+{
+ struct xresource *xrp;
+ int n;
+
+ xrp = xresv;
+ for (n = 0; n < XNRES; n++) {
+ if (xrp->xflag == 0) {
+ xrp->xflag = 1;
+ return xrp;
+ }
+ xrp++;
+ }
+ return NULL;
+}
+
+static void xres_free(struct xresource *xrp)
+{
+ xrp->xflag = 0;
+}
+
+static struct resource *shmedia_find_resource(struct resource *root,
+ unsigned long vaddr)
+{
+ struct resource *res;
+
+ for (res = root->child; res; res = res->sibling)
+ if (res->start <= vaddr && res->end >= vaddr)
+ return res;
+
+ return NULL;
+}
+
+static unsigned long shmedia_alloc_io(unsigned long phys, unsigned long size,
+ const char *name)
+{
+ static int printed_full = 0;
+ struct xresource *xres;
+ struct resource *res;
+ char *tack;
+ int tlen;
+
+ if (name == NULL) name = "???";
+
+ if ((xres = xres_alloc()) != 0) {
+ tack = xres->xname;
+ res = &xres->xres;
+ } else {
+ if (!printed_full) {
+ printk("%s: done with statics, switching to kmalloc\n",
+ __FUNCTION__);
+ printed_full = 1;
+ }
+ tlen = strlen(name);
+ tack = kmalloc(sizeof (struct resource) + tlen + 1, GFP_KERNEL);
+ if (!tack)
+ return -ENOMEM;
+ memset(tack, 0, sizeof(struct resource));
+ res = (struct resource *) tack;
+ tack += sizeof (struct resource);
+ }
+
+ strncpy(tack, name, XNMLN);
+ tack[XNMLN] = 0;
+ res->name = tack;
+
+ return shmedia_ioremap(res, phys, size);
+}
+
+static unsigned long shmedia_ioremap(struct resource *res, u32 pa, int sz)
+{
+ unsigned long offset = ((unsigned long) pa) & (~PAGE_MASK);
+ unsigned long round_sz = (offset + sz + PAGE_SIZE-1) & PAGE_MASK;
+ unsigned long va;
+ unsigned int psz;
+
+ if (allocate_resource(&shmedia_iomap, res, round_sz,
+ shmedia_iomap.start, shmedia_iomap.end,
+ PAGE_SIZE, NULL, NULL) != 0) {
+ panic("alloc_io_res(%s): cannot occupy\n",
+ (res->name != NULL)? res->name: "???");
+ }
+
+ va = res->start;
+ pa &= PAGE_MASK;
+
+ psz = (res->end - res->start + (PAGE_SIZE - 1)) / PAGE_SIZE;
+
+ /* log at boot time ... */
+ printk("mapioaddr: %6s [%2d page%s] va 0x%08lx pa 0x%08x\n",
+ ((res->name != NULL) ? res->name : "???"),
+ psz, psz == 1 ? " " : "s", va, pa);
+
+ for (psz = res->end - res->start + 1; psz != 0; psz -= PAGE_SIZE) {
+ shmedia_mapioaddr(pa, va);
+ va += PAGE_SIZE;
+ pa += PAGE_SIZE;
+ }
+
+ res->start += offset;
+ res->end = res->start + sz - 1; /* not strictly necessary.. */
+
+ return res->start;
+}
+
+static void shmedia_free_io(struct resource *res)
+{
+ unsigned long len = res->end - res->start + 1;
+
+ BUG_ON((len & (PAGE_SIZE - 1)) != 0);
+
+ while (len) {
+ len -= PAGE_SIZE;
+ shmedia_unmapioaddr(res->start + len);
+ }
+
+ release_resource(res);
+}
+
+static void *sh64_get_page(void)
+{
+ extern int after_bootmem;
+ void *page;
+
+ if (after_bootmem) {
+ page = (void *)get_zeroed_page(GFP_ATOMIC);
+ } else {
+ page = alloc_bootmem_pages(PAGE_SIZE);
+ }
+
+ if (!page || ((unsigned long)page & ~PAGE_MASK))
+ panic("sh64_get_page: Out of memory already?\n");
+
+ return page;
+}
+
+static void shmedia_mapioaddr(unsigned long pa, unsigned long va)
+{
+ pgd_t *pgdp;
+ pmd_t *pmdp;
+ pte_t *ptep, pte;
+ pgprot_t prot;
+ unsigned long flags = 1; /* 1 = CB0-1 device */
+
+ pr_debug("shmedia_mapiopage pa %08lx va %08lx\n", pa, va);
+
+ pgdp = pgd_offset_k(va);
+ if (pgd_none(*pgdp) || !pgd_present(*pgdp)) {
+ pmdp = (pmd_t *)sh64_get_page();
+ set_pgd(pgdp, __pgd((unsigned long)pmdp | _KERNPG_TABLE));
+ }
+
+ pmdp = pmd_offset(pgdp, va);
+ if (pmd_none(*pmdp) || !pmd_present(*pmdp) ) {
+ ptep = (pte_t *)sh64_get_page();
+ set_pmd(pmdp, __pmd((unsigned long)ptep + _PAGE_TABLE));
+ }
+
+ prot = __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE |
+ _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SHARED | flags);
+
+ pte = pfn_pte(pa >> PAGE_SHIFT, prot);
+ ptep = pte_offset_kernel(pmdp, va);
+
+ if (!pte_none(*ptep) &&
+ pte_val(*ptep) != pte_val(pte))
+ pte_ERROR(*ptep);
+
+ set_pte(ptep, pte);
+
+ flush_tlb_kernel_range(va, va + PAGE_SIZE);
+}
+
+static void shmedia_unmapioaddr(unsigned long vaddr)
+{
+ pgd_t *pgdp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+
+ pgdp = pgd_offset_k(vaddr);
+ pmdp = pmd_offset(pgdp, vaddr);
+
+ if (pmd_none(*pmdp) || pmd_bad(*pmdp))
+ return;
+
+ ptep = pte_offset_kernel(pmdp, vaddr);
+
+ if (pte_none(*ptep) || !pte_present(*ptep))
+ return;
+
+ clear_page((void *)ptep);
+ pte_clear(ptep);
+}
+
+unsigned long onchip_remap(unsigned long phys, unsigned long size, const char *name)
+{
+ if (size < PAGE_SIZE)
+ size = PAGE_SIZE;
+
+ return shmedia_alloc_io(phys, size, name);
+}
+
+void onchip_unmap(unsigned long vaddr)
+{
+ struct resource *res;
+ unsigned int psz;
+
+ res = shmedia_find_resource(&shmedia_iomap, vaddr);
+ if (!res) {
+ printk(KERN_ERR "%s: Failed to free 0x%08lx\n",
+ __FUNCTION__, vaddr);
+ return;
+ }
+
+ psz = (res->end - res->start + (PAGE_SIZE - 1)) / PAGE_SIZE;
+
+ printk(KERN_DEBUG "unmapioaddr: %6s [%2d page%s] freed\n",
+ res->name, psz, psz == 1 ? " " : "s");
+
+ shmedia_free_io(res);
+
+ if ((char *)res >= (char *)xresv &&
+ (char *)res < (char *)&xresv[XNRES]) {
+ xres_free((struct xresource *)res);
+ } else {
+ kfree(res);
+ }
+}
+
+#ifdef CONFIG_PROC_FS
+static int
+ioremap_proc_info(char *buf, char **start, off_t fpos, int length, int *eof,
+ void *data)
+{
+ char *p = buf, *e = buf + length;
+ struct resource *r;
+ const char *nm;
+
+ for (r = ((struct resource *)data)->child; r != NULL; r = r->sibling) {
+ if (p + 32 >= e) /* Better than nothing */
+ break;
+ if ((nm = r->name) == 0) nm = "???";
+ p += sprintf(p, "%08lx-%08lx: %s\n", r->start, r->end, nm);
+ }
+
+ return p-buf;
+}
+#endif /* CONFIG_PROC_FS */
+
+static int __init register_proc_onchip(void)
+{
+#ifdef CONFIG_PROC_FS
+ create_proc_read_entry("io_map",0,0, ioremap_proc_info, &shmedia_iomap);
+#endif
+ return 0;
+}
+
+__initcall(register_proc_onchip);
--- /dev/null
+/*
+ * arch/sh64/mm/tlb.c
+ *
+ * Copyright (C) 2003 Paul Mundt <lethal@linux-sh.org>
+ * Copyright (C) 2003 Richard Curnow <richard.curnow@superh.com>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ */
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <asm/page.h>
+#include <asm/tlb.h>
+#include <asm/mmu_context.h>
+
+/**
+ * sh64_tlb_init
+ *
+ * Perform initial setup for the DTLB and ITLB.
+ */
+int __init sh64_tlb_init(void)
+{
+ /* Assign some sane DTLB defaults */
+ cpu_data->dtlb.entries = 64;
+ cpu_data->dtlb.step = 0x10;
+
+ cpu_data->dtlb.first = DTLB_FIXED | cpu_data->dtlb.step;
+ cpu_data->dtlb.next = cpu_data->dtlb.first;
+
+ cpu_data->dtlb.last = DTLB_FIXED |
+ ((cpu_data->dtlb.entries - 1) *
+ cpu_data->dtlb.step);
+
+ /* And again for the ITLB */
+ cpu_data->itlb.entries = 64;
+ cpu_data->itlb.step = 0x10;
+
+ cpu_data->itlb.first = ITLB_FIXED | cpu_data->itlb.step;
+ cpu_data->itlb.next = cpu_data->itlb.first;
+ cpu_data->itlb.last = ITLB_FIXED |
+ ((cpu_data->itlb.entries - 1) *
+ cpu_data->itlb.step);
+
+ return 0;
+}
+
+/**
+ * sh64_next_free_dtlb_entry
+ *
+ * Find the next available DTLB entry
+ */
+unsigned long long sh64_next_free_dtlb_entry(void)
+{
+ return cpu_data->dtlb.next;
+}
+
+/**
+ * sh64_get_wired_dtlb_entry
+ *
+ * Allocate a wired (locked-in) entry in the DTLB
+ */
+unsigned long long sh64_get_wired_dtlb_entry(void)
+{
+ unsigned long long entry = sh64_next_free_dtlb_entry();
+
+ cpu_data->dtlb.first += cpu_data->dtlb.step;
+ cpu_data->dtlb.next += cpu_data->dtlb.step;
+
+ return entry;
+}
+
+/**
+ * sh64_put_wired_dtlb_entry
+ *
+ * @entry: Address of TLB slot.
+ *
+ * Free a wired (locked-in) entry in the DTLB.
+ *
+ * Works like a stack, last one to allocate must be first one to free.
+ */
+int sh64_put_wired_dtlb_entry(unsigned long long entry)
+{
+ __flush_tlb_slot(entry);
+
+ /*
+ * We don't do any particularly useful tracking of wired entries,
+ * so this approach works like a stack .. last one to be allocated
+ * has to be the first one to be freed.
+ *
+ * We could potentially load wired entries into a list and work on
+ * rebalancing the list periodically (which also entails moving the
+ * contents of a TLB entry) .. though I have a feeling that this is
+ * more trouble than it's worth.
+ */
+
+ /*
+ * Entry must be valid .. we don't want any ITLB addresses!
+ */
+ if (entry <= DTLB_FIXED)
+ return -EINVAL;
+
+ /*
+ * Next, check if we're within range to be freed (i.e., it must be
+ * the entry beneath the first 'free' entry!)
+ */
+ if (entry < (cpu_data->dtlb.first - cpu_data->dtlb.step))
+ return -EINVAL;
+
+ /* If we are, then bring this entry back into the list */
+ cpu_data->dtlb.first -= cpu_data->dtlb.step;
+ cpu_data->dtlb.next = entry;
+
+ return 0;
+}
+
+/**
+ * sh64_setup_tlb_slot
+ *
+ * @config_addr: Address of TLB slot.
+ * @eaddr: Virtual address.
+ * @asid: Address Space Identifier.
+ * @paddr: Physical address.
+ *
+ * Load up a virtual<->physical translation for @eaddr<->@paddr in the
+ * pre-allocated TLB slot @config_addr (see sh64_get_wired_dtlb_entry).
+ */
+inline void sh64_setup_tlb_slot(unsigned long long config_addr,
+ unsigned long eaddr,
+ unsigned long asid,
+ unsigned long paddr)
+{
+ unsigned long long pteh, ptel;
+
+ /* Sign extension */
+#if (NEFF == 32)
+ pteh = (unsigned long long)(signed long long)(signed long) eaddr;
+#else
+#error "Can't sign extend more than 32 bits yet"
+#endif
+ pteh &= PAGE_MASK;
+ pteh |= (asid << PTEH_ASID_SHIFT) | PTEH_VALID;
+#if (NEFF == 32)
+ ptel = (unsigned long long)(signed long long)(signed long) paddr;
+#else
+#error "Can't sign extend more than 32 bits yet"
+#endif
+ ptel &= PAGE_MASK;
+ ptel |= (_PAGE_CACHABLE | _PAGE_READ | _PAGE_WRITE);
+
+ asm volatile("putcfg %0, 1, %1\n\t"
+ "putcfg %0, 0, %2\n"
+ : : "r" (config_addr), "r" (ptel), "r" (pteh));
+}
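+
+/* Sketch of the intended calling sequence for a wired mapping (illustrative
+   only; eaddr/paddr stand for the caller's addresses):
+
+	unsigned long long slot = sh64_get_wired_dtlb_entry();
+	sh64_setup_tlb_slot(slot, eaddr, get_asid(), paddr);
+	...
+	sh64_teardown_tlb_slot(slot);
+	sh64_put_wired_dtlb_entry(slot);
+
+   Since allocation is stack-like, entries must be released in reverse
+   order of allocation. */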
+
+/**
+ * sh64_teardown_tlb_slot
+ *
+ * @config_addr: Address of TLB slot.
+ *
+ * Teardown any existing mapping in the TLB slot @config_addr.
+ */
+inline void sh64_teardown_tlb_slot(unsigned long long config_addr)
+ __attribute__ ((alias("__flush_tlb_slot")));
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/tlbmiss.c
+ *
+ * Original code from fault.c
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * Fast PTE->TLB refill path
+ * Copyright (C) 2003 Richard.Curnow@superh.com
+ *
+ * IMPORTANT NOTES :
+ * The do_fast_page_fault function is called from a context in entry.S where very few registers
+ * have been saved. In particular, the code in this file must be compiled not to use ANY
+ * caller-save registers that are not part of the restricted save set. Also, it means that
+ * code in this file must not make calls to functions elsewhere in the kernel, or else the
+ * excepting context will see corruption in its caller-save registers. Plus, the entry.S save
+ * area is non-reentrant, so this code has to run with SR.BL==1, i.e. no interrupts taken inside
+ * it and panic on any exception.
+ *
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+
+#include <asm/system.h>
+#include <asm/tlb.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/hardirq.h>
+#include <asm/mmu_context.h>
+#include <asm/registers.h> /* required by inline asm statements */
+
+/* Callable from fault.c, so not static */
+inline void __do_tlb_refill(unsigned long address,
+ unsigned long long is_text_not_data, pte_t *pte)
+{
+ unsigned long long ptel;
+ unsigned long long pteh=0;
+ struct tlb_info *tlbp;
+ unsigned long long next;
+
+ /* Get PTEL first */
+ ptel = pte_val(*pte);
+
+ /*
+ * Set PTEH register
+ */
+ pteh = address & MMU_VPN_MASK;
+
+ /* Sign extend based on neff. */
+#if (NEFF == 32)
+ /* Faster sign extension */
+ pteh = (unsigned long long)(signed long long)(signed long)pteh;
+#else
+ /* General case */
+ pteh = (pteh & NEFF_SIGN) ? (pteh | NEFF_MASK) : pteh;
+#endif
+
+ /* Set the ASID. */
+ pteh |= get_asid() << PTEH_ASID_SHIFT;
+ pteh |= PTEH_VALID;
+
+ /* Set PTEL register, set_pte has performed the sign extension */
+ ptel &= _PAGE_FLAGS_HARDWARE_MASK; /* drop software flags */
+ ptel |= _PAGE_FLAGS_HARDWARE_DEFAULT; /* add default flags */
+
+ tlbp = is_text_not_data ? &(cpu_data->itlb) : &(cpu_data->dtlb);
+ next = tlbp->next;
+ __flush_tlb_slot(next);
+ asm volatile ("putcfg %0,1,%2\n\n\t"
+ "putcfg %0,0,%1\n"
+ : : "r" (next), "r" (pteh), "r" (ptel) );
+
+ next += TLB_STEP;
+ if (next > tlbp->last) next = tlbp->first;
+ tlbp->next = next;
+
+}
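+
+/* Summary of what the code above programs (restating the code, for
+   reference): PTEH = sign-extended VPN | (ASID << PTEH_ASID_SHIFT) |
+   PTEH_VALID, and PTEL = PPN | hardware flags. putcfg then writes PTEL
+   to config register 1 and PTEH to config register 0 of the chosen slot,
+   the same order used by sh64_setup_tlb_slot in tlb.c. */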
+
+static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long protection_flags,
+ unsigned long long textaccess,
+ unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ dir = pgd_offset_k(address);
+ pmd = pmd_offset(dir, address);
+
+ if (pmd_none(*pmd)) {
+ return 0;
+ }
+
+ if (pmd_bad(*pmd)) {
+ pmd_clear(pmd);
+ return 0;
+ }
+
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+
+ if (pte_none(entry) || !pte_present(entry)) {
+ return 0;
+ }
+
+ if ((pte_val(entry) & protection_flags) != protection_flags) {
+ return 0;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+ return 1;
+}
+
+static int handle_tlbmiss(struct mm_struct *mm, unsigned long long protection_flags,
+ unsigned long long textaccess,
+ unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ /* NB. The PGD currently only contains a single entry - there is no
+ page table tree stored for the top half of the address space since
+ virtual pages in that region should never be mapped in user mode.
+ (In kernel mode, the only things in that region are the 512Mb super
+ page (locked in), and vmalloc (modules) + I/O device pages (handled
+ by handle_vmalloc_fault), so no PGD for the upper half is required
+ by kernel mode either).
+
+ See how mm->pgd is allocated and initialised in pgd_alloc to see why
+ the next test is necessary. - RPC */
+ if (address >= (unsigned long) TASK_SIZE) {
+ /* upper half - never has page table entries. */
+ return 0;
+ }
+ dir = pgd_offset(mm, address);
+ if (pgd_none(*dir)) {
+ return 0;
+ }
+ if (!pgd_present(*dir)) {
+ return 0;
+ }
+
+ pmd = pmd_offset(dir, address);
+ if (pmd_none(*pmd)) {
+ return 0;
+ }
+ if (!pmd_present(*pmd)) {
+ return 0;
+ }
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+ if (pte_none(entry)) {
+ return 0;
+ }
+ if (!pte_present(entry)) {
+ return 0;
+ }
+
+ /* If the page doesn't have sufficient protection bits set to service the
+ kind of fault being handled, there's not much point doing the TLB refill.
+ Punt the fault to the general handler. */
+ if ((pte_val(entry) & protection_flags) != protection_flags) {
+ return 0;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+ return 1;
+}
+
+/* Put all this information into one structure so that everything is just arithmetic
+ relative to a single base address. This reduces the number of movi/shori pairs needed
+ just to load addresses of static data. */
+struct expevt_lookup {
+ unsigned short protection_flags[8];
+ unsigned char is_text_access[8];
+ unsigned char is_write_access[8];
+};
+
+#define PRU (1<<9)
+#define PRW (1<<8)
+#define PRX (1<<7)
+#define PRR (1<<6)
+
+#define DIRTY (_PAGE_DIRTY | _PAGE_ACCESSED)
+#define YOUNG (_PAGE_ACCESSED)
+
+/* Sized as 8 rather than 4 to allow checking the PTE's PRU bit against whether
+ the fault happened in user mode or privileged mode. */
+static struct expevt_lookup expevt_lookup_table = {
+ .protection_flags = {PRX, PRX, 0, 0, PRR, PRR, PRW, PRW},
+ .is_text_access = {1, 1, 0, 0, 0, 0, 0, 0}
+};
+
+/*
+ This routine handles page faults that can be serviced just by refilling a
+ TLB entry from an existing page table entry. (This case represents a very
+ large majority of page faults.) Return 1 if the fault was successfully
+ handled. Return 0 if the fault could not be handled. (This leads into the
+ general fault handling in fault.c which deals with mapping file-backed
+ pages, stack growth, segmentation faults, swapping etc etc)
+ */
+asmlinkage int do_fast_page_fault(unsigned long long ssr_md, unsigned long long expevt,
+ unsigned long address)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ unsigned long long textaccess;
+ unsigned long long protection_flags;
+ unsigned long long index;
+ unsigned long long expevt4;
+
+ /* The next few lines implement a way of hashing EXPEVT into a small array index
+ which can be used to lookup parameters specific to the type of TLBMISS being
+ handled. Note:
+ ITLBMISS has EXPEVT==0xa40
+ RTLBMISS has EXPEVT==0x040
+ WTLBMISS has EXPEVT==0x060
+ */
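+ /* Worked through: for ITLBMISS, expevt4 == 0xa4 and
+    (0xa4 ^ (0xa4 >> 5)) & 7 == (0xa4 ^ 0x5) & 7 == 1; for RTLBMISS,
+    (0x4 ^ 0x0) & 7 == 4; for WTLBMISS, (0x6 ^ 0x0) & 7 == 6. These
+    indices select the PRX/PRR/PRW slots of expevt_lookup_table. */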
+
+ expevt4 = (expevt >> 4);
+ /* TODO : xor ssr_md into this expression too. Then we can check that PRU is set
+ when it needs to be. */
+ index = expevt4 ^ (expevt4 >> 5);
+ index &= 7;
+ protection_flags = expevt_lookup_table.protection_flags[index];
+ textaccess = expevt_lookup_table.is_text_access[index];
+
+#ifdef CONFIG_SH64_PROC_TLB
+ ++calls_to_do_fast_page_fault;
+#endif
+
+ /* SIM
+ * Note this is now called with interrupts still disabled
+ * This is to cope with being called for a missing IO port
+ * address with interrupts disabled. This should be fixed as
+ * soon as we have a better 'fast path' miss handler.
+ *
+ * Plus take care how you try and debug this stuff.
+ * For example, writing debug data to a port which you
+ * have just faulted on is not going to work.
+ */
+
+ tsk = current;
+ mm = tsk->mm;
+
+ if ((address >= VMALLOC_START && address < VMALLOC_END) ||
+ (address >= IOBASE_VADDR && address < IOBASE_END)) {
+ if (ssr_md) {
+ /* Process-contexts can never have this address range mapped */
+ if (handle_vmalloc_fault(mm, protection_flags, textaccess, address)) {
+ return 1;
+ }
+ }
+ } else if (!in_interrupt() && mm) {
+ if (handle_tlbmiss(mm, protection_flags, textaccess, address)) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
--- /dev/null
+
+menu "Profiling support"
+ depends on EXPERIMENTAL
+
+config PROFILING
+ bool "Profiling support (EXPERIMENTAL)"
+ help
+ Say Y here to enable the extended profiling support mechanisms used
+ by profilers such as OProfile.
+
+
+config OPROFILE
+ tristate "OProfile system profiling (EXPERIMENTAL)"
+ depends on PROFILING
+ help
+ OProfile is a profiling system capable of profiling the
+ whole system, including the kernel, kernel modules, libraries,
+ and applications.
+
+ If unsure, say N.
+
+endmenu
+
--- /dev/null
+obj-$(CONFIG_OPROFILE) += oprofile.o
+
+DRIVER_OBJS = $(addprefix ../../../drivers/oprofile/, \
+ oprof.o cpu_buffer.o buffer_sync.o \
+ event_buffer.o oprofile_files.o \
+ oprofilefs.o oprofile_stats.o \
+ timer_int.o )
+
+profdrvr-y := op_model_null.o
+
+oprofile-y := $(DRIVER_OBJS) $(profdrvr-y)
+
--- /dev/null
+/*
+ * arch/sh64/oprofile/op_model_null.c
+ *
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/oprofile.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+
+int __init oprofile_arch_init(struct oprofile_operations **ops)
+{
+ return -ENODEV;
+}
+
+void oprofile_arch_exit(void)
+{
+}
+
--- /dev/null
+/* clear_page.S: UltraSparc optimized clear page.
+ *
+ * Copyright (C) 1996, 1998, 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+ * Copyright (C) 1997 Jakub Jelinek (jakub@redhat.com)
+ */
+
+#include <asm/visasm.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/spitfire.h>
+
+ /* What we used to do was lock a TLB entry into a specific
+ * TLB slot, clear the page with interrupts disabled, then
+ * restore the original TLB entry. This was great for
+ * disturbing the TLB as little as possible, but it meant
+ * we had to keep interrupts disabled for a long time.
+ *
+ * Now, we simply use the normal TLB loading mechanism,
+ * and this makes the cpu choose a slot all by itself.
+ * Then we do a normal TLB flush on exit. We need only
+ * disable preemption during the clear.
+ */
+
+#define TTE_BITS_TOP (_PAGE_VALID | _PAGE_SZBITS)
+#define TTE_BITS_BOTTOM (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_L | _PAGE_W)
+
+ .text
+
+ .globl _clear_page
+_clear_page: /* %o0=dest */
+ ba,pt %xcc, clear_page_common
+ clr %o4
+
+ /* This thing is pretty important, it shows up
+ * on the profiles via do_anonymous_page().
+ */
+ .align 32
+ .globl clear_user_page
+clear_user_page: /* %o0=dest, %o1=vaddr */
+ lduw [%g6 + TI_PRE_COUNT], %o2
+ sethi %uhi(PAGE_OFFSET), %g2
+ sethi %hi(PAGE_SIZE), %o4
+
+ sllx %g2, 32, %g2
+ sethi %uhi(TTE_BITS_TOP), %g3
+
+ sllx %g3, 32, %g3
+ sub %o0, %g2, %g1 ! paddr
+
+ or %g3, TTE_BITS_BOTTOM, %g3
+ and %o1, %o4, %o0 ! vaddr D-cache alias bit
+
+ or %g1, %g3, %g1 ! TTE data
+ sethi %hi(TLBTEMP_BASE), %o3
+
+ add %o2, 1, %o4
+ add %o0, %o3, %o0 ! TTE vaddr
+
+ /* Disable preemption. */
+ mov TLB_TAG_ACCESS, %g3
+ stw %o4, [%g6 + TI_PRE_COUNT]
+
+ /* Load TLB entry. */
+ rdpr %pstate, %o4
+ wrpr %o4, PSTATE_IE, %pstate
+ stxa %o0, [%g3] ASI_DMMU
+ stxa %g1, [%g0] ASI_DTLB_DATA_IN
+ flush %g6
+ wrpr %o4, 0x0, %pstate
+
+ mov 1, %o4
+
+clear_page_common:
+ VISEntryHalf
+ membar #StoreLoad | #StoreStore | #LoadStore
+ fzero %f0
+ sethi %hi(PAGE_SIZE/64), %o1
+ mov %o0, %g1 ! remember vaddr for tlbflush
+ fzero %f2
+ or %o1, %lo(PAGE_SIZE/64), %o1
+ faddd %f0, %f2, %f4
+ fmuld %f0, %f2, %f6
+ faddd %f0, %f2, %f8
+ fmuld %f0, %f2, %f10
+
+ faddd %f0, %f2, %f12
+ fmuld %f0, %f2, %f14
+1: stda %f0, [%o0 + %g0] ASI_BLK_P
+ subcc %o1, 1, %o1
+ bne,pt %icc, 1b
+ add %o0, 0x40, %o0
+ membar #Sync
+ VISExitHalf
+
+ brz,pn %o4, out
+ nop
+
+ stxa %g0, [%g1] ASI_DMMU_DEMAP
+ membar #Sync
+ stw %o2, [%g6 + TI_PRE_COUNT]
+
+out: retl
+ nop
+
--- /dev/null
+/* copy_page.S: UltraSparc optimized copy page.
+ *
+ * Copyright (C) 1996, 1998, 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+ * Copyright (C) 1997 Jakub Jelinek (jakub@redhat.com)
+ */
+
+#include <asm/visasm.h>
+#include <asm/thread_info.h>
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/spitfire.h>
+#include <asm/head.h>
+
+ /* What we used to do was lock a TLB entry into a specific
+ * TLB slot, clear the page with interrupts disabled, then
+ * restore the original TLB entry. This was great for
+ * disturbing the TLB as little as possible, but it meant
+ * we had to keep interrupts disabled for a long time.
+ *
+ * Now, we simply use the normal TLB loading mechanism,
+ * and this makes the cpu choose a slot all by itself.
+ * Then we do a normal TLB flush on exit. We need only
+ * disable preemption during the clear.
+ */
+
+#define TTE_BITS_TOP (_PAGE_VALID | _PAGE_SZBITS)
+#define TTE_BITS_BOTTOM (_PAGE_CP | _PAGE_CV | _PAGE_P | _PAGE_L | _PAGE_W)
+#define DCACHE_SIZE (PAGE_SIZE * 2)
+
+#if (PAGE_SHIFT == 13) || (PAGE_SHIFT == 19)
+#define PAGE_SIZE_REM 0x80
+#elif (PAGE_SHIFT == 16) || (PAGE_SHIFT == 22)
+#define PAGE_SIZE_REM 0x100
+#else
+#error Wrong PAGE_SHIFT specified
+#endif
+
+#define TOUCH(reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7) \
+ fmovd %reg0, %f48; fmovd %reg1, %f50; \
+ fmovd %reg2, %f52; fmovd %reg3, %f54; \
+ fmovd %reg4, %f56; fmovd %reg5, %f58; \
+ fmovd %reg6, %f60; fmovd %reg7, %f62;
+
+ .text
+
+ .align 32
+ .globl copy_user_page
+copy_user_page: /* %o0=dest, %o1=src, %o2=vaddr */
+ lduw [%g6 + TI_PRE_COUNT], %o4
+ sethi %uhi(PAGE_OFFSET), %g2
+ sethi %hi(PAGE_SIZE), %o3
+
+ sllx %g2, 32, %g2
+ sethi %uhi(TTE_BITS_TOP), %g3
+
+ sllx %g3, 32, %g3
+ sub %o0, %g2, %g1 ! dest paddr
+
+ sub %o1, %g2, %g2 ! src paddr
+ or %g3, TTE_BITS_BOTTOM, %g3
+
+ and %o2, %o3, %o0 ! vaddr D-cache alias bit
+ or %g1, %g3, %g1 ! dest TTE data
+
+ or %g2, %g3, %g2 ! src TTE data
+ sethi %hi(TLBTEMP_BASE), %o3
+
+ sethi %hi(DCACHE_SIZE), %o1
+ add %o0, %o3, %o0 ! dest TTE vaddr
+
+ add %o4, 1, %o2
+ add %o0, %o1, %o1 ! src TTE vaddr
+
+ /* Disable preemption. */
+ mov TLB_TAG_ACCESS, %g3
+ stw %o2, [%g6 + TI_PRE_COUNT]
+
+ /* Load TLB entries. */
+ rdpr %pstate, %o2
+ wrpr %o2, PSTATE_IE, %pstate
+ stxa %o0, [%g3] ASI_DMMU
+ stxa %g1, [%g0] ASI_DTLB_DATA_IN
+ membar #Sync
+ stxa %o1, [%g3] ASI_DMMU
+ stxa %g2, [%g0] ASI_DTLB_DATA_IN
+ membar #Sync
+ wrpr %o2, 0x0, %pstate
+
+ BRANCH_IF_ANY_CHEETAH(g3,o2,1f)
+ ba,pt %xcc, 9f
+ nop
+
+1:
+ VISEntryHalf
+ membar #StoreLoad | #StoreStore | #LoadStore
+ sethi %hi((PAGE_SIZE/64)-2), %o2
+ mov %o0, %g1
+ prefetch [%o1 + 0x000], #one_read
+ or %o2, %lo((PAGE_SIZE/64)-2), %o2
+ prefetch [%o1 + 0x040], #one_read
+ prefetch [%o1 + 0x080], #one_read
+ prefetch [%o1 + 0x0c0], #one_read
+ ldd [%o1 + 0x000], %f0
+ prefetch [%o1 + 0x100], #one_read
+ ldd [%o1 + 0x008], %f2
+ prefetch [%o1 + 0x140], #one_read
+ ldd [%o1 + 0x010], %f4
+ prefetch [%o1 + 0x180], #one_read
+ fmovd %f0, %f16
+ ldd [%o1 + 0x018], %f6
+ fmovd %f2, %f18
+ ldd [%o1 + 0x020], %f8
+ fmovd %f4, %f20
+ ldd [%o1 + 0x028], %f10
+ fmovd %f6, %f22
+ ldd [%o1 + 0x030], %f12
+ fmovd %f8, %f24
+ ldd [%o1 + 0x038], %f14
+ fmovd %f10, %f26
+ ldd [%o1 + 0x040], %f0
+1: ldd [%o1 + 0x048], %f2
+ fmovd %f12, %f28
+ ldd [%o1 + 0x050], %f4
+ fmovd %f14, %f30
+ stda %f16, [%o0] ASI_BLK_P
+ ldd [%o1 + 0x058], %f6
+ fmovd %f0, %f16
+ ldd [%o1 + 0x060], %f8
+ fmovd %f2, %f18
+ ldd [%o1 + 0x068], %f10
+ fmovd %f4, %f20
+ ldd [%o1 + 0x070], %f12
+ fmovd %f6, %f22
+ ldd [%o1 + 0x078], %f14
+ fmovd %f8, %f24
+ ldd [%o1 + 0x080], %f0
+ prefetch [%o1 + 0x180], #one_read
+ fmovd %f10, %f26
+ subcc %o2, 1, %o2
+ add %o0, 0x40, %o0
+ bne,pt %xcc, 1b
+ add %o1, 0x40, %o1
+
+ ldd [%o1 + 0x048], %f2
+ fmovd %f12, %f28
+ ldd [%o1 + 0x050], %f4
+ fmovd %f14, %f30
+ stda %f16, [%o0] ASI_BLK_P
+ ldd [%o1 + 0x058], %f6
+ fmovd %f0, %f16
+ ldd [%o1 + 0x060], %f8
+ fmovd %f2, %f18
+ ldd [%o1 + 0x068], %f10
+ fmovd %f4, %f20
+ ldd [%o1 + 0x070], %f12
+ fmovd %f6, %f22
+ add %o0, 0x40, %o0
+ ldd [%o1 + 0x078], %f14
+ fmovd %f8, %f24
+ fmovd %f10, %f26
+ fmovd %f12, %f28
+ fmovd %f14, %f30
+ stda %f16, [%o0] ASI_BLK_P
+ membar #Sync
+ VISExitHalf
+ ba,pt %xcc, 5f
+ nop
+
+9:
+ VISEntry
+ ldub [%g6 + TI_FAULT_CODE], %g3
+ mov %o0, %g1
+ cmp %g3, 0
+ rd %asi, %g3
+ be,a,pt %icc, 1f
+ wr %g0, ASI_BLK_P, %asi
+ wr %g0, ASI_BLK_COMMIT_P, %asi
+1: ldda [%o1] ASI_BLK_P, %f0
+ add %o1, 0x40, %o1
+ ldda [%o1] ASI_BLK_P, %f16
+ add %o1, 0x40, %o1
+ sethi %hi(PAGE_SIZE), %o2
+1: TOUCH(f0, f2, f4, f6, f8, f10, f12, f14)
+ ldda [%o1] ASI_BLK_P, %f32
+ stda %f48, [%o0] %asi
+ add %o1, 0x40, %o1
+ sub %o2, 0x40, %o2
+ add %o0, 0x40, %o0
+ TOUCH(f16, f18, f20, f22, f24, f26, f28, f30)
+ ldda [%o1] ASI_BLK_P, %f0
+ stda %f48, [%o0] %asi
+ add %o1, 0x40, %o1
+ sub %o2, 0x40, %o2
+ add %o0, 0x40, %o0
+ TOUCH(f32, f34, f36, f38, f40, f42, f44, f46)
+ ldda [%o1] ASI_BLK_P, %f16
+ stda %f48, [%o0] %asi
+ sub %o2, 0x40, %o2
+ add %o1, 0x40, %o1
+ cmp %o2, PAGE_SIZE_REM
+ bne,pt %xcc, 1b
+ add %o0, 0x40, %o0
+#if (PAGE_SHIFT == 16) || (PAGE_SHIFT == 22)
+ TOUCH(f0, f2, f4, f6, f8, f10, f12, f14)
+ ldda [%o1] ASI_BLK_P, %f32
+ stda %f48, [%o0] %asi
+ add %o1, 0x40, %o1
+ sub %o2, 0x40, %o2
+ add %o0, 0x40, %o0
+ TOUCH(f16, f18, f20, f22, f24, f26, f28, f30)
+ ldda [%o1] ASI_BLK_P, %f0
+ stda %f48, [%o0] %asi
+ add %o1, 0x40, %o1
+ sub %o2, 0x40, %o2
+ add %o0, 0x40, %o0
+ membar #Sync
+ stda %f32, [%o0] %asi
+ add %o0, 0x40, %o0
+ stda %f0, [%o0] %asi
+#else
+ membar #Sync
+ stda %f0, [%o0] %asi
+ add %o0, 0x40, %o0
+ stda %f16, [%o0] %asi
+#endif
+ membar #Sync
+ wr %g3, 0x0, %asi
+ VISExit
+
+5:
+ stxa %g0, [%g1] ASI_DMMU_DEMAP
+ membar #Sync
+
+ sethi %hi(DCACHE_SIZE), %g2
+ stxa %g0, [%g1 + %g2] ASI_DMMU_DEMAP
+ membar #Sync
+
+ retl
+ stw %o4, [%g6 + TI_PRE_COUNT]
--- /dev/null
+#include <asm/bitops.h>
+
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+unsigned long find_next_bit(unsigned long *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = addr + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
+ if (offset) {
+ tmp = *(p++);
+ tmp &= (~0UL << offset);
+ if (size < 64)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while (size & ~63UL) {
+ if ((tmp = *(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp &= (~0UL >> (64 - size));
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __ffs(tmp);
+}
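+
+/* Example usage (illustrative, not part of this file): walk every set
+ * bit in a 256-bit bitmap:
+ *
+ *     for (bit = find_next_bit(map, 256, 0); bit < 256;
+ *          bit = find_next_bit(map, 256, bit + 1))
+ *             process(bit);
+ */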
+
+/* find_next_zero_bit() finds the first zero bit in a bit string of length
+ * 'size' bits, starting the search at bit 'offset'. It is largely based
+ * on Linus's Alpha routines, which are quite portable.
+ */
+
+unsigned long find_next_zero_bit(unsigned long *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = addr + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (64-offset);
+ if (size < 64)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while (size & ~63UL) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp |= ~0UL << size;
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + ffz(tmp);
+}
+
+unsigned long find_next_zero_le_bit(unsigned long *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = addr + (offset >> 6);
+ unsigned long result = offset & ~63UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 63UL;
+ if(offset) {
+ tmp = __swab64p(p++);
+ tmp |= (~0UL >> (64-offset));
+ if(size < 64)
+ goto found_first;
+ if(~tmp)
+ goto found_middle;
+ size -= 64;
+ result += 64;
+ }
+ while(size & ~63UL) {
+ if(~(tmp = __swab64p(p++)))
+ goto found_middle;
+ result += 64;
+ size -= 64;
+ }
+ if(!size)
+ return result;
+ tmp = __swab64p(p);
+found_first:
+ tmp |= (~0UL << size);
+ if (tmp == ~0UL) /* Are any bits zero? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + ffz(tmp);
+}
--- /dev/null
+/* splock.S: Spinlock primitives too large to inline.
+ *
+ * Copyright (C) 2004 David S. Miller (davem@redhat.com)
+ */
+
+ .text
+ .align 64
+
+ .globl _raw_spin_lock_flags
+_raw_spin_lock_flags: /* %o0 = lock_ptr, %o1 = irq_flags */
+1: ldstub [%o0], %g7
+ brnz,pn %g7, 2f
+ membar #StoreLoad | #StoreStore
+ retl
+ nop
+
+2: rdpr %pil, %g2 ! Save PIL
+ wrpr %o1, %pil ! Set previous PIL
+3: ldub [%o0], %g7 ! Spin on lock set
+ brnz,pt %g7, 3b
+ membar #LoadLoad
+ ba,pt %xcc, 1b ! Retry lock acquire
+ wrpr %g2, %pil ! Restore PIL
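+
+/* Equivalent C sketch (illustrative only; ldstub/read_pil/write_pil
+ * stand in for the instructions above):
+ *
+ *     while (ldstub(lock)) {           (atomic test-and-set)
+ *             saved = read_pil();
+ *             write_pil(irq_flags);    (caller's pre-lock level)
+ *             while (*lock)
+ *                     cpu_relax();     (spin with interrupts open)
+ *             write_pil(saved);        (re-raise before retrying)
+ *     }
+ */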
--- /dev/null
+/* arch/sparc64/mm/tlb.c
+ *
+ * Copyright (C) 2004 David S. Miller <davem@redhat.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/percpu.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
+#include <asm/mmu_context.h>
+#include <asm/tlb.h>
+
+/* Heavily inspired by the ppc64 code. */
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers) =
+ { NULL, 0, 0, 0, 0, 0, { 0 }, { NULL }, };
+
+void flush_tlb_pending(void)
+{
+ struct mmu_gather *mp = &__get_cpu_var(mmu_gathers);
+
+ if (mp->tlb_nr) {
+ unsigned long context = mp->mm->context;
+
+ if (CTX_VALID(context)) {
+#ifdef CONFIG_SMP
+ smp_flush_tlb_pending(mp->mm, mp->tlb_nr,
+ &mp->vaddrs[0]);
+#else
+ __flush_tlb_pending(CTX_HWBITS(context), mp->tlb_nr,
+ &mp->vaddrs[0]);
+#endif
+ }
+ mp->tlb_nr = 0;
+ }
+}
+
+void tlb_batch_add(pte_t *ptep, pte_t orig)
+{
+ struct mmu_gather *mp = &__get_cpu_var(mmu_gathers);
+ struct page *ptepage;
+ struct mm_struct *mm;
+ unsigned long vaddr, nr;
+
+ ptepage = virt_to_page(ptep);
+ mm = (struct mm_struct *) ptepage->mapping;
+
+ /* It is more efficient to let flush_tlb_kernel_range()
+ * handle these cases.
+ */
+ if (mm == &init_mm)
+ return;
+
+ vaddr = ptepage->index +
+ (((unsigned long)ptep & ~PAGE_MASK) * PTRS_PER_PTE);
+ if (pte_exec(orig))
+ vaddr |= 0x1UL;
+
+ if (pte_dirty(orig)) {
+ unsigned long paddr, pfn = pte_pfn(orig);
+ struct address_space *mapping;
+ struct page *page;
+
+ if (!pfn_valid(pfn))
+ goto no_cache_flush;
+
+ page = pfn_to_page(pfn);
+ if (PageReserved(page))
+ goto no_cache_flush;
+
+ /* A real file page? */
+ mapping = page_mapping(page);
+ if (!mapping)
+ goto no_cache_flush;
+
+ paddr = (unsigned long) page_address(page);
+ if ((paddr ^ vaddr) & (1 << 13))
+ flush_dcache_page_all(mm, page);
+ }
+
+no_cache_flush:
+ if (mp->tlb_frozen)
+ return;
+
+ nr = mp->tlb_nr;
+
+ if (unlikely(nr != 0 && mm != mp->mm)) {
+ flush_tlb_pending();
+ nr = 0;
+ }
+
+ if (nr == 0)
+ mp->mm = mm;
+
+ mp->vaddrs[nr] = vaddr;
+ mp->tlb_nr = ++nr;
+ if (nr >= TLB_BATCH_NR)
+ flush_tlb_pending();
+}
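+
+/* The batching contract, sketched (illustrative): callers queue up
+ * virtual addresses and the batch is flushed once it fills or when a
+ * flush is forced:
+ *
+ *     tlb_batch_add(ptep, orig);   (flushes if TLB_BATCH_NR is hit)
+ *     ...
+ *     flush_tlb_pending();         (flushes whatever remains)
+ */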
+
+void flush_tlb_pgtables(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+ struct mmu_gather *mp = &__get_cpu_var(mmu_gathers);
+ unsigned long nr = mp->tlb_nr;
+ long s = start, e = end, vpte_base;
+
+ if (mp->tlb_frozen)
+ return;
+
+ /* Nobody should call us with start below the VM hole and end
+ * above it. Check that this really holds.
+ */
+ BUG_ON(s > e);
+
+#if 0
+ /* Currently free_pgtables guarantees this. */
+ s &= PMD_MASK;
+ e = (e + PMD_SIZE - 1) & PMD_MASK;
+#endif
+ vpte_base = (tlb_type == spitfire ?
+ VPTE_BASE_SPITFIRE :
+ VPTE_BASE_CHEETAH);
+
+ if (unlikely(nr != 0 && mm != mp->mm)) {
+ flush_tlb_pending();
+ nr = 0;
+ }
+
+ if (nr == 0)
+ mp->mm = mm;
+
+ start = vpte_base + (s >> (PAGE_SHIFT - 3));
+ end = vpte_base + (e >> (PAGE_SHIFT - 3));
+ while (start < end) {
+ mp->vaddrs[nr] = start;
+ mp->tlb_nr = ++nr;
+ if (nr >= TLB_BATCH_NR) {
+ flush_tlb_pending();
+ nr = 0;
+ }
+ start += PAGE_SIZE;
+ }
+ if (nr)
+ flush_tlb_pending();
+}
+
+unsigned long __ptrs_per_pmd(void)
+{
+ if (test_thread_flag(TIF_32BIT))
+ return (1UL << (32 - (PAGE_SHIFT-3) - PAGE_SHIFT));
+ return REAL_PTRS_PER_PMD;
+}
--- /dev/null
+#define COW_MAJOR 60
+#define MAJOR_NR COW_MAJOR
+
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/ctype.h>
+#include <linux/stat.h>
+#include <linux/vmalloc.h>
+#include <linux/blkdev.h>
+#include <linux/blk.h>
+#include <linux/fs.h>
+#include <linux/genhd.h>
+#include <linux/devfs_fs.h>
+#include <asm/uaccess.h>
+#include "2_5compat.h"
+#include "cow.h"
+#include "ubd_user.h"
+
+#define COW_SHIFT 4
+
+struct cow {
+ int count;
+ char *cow_path;
+ dev_t cow_dev;
+ struct block_device *cow_bdev;
+ char *backing_path;
+ dev_t backing_dev;
+ struct block_device *backing_bdev;
+ int sectorsize;
+ unsigned long *bitmap;
+ unsigned long bitmap_len;
+ int bitmap_offset;
+ int data_offset;
+ devfs_handle_t devfs;
+ struct semaphore sem;
+ struct semaphore io_sem;
+ atomic_t working;
+ spinlock_t io_lock;
+ struct buffer_head *bh;
+ struct buffer_head *bhtail;
+ void *end_io;
+};
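+
+/* Request routing, sketched (a summary of cow_make_request() below,
+ * not code from this driver):
+ *
+ *     if (bitmap bit set for sector)  already copied, redirect the
+ *                                     I/O to cow_dev
+ *     else if (write)                 hand off to cow_thread, which
+ *                                     sets the bitmap bits and writes
+ *                                     bitmap and data to cow_dev
+ *     else                            read from backing_dev
+ */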
+
+#define DEFAULT_COW { \
+ .count = 0, \
+ .cow_path = NULL, \
+ .cow_dev = 0, \
+ .backing_path = NULL, \
+ .backing_dev = 0, \
+ .bitmap = NULL, \
+ .bitmap_len = 0, \
+ .bitmap_offset = 0, \
+ .data_offset = 0, \
+ .devfs = NULL, \
+ .working = ATOMIC_INIT(0), \
+ .io_lock = SPIN_LOCK_UNLOCKED, \
+}
+
+#define MAX_DEV (8)
+#define MAX_MINOR (MAX_DEV << COW_SHIFT)
+
+struct cow cow_dev[MAX_DEV] = { [ 0 ... MAX_DEV - 1 ] = DEFAULT_COW };
+
+/* Not modified by this driver */
+static int blk_sizes[MAX_MINOR] = { [ 0 ... MAX_MINOR - 1 ] = BLOCK_SIZE };
+static int hardsect_sizes[MAX_MINOR] = { [ 0 ... MAX_MINOR - 1 ] = 512 };
+
+/* Protected by cow_lock */
+static int sizes[MAX_MINOR] = { [ 0 ... MAX_MINOR - 1 ] = 0 };
+
+static struct hd_struct cow_part[MAX_MINOR] =
+ { [ 0 ... MAX_MINOR - 1 ] = { 0, 0, 0 } };
+
+/* Protected by io_request_lock */
+static request_queue_t *cow_queue;
+
+static int cow_open(struct inode *inode, struct file *filp);
+static int cow_release(struct inode * inode, struct file * file);
+static int cow_ioctl(struct inode * inode, struct file * file,
+ unsigned int cmd, unsigned long arg);
+static int cow_revalidate(kdev_t rdev);
+
+static struct block_device_operations cow_blops = {
+ .open = cow_open,
+ .release = cow_release,
+ .ioctl = cow_ioctl,
+ .revalidate = cow_revalidate,
+};
+
+/* Initialized in an initcall, and unchanged thereafter */
+devfs_handle_t cow_dir_handle;
+
+#define INIT_GENDISK(maj, name, parts, shift, bsizes, max, blops) \
+{ \
+ .major = maj, \
+ .major_name = name, \
+ .minor_shift = shift, \
+ .max_p = 1 << shift, \
+ .part = parts, \
+ .sizes = bsizes, \
+ .nr_real = max, \
+ .real_devices = NULL, \
+ .next = NULL, \
+ .fops = blops, \
+ .de_arr = NULL, \
+ .flags = 0 \
+}
+
+static spinlock_t cow_lock = SPIN_LOCK_UNLOCKED;
+
+static struct gendisk cow_gendisk = INIT_GENDISK(MAJOR_NR, "cow", cow_part,
+ COW_SHIFT, sizes, MAX_DEV,
+ &cow_blops);
+
+static int cow_add(int n)
+{
+ struct cow *dev = &cow_dev[n];
+ char name[sizeof("nnnnnn\0")];
+ int err = -ENODEV;
+
+ if(dev->cow_path == NULL)
+ goto out;
+
+ sprintf(name, "%d", n);
+ dev->devfs = devfs_register(cow_dir_handle, name, DEVFS_FL_REMOVABLE,
+ MAJOR_NR, n << COW_SHIFT, S_IFBLK |
+ S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP,
+ &cow_blops, NULL);
+
+ init_MUTEX_LOCKED(&dev->sem);
+ init_MUTEX(&dev->io_sem);
+
+ return(0);
+
+ out:
+ return(err);
+}
+
+/*
+ * Add buffer_head to back of pending list
+ */
+static void cow_add_bh(struct cow *cow, struct buffer_head *bh)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&cow->io_lock, flags);
+ if(cow->bhtail != NULL){
+ cow->bhtail->b_reqnext = bh;
+ cow->bhtail = bh;
+ }
+ else {
+ cow->bh = bh;
+ cow->bhtail = bh;
+ }
+ spin_unlock_irqrestore(&cow->io_lock, flags);
+}
+
+/*
+ * Grab first pending buffer
+ */
+static struct buffer_head *cow_get_bh(struct cow *cow)
+{
+ struct buffer_head *bh;
+
+ spin_lock_irq(&cow->io_lock);
+ bh = cow->bh;
+ if(bh != NULL){
+ if(bh == cow->bhtail)
+ cow->bhtail = NULL;
+ cow->bh = bh->b_reqnext;
+ bh->b_reqnext = NULL;
+ }
+ spin_unlock_irq(&cow->io_lock);
+
+ return(bh);
+}
+
+static void cow_handle_bh(struct cow *cow, struct buffer_head *bh,
+ struct buffer_head **cow_bh, int ncow_bh)
+{
+ int i;
+
+ if(ncow_bh > 0)
+ ll_rw_block(WRITE, ncow_bh, cow_bh);
+
+ for(i = 0; i < ncow_bh ; i++){
+ wait_on_buffer(cow_bh[i]);
+ brelse(cow_bh[i]);
+ }
+
+ ll_rw_block(WRITE, 1, &bh);
+ brelse(bh);
+}
+
+static struct buffer_head *cow_new_bh(struct cow *dev, int sector)
+{
+ struct buffer_head *bh;
+
+ sector = (dev->bitmap_offset + sector / 8) / dev->sectorsize;
+ bh = getblk(dev->cow_dev, sector, dev->sectorsize);
+ memcpy(bh->b_data, dev->bitmap + sector / (8 * sizeof(dev->bitmap[0])),
+ dev->sectorsize);
+ return(bh);
+}
+
+/* Copied from loop.c, needed to avoid deadlocking in make_request. */
+
+static int cow_thread(void *data)
+{
+ struct cow *dev = data;
+ struct buffer_head *bh;
+
+ daemonize();
+ exit_files(current);
+
+ sprintf(current->comm, "cow%d", dev - cow_dev);
+
+ spin_lock_irq(&current->sigmask_lock);
+ sigfillset(&current->blocked);
+ flush_signals(current);
+ spin_unlock_irq(&current->sigmask_lock);
+
+ atomic_inc(&dev->working);
+
+ current->policy = SCHED_OTHER;
+ current->nice = -20;
+
+ current->flags |= PF_NOIO;
+
+ /*
+ * up sem, we are running
+ */
+ up(&dev->sem);
+
+ for(;;){
+ int start, len, nbh, i, update_bitmap = 0;
+ struct buffer_head *cow_bh[2];
+
+ down_interruptible(&dev->io_sem);
+ /*
+ * could be upped because of tear-down, not because of
+ * pending work
+ */
+ if(!atomic_read(&dev->working))
+ break;
+
+ bh = cow_get_bh(dev);
+ if(bh == NULL){
+ printk(KERN_ERR "cow: missing bh\n");
+ continue;
+ }
+
+ start = bh->b_blocknr * bh->b_size / dev->sectorsize;
+ len = bh->b_size / dev->sectorsize;
+ for(i = 0; i < len ; i++){
+ if(ubd_test_bit(start + i,
+ (unsigned char *) dev->bitmap))
+ continue;
+
+ update_bitmap = 1;
+ ubd_set_bit(start + i, (unsigned char *) dev->bitmap);
+ }
+
+ cow_bh[0] = NULL;
+ cow_bh[1] = NULL;
+ nbh = 0;
+ if(update_bitmap){
+ cow_bh[0] = cow_new_bh(dev, start);
+ nbh++;
+ if(start / dev->sectorsize !=
+ (start + len) / dev->sectorsize){
+ cow_bh[1] = cow_new_bh(dev, start + len);
+ nbh++;
+ }
+ }
+
+ bh->b_dev = dev->cow_dev;
+ bh->b_blocknr += dev->data_offset / dev->sectorsize;
+
+ cow_handle_bh(dev, bh, cow_bh, nbh);
+
+ /*
+ * upped both for pending work and tear-down, lo_pending
+ * will hit zero then
+ */
+ if(atomic_dec_and_test(&dev->working))
+ break;
+ }
+
+ up(&dev->sem);
+ return(0);
+}
+
+static int cow_make_request(request_queue_t *q, int rw, struct buffer_head *bh)
+{
+ struct cow *dev;
+ int n, minor;
+
+ minor = MINOR(bh->b_rdev);
+ n = minor >> COW_SHIFT;
+ dev = &cow_dev[n];
+
+ dev->end_io = NULL;
+ if(ubd_test_bit(bh->b_rsector, (unsigned char *) dev->bitmap)){
+ bh->b_rdev = dev->cow_dev;
+ bh->b_rsector += dev->data_offset / dev->sectorsize;
+ }
+ else if(rw == WRITE){
+ bh->b_dev = dev->cow_dev;
+ bh->b_blocknr += dev->data_offset / dev->sectorsize;
+
+ cow_add_bh(dev, bh);
+ up(&dev->io_sem);
+ return(0);
+ }
+ else {
+ bh->b_rdev = dev->backing_dev;
+ }
+
+ return(1);
+}
+
+int cow_init(void)
+{
+ int i;
+
+ cow_dir_handle = devfs_mk_dir (NULL, "cow", NULL);
+ if (devfs_register_blkdev(MAJOR_NR, "cow", &cow_blops)) {
+ printk(KERN_ERR "cow: unable to get major %d\n", MAJOR_NR);
+ return -1;
+ }
+ read_ahead[MAJOR_NR] = 8; /* 8 sector (4kB) read-ahead */
+ blksize_size[MAJOR_NR] = blk_sizes;
+ blk_size[MAJOR_NR] = sizes;
+ INIT_HARDSECT(hardsect_size, MAJOR_NR, hardsect_sizes);
+
+ cow_queue = BLK_DEFAULT_QUEUE(MAJOR_NR);
+ blk_init_queue(cow_queue, NULL);
+ INIT_ELV(cow_queue, &cow_queue->elevator);
+ blk_queue_make_request(cow_queue, cow_make_request);
+
+ add_gendisk(&cow_gendisk);
+
+ for(i = 0; i < MAX_DEV; i++)
+ cow_add(i);
+
+ return(0);
+}
+
+__initcall(cow_init);
+
+static int reader(__u64 start, char *buf, int count, void *arg)
+{
+ dev_t dev = *((dev_t *) arg);
+ struct buffer_head *bh;
+ __u64 block;
+ int cur, offset, left, n, blocksize = get_hardsect_size(dev);
+
+ if(blocksize == 0)
+ panic("Zero blocksize");
+
+ block = start / blocksize;
+ offset = start % blocksize;
+ left = count;
+ cur = 0;
+ while(left > 0){
+ n = (left > blocksize) ? blocksize : left;
+
+ bh = bread(dev, block, (n < 512) ? 512 : n);
+ if(bh == NULL)
+ return(-EIO);
+
+ n -= offset;
+ memcpy(&buf[cur], bh->b_data + offset, n);
+ block++;
+ left -= n;
+ cur += n;
+ offset = 0;
+ brelse(bh);
+ }
+
+ return(count);
+}
+
+static int cow_open(struct inode *inode, struct file *filp)
+{
+ int (*dev_ioctl)(struct inode *, struct file *, unsigned int,
+ unsigned long);
+ mm_segment_t fs;
+ struct cow *dev;
+ __u64 size;
+ __u32 version, align;
+ time_t mtime;
+ char *backing_file;
+ int n, offset, err = 0;
+
+ n = DEVICE_NR(inode->i_rdev);
+ if(n >= MAX_DEV)
+ return(-ENODEV);
+ dev = &cow_dev[n];
+ offset = n << COW_SHIFT;
+
+ spin_lock(&cow_lock);
+
+ if(dev->count == 0){
+ dev->cow_dev = name_to_kdev_t(dev->cow_path);
+ if(dev->cow_dev == 0){
+ printk(KERN_ERR "cow_open - name_to_kdev_t(\"%s\") "
+ "failed\n", dev->cow_path);
+ err = -ENODEV;
+ }
+
+ dev->backing_dev = name_to_kdev_t(dev->backing_path);
+ if(dev->backing_dev == 0){
+ printk(KERN_ERR "cow_open - name_to_kdev_t(\"%s\") "
+ "failed\n", dev->backing_path);
+ err = -ENODEV;
+ }
+
+ if(err)
+ goto out;
+
+ dev->cow_bdev = bdget(dev->cow_dev);
+ if(dev->cow_bdev == NULL){
+ printk(KERN_ERR "cow_open - bdget(\"%s\") failed\n",
+ dev->cow_path);
+ err = -ENOMEM;
+ }
+ dev->backing_bdev = bdget(dev->backing_dev);
+ if(dev->backing_bdev == NULL){
+ printk(KERN_ERR "cow_open - bdget(\"%s\") failed\n",
+ dev->backing_path);
+ err = -ENOMEM;
+ }
+
+ if(err)
+ goto out;
+
+ err = blkdev_get(dev->cow_bdev, FMODE_READ|FMODE_WRITE, 0,
+ BDEV_RAW);
+ if(err){
+ printk("cow_open - blkdev_get of COW device failed, "
+ "error = %d\n", err);
+ goto out;
+ }
+
+ err = blkdev_get(dev->backing_bdev, FMODE_READ, 0, BDEV_RAW);
+ if(err){
+ printk("cow_open - blkdev_get of backing device "
+ "failed, error = %d\n", err);
+ goto out;
+ }
+
+ err = read_cow_header(reader, &dev->cow_dev, &version,
+ &backing_file, &mtime, &size,
+ &dev->sectorsize, &align,
+ &dev->bitmap_offset);
+ if(err){
+ printk(KERN_ERR "cow_open - read_cow_header failed, "
+ "err = %d\n", err);
+ goto out;
+ }
+
+ cow_sizes(version, size, dev->sectorsize, align,
+ dev->bitmap_offset, &dev->bitmap_len,
+ &dev->data_offset);
+ dev->bitmap = (void *) vmalloc(dev->bitmap_len);
+ if(dev->bitmap == NULL){
+ err = -ENOMEM;
+ printk(KERN_ERR "Failed to vmalloc COW bitmap\n");
+ goto out;
+ }
+ flush_tlb_kernel_vm();
+
+ err = reader(dev->bitmap_offset, (char *) dev->bitmap,
+ dev->bitmap_len, &dev->cow_dev);
+ if(err < 0){
+ printk(KERN_ERR "Failed to read COW bitmap\n");
+ vfree(dev->bitmap);
+ goto out;
+ }
+
+ dev_ioctl = dev->backing_bdev->bd_op->ioctl;
+ fs = get_fs();
+ set_fs(KERNEL_DS);
+ err = (*dev_ioctl)(inode, filp, BLKGETSIZE,
+ (unsigned long) &sizes[offset]);
+ set_fs(fs);
+ if(err){
+ printk(KERN_ERR "cow_open - BLKGETSIZE failed, "
+ "error = %d\n", err);
+ goto out;
+ }
+
+ kernel_thread(cow_thread, dev,
+ CLONE_FS | CLONE_FILES | CLONE_SIGHAND);
+ down(&dev->sem);
+ }
+ dev->count++;
+ out:
+ spin_unlock(&cow_lock);
+ return(err);
+}
+
+static int cow_release(struct inode * inode, struct file * file)
+{
+ struct cow *dev;
+ int n, err;
+
+ n = DEVICE_NR(inode->i_rdev);
+ if(n >= MAX_DEV)
+ return(-ENODEV);
+ dev = &cow_dev[n];
+
+ spin_lock(&cow_lock);
+
+ if(--dev->count > 0)
+ goto out;
+
+ err = blkdev_put(dev->cow_bdev, BDEV_RAW);
+ if(err)
+ printk("cow_release - blkdev_put of cow device failed, "
+ "error = %d\n", err);
+ bdput(dev->cow_bdev);
+ dev->cow_bdev = NULL;
+
+ err = blkdev_put(dev->backing_bdev, BDEV_RAW);
+ if(err)
+ printk("cow_release - blkdev_put of backing device failed, "
+ "error = %d\n", err);
+ bdput(dev->backing_bdev);
+ dev->backing_bdev = NULL;
+
+ out:
+ spin_unlock(&cow_lock);
+ return(0);
+}
+
+static int cow_ioctl(struct inode * inode, struct file * file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct cow *dev;
+ int (*dev_ioctl)(struct inode *, struct file *, unsigned int,
+ unsigned long);
+ int n;
+
+ n = DEVICE_NR(inode->i_rdev);
+ if(n >= MAX_DEV)
+ return(-ENODEV);
+ dev = &cow_dev[n];
+
+ dev_ioctl = dev->backing_bdev->bd_op->ioctl;
+ return((*dev_ioctl)(inode, file, cmd, arg));
+}
+
+static int cow_revalidate(kdev_t rdev)
+{
+ printk(KERN_ERR "Need to implement cow_revalidate\n");
+ return(0);
+}
+
+static int parse_unit(char **ptr)
+{
+ char *str = *ptr, *end;
+ int n = -1;
+
+ if(isdigit(*str)) {
+ n = simple_strtoul(str, &end, 0);
+ if(end == str)
+ return(-1);
+ *ptr = end;
+ }
+ else if (('a' <= *str) && (*str <= 'h')) {
+ n = *str - 'a';
+ str++;
+ *ptr = str;
+ }
+ return(n);
+}
+
+static int cow_setup(char *str)
+{
+ struct cow *dev;
+ char *cow_name, *backing_name;
+ int unit;
+
+ unit = parse_unit(&str);
+ if(unit < 0){
+ printk(KERN_ERR "cow_setup - Couldn't parse unit number\n");
+ return(1);
+ }
+
+ if(*str != '='){
+ printk(KERN_ERR "cow_setup - Missing '=' after unit "
+ "number\n");
+ return(1);
+ }
+ str++;
+
+ cow_name = str;
+ backing_name = strchr(str, ',');
+ if(backing_name == NULL){
+ printk(KERN_ERR "cow_setup - missing backing device name\n");
+ return(0);
+ }
+ *backing_name = '\0';
+ backing_name++;
+
+ spin_lock(&cow_lock);
+
+ dev = &cow_dev[unit];
+ dev->cow_path = cow_name;
+ dev->backing_path = backing_name;
+
+ spin_unlock(&cow_lock);
+ return(0);
+}
+
+__setup("cow", cow_setup);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#ifndef __COW_SYS_H__
+#define __COW_SYS_H__
+
+#include "kern_util.h"
+#include "user_util.h"
+#include "os.h"
+#include "user.h"
+
+static inline void *cow_malloc(int size)
+{
+ return(um_kmalloc(size));
+}
+
+static inline void cow_free(void *ptr)
+{
+ kfree(ptr);
+}
+
+#define cow_printf printk
+
+static inline char *cow_strdup(char *str)
+{
+ return(uml_strdup(str));
+}
+
+static inline int cow_seek_file(int fd, __u64 offset)
+{
+ return(os_seek_file(fd, offset));
+}
+
+static inline int cow_file_size(char *file, __u64 *size_out)
+{
+ return(os_file_size(file, size_out));
+}
+
+static inline int cow_write_file(int fd, char *buf, int size)
+{
+ return(os_write_file(fd, buf, size));
+}
+
+#endif
+
+/*
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2001 Lennert Buytenhek (buytenh@gnu.org) and
+ * James Leu (jleu@mindspring.net).
+ * Copyright (C) 2001 by various other people who didn't put their name here.
+ * Licensed under the GPL.
+ */
+
+#include "linux/config.h"
+#include "linux/kernel.h"
+#include "linux/netdevice.h"
+#include "linux/rtnetlink.h"
+#include "linux/skbuff.h"
+#include "linux/socket.h"
+#include "linux/spinlock.h"
+#include "linux/module.h"
+#include "linux/init.h"
+#include "linux/etherdevice.h"
+#include "linux/list.h"
+#include "linux/inetdevice.h"
+#include "linux/ctype.h"
+#include "linux/bootmem.h"
+#include "user_util.h"
+#include "kern_util.h"
+#include "net_kern.h"
+#include "net_user.h"
+#include "mconsole_kern.h"
+#include "init.h"
+#include "irq_user.h"
+
+static spinlock_t opened_lock = SPIN_LOCK_UNLOCKED;
+LIST_HEAD(opened);
+
+static int uml_net_rx(struct net_device *dev)
+{
+ struct uml_net_private *lp = dev->priv;
+ int pkt_len;
+ struct sk_buff *skb;
+
+ /* If we can't allocate memory, try again next round. */
+ if ((skb = dev_alloc_skb(dev->mtu)) == NULL) {
+ lp->stats.rx_dropped++;
+ return 0;
+ }
+
+ skb->dev = dev;
+ skb_put(skb, dev->mtu);
+ skb->mac.raw = skb->data;
+ pkt_len = (*lp->read)(lp->fd, &skb, lp);
+
+ if (pkt_len > 0) {
+ skb_trim(skb, pkt_len);
+ skb->protocol = (*lp->protocol)(skb);
+ netif_rx(skb);
+
+ lp->stats.rx_bytes += skb->len;
+ lp->stats.rx_packets++;
+ return pkt_len;
+ }
+
+ kfree_skb(skb);
+ return pkt_len;
+}
+
+void uml_net_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ struct uml_net_private *lp = dev->priv;
+ int err;
+
+ if(!netif_running(dev))
+ return;
+
+ spin_lock(&lp->lock);
+ while((err = uml_net_rx(dev)) > 0) ;
+ if(err < 0) {
+ printk(KERN_ERR
+ "Device '%s' read returned %d, shutting it down\n",
+ dev->name, err);
+ dev_close(dev);
+ goto out;
+ }
+ reactivate_fd(lp->fd, UM_ETH_IRQ);
+
+ out:
+ spin_unlock(&lp->lock);
+}
+
+static int uml_net_open(struct net_device *dev)
+{
+ struct uml_net_private *lp = dev->priv;
+ char addr[sizeof("255.255.255.255\0")];
+ int err;
+
+ spin_lock(&lp->lock);
+
+ if(lp->fd >= 0){
+ err = -ENXIO;
+ goto out;
+ }
+
+ if(!lp->have_mac){
+ dev_ip_addr(dev, addr, &lp->mac[2]);
+ set_ether_mac(dev, lp->mac);
+ }
+
+ lp->fd = (*lp->open)(&lp->user);
+ if(lp->fd < 0){
+ err = lp->fd;
+ goto out;
+ }
+
+ err = um_request_irq(dev->irq, lp->fd, IRQ_READ, uml_net_interrupt,
+ SA_INTERRUPT | SA_SHIRQ, dev->name, dev);
+ if(err != 0){
+ printk(KERN_ERR "uml_net_open: failed to get irq(%d)\n", err);
+ if(lp->close != NULL) (*lp->close)(lp->fd, &lp->user);
+ lp->fd = -1;
+ err = -ENETUNREACH;
+ goto out;
+ }
+
+ lp->tl.data = (unsigned long) &lp->user;
+ netif_start_queue(dev);
+
+ spin_lock(&opened_lock);
+ list_add(&lp->list, &opened);
+ spin_unlock(&opened_lock);
+ MOD_INC_USE_COUNT;
+ out:
+ spin_unlock(&lp->lock);
+ return(err);
+}
+
+static int uml_net_close(struct net_device *dev)
+{
+ struct uml_net_private *lp = dev->priv;
+
+ netif_stop_queue(dev);
+ spin_lock(&lp->lock);
+
+ free_irq(dev->irq, dev);
+ if(lp->close != NULL) (*lp->close)(lp->fd, &lp->user);
+ lp->fd = -1;
+ spin_lock(&opened_lock);
+ list_del(&lp->list);
+ spin_unlock(&opened_lock);
+
+ MOD_DEC_USE_COUNT;
+ spin_unlock(&lp->lock);
+ return 0;
+}
+
+static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct uml_net_private *lp = dev->priv;
+ unsigned long flags;
+ int len;
+
+ netif_stop_queue(dev);
+
+ spin_lock_irqsave(&lp->lock, flags);
+
+ len = (*lp->write)(lp->fd, &skb, lp);
+
+ if(len == skb->len) {
+ lp->stats.tx_packets++;
+ lp->stats.tx_bytes += skb->len;
+ dev->trans_start = jiffies;
+ netif_start_queue(dev);
+
+ /* this is normally done in the interrupt when tx finishes */
+ netif_wake_queue(dev);
+ }
+ else if(len == 0){
+ netif_start_queue(dev);
+ lp->stats.tx_dropped++;
+ }
+ else {
+ netif_start_queue(dev);
+ printk(KERN_ERR "uml_net_start_xmit: failed(%d)\n", len);
+ }
+
+ spin_unlock_irqrestore(&lp->lock, flags);
+
+ dev_kfree_skb(skb);
+
+ return 0;
+}
+
+static struct net_device_stats *uml_net_get_stats(struct net_device *dev)
+{
+ struct uml_net_private *lp = dev->priv;
+ return &lp->stats;
+}
+
+static void uml_net_set_multicast_list(struct net_device *dev)
+{
+ if (dev->flags & IFF_PROMISC) return;
+ else if (dev->mc_count) dev->flags |= IFF_ALLMULTI;
+ else dev->flags &= ~IFF_ALLMULTI;
+}
+
+static void uml_net_tx_timeout(struct net_device *dev)
+{
+ dev->trans_start = jiffies;
+ netif_wake_queue(dev);
+}
+
+static int uml_net_set_mac(struct net_device *dev, void *addr)
+{
+ struct uml_net_private *lp = dev->priv;
+ struct sockaddr *hwaddr = addr;
+
+ spin_lock(&lp->lock);
+ memcpy(dev->dev_addr, hwaddr->sa_data, ETH_ALEN);
+ spin_unlock(&lp->lock);
+
+ return(0);
+}
+
+static int uml_net_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct uml_net_private *lp = dev->priv;
+ int err = 0;
+
+ spin_lock(&lp->lock);
+
+ new_mtu = (*lp->set_mtu)(new_mtu, &lp->user);
+ if(new_mtu < 0){
+ err = new_mtu;
+ goto out;
+ }
+
+ dev->mtu = new_mtu;
+
+ out:
+ spin_unlock(&lp->lock);
+ return err;
+}
+
+static int uml_net_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ return(-EINVAL);
+}
+
+void uml_net_user_timer_expire(unsigned long _conn)
+{
+#ifdef undef
+ struct connection *conn = (struct connection *)_conn;
+
+ dprintk(KERN_INFO "uml_net_user_timer_expire [%p]\n", conn);
+ do_connect(conn);
+#endif
+}
+
+/*
+ * default do nothing hard header packet routines for struct net_device init.
+ * real ethernet transports will overwrite with real routines.
+ */
+static int uml_net_hard_header(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, void *daddr, void *saddr, unsigned len)
+{
+ return(0); /* no change */
+}
+
+static int uml_net_rebuild_header(struct sk_buff *skb)
+{
+ return(0); /* ignore */
+}
+
+static int uml_net_header_cache(struct neighbour *neigh, struct hh_cache *hh)
+{
+ return(-1); /* fail */
+}
+
+static void uml_net_header_cache_update(struct hh_cache *hh,
+ struct net_device *dev, unsigned char * haddr)
+{
+ /* ignore */
+}
+
+static int uml_net_header_parse(struct sk_buff *skb, unsigned char *haddr)
+{
+ return(0); /* nothing */
+}
+
+static spinlock_t devices_lock = SPIN_LOCK_UNLOCKED;
+static struct list_head devices = LIST_HEAD_INIT(devices);
+
+static int eth_configure(int n, void *init, char *mac,
+ struct transport *transport)
+{
+ struct uml_net *device;
+ struct net_device *dev;
+ struct uml_net_private *lp;
+ int err, size;
+
+ size = transport->private_size + sizeof(struct uml_net_private) +
+ sizeof(((struct uml_net_private *) 0)->user);
+
+ device = kmalloc(sizeof(*device), GFP_KERNEL);
+ if (device == NULL) {
+ printk(KERN_ERR "eth_configure failed to allocate uml_net\n");
+ return(1);
+ }
+
+ memset(device, 0, sizeof(*device));
+ INIT_LIST_HEAD(&device->list);
+ device->index = n;
+
+ spin_lock(&devices_lock);
+ list_add(&device->list, &devices);
+ spin_unlock(&devices_lock);
+
+ if (setup_etheraddr(mac, device->mac))
+ device->have_mac = 1;
+
+ printk(KERN_INFO "Netdevice %d ", n);
+ if (device->have_mac)
+ printk("(%02x:%02x:%02x:%02x:%02x:%02x) ",
+ device->mac[0], device->mac[1],
+ device->mac[2], device->mac[3],
+ device->mac[4], device->mac[5]);
+ printk(": ");
+ dev = alloc_etherdev(size);
+ if (dev == NULL) {
+ printk(KERN_ERR "eth_configure: failed to allocate device\n");
+ return 1;
+ }
+
+ /* If this name ends up conflicting with an existing registered
+ * netdevice, that is OK, register_netdev{,ice}() will notice this
+ * and fail.
+ */
+ snprintf(dev->name, sizeof(dev->name), "eth%d", n);
+ device->dev = dev;
+
+ dev->hard_header = uml_net_hard_header;
+ dev->rebuild_header = uml_net_rebuild_header;
+ dev->hard_header_cache = uml_net_header_cache;
+ dev->header_cache_update= uml_net_header_cache_update;
+ dev->hard_header_parse = uml_net_header_parse;
+
+ (*transport->kern->init)(dev, init);
+
+ dev->mtu = transport->user->max_packet;
+ dev->open = uml_net_open;
+ dev->hard_start_xmit = uml_net_start_xmit;
+ dev->stop = uml_net_close;
+ dev->get_stats = uml_net_get_stats;
+ dev->set_multicast_list = uml_net_set_multicast_list;
+ dev->tx_timeout = uml_net_tx_timeout;
+ dev->set_mac_address = uml_net_set_mac;
+ dev->change_mtu = uml_net_change_mtu;
+ dev->do_ioctl = uml_net_ioctl;
+ dev->watchdog_timeo = (HZ >> 1);
+ dev->irq = UM_ETH_IRQ;
+
+ rtnl_lock();
+ err = register_netdevice(dev);
+ rtnl_unlock();
+ if (err) {
+ device->dev = NULL;
+ /* XXX: should we call ->remove() here? */
+ free_netdev(dev);
+ return 1;
+ }
+ lp = dev->priv;
+
+ INIT_LIST_HEAD(&lp->list);
+ spin_lock_init(&lp->lock);
+ lp->dev = dev;
+ lp->fd = -1;
+ memcpy(lp->mac, "\xfe\xfd\x00\x00\x00\x00", sizeof(lp->mac)); /* default MAC */
+ lp->have_mac = device->have_mac;
+ lp->protocol = transport->kern->protocol;
+ lp->open = transport->user->open;
+ lp->close = transport->user->close;
+ lp->remove = transport->user->remove;
+ lp->read = transport->kern->read;
+ lp->write = transport->kern->write;
+ lp->add_address = transport->user->add_address;
+ lp->delete_address = transport->user->delete_address;
+ lp->set_mtu = transport->user->set_mtu;
+
+ init_timer(&lp->tl);
+ lp->tl.function = uml_net_user_timer_expire;
+ if (lp->have_mac)
+ memcpy(lp->mac, device->mac, sizeof(lp->mac));
+
+ if (transport->user->init)
+ (*transport->user->init)(&lp->user, dev);
+
+ if (device->have_mac)
+ set_ether_mac(dev, device->mac);
+ return(0);
+}
+
+static struct uml_net *find_device(int n)
+{
+ struct uml_net *device;
+ struct list_head *ele;
+
+ spin_lock(&devices_lock);
+ list_for_each(ele, &devices){
+ device = list_entry(ele, struct uml_net, list);
+ if(device->index == n)
+ goto out;
+ }
+ device = NULL;
+ out:
+ spin_unlock(&devices_lock);
+ return(device);
+}
+
+static int eth_parse(char *str, int *index_out, char **str_out)
+{
+ char *end;
+ int n;
+
+ n = simple_strtoul(str, &end, 0);
+ if(end == str){
+ printk(KERN_ERR "eth_setup: Failed to parse '%s'\n", str);
+ return(1);
+ }
+ if(n < 0){
+ printk(KERN_ERR "eth_setup: device %d is negative\n", n);
+ return(1);
+ }
+ str = end;
+ if(*str != '='){
+ printk(KERN_ERR
+ "eth_setup: expected '=' after device number\n");
+ return(1);
+ }
+ str++;
+ if(find_device(n)){
+ printk(KERN_ERR "eth_setup: Device %d already configured\n",
+ n);
+ return(1);
+ }
+ if(index_out) *index_out = n;
+ *str_out = str;
+ return(0);
+}
+
+struct eth_init {
+ struct list_head list;
+ char *init;
+ int index;
+};
+
+/* Filled in at boot time. Will need locking if the transports become
+ * modular.
+ */
+struct list_head transports = LIST_HEAD_INIT(transports);
+
+/* Filled in during early boot */
+struct list_head eth_cmd_line = LIST_HEAD_INIT(eth_cmd_line);
+
+static int check_transport(struct transport *transport, char *eth, int n,
+ void **init_out, char **mac_out)
+{
+ int len;
+
+ len = strlen(transport->name);
+ if(strncmp(eth, transport->name, len))
+ return(0);
+
+ eth += len;
+ if(*eth == ',')
+ eth++;
+ else if(*eth != '\0')
+ return(0);
+
+ *init_out = kmalloc(transport->setup_size, GFP_KERNEL);
+ if(*init_out == NULL)
+ return(1);
+
+ if(!transport->setup(eth, mac_out, *init_out)){
+ kfree(*init_out);
+ *init_out = NULL;
+ }
+ return(1);
+}
+
+void register_transport(struct transport *new)
+{
+ struct list_head *ele, *next;
+ struct eth_init *eth;
+ void *init;
+ char *mac = NULL;
+ int match;
+
+ list_add(&new->list, &transports);
+
+ list_for_each_safe(ele, next, &eth_cmd_line){
+ eth = list_entry(ele, struct eth_init, list);
+ match = check_transport(new, eth->init, eth->index, &init,
+ &mac);
+ if(!match)
+ continue;
+ else if(init != NULL){
+ eth_configure(eth->index, init, mac, new);
+ kfree(init);
+ }
+ list_del(&eth->list);
+ }
+}
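+
+/* A transport announces itself at boot. A hypothetical registration
+ * (the names are illustrative, not from this file):
+ *
+ *     static struct transport example_transport = {
+ *             .name           = "example",
+ *             .setup          = example_setup,
+ *             .setup_size     = sizeof(struct example_init),
+ *             .kern           = &example_kern_info,
+ *             .user           = &example_user_info,
+ *     };
+ *
+ *     register_transport(&example_transport);
+ */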
+
+static int eth_setup_common(char *str, int index)
+{
+ struct list_head *ele;
+ struct transport *transport;
+ void *init;
+ char *mac = NULL;
+
+ list_for_each(ele, &transports){
+ transport = list_entry(ele, struct transport, list);
+ if(!check_transport(transport, str, index, &init, &mac))
+ continue;
+ if(init != NULL){
+ eth_configure(index, init, mac, transport);
+ kfree(init);
+ }
+ return(1);
+ }
+ return(0);
+}
+
+static int eth_setup(char *str)
+{
+ struct eth_init *new;
+ int n, err;
+
+ err = eth_parse(str, &n, &str);
+ if(err) return(1);
+
+ new = alloc_bootmem(sizeof(*new));
+ if (new == NULL){
+ printk("eth_setup: alloc_bootmem failed\n");
+ return(1);
+ }
+
+ INIT_LIST_HEAD(&new->list);
+ new->index = n;
+ new->init = str;
+
+ list_add_tail(&new->list, &eth_cmd_line);
+ return(1);
+}
+
+__setup("eth", eth_setup);
+__uml_help(eth_setup,
+"eth[0-9]+=<transport>,<options>\n"
+" Configure a network device.\n\n"
+);
+
+static int eth_init(void)
+{
+ struct list_head *ele, *next;
+ struct eth_init *eth;
+
+ list_for_each_safe(ele, next, &eth_cmd_line){
+ eth = list_entry(ele, struct eth_init, list);
+
+ if(eth_setup_common(eth->init, eth->index))
+ list_del(&eth->list);
+ }
+
+ return(1);
+}
+
+__initcall(eth_init);
+
+static int net_config(char *str)
+{
+ int n, err;
+
+ err = eth_parse(str, &n, &str);
+ if(err) return(err);
+
+ str = uml_strdup(str);
+ if(str == NULL){
+ printk(KERN_ERR "net_config failed to strdup string\n");
+ return(-1);
+ }
+ err = !eth_setup_common(str, n);
+ if(err)
+ kfree(str);
+ return(err);
+}
+
+static int net_remove(char *str)
+{
+ struct uml_net *device;
+ struct net_device *dev;
+ struct uml_net_private *lp;
+ char *end;
+ int n;
+
+ n = simple_strtoul(str, &end, 0);
+ if((*end != '\0') || (end == str))
+ return(-1);
+
+ device = find_device(n);
+ if(device == NULL)
+ return(0);
+
+ dev = device->dev;
+ lp = dev->priv;
+ if(lp->fd > 0) return(-1);
+ if(lp->remove != NULL) (*lp->remove)(&lp->user);
+ unregister_netdev(dev);
+ free_netdev(dev);
+
+ list_del(&device->list);
+ kfree(device);
+ return(0);
+}
+
+static struct mc_device net_mc = {
+ .name = "eth",
+ .config = net_config,
+ .get_config = NULL,
+ .remove = net_remove,
+};
+
+static int uml_inetaddr_event(struct notifier_block *this, unsigned long event,
+ void *ptr)
+{
+ struct in_ifaddr *ifa = ptr;
+ u32 addr = ifa->ifa_address;
+ u32 netmask = ifa->ifa_mask;
+ struct net_device *dev = ifa->ifa_dev->dev;
+ struct uml_net_private *lp;
+ void (*proc)(unsigned char *, unsigned char *, void *);
+ unsigned char addr_buf[4], netmask_buf[4];
+
+ if(dev->open != uml_net_open) return(NOTIFY_DONE);
+
+ lp = dev->priv;
+
+ proc = NULL;
+ switch (event){
+ case NETDEV_UP:
+ proc = lp->add_address;
+ break;
+ case NETDEV_DOWN:
+ proc = lp->delete_address;
+ break;
+ }
+ if(proc != NULL){
+ addr_buf[0] = addr & 0xff;
+ addr_buf[1] = (addr >> 8) & 0xff;
+ addr_buf[2] = (addr >> 16) & 0xff;
+ addr_buf[3] = addr >> 24;
+ netmask_buf[0] = netmask & 0xff;
+ netmask_buf[1] = (netmask >> 8) & 0xff;
+ netmask_buf[2] = (netmask >> 16) & 0xff;
+ netmask_buf[3] = netmask >> 24;
+ (*proc)(addr_buf, netmask_buf, &lp->user);
+ }
+ return(NOTIFY_DONE);
+}
+
+struct notifier_block uml_inetaddr_notifier = {
+ .notifier_call = uml_inetaddr_event,
+};
+
+static int uml_net_init(void)
+{
+ struct list_head *ele;
+ struct uml_net_private *lp;
+ struct in_device *ip;
+ struct in_ifaddr *in;
+
+ mconsole_register_dev(&net_mc);
+ register_inetaddr_notifier(&uml_inetaddr_notifier);
+
+ /* Devices may have been opened already, so the uml_inetaddr_notifier
+ * didn't get a chance to run for them. This fakes it so that
+ * addresses which have already been set up get handled properly.
+ */
+ list_for_each(ele, &opened){
+ lp = list_entry(ele, struct uml_net_private, list);
+ ip = lp->dev->ip_ptr;
+ if(ip == NULL) continue;
+ in = ip->ifa_list;
+ while(in != NULL){
+ uml_inetaddr_event(NULL, NETDEV_UP, in);
+ in = in->ifa_next;
+ }
+ }
+
+ return(0);
+}
+
+__initcall(uml_net_init);
+
+static void close_devices(void)
+{
+ struct list_head *ele;
+ struct uml_net_private *lp;
+
+ list_for_each(ele, &opened){
+ lp = list_entry(ele, struct uml_net_private, list);
+ if(lp->close != NULL) (*lp->close)(lp->fd, &lp->user);
+ if(lp->remove != NULL) (*lp->remove)(&lp->user);
+ }
+}
+
+__uml_exitcall(close_devices);
+
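+/* Parse a MAC address given as six hex octets separated by ':' or ','
+ * (e.g. "fe:fd:01:02:03:04") into addr. Returns 1 on success and 0 on
+ * a parse error or if the multicast/broadcast bit of the first octet
+ * is set.
+ */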
+int setup_etheraddr(char *str, unsigned char *addr)
+{
+ char *end;
+ int i;
+
+ if(str == NULL)
+ return(0);
+ for(i=0;i<6;i++){
+ addr[i] = simple_strtoul(str, &end, 16);
+ if((end == str) ||
+ ((*end != ':') && (*end != ',') && (*end != '\0'))){
+ printk(KERN_ERR
+ "setup_etheraddr: failed to parse '%s' "
+ "as an ethernet address\n", str);
+ return(0);
+ }
+ str = end + 1;
+ }
+ if(addr[0] & 1){
+ printk(KERN_ERR
+ "Attempt to assign a broadcast ethernet address to a "
+ "device disallowed\n");
+ return(0);
+ }
+ return(1);
+}
+
+void dev_ip_addr(void *d, char *buf, char *bin_buf)
+{
+ struct net_device *dev = d;
+ struct in_device *ip = dev->ip_ptr;
+ struct in_ifaddr *in;
+ u32 addr;
+
+ if((ip == NULL) || ((in = ip->ifa_list) == NULL)){
+ printk(KERN_WARNING "dev_ip_addr - device not assigned an "
+ "IP address\n");
+ return;
+ }
+ addr = in->ifa_address;
+ sprintf(buf, "%d.%d.%d.%d", addr & 0xff, (addr >> 8) & 0xff,
+ (addr >> 16) & 0xff, addr >> 24);
+ if(bin_buf){
+ bin_buf[0] = addr & 0xff;
+ bin_buf[1] = (addr >> 8) & 0xff;
+ bin_buf[2] = (addr >> 16) & 0xff;
+ bin_buf[3] = addr >> 24;
+ }
+}
+
+void set_ether_mac(void *d, unsigned char *addr)
+{
+ struct net_device *dev = d;
+
+ memcpy(dev->dev_addr, addr, ETH_ALEN);
+}
+
+struct sk_buff *ether_adjust_skb(struct sk_buff *skb, int extra)
+{
+ if((skb != NULL) && (skb_tailroom(skb) < extra)){
+ struct sk_buff *skb2;
+
+ skb2 = skb_copy_expand(skb, 0, extra, GFP_ATOMIC);
+ dev_kfree_skb(skb);
+ skb = skb2;
+ }
+ if(skb != NULL) skb_put(skb, extra);
+ return(skb);
+}
+
+void iter_addresses(void *d, void (*cb)(unsigned char *, unsigned char *,
+ void *),
+ void *arg)
+{
+ struct net_device *dev = d;
+ struct in_device *ip = dev->ip_ptr;
+ struct in_ifaddr *in;
+ unsigned char address[4], netmask[4];
+
+ if(ip == NULL) return;
+ in = ip->ifa_list;
+ while(in != NULL){
+ address[0] = in->ifa_address & 0xff;
+ address[1] = (in->ifa_address >> 8) & 0xff;
+ address[2] = (in->ifa_address >> 16) & 0xff;
+ address[3] = in->ifa_address >> 24;
+ netmask[0] = in->ifa_mask & 0xff;
+ netmask[1] = (in->ifa_mask >> 8) & 0xff;
+ netmask[2] = (in->ifa_mask >> 16) & 0xff;
+ netmask[3] = in->ifa_mask >> 24;
+ (*cb)(address, netmask, arg);
+ in = in->ifa_next;
+ }
+}
+
+int dev_netmask(void *d, void *m)
+{
+ struct net_device *dev = d;
+ struct in_device *ip = dev->ip_ptr;
+ struct in_ifaddr *in;
+ __u32 *mask_out = m;
+
+ if(ip == NULL)
+ return(1);
+
+ in = ip->ifa_list;
+ if(in == NULL)
+ return(1);
+
+ *mask_out = in->ifa_mask;
+ return(0);
+}
+
+void *get_output_buffer(int *len_out)
+{
+ void *ret;
+
+ ret = (void *) __get_free_pages(GFP_KERNEL, 0);
+ if(ret) *len_out = PAGE_SIZE;
+ else *len_out = 0;
+ return(ret);
+}
+
+void free_output_buffer(void *buffer)
+{
+ free_pages((unsigned long) buffer, 0);
+}
+
+int tap_setup_common(char *str, char *type, char **dev_name, char **mac_out,
+ char **gate_addr)
+{
+ char *remain;
+
+ remain = split_if_spec(str, dev_name, mac_out, gate_addr, NULL);
+ if(remain != NULL){
+ printk("tap_setup_common - Extra garbage on specification : "
+ "'%s'\n", remain);
+ return(1);
+ }
+
+ return(0);
+}
+
+unsigned short eth_protocol(struct sk_buff *skb)
+{
+ return(eth_type_trans(skb, skb->dev));
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef AIO_H__
+#define AIO_H__
+
+enum aio_type { AIO_READ, AIO_WRITE, AIO_MMAP };
+
+struct aio_thread_reply {
+ void *data;
+ int err;
+};
+
+struct aio_context {
+ int reply_fd;
+};
+
+#define INIT_AIO_CONTEXT { .reply_fd = -1 }
+
+extern int submit_aio(enum aio_type type, int fd, char *buf, int len,
+ unsigned long long offset, int reply_fd, void *data);
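+
+/* Sketch of a caller (illustrative only; how reply_fd is created and
+ * polled is up to the caller):
+ *
+ *     struct aio_thread_reply reply;
+ *
+ *     submit_aio(AIO_READ, fd, buf, len, offset, reply_fd, cookie);
+ *     read(reply_fd, &reply, sizeof(reply));
+ *     reply.err then holds the I/O result and reply.data the cookie.
+ */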
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __FILEHANDLE_H__
+#define __FILEHANDLE_H__
+
+#include "linux/list.h"
+#include "linux/fs.h"
+#include "os.h"
+
+struct file_handle {
+ struct list_head list;
+ int fd;
+ char *(*get_name)(struct inode *);
+ struct inode *inode;
+ struct openflags flags;
+};
+
+extern struct file_handle bad_filehandle;
+
+extern int open_file(char *name, struct openflags flags, int mode);
+extern void *open_dir(char *file);
+extern int open_filehandle(char *name, struct openflags flags, int mode,
+ struct file_handle *fh);
+extern int read_file(struct file_handle *fh, unsigned long long offset,
+ char *buf, int len);
+extern int write_file(struct file_handle *fh, unsigned long long offset,
+ const char *buf, int len);
+extern int truncate_file(struct file_handle *fh, unsigned long long size);
+extern int close_file(struct file_handle *fh);
+extern void not_reclaimable(struct file_handle *fh);
+extern void is_reclaimable(struct file_handle *fh,
+ char *(name_proc)(struct inode *),
+ struct inode *inode);
+extern int filehandle_fd(struct file_handle *fh);
+extern int make_pipe(struct file_handle *fhs);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/* Automatically generated by arch/um/kernel/skas/util/mk_ptregs */
+
+#ifndef __SKAS_PT_REGS_
+#define __SKAS_PT_REGS_
+
+#define HOST_FRAME_SIZE 17
+#define HOST_FP_SIZE 27
+#define HOST_XFP_SIZE 128
+#define HOST_IP 12
+#define HOST_SP 15
+#define HOST_EFLAGS 14
+#define HOST_EAX 6
+#define HOST_EBX 0
+#define HOST_ECX 1
+#define HOST_EDX 2
+#define HOST_ESI 3
+#define HOST_EDI 4
+#define HOST_EBP 5
+#define HOST_CS 13
+#define HOST_SS 16
+#define HOST_DS 7
+#define HOST_FS 9
+#define HOST_ES 8
+#define HOST_GS 10
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/slab.h"
+#include "linux/list.h"
+#include "linux/spinlock.h"
+#include "linux/fs.h"
+#include "linux/errno.h"
+#include "filehandle.h"
+#include "os.h"
+#include "kern_util.h"
+
+static spinlock_t open_files_lock = SPIN_LOCK_UNLOCKED;
+static struct list_head open_files = LIST_HEAD_INIT(open_files);
+
+#define NUM_RECLAIM 128
+
+static void reclaim_fds(void)
+{
+ struct file_handle *victim;
+ int closed = NUM_RECLAIM;
+
+ spin_lock(&open_files_lock);
+ while(!list_empty(&open_files) && closed--){
+ victim = list_entry(open_files.prev, struct file_handle, list);
+ os_close_file(victim->fd);
+ victim->fd = -1;
+ list_del_init(&victim->list);
+ }
+ spin_unlock(&open_files_lock);
+}
+
+int open_file(char *name, struct openflags flags, int mode)
+{
+ int fd;
+
+ fd = os_open_file(name, flags, mode);
+ if(fd != -EMFILE)
+ return(fd);
+
+ reclaim_fds();
+ fd = os_open_file(name, flags, mode);
+
+ return(fd);
+}
+
+void *open_dir(char *file)
+{
+ void *dir;
+ int err;
+
+ dir = os_open_dir(file, &err);
+ if(dir != NULL)
+ return(dir);
+ if(err != -EMFILE)
+ return(ERR_PTR(err));
+
+ reclaim_fds();
+
+ dir = os_open_dir(file, &err);
+ if(dir == NULL)
+ dir = ERR_PTR(err);
+
+ return(dir);
+}
+
+void not_reclaimable(struct file_handle *fh)
+{
+ char *name;
+
+ if(fh->get_name == NULL)
+ return;
+
+ if(list_empty(&fh->list)){
+ name = (*fh->get_name)(fh->inode);
+ if(name != NULL){
+ fh->fd = open_file(name, fh->flags, 0);
+ kfree(name);
+ }
+ else printk("File descriptor %d has no name\n", fh->fd);
+ }
+ else {
+ spin_lock(&open_files_lock);
+ list_del_init(&fh->list);
+ spin_unlock(&open_files_lock);
+ }
+}
+
+void is_reclaimable(struct file_handle *fh, char *(name_proc)(struct inode *),
+ struct inode *inode)
+{
+ fh->get_name = name_proc;
+ fh->inode = inode;
+
+ spin_lock(&open_files_lock);
+ list_add(&fh->list, &open_files);
+ spin_unlock(&open_files_lock);
+}
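+
+/* Typical lifecycle, sketched (illustrative): a handle opened with
+ * open_filehandle() may be marked reclaimable, so its host fd can be
+ * closed under fd pressure and transparently reopened on next use:
+ *
+ *     open_filehandle(name, flags, 0, &fh);
+ *     is_reclaimable(&fh, name_proc, inode);
+ *     ...
+ *     fd = filehandle_fd(&fh);    (reopens via name_proc if needed)
+ */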
+
+static int active_handle(struct file_handle *fh)
+{
+ int fd;
+ char *name;
+
+ if(!list_empty(&fh->list))
+ list_move(&fh->list, &open_files);
+
+ if(fh->fd != -1)
+ return(0);
+
+ if(fh->inode == NULL)
+ return(-ENOENT);
+
+ name = (*fh->get_name)(fh->inode);
+ if(name == NULL)
+ return(-ENOMEM);
+
+ fd = open_file(name, fh->flags, 0);
+ kfree(name);
+ if(fd < 0)
+ return(fd);
+
+ fh->fd = fd;
+ is_reclaimable(fh, fh->get_name, fh->inode);
+
+ return(0);
+}
+
+int filehandle_fd(struct file_handle *fh)
+{
+ int err;
+
+ err = active_handle(fh);
+ if(err)
+ return(err);
+
+ return(fh->fd);
+}
+
+static void init_fh(struct file_handle *fh, int fd, struct openflags flags)
+{
+ flags.c = 0;
+ *fh = ((struct file_handle) { .list = LIST_HEAD_INIT(fh->list),
+ .fd = fd,
+ .get_name = NULL,
+ .inode = NULL,
+ .flags = flags });
+}
+
+int open_filehandle(char *name, struct openflags flags, int mode,
+ struct file_handle *fh)
+{
+ int fd;
+
+ fd = open_file(name, flags, mode);
+ if(fd < 0)
+ return(fd);
+
+ init_fh(fh, fd, flags);
+ return(0);
+}
+
+int close_file(struct file_handle *fh)
+{
+ spin_lock(&open_files_lock);
+ list_del(&fh->list);
+ spin_unlock(&open_files_lock);
+
+ os_close_file(fh->fd);
+
+ fh->fd = -1;
+ return(0);
+}
+
+int read_file(struct file_handle *fh, unsigned long long offset, char *buf,
+ int len)
+{
+ int err;
+
+ err = active_handle(fh);
+ if(err)
+ return(err);
+
+ err = os_seek_file(fh->fd, offset);
+ if(err)
+ return(err);
+
+ return(os_read_file(fh->fd, buf, len));
+}
+
+int write_file(struct file_handle *fh, unsigned long long offset,
+ const char *buf, int len)
+{
+ int err;
+
+ err = active_handle(fh);
+ if(err)
+ return(err);
+
+ if(offset != -1)
+ err = os_seek_file(fh->fd, offset);
+ if(err)
+ return(err);
+
+ return(os_write_file(fh->fd, buf, len));
+}
+
+int truncate_file(struct file_handle *fh, unsigned long long size)
+{
+ int err;
+
+ err = active_handle(fh);
+ if(err)
+ return(err);
+
+ return(os_truncate_fd(fh->fd, size));
+}
+
+int make_pipe(struct file_handle *fhs)
+{
+ int fds[2], err;
+
+ err = os_pipe(fds, 1, 1);
+ if(err && (err != -EMFILE))
+ return(err);
+
+ if(err){
+ reclaim_fds();
+ err = os_pipe(fds, 1, 1);
+ }
+ if(err)
+ return(err);
+
+ init_fh(&fhs[0], fds[0], OPENFLAGS());
+ init_fh(&fhs[1], fds[1], OPENFLAGS());
+ return(0);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2000 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/config.h"
+
+#ifdef CONFIG_SMP
+
+#include "linux/sched.h"
+#include "linux/module.h"
+#include "linux/threads.h"
+#include "linux/interrupt.h"
+#include "linux/err.h"
+#include "asm/smp.h"
+#include "asm/processor.h"
+#include "asm/spinlock.h"
+#include "asm/hardirq.h"
+#include "user_util.h"
+#include "kern_util.h"
+#include "kern.h"
+#include "irq_user.h"
+#include "os.h"
+
+/* CPU online map, set by smp_boot_cpus */
+unsigned long cpu_online_map = cpumask_of_cpu(0);
+
+EXPORT_SYMBOL(cpu_online_map);
+
+/* Per CPU bogomips and other parameters
+ * The only piece used here is the ipi pipe, which is set before SMP is
+ * started and never changed.
+ */
+struct cpuinfo_um cpu_data[NR_CPUS];
+
+spinlock_t um_bh_lock = SPIN_LOCK_UNLOCKED;
+
+atomic_t global_bh_count;
+
+/* Not used by UML */
+unsigned char global_irq_holder = NO_PROC_ID;
+unsigned volatile long global_irq_lock;
+
+/* Set when the idlers are all forked */
+int smp_threads_ready = 0;
+
+/* A statistic, can be a little off */
+int num_reschedules_sent = 0;
+
+/* Small, random number, never changed */
+unsigned long cache_decay_ticks = 5;
+
+/* Not changed after boot */
+struct task_struct *idle_threads[NR_CPUS];
+
+void smp_send_reschedule(int cpu)
+{
+ write(cpu_data[cpu].ipi_pipe[1], "R", 1);
+ num_reschedules_sent++;
+}
+
+static void show(char * str)
+{
+ int cpu = smp_processor_id();
+
+ printk(KERN_INFO "\n%s, CPU %d:\n", str, cpu);
+}
+
+#define MAXCOUNT 100000000
+
+static inline void wait_on_bh(void)
+{
+ int count = MAXCOUNT;
+ do {
+ if (!--count) {
+ show("wait_on_bh");
+ count = ~0;
+ }
+ /* nothing .. wait for the other bh's to go away */
+ } while (atomic_read(&global_bh_count) != 0);
+}
+
+/*
+ * This is called when we want to synchronize with
+ * bottom half handlers. We need to wait until
+ * no other CPU is executing any bottom half handler.
+ *
+ * Don't wait if we're already running in an interrupt
+ * context or are inside a bh handler.
+ */
+void synchronize_bh(void)
+{
+ if (atomic_read(&global_bh_count) && !in_interrupt())
+ wait_on_bh();
+}
+
+void smp_send_stop(void)
+{
+ int i;
+
+ printk(KERN_INFO "Stopping all CPUs...");
+ for(i = 0; i < num_online_cpus(); i++){
+ if(i == current->thread_info->cpu)
+ continue;
+ write(cpu_data[i].ipi_pipe[1], "S", 1);
+ }
+ printk("done\n");
+}
+
+static cpumask_t smp_commenced_mask;
+static cpumask_t smp_callin_map = CPU_MASK_NONE;
+
+static int idle_proc(void *cpup)
+{
+ int cpu = (int) cpup, err;
+
+ err = os_pipe(cpu_data[cpu].ipi_pipe, 1, 1);
+ if(err)
+ panic("CPU#%d failed to create IPI pipe, errno = %d", cpu,
+ -err);
+
+ activate_ipi(cpu_data[cpu].ipi_pipe[0],
+ current->thread.mode.tt.extern_pid);
+
+ wmb();
+ if (cpu_test_and_set(cpu, &smp_callin_map)) {
+ printk("huh, CPU#%d already present??\n", cpu);
+ BUG();
+ }
+
+ while (!cpu_isset(cpu, &smp_commenced_mask))
+ cpu_relax();
+
+ cpu_set(cpu, cpu_online_map);
+ default_idle();
+ return(0);
+}
+
+static struct task_struct *idle_thread(int cpu)
+{
+ struct task_struct *new_task;
+ unsigned char c;
+
+ current->thread.request.u.thread.proc = idle_proc;
+ current->thread.request.u.thread.arg = (void *) cpu;
+ new_task = do_fork(CLONE_VM | CLONE_IDLETASK, 0, NULL, 0, NULL, NULL);
+ if(IS_ERR(new_task)) panic("do_fork failed in idle_thread");
+
+ cpu_tasks[cpu] = ((struct cpu_task)
+ { .pid = new_task->thread.mode.tt.extern_pid,
+ .task = new_task } );
+ idle_threads[cpu] = new_task;
+ CHOOSE_MODE(write(new_task->thread.mode.tt.switch_pipe[1], &c,
+ sizeof(c)),
+ ({ panic("skas mode doesn't support SMP"); }));
+ return(new_task);
+}
+
+void smp_prepare_cpus(unsigned int maxcpus)
+{
+ struct task_struct *idle;
+ unsigned long waittime;
+ int err, cpu;
+
+ cpu_set(0, cpu_online_map);
+ cpu_set(0, smp_callin_map);
+
+ err = os_pipe(cpu_data[0].ipi_pipe, 1, 1);
+ if(err) panic("CPU#0 failed to create IPI pipe, errno = %d", -err);
+
+ activate_ipi(cpu_data[0].ipi_pipe[0],
+ current->thread.mode.tt.extern_pid);
+
+ for(cpu = 1; cpu < ncpus; cpu++){
+ printk("Booting processor %d...\n", cpu);
+
+ idle = idle_thread(cpu);
+
+ init_idle(idle, cpu);
+ unhash_process(idle);
+
+ waittime = 200000000;
+ while (waittime-- && !cpu_isset(cpu, smp_callin_map))
+ cpu_relax();
+
+ if (cpu_isset(cpu, smp_callin_map))
+ printk("done\n");
+ else printk("failed\n");
+ }
+}
+
+void smp_prepare_boot_cpu(void)
+{
+ cpu_set(smp_processor_id(), cpu_online_map);
+}
+
+int __cpu_up(unsigned int cpu)
+{
+ cpu_set(cpu, smp_commenced_mask);
+ while (!cpu_isset(cpu, cpu_online_map))
+ mb();
+ return(0);
+}
+
+int setup_profiling_timer(unsigned int multiplier)
+{
+ printk(KERN_INFO "setup_profiling_timer\n");
+ return(0);
+}
+
+void smp_call_function_slave(int cpu);
+
+void IPI_handler(int cpu)
+{
+ unsigned char c;
+ int fd;
+
+ fd = cpu_data[cpu].ipi_pipe[0];
+ while (read(fd, &c, 1) == 1) {
+ switch (c) {
+ case 'C':
+ smp_call_function_slave(cpu);
+ break;
+
+ case 'R':
+ set_tsk_need_resched(current);
+ break;
+
+ case 'S':
+ printk("CPU#%d stopping\n", cpu);
+ while(1)
+ pause();
+ break;
+
+ default:
+ printk("CPU#%d received unknown IPI [%c]!\n", cpu, c);
+ break;
+ }
+ }
+}
+
+int hard_smp_processor_id(void)
+{
+ return(pid_to_processor_id(os_getpid()));
+}
+
+static spinlock_t call_lock = SPIN_LOCK_UNLOCKED;
+static atomic_t scf_started;
+static atomic_t scf_finished;
+static void (*func)(void *info);
+static void *info;
+
+void smp_call_function_slave(int cpu)
+{
+ atomic_inc(&scf_started);
+ (*func)(info);
+ atomic_inc(&scf_finished);
+}
+
+int smp_call_function(void (*_func)(void *info), void *_info, int nonatomic,
+ int wait)
+{
+ int cpus = num_online_cpus() - 1;
+ int i;
+
+ if (!cpus)
+ return 0;
+
+ spin_lock_bh(&call_lock);
+ atomic_set(&scf_started, 0);
+ atomic_set(&scf_finished, 0);
+ func = _func;
+ info = _info;
+
+ for (i = 0; i < NR_CPUS; i++)
+ if((i != current->thread_info->cpu) &&
+ cpu_isset(i, cpu_online_map))
+ write(cpu_data[i].ipi_pipe[1], "C", 1);
+
+ while (atomic_read(&scf_started) != cpus)
+ barrier();
+
+ if (wait)
+ while (atomic_read(&scf_finished) != cpus)
+ barrier();
+
+ spin_unlock_bh(&call_lock);
+ return 0;
+}
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
+++ /dev/null
-#include <stdio.h>
-#include <unistd.h>
-#include <fcntl.h>
-#include <dirent.h>
-#include <errno.h>
-#include <utime.h>
-#include <string.h>
-#include <sys/stat.h>
-#include <sys/vfs.h>
-#include <sys/ioctl.h>
-#include "user_util.h"
-#include "mem_user.h"
-#include "uml-config.h"
-
-/* Had to steal this from linux/module.h because that file can't be included
- * since this includes various user-level headers.
- */
-
-struct module_symbol
-{
- unsigned long value;
- const char *name;
-};
-
-/* Indirect stringification. */
-
-#define __MODULE_STRING_1(x) #x
-#define __MODULE_STRING(x) __MODULE_STRING_1(x)
-
-#if !defined(__AUTOCONF_INCLUDED__)
-
-#define __EXPORT_SYMBOL(sym,str) error config_must_be_included_before_module
-#define EXPORT_SYMBOL(var) error config_must_be_included_before_module
-#define EXPORT_SYMBOL_NOVERS(var) error config_must_be_included_before_module
-
-#elif !defined(UML_CONFIG_MODULES)
-
-#define __EXPORT_SYMBOL(sym,str)
-#define EXPORT_SYMBOL(var)
-#define EXPORT_SYMBOL_NOVERS(var)
-
-#else
-
-#define __EXPORT_SYMBOL(sym, str) \
-const char __kstrtab_##sym[] \
-__attribute__((section(".kstrtab"))) = str; \
-const struct module_symbol __ksymtab_##sym \
-__attribute__((section("__ksymtab"))) = \
-{ (unsigned long)&sym, __kstrtab_##sym }
-
-#if defined(__MODVERSIONS__) || !defined(UML_CONFIG_MODVERSIONS)
-#define EXPORT_SYMBOL(var) __EXPORT_SYMBOL(var, __MODULE_STRING(var))
-#else
-#define EXPORT_SYMBOL(var) __EXPORT_SYMBOL(var, __MODULE_STRING(__VERSIONED_SYMBOL(var)))
-#endif
-
-#define EXPORT_SYMBOL_NOVERS(var) __EXPORT_SYMBOL(var, __MODULE_STRING(var))
-
-#endif
-
-EXPORT_SYMBOL(__errno_location);
-
-EXPORT_SYMBOL(access);
-EXPORT_SYMBOL(open);
-EXPORT_SYMBOL(open64);
-EXPORT_SYMBOL(close);
-EXPORT_SYMBOL(read);
-EXPORT_SYMBOL(write);
-EXPORT_SYMBOL(dup2);
-EXPORT_SYMBOL(__xstat);
-EXPORT_SYMBOL(__lxstat);
-EXPORT_SYMBOL(__lxstat64);
-EXPORT_SYMBOL(lseek);
-EXPORT_SYMBOL(lseek64);
-EXPORT_SYMBOL(chown);
-EXPORT_SYMBOL(truncate);
-EXPORT_SYMBOL(utime);
-EXPORT_SYMBOL(chmod);
-EXPORT_SYMBOL(rename);
-EXPORT_SYMBOL(__xmknod);
-
-EXPORT_SYMBOL(symlink);
-EXPORT_SYMBOL(link);
-EXPORT_SYMBOL(unlink);
-EXPORT_SYMBOL(readlink);
-
-EXPORT_SYMBOL(mkdir);
-EXPORT_SYMBOL(rmdir);
-EXPORT_SYMBOL(opendir);
-EXPORT_SYMBOL(readdir);
-EXPORT_SYMBOL(closedir);
-EXPORT_SYMBOL(seekdir);
-EXPORT_SYMBOL(telldir);
-
-EXPORT_SYMBOL(ioctl);
-
-extern ssize_t pread64 (int __fd, void *__buf, size_t __nbytes,
- __off64_t __offset);
-extern ssize_t pwrite64 (int __fd, __const void *__buf, size_t __n,
- __off64_t __offset);
-EXPORT_SYMBOL(pread64);
-EXPORT_SYMBOL(pwrite64);
-
-EXPORT_SYMBOL(statfs);
-EXPORT_SYMBOL(statfs64);
-
-EXPORT_SYMBOL(memcpy);
-EXPORT_SYMBOL(getuid);
-
-EXPORT_SYMBOL(memset);
-EXPORT_SYMBOL(strstr);
-
-EXPORT_SYMBOL(find_iomem);
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include <signal.h>
+#include <errno.h>
+#include <sched.h>
+#include <sys/syscall.h>
+#include "os.h"
+#include "helper.h"
+#include "aio.h"
+#include "init.h"
+#include "user.h"
+#include "mode.h"
+
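+/* One request to the 2.4-style AIO emulation thread: submit_aio_24() writes
+ * one of these down the request pipe and not_aio_thread() performs the I/O
+ * synchronously, sending a struct aio_thread_reply to reply_fd when done.
+ */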
+struct aio_thread_req {
+ enum aio_type type;
+ int io_fd;
+ unsigned long long offset;
+ char *buf;
+ int len;
+ int reply_fd;
+ void *data;
+};
+
+static int aio_req_fd_r = -1;
+static int aio_req_fd_w = -1;
+
+#if defined(HAVE_AIO_ABI)
+#include <linux/aio_abi.h>
+
+/* If we have the headers, we are going to build with AIO enabled.
+ * If we don't have aio in libc, we define the necessary stubs here.
+ */
+
+#if !defined(HAVE_AIO_LIBC)
+
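+/* These are the i386 syscall numbers; a different host architecture would
+ * need its own values here.
+ */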
+#define __NR_io_setup 245
+#define __NR_io_getevents 247
+#define __NR_io_submit 248
+
+static long io_setup(int n, aio_context_t *ctxp)
+{
+ return(syscall(__NR_io_setup, n, ctxp));
+}
+
+static long io_submit(aio_context_t ctx, long nr, struct iocb **iocbpp)
+{
+ return(syscall(__NR_io_submit, ctx, nr, iocbpp));
+}
+
+static long io_getevents(aio_context_t ctx_id, long min_nr, long nr,
+ struct io_event *events, struct timespec *timeout)
+{
+ return(syscall(__NR_io_getevents, ctx_id, min_nr, nr, events, timeout));
+}
+
+#endif
+
+/* The AIO_MMAP cases force the mmapped page into memory here
+ * rather than in whatever place first touches the data. I used
+ * to do this by touching the page, but that's delicate because
+ * gcc is prone to optimizing that away. So, what's done here
+ * is we read from the descriptor from which the page was
+ * mapped. The caller is required to pass an offset which is
+ * inside the page that was mapped. Thus, when the read
+ * returns, we know that the page is in the page cache, and
+ * that it now backs the mmapped area.
+ */
+
+static int do_aio(aio_context_t ctx, enum aio_type type, int fd, char *buf,
+ int len, unsigned long long offset, void *data)
+{
+ struct iocb iocb, *iocbp = &iocb;
+ char c;
+ int err;
+
+ iocb = ((struct iocb) { .aio_data = (unsigned long) data,
+ .aio_reqprio = 0,
+ .aio_fildes = fd,
+ .aio_buf = (unsigned long) buf,
+ .aio_nbytes = len,
+ .aio_offset = offset,
+ .aio_reserved1 = 0,
+ .aio_reserved2 = 0,
+ .aio_reserved3 = 0 });
+
+ switch(type){
+ case AIO_READ:
+ iocb.aio_lio_opcode = IOCB_CMD_PREAD;
+ err = io_submit(ctx, 1, &iocbp);
+ break;
+ case AIO_WRITE:
+ iocb.aio_lio_opcode = IOCB_CMD_PWRITE;
+ err = io_submit(ctx, 1, &iocbp);
+ break;
+ case AIO_MMAP:
+ iocb.aio_lio_opcode = IOCB_CMD_PREAD;
+ iocb.aio_buf = (unsigned long) &c;
+ iocb.aio_nbytes = sizeof(c);
+ err = io_submit(ctx, 1, &iocbp);
+ break;
+ default:
+ printk("Bogus op in do_aio - %d\n", type);
+ err = -EINVAL;
+ break;
+ }
+ if(err > 0)
+ err = 0;
+
+ return(err);
+}
+
+static aio_context_t ctx = 0;
+
+static int aio_thread(void *arg)
+{
+ struct aio_thread_reply reply;
+ struct io_event event;
+ int err, n, reply_fd;
+
+ signal(SIGWINCH, SIG_IGN);
+
+ while(1){
+ n = io_getevents(ctx, 1, 1, &event, NULL);
+ if(n < 0){
+ if(errno == EINTR)
+ continue;
+ printk("aio_thread - io_getevents failed, "
+ "errno = %d\n", errno);
+ }
+ else {
+ reply = ((struct aio_thread_reply)
+ { .data = (void *) event.data,
+ .err = event.res });
+ reply_fd =
+ ((struct aio_context *) event.data)->reply_fd;
+ err = os_write_file(reply_fd, &reply, sizeof(reply));
+ if(err != sizeof(reply))
+ printk("not_aio_thread - write failed, "
+ "fd = %d, err = %d\n",
+ aio_req_fd_r, -err);
+ }
+ }
+ return(0);
+}
+
+#endif
+
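+/* Emulate one AIO request synchronously: seek to the requested offset, then
+ * do an ordinary read or write.  For AIO_MMAP, reading a single byte is
+ * enough to pull the mapped page into the host's page cache.
+ */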
+static int do_not_aio(struct aio_thread_req *req)
+{
+ char c;
+ int err;
+
+ switch(req->type){
+ case AIO_READ:
+ err = os_seek_file(req->io_fd, req->offset);
+ if(err)
+ goto out;
+
+ err = os_read_file(req->io_fd, req->buf, req->len);
+ break;
+ case AIO_WRITE:
+ err = os_seek_file(req->io_fd, req->offset);
+ if(err)
+ goto out;
+
+ err = os_write_file(req->io_fd, req->buf, req->len);
+ break;
+ case AIO_MMAP:
+ err = os_seek_file(req->io_fd, req->offset);
+ if(err)
+ goto out;
+
+ err = os_read_file(req->io_fd, &c, sizeof(c));
+ break;
+ default:
+ printk("do_not_aio - bad request type : %d\n", req->type);
+ err = -EINVAL;
+ break;
+ }
+
+ out:
+ return(err);
+}
+
+static int not_aio_thread(void *arg)
+{
+ struct aio_thread_req req;
+ struct aio_thread_reply reply;
+ int err;
+
+ signal(SIGWINCH, SIG_IGN);
+ while(1){
+ err = os_read_file(aio_req_fd_r, &req, sizeof(req));
+ if(err != sizeof(req)){
+ if(err < 0)
+ printk("not_aio_thread - read failed, fd = %d, "
+ "err = %d\n", aio_req_fd_r, -err);
+ else {
+ printk("not_aio_thread - short read, fd = %d, "
+ "length = %d\n", aio_req_fd_r, err);
+ }
+ continue;
+ }
+ err = do_not_aio(&req);
+ reply = ((struct aio_thread_reply) { .data = req.data,
+ .err = err });
+ err = os_write_file(req.reply_fd, &reply, sizeof(reply));
+ if(err != sizeof(reply))
+ printk("not_aio_thread - write failed, fd = %d, "
+ "err = %d\n", aio_req_fd_r, -err);
+ }
+}
+
+static int aio_pid = -1;
+
+static int init_aio_24(void)
+{
+ unsigned long stack;
+ int fds[2], err;
+
+ err = os_pipe(fds, 1, 1);
+ if(err)
+ goto out;
+
+ aio_req_fd_w = fds[0];
+ aio_req_fd_r = fds[1];
+ err = run_helper_thread(not_aio_thread, NULL,
+ CLONE_FILES | CLONE_VM | SIGCHLD, &stack, 0);
+ if(err < 0)
+ goto out_close_pipe;
+
+ aio_pid = err;
+ err = 0;
+ goto out;
+
+ out_close_pipe:
+ os_close_file(fds[0]);
+ os_close_file(fds[1]);
+ aio_req_fd_w = -1;
+ aio_req_fd_r = -1;
+ out:
+ return(err);
+}
+
+#ifdef HAVE_AIO_ABI
+#define DEFAULT_24_AIO 0
+static int init_aio_26(void)
+{
+ unsigned long stack;
+ int err;
+
+ if(io_setup(256, &ctx)){
+ printk("aio_thread failed to initialize context, err = %d\n",
+ errno);
+ return(-errno);
+ }
+
+ err = run_helper_thread(aio_thread, NULL,
+ CLONE_FILES | CLONE_VM | SIGCHLD, &stack, 0);
+ if(err < 0)
+ return(-errno);
+
+ aio_pid = err;
+ return(0);
+}
+
+int submit_aio_26(enum aio_type type, int io_fd, char *buf, int len,
+ unsigned long long offset, int reply_fd, void *data)
+{
+ struct aio_thread_reply reply;
+ int err;
+
+ ((struct aio_context *) data)->reply_fd = reply_fd;
+
+ err = do_aio(ctx, type, io_fd, buf, len, offset, data);
+ if(err){
+ reply = ((struct aio_thread_reply) { .data = data,
+ .err = err });
+ err = os_write_file(reply_fd, &reply, sizeof(reply));
+ if(err != sizeof(reply))
+ printk("submit_aio_26 - write failed, "
+ "fd = %d, err = %d\n", reply_fd, -err);
+ else err = 0;
+ }
+
+ return(err);
+}
+
+#else
+#define DEFAULT_24_AIO 1
+static int init_aio_26(void)
+{
+ return(-ENOSYS);
+}
+
+int submit_aio_26(enum aio_type type, int io_fd, char *buf, int len,
+ unsigned long long offset, int reply_fd, void *data)
+{
+ return(-ENOSYS);
+}
+#endif
+
+static int aio_24 = DEFAULT_24_AIO;
+
+static int __init set_aio_24(char *name, int *add)
+{
+ aio_24 = 1;
+ return(0);
+}
+
+__uml_setup("aio=2.4", set_aio_24,
+"aio=2.4\n"
+" This is used to force UML to use 2.4-style AIO even when 2.6 AIO is\n"
+" available. 2.4 AIO is a single thread that handles one request at a\n"
+" time, synchronously. 2.6 AIO is a thread which uses 2.5 AIO interface\n"
+" to handle an arbitrary number of pending requests. 2.6 AIO is not\n"
+" available in tt mode, on 2.4 hosts, or when UML is built with\n"
+" /usr/include/linux/aio_abi no available.\n\n"
+);
+
+static int init_aio(void)
+{
+ int err;
+
+ CHOOSE_MODE(({
+ if(!aio_24){
+ printk("Disabling 2.6 AIO in tt mode\n");
+ aio_24 = 1;
+ } }), (void) 0);
+
+ if(!aio_24){
+ err = init_aio_26();
+ if(err == -ENOSYS){
+ printk("2.6 AIO not supported on the host - "
+ "reverting to 2.4 AIO\n");
+ aio_24 = 1;
+ }
+ else return(err);
+ }
+
+ if(aio_24)
+ return(init_aio_24());
+
+ return(0);
+}
+
+__initcall(init_aio);
+
+static void exit_aio(void)
+{
+ if(aio_pid != -1)
+ os_kill_process(aio_pid, 1);
+}
+
+__uml_exitcall(exit_aio);
+
+int submit_aio_24(enum aio_type type, int io_fd, char *buf, int len,
+ unsigned long long offset, int reply_fd, void *data)
+{
+ struct aio_thread_req req = { .type = type,
+ .io_fd = io_fd,
+ .offset = offset,
+ .buf = buf,
+ .len = len,
+ .reply_fd = reply_fd,
+ .data = data,
+ };
+ int err;
+
+ err = os_write_file(aio_req_fd_w, &req, sizeof(req));
+ if(err == sizeof(req))
+ err = 0;
+
+ return(err);
+}
+
+int submit_aio(enum aio_type type, int io_fd, char *buf, int len,
+ unsigned long long offset, int reply_fd, void *data)
+{
+ if(aio_24)
+ return(submit_aio_24(type, io_fd, buf, len, offset, reply_fd,
+ data));
+ else {
+ return(submit_aio_26(type, io_fd, buf, len, offset, reply_fd,
+ data));
+ }
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#include <stdlib.h>
+#include <sys/time.h>
+
+unsigned long long os_usecs(void)
+{
+ struct timeval tv;
+
+ gettimeofday(&tv, NULL);
+ return((unsigned long long) tv.tv_sec * 1000000 + tv.tv_usec);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#include <linux/bitops.h>
+#include <linux/module.h>
+
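+/* Both helpers assume 32-bit longs, as on i386: a bit offset is split into
+ * a word index (offset >> 5) and a bit index (offset & 31).
+ */
+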
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+int find_next_bit(const unsigned long *addr, int size, int offset)
+{
+ const unsigned long *p = addr + (offset >> 5);
+ int set = 0, bit = offset & 31, res;
+
+ if (bit) {
+ /*
+ * Look for nonzero in the first 32 bits:
+ */
+ __asm__("bsfl %1,%0\n\t"
+ "jne 1f\n\t"
+ "movl $32, %0\n"
+ "1:"
+ : "=r" (set)
+ : "r" (*p >> bit));
+ if (set < (32 - bit))
+ return set + offset;
+ set = 32 - bit;
+ p++;
+ }
+ /*
+ * No set bit yet, search remaining full words for a bit
+ */
+ res = find_first_bit (p, size - 32 * (p - addr));
+ return (offset + set + res);
+}
+EXPORT_SYMBOL(find_next_bit);
+
+/**
+ * find_next_zero_bit - find the next zero bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+int find_next_zero_bit(const unsigned long *addr, int size, int offset)
+{
+ unsigned long * p = ((unsigned long *) addr) + (offset >> 5);
+ int set = 0, bit = offset & 31, res;
+
+ if (bit) {
+ /*
+ * Look for zero in the first 32 bits.
+ */
+ __asm__("bsfl %1,%0\n\t"
+ "jne 1f\n\t"
+ "movl $32, %0\n"
+ "1:"
+ : "=r" (set)
+ : "r" (~(*p >> bit)));
+ if (set < (32 - bit))
+ return set + offset;
+ set = 32 - bit;
+ p++;
+ }
+ /*
+ * No zero bit yet, search the remaining full words for a zero
+ */
+ res = find_first_zero_bit (p, size - 32 * (p - (unsigned long *) addr));
+ return (offset + set + res);
+}
+EXPORT_SYMBOL(find_next_zero_bit);
+++ /dev/null
-/*
- * linux/arch/i386/mm/extable.c
- */
-
-#include <linux/config.h>
-#include <linux/module.h>
-#include <linux/spinlock.h>
-#include <asm/uaccess.h>
-
-/* Simple binary search */
-const struct exception_table_entry *
-search_extable(const struct exception_table_entry *first,
- const struct exception_table_entry *last,
- unsigned long value)
-{
- while (first <= last) {
- const struct exception_table_entry *mid;
- long diff;
-
- mid = (last - first) / 2 + first;
- diff = mid->insn - value;
- if (diff == 0)
- return mid;
- else if (diff < 0)
- first = mid+1;
- else
- last = mid-1;
- }
- return NULL;
-}
--- /dev/null
+/*
+ * i386 semaphore implementation.
+ *
+ * (C) Copyright 1999 Linus Torvalds
+ *
+ * Portions Copyright 1999 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * rw semaphores implemented November 1999 by Benjamin LaHaise <bcrl@redhat.com>
+ */
+#include <linux/config.h>
+#include <linux/sched.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <asm/semaphore.h>
+
+/*
+ * Semaphores are implemented using a two-way counter:
+ * The "count" variable is decremented for each process
+ * that tries to acquire the semaphore, while the "sleeping"
+ * variable is a count of such acquires.
+ *
+ * Notably, the inline "up()" and "down()" functions can
+ * efficiently test if they need to do any extra work (up
+ * needs to do something only if count was negative before
+ * the increment operation).
+ *
+ * "sleeping" and the contention routine ordering is protected
+ * by the spinlock in the semaphore's waitqueue head.
+ *
+ * Note that these functions are only called when there is
+ * contention on the lock, and as such all this is the
+ * "non-critical" part of the whole semaphore business. The
+ * critical part is the inline stuff in <asm/semaphore.h>
+ * where we want to avoid any extra jumps and calls.
+ */
+
+/*
+ * Logic:
+ * - only on a boundary condition do we need to care. When we go
+ * from a negative count to a non-negative, we wake people up.
+ * - when we go from a non-negative count to a negative, we
+ * (a) synchronize with the "sleeper" count and (b) make sure
+ * that we're on the wakeup list before we synchronize so that
+ * we cannot lose wakeup events.
+ */
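+
+/* A worked example (illustration only): with count initially 1, down() by
+ * task A leaves count == 0 and A owns the semaphore.  down() by task B
+ * drives count to -1, so B enters __down(), where sleepers becomes 1 and
+ * atomic_add_negative(0, &count) leaves count negative: B sleeps.  up()
+ * by A raises count to 0 and __up() wakes B, whose retry again adds
+ * sleepers - 1 == 0; count is now non-negative, so B takes the semaphore
+ * with count == 0.
+ */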
+
+asmlinkage void __up(struct semaphore *sem)
+{
+ wake_up(&sem->wait);
+}
+
+asmlinkage void __sched __down(struct semaphore * sem)
+{
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+ unsigned long flags;
+
+ tsk->state = TASK_UNINTERRUPTIBLE;
+ spin_lock_irqsave(&sem->wait.lock, flags);
+ add_wait_queue_exclusive_locked(&sem->wait, &wait);
+
+ sem->sleepers++;
+ for (;;) {
+ int sleepers = sem->sleepers;
+
+ /*
+ * Add "everybody else" into it. They aren't
+ * playing, because we own the spinlock in
+ * the wait_queue_head.
+ */
+ if (!atomic_add_negative(sleepers - 1, &sem->count)) {
+ sem->sleepers = 0;
+ break;
+ }
+ sem->sleepers = 1; /* us - see -1 above */
+ spin_unlock_irqrestore(&sem->wait.lock, flags);
+
+ schedule();
+
+ spin_lock_irqsave(&sem->wait.lock, flags);
+ tsk->state = TASK_UNINTERRUPTIBLE;
+ }
+ remove_wait_queue_locked(&sem->wait, &wait);
+ wake_up_locked(&sem->wait);
+ spin_unlock_irqrestore(&sem->wait.lock, flags);
+ tsk->state = TASK_RUNNING;
+}
+
+asmlinkage int __sched __down_interruptible(struct semaphore * sem)
+{
+ int retval = 0;
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+ unsigned long flags;
+
+ tsk->state = TASK_INTERRUPTIBLE;
+ spin_lock_irqsave(&sem->wait.lock, flags);
+ add_wait_queue_exclusive_locked(&sem->wait, &wait);
+
+ sem->sleepers++;
+ for (;;) {
+ int sleepers = sem->sleepers;
+
+ /*
+ * With signals pending, this turns into
+ * the trylock failure case - we won't be
+ * sleeping, and we can't get the lock as
+ * it has contention. Just correct the count
+ * and exit.
+ */
+ if (signal_pending(current)) {
+ retval = -EINTR;
+ sem->sleepers = 0;
+ atomic_add(sleepers, &sem->count);
+ break;
+ }
+
+ /*
+ * Add "everybody else" into it. They aren't
+ * playing, because we own the spinlock in
+ * wait_queue_head. The "-1" is because we're
+ * still hoping to get the semaphore.
+ */
+ if (!atomic_add_negative(sleepers - 1, &sem->count)) {
+ sem->sleepers = 0;
+ break;
+ }
+ sem->sleepers = 1; /* us - see -1 above */
+ spin_unlock_irqrestore(&sem->wait.lock, flags);
+
+ schedule();
+
+ spin_lock_irqsave(&sem->wait.lock, flags);
+ tsk->state = TASK_INTERRUPTIBLE;
+ }
+ remove_wait_queue_locked(&sem->wait, &wait);
+ wake_up_locked(&sem->wait);
+ spin_unlock_irqrestore(&sem->wait.lock, flags);
+
+ tsk->state = TASK_RUNNING;
+ return retval;
+}
+
+/*
+ * Trylock failed - make sure we correct for
+ * having decremented the count.
+ *
+ * We could have done the trylock with a
+ * single "cmpxchg" without failure cases,
+ * but then it wouldn't work on a 386.
+ */
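+/* The cmpxchg version alluded to above would, roughly (hypothetical
+ * sketch, i486+ only): load count; if it is positive, cmpxchg it to
+ * count - 1 and succeed if the swap won, retrying on a race; if count
+ * is not positive, fail at once.  No failure fixup is needed because
+ * count is never decremented speculatively.
+ */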
+asmlinkage int __down_trylock(struct semaphore * sem)
+{
+ int sleepers;
+ unsigned long flags;
+
+ spin_lock_irqsave(&sem->wait.lock, flags);
+ sleepers = sem->sleepers + 1;
+ sem->sleepers = 0;
+
+ /*
+ * Add "everybody else" and us into it. They aren't
+ * playing, because we own the spinlock in the
+ * wait_queue_head.
+ */
+ if (!atomic_add_negative(sleepers, &sem->count)) {
+ wake_up_locked(&sem->wait);
+ }
+
+ spin_unlock_irqrestore(&sem->wait.lock, flags);
+ return 1;
+}
+
+
+/*
+ * The semaphore operations have a special calling sequence that
+ * allow us to do a simpler in-line version of them. These routines
+ * need to convert that sequence back into the C sequence when
+ * there is contention on the semaphore.
+ *
+ * %ecx contains the semaphore pointer on entry. Save the C-clobbered
+ * registers (%eax, %edx and %ecx) except %eax when used as a return
+ * value.
+ */
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __down_failed\n"
+"__down_failed:\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "pushl %ebp\n\t"
+ "movl %esp,%ebp\n\t"
+#endif
+ "pushl %eax\n\t"
+ "pushl %edx\n\t"
+ "pushl %ecx\n\t"
+ "call __down\n\t"
+ "popl %ecx\n\t"
+ "popl %edx\n\t"
+ "popl %eax\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "movl %ebp,%esp\n\t"
+ "popl %ebp\n\t"
+#endif
+ "ret"
+);
+
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __down_failed_interruptible\n"
+"__down_failed_interruptible:\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "pushl %ebp\n\t"
+ "movl %esp,%ebp\n\t"
+#endif
+ "pushl %edx\n\t"
+ "pushl %ecx\n\t"
+ "call __down_interruptible\n\t"
+ "popl %ecx\n\t"
+ "popl %edx\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "movl %ebp,%esp\n\t"
+ "popl %ebp\n\t"
+#endif
+ "ret"
+);
+
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __down_failed_trylock\n"
+"__down_failed_trylock:\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "pushl %ebp\n\t"
+ "movl %esp,%ebp\n\t"
+#endif
+ "pushl %edx\n\t"
+ "pushl %ecx\n\t"
+ "call __down_trylock\n\t"
+ "popl %ecx\n\t"
+ "popl %edx\n\t"
+#if defined(CONFIG_FRAME_POINTER)
+ "movl %ebp,%esp\n\t"
+ "popl %ebp\n\t"
+#endif
+ "ret"
+);
+
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __up_wakeup\n"
+"__up_wakeup:\n\t"
+ "pushl %eax\n\t"
+ "pushl %edx\n\t"
+ "pushl %ecx\n\t"
+ "call __up\n\t"
+ "popl %ecx\n\t"
+ "popl %edx\n\t"
+ "popl %eax\n\t"
+ "ret"
+);
+
+/*
+ * rw spinlock fallbacks
+ */
+#if defined(CONFIG_SMP)
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __write_lock_failed\n"
+"__write_lock_failed:\n\t"
+ LOCK "addl $" RW_LOCK_BIAS_STR ",(%eax)\n"
+"1: rep; nop\n\t"
+ "cmpl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
+ "jne 1b\n\t"
+ LOCK "subl $" RW_LOCK_BIAS_STR ",(%eax)\n\t"
+ "jnz __write_lock_failed\n\t"
+ "ret"
+);
+
+asm(
+".section .sched.text\n"
+".align 4\n"
+".globl __read_lock_failed\n"
+"__read_lock_failed:\n\t"
+ LOCK "incl (%eax)\n"
+"1: rep; nop\n\t"
+ "cmpl $1,(%eax)\n\t"
+ "js 1b\n\t"
+ LOCK "decl (%eax)\n\t"
+ "js __read_lock_failed\n\t"
+ "ret"
+);
+#endif
--- /dev/null
+/*
+ * sys-i386/time.c
+ * Created 25.9.2002 Sapan Bhatia
+ *
+ */
+
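+/* rdtsc returns the CPU's 64-bit timestamp counter in the edx:eax pair. */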
+unsigned long long time_stamp(void)
+{
+ unsigned long low, high;
+
+ asm("rdtsc" : "=a" (low), "=d" (high));
+ return((((unsigned long long) high) << 32) + low);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#include <linux/init.h>
+#include <linux/sched.h>
+
+/* Don't do any NUMA setup on Opteron right now. They seem to be
+ better off with flat scheduling. This is just for SMT. */
+
+#ifdef CONFIG_SCHED_SMT
+
+static struct sched_group sched_group_cpus[NR_CPUS];
+static struct sched_group sched_group_phys[NR_CPUS];
+static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
+static DEFINE_PER_CPU(struct sched_domain, phys_domains);
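+
+/* Two-level hierarchy: each CPU gets a sibling ("cpu") domain spanning its
+ * hyperthread siblings, with a parent physical domain spanning
+ * cpu_possible_map; the group lists below mirror that nesting.
+ */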
+__init void arch_init_sched_domains(void)
+{
+ int i;
+ struct sched_group *first = NULL, *last = NULL;
+
+ /* Set up domains */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ struct sched_domain *phys_domain = &per_cpu(phys_domains, i);
+
+ *cpu_domain = SD_SIBLING_INIT;
+ /* Disable SMT NICE for CMP */
+ /* RED-PEN use a generic flag */
+ if (cpu_data[i].x86_vendor == X86_VENDOR_AMD)
+ cpu_domain->flags &= ~SD_SHARE_CPUPOWER;
+ cpu_domain->span = cpu_sibling_map[i];
+ cpu_domain->parent = phys_domain;
+ cpu_domain->groups = &sched_group_cpus[i];
+
+ *phys_domain = SD_CPU_INIT;
+ phys_domain->span = cpu_possible_map;
+ phys_domain->groups = &sched_group_phys[first_cpu(cpu_domain->span)];
+ }
+
+ /* Set up CPU (sibling) groups */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ int j;
+ first = last = NULL;
+
+ if (i != first_cpu(cpu_domain->span))
+ continue;
+
+ for_each_cpu_mask(j, cpu_domain->span) {
+ struct sched_group *cpu = &sched_group_cpus[j];
+
+ cpus_clear(cpu->cpumask);
+ cpu_set(j, cpu->cpumask);
+ cpu->cpu_power = SCHED_LOAD_SCALE;
+
+ if (!first)
+ first = cpu;
+ if (last)
+ last->next = cpu;
+ last = cpu;
+ }
+ last->next = first;
+ }
+
+ first = last = NULL;
+ /* Set up physical groups */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ struct sched_group *cpu = &sched_group_phys[i];
+
+ if (i != first_cpu(cpu_domain->span))
+ continue;
+
+ cpu->cpumask = cpu_domain->span;
+ /*
+ * Make each extra sibling increase power by 10% of
+ * the basic CPU. This is very arbitrary.
+ */
+ cpu->cpu_power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE*(cpus_weight(cpu->cpumask)-1) / 10;
+
+ if (!first)
+ first = cpu;
+ if (last)
+ last->next = cpu;
+ last = cpu;
+ }
+ last->next = first;
+
+ mb();
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ cpu_attach_domain(cpu_domain, i);
+ }
+}
+
+#endif
--- /dev/null
+/*
+ * Cryptographic API.
+ *
+ * Khazad Algorithm
+ *
+ * The Khazad algorithm was developed by Paulo S. L. M. Barreto and
+ * Vincent Rijmen. It was a finalist in the NESSIE encryption contest.
+ *
+ * The original authors have disclaimed all copyright interest in this
+ * code and thus put it in the public domain. The subsequent authors
+ * have put this under the GNU General Public License.
+ *
+ * By Aaron Grothe ajgrothe@yahoo.com, August 1, 2004
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define KHAZAD_KEY_SIZE 16
+#define KHAZAD_BLOCK_SIZE 8
+#define KHAZAD_ROUNDS 8
+
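+/* E[] holds the encryption round keys and D[] the decryption round keys,
+ * KHAZAD_ROUNDS + 1 of each.
+ */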
+struct khazad_ctx {
+ u64 E[KHAZAD_ROUNDS + 1];
+ u64 D[KHAZAD_ROUNDS + 1];
+};
+
+static const u64 T0[256] = {
+ 0xbad3d268bbb96a01ULL, 0x54fc4d19e59a66b1ULL, 0x2f71bc93e26514cdULL,
+ 0x749ccdb925871b51ULL, 0x53f55102f7a257a4ULL, 0xd3686bb8d0d6be03ULL,
+ 0xd26b6fbdd6deb504ULL, 0x4dd72964b35285feULL, 0x50f05d0dfdba4aadULL,
+ 0xace98a26cf09e063ULL, 0x8d8a0e83091c9684ULL, 0xbfdcc679a5914d1aULL,
+ 0x7090ddad3da7374dULL, 0x52f65507f1aa5ca3ULL, 0x9ab352c87ba417e1ULL,
+ 0x4cd42d61b55a8ef9ULL, 0xea238f65460320acULL, 0xd56273a6c4e68411ULL,
+ 0x97a466f155cc68c2ULL, 0xd16e63b2dcc6a80dULL, 0x3355ccffaa85d099ULL,
+ 0x51f35908fbb241aaULL, 0x5bed712ac7e20f9cULL, 0xa6f7a204f359ae55ULL,
+ 0xde7f5f81febec120ULL, 0x48d83d75ad7aa2e5ULL, 0xa8e59a32d729cc7fULL,
+ 0x99b65ec771bc0ae8ULL, 0xdb704b90e096e63bULL, 0x3256c8faac8ddb9eULL,
+ 0xb7c4e65195d11522ULL, 0xfc19d72b32b3aaceULL, 0xe338ab48704b7393ULL,
+ 0x9ebf42dc63843bfdULL, 0x91ae7eef41fc52d0ULL, 0x9bb056cd7dac1ce6ULL,
+ 0xe23baf4d76437894ULL, 0xbbd0d66dbdb16106ULL, 0x41c319589b32f1daULL,
+ 0x6eb2a5cb7957e517ULL, 0xa5f2ae0bf941b35cULL, 0xcb400bc08016564bULL,
+ 0x6bbdb1da677fc20cULL, 0x95a26efb59dc7eccULL, 0xa1febe1fe1619f40ULL,
+ 0xf308eb1810cbc3e3ULL, 0xb1cefe4f81e12f30ULL, 0x0206080a0c10160eULL,
+ 0xcc4917db922e675eULL, 0xc45137f3a26e3f66ULL, 0x1d2774694ee8cf53ULL,
+ 0x143c504478a09c6cULL, 0xc3582be8b0560e73ULL, 0x63a591f2573f9a34ULL,
+ 0xda734f95e69eed3cULL, 0x5de76934d3d2358eULL, 0x5fe1613edfc22380ULL,
+ 0xdc79578bf2aed72eULL, 0x7d87e99413cf486eULL, 0xcd4a13de94266c59ULL,
+ 0x7f81e19e1fdf5e60ULL, 0x5aee752fc1ea049bULL, 0x6cb4adc17547f319ULL,
+ 0x5ce46d31d5da3e89ULL, 0xf704fb0c08ebefffULL, 0x266a98bed42d47f2ULL,
+ 0xff1cdb2438abb7c7ULL, 0xed2a937e543b11b9ULL, 0xe825876f4a1336a2ULL,
+ 0x9dba4ed3699c26f4ULL, 0x6fb1a1ce7f5fee10ULL, 0x8e8f028c03048b8dULL,
+ 0x192b647d56c8e34fULL, 0xa0fdba1ae7699447ULL, 0xf00de7171ad3deeaULL,
+ 0x89861e97113cba98ULL, 0x0f113c332278692dULL, 0x07091c1b12383115ULL,
+ 0xafec8629c511fd6aULL, 0xfb10cb30208b9bdbULL, 0x0818202830405838ULL,
+ 0x153f54417ea8976bULL, 0x0d1734392e687f23ULL, 0x040c101418202c1cULL,
+ 0x0103040506080b07ULL, 0x64ac8de94507ab21ULL, 0xdf7c5b84f8b6ca27ULL,
+ 0x769ac5b329970d5fULL, 0x798bf9800bef6472ULL, 0xdd7a538ef4a6dc29ULL,
+ 0x3d47f4c98ef5b2b3ULL, 0x163a584e74b08a62ULL, 0x3f41fcc382e5a4bdULL,
+ 0x3759dcebb2a5fc85ULL, 0x6db7a9c4734ff81eULL, 0x3848e0d890dd95a8ULL,
+ 0xb9d6de67b1a17708ULL, 0x7395d1a237bf2a44ULL, 0xe926836a4c1b3da5ULL,
+ 0x355fd4e1beb5ea8bULL, 0x55ff491ce3926db6ULL, 0x7193d9a83baf3c4aULL,
+ 0x7b8df18a07ff727cULL, 0x8c890a860f149d83ULL, 0x7296d5a731b72143ULL,
+ 0x88851a921734b19fULL, 0xf607ff090ee3e4f8ULL, 0x2a7ea882fc4d33d6ULL,
+ 0x3e42f8c684edafbaULL, 0x5ee2653bd9ca2887ULL, 0x27699cbbd2254cf5ULL,
+ 0x46ca0543890ac0cfULL, 0x0c14303c28607424ULL, 0x65af89ec430fa026ULL,
+ 0x68b8bdd56d67df05ULL, 0x61a399f85b2f8c3aULL, 0x03050c0f0a181d09ULL,
+ 0xc15e23e2bc46187dULL, 0x57f94116ef827bb8ULL, 0xd6677fa9cefe9918ULL,
+ 0xd976439aec86f035ULL, 0x58e87d25cdfa1295ULL, 0xd875479fea8efb32ULL,
+ 0x66aa85e34917bd2fULL, 0xd7647bacc8f6921fULL, 0x3a4ee8d29ccd83a6ULL,
+ 0xc84507cf8a0e4b42ULL, 0x3c44f0cc88fdb9b4ULL, 0xfa13cf35268390dcULL,
+ 0x96a762f453c463c5ULL, 0xa7f4a601f551a552ULL, 0x98b55ac277b401efULL,
+ 0xec29977b52331abeULL, 0xb8d5da62b7a97c0fULL, 0xc7543bfca876226fULL,
+ 0xaeef822cc319f66dULL, 0x69bbb9d06b6fd402ULL, 0x4bdd317aa762bfecULL,
+ 0xabe0963ddd31d176ULL, 0xa9e69e37d121c778ULL, 0x67a981e64f1fb628ULL,
+ 0x0a1e28223c504e36ULL, 0x47c901468f02cbc8ULL, 0xf20bef1d16c3c8e4ULL,
+ 0xb5c2ee5b99c1032cULL, 0x226688aacc0d6beeULL, 0xe532b356647b4981ULL,
+ 0xee2f9f715e230cb0ULL, 0xbedfc27ca399461dULL, 0x2b7dac87fa4538d1ULL,
+ 0x819e3ebf217ce2a0ULL, 0x1236485a6c90a67eULL, 0x839836b52d6cf4aeULL,
+ 0x1b2d6c775ad8f541ULL, 0x0e1238362470622aULL, 0x23658cafca0560e9ULL,
+ 0xf502f30604fbf9f1ULL, 0x45cf094c8312ddc6ULL, 0x216384a5c61576e7ULL,
+ 0xce4f1fd19e3e7150ULL, 0x49db3970ab72a9e2ULL, 0x2c74b09ce87d09c4ULL,
+ 0xf916c33a2c9b8dd5ULL, 0xe637bf596e635488ULL, 0xb6c7e25493d91e25ULL,
+ 0x2878a088f05d25d8ULL, 0x17395c4b72b88165ULL, 0x829b32b02b64ffa9ULL,
+ 0x1a2e68725cd0fe46ULL, 0x8b80169d1d2cac96ULL, 0xfe1fdf213ea3bcc0ULL,
+ 0x8a8312981b24a791ULL, 0x091b242d3648533fULL, 0xc94603ca8c064045ULL,
+ 0x879426a1354cd8b2ULL, 0x4ed2256bb94a98f7ULL, 0xe13ea3427c5b659dULL,
+ 0x2e72b896e46d1fcaULL, 0xe431b75362734286ULL, 0xe03da7477a536e9aULL,
+ 0xeb208b60400b2babULL, 0x90ad7aea47f459d7ULL, 0xa4f1aa0eff49b85bULL,
+ 0x1e22786644f0d25aULL, 0x85922eab395ccebcULL, 0x60a09dfd5d27873dULL,
+ 0x0000000000000000ULL, 0x256f94b1de355afbULL, 0xf401f70302f3f2f6ULL,
+ 0xf10ee3121cdbd5edULL, 0x94a16afe5fd475cbULL, 0x0b1d2c273a584531ULL,
+ 0xe734bb5c686b5f8fULL, 0x759fc9bc238f1056ULL, 0xef2c9b74582b07b7ULL,
+ 0x345cd0e4b8bde18cULL, 0x3153c4f5a695c697ULL, 0xd46177a3c2ee8f16ULL,
+ 0xd06d67b7dacea30aULL, 0x869722a43344d3b5ULL, 0x7e82e59b19d75567ULL,
+ 0xadea8e23c901eb64ULL, 0xfd1ad32e34bba1c9ULL, 0x297ba48df6552edfULL,
+ 0x3050c0f0a09dcd90ULL, 0x3b4decd79ac588a1ULL, 0x9fbc46d9658c30faULL,
+ 0xf815c73f2a9386d2ULL, 0xc6573ff9ae7e2968ULL, 0x13354c5f6a98ad79ULL,
+ 0x060a181e14303a12ULL, 0x050f14111e28271bULL, 0xc55233f6a4663461ULL,
+ 0x113344556688bb77ULL, 0x7799c1b62f9f0658ULL, 0x7c84ed9115c74369ULL,
+ 0x7a8ef58f01f7797bULL, 0x7888fd850de76f75ULL, 0x365ad8eeb4adf782ULL,
+ 0x1c24706c48e0c454ULL, 0x394be4dd96d59eafULL, 0x59eb7920cbf21992ULL,
+ 0x1828607850c0e848ULL, 0x56fa4513e98a70bfULL, 0xb3c8f6458df1393eULL,
+ 0xb0cdfa4a87e92437ULL, 0x246c90b4d83d51fcULL, 0x206080a0c01d7de0ULL,
+ 0xb2cbf2408bf93239ULL, 0x92ab72e04be44fd9ULL, 0xa3f8b615ed71894eULL,
+ 0xc05d27e7ba4e137aULL, 0x44cc0d49851ad6c1ULL, 0x62a695f751379133ULL,
+ 0x103040506080b070ULL, 0xb4c1ea5e9fc9082bULL, 0x84912aae3f54c5bbULL,
+ 0x43c511529722e7d4ULL, 0x93a876e54dec44deULL, 0xc25b2fedb65e0574ULL,
+ 0x4ade357fa16ab4ebULL, 0xbddace73a9815b14ULL, 0x8f8c0689050c808aULL,
+ 0x2d77b499ee7502c3ULL, 0xbcd9ca76af895013ULL, 0x9cb94ad66f942df3ULL,
+ 0x6abeb5df6177c90bULL, 0x40c01d5d9d3afaddULL, 0xcf4c1bd498367a57ULL,
+ 0xa2fbb210eb798249ULL, 0x809d3aba2774e9a7ULL, 0x4fd1216ebf4293f0ULL,
+ 0x1f217c6342f8d95dULL, 0xca430fc5861e5d4cULL, 0xaae39238db39da71ULL,
+ 0x42c61557912aecd3ULL
+};
+
+static const u64 T1[256] = {
+ 0xd3ba68d2b9bb016aULL, 0xfc54194d9ae5b166ULL, 0x712f93bc65e2cd14ULL,
+ 0x9c74b9cd8725511bULL, 0xf5530251a2f7a457ULL, 0x68d3b86bd6d003beULL,
+ 0x6bd2bd6fded604b5ULL, 0xd74d642952b3fe85ULL, 0xf0500d5dbafdad4aULL,
+ 0xe9ac268a09cf63e0ULL, 0x8a8d830e1c098496ULL, 0xdcbf79c691a51a4dULL,
+ 0x9070addda73d4d37ULL, 0xf6520755aaf1a35cULL, 0xb39ac852a47be117ULL,
+ 0xd44c612d5ab5f98eULL, 0x23ea658f0346ac20ULL, 0x62d5a673e6c41184ULL,
+ 0xa497f166cc55c268ULL, 0x6ed1b263c6dc0da8ULL, 0x5533ffcc85aa99d0ULL,
+ 0xf3510859b2fbaa41ULL, 0xed5b2a71e2c79c0fULL, 0xf7a604a259f355aeULL,
+ 0x7fde815fbefe20c1ULL, 0xd848753d7aade5a2ULL, 0xe5a8329a29d77fccULL,
+ 0xb699c75ebc71e80aULL, 0x70db904b96e03be6ULL, 0x5632fac88dac9edbULL,
+ 0xc4b751e6d1952215ULL, 0x19fc2bd7b332ceaaULL, 0x38e348ab4b709373ULL,
+ 0xbf9edc428463fd3bULL, 0xae91ef7efc41d052ULL, 0xb09bcd56ac7de61cULL,
+ 0x3be24daf43769478ULL, 0xd0bb6dd6b1bd0661ULL, 0xc3415819329bdaf1ULL,
+ 0xb26ecba5577917e5ULL, 0xf2a50bae41f95cb3ULL, 0x40cbc00b16804b56ULL,
+ 0xbd6bdab17f670cc2ULL, 0xa295fb6edc59cc7eULL, 0xfea11fbe61e1409fULL,
+ 0x08f318ebcb10e3c3ULL, 0xceb14ffee181302fULL, 0x06020a08100c0e16ULL,
+ 0x49ccdb172e925e67ULL, 0x51c4f3376ea2663fULL, 0x271d6974e84e53cfULL,
+ 0x3c144450a0786c9cULL, 0x58c3e82b56b0730eULL, 0xa563f2913f57349aULL,
+ 0x73da954f9ee63cedULL, 0xe75d3469d2d38e35ULL, 0xe15f3e61c2df8023ULL,
+ 0x79dc8b57aef22ed7ULL, 0x877d94e9cf136e48ULL, 0x4acdde132694596cULL,
+ 0x817f9ee1df1f605eULL, 0xee5a2f75eac19b04ULL, 0xb46cc1ad477519f3ULL,
+ 0xe45c316ddad5893eULL, 0x04f70cfbeb08ffefULL, 0x6a26be982dd4f247ULL,
+ 0x1cff24dbab38c7b7ULL, 0x2aed7e933b54b911ULL, 0x25e86f87134aa236ULL,
+ 0xba9dd34e9c69f426ULL, 0xb16fcea15f7f10eeULL, 0x8f8e8c0204038d8bULL,
+ 0x2b197d64c8564fe3ULL, 0xfda01aba69e74794ULL, 0x0df017e7d31aeadeULL,
+ 0x8689971e3c1198baULL, 0x110f333c78222d69ULL, 0x09071b1c38121531ULL,
+ 0xecaf298611c56afdULL, 0x10fb30cb8b20db9bULL, 0x1808282040303858ULL,
+ 0x3f154154a87e6b97ULL, 0x170d3934682e237fULL, 0x0c04141020181c2cULL,
+ 0x030105040806070bULL, 0xac64e98d074521abULL, 0x7cdf845bb6f827caULL,
+ 0x9a76b3c597295f0dULL, 0x8b7980f9ef0b7264ULL, 0x7add8e53a6f429dcULL,
+ 0x473dc9f4f58eb3b2ULL, 0x3a164e58b074628aULL, 0x413fc3fce582bda4ULL,
+ 0x5937ebdca5b285fcULL, 0xb76dc4a94f731ef8ULL, 0x4838d8e0dd90a895ULL,
+ 0xd6b967dea1b10877ULL, 0x9573a2d1bf37442aULL, 0x26e96a831b4ca53dULL,
+ 0x5f35e1d4b5be8beaULL, 0xff551c4992e3b66dULL, 0x9371a8d9af3b4a3cULL,
+ 0x8d7b8af1ff077c72ULL, 0x898c860a140f839dULL, 0x9672a7d5b7314321ULL,
+ 0x8588921a34179fb1ULL, 0x07f609ffe30ef8e4ULL, 0x7e2a82a84dfcd633ULL,
+ 0x423ec6f8ed84baafULL, 0xe25e3b65cad98728ULL, 0x6927bb9c25d2f54cULL,
+ 0xca4643050a89cfc0ULL, 0x140c3c3060282474ULL, 0xaf65ec890f4326a0ULL,
+ 0xb868d5bd676d05dfULL, 0xa361f8992f5b3a8cULL, 0x05030f0c180a091dULL,
+ 0x5ec1e22346bc7d18ULL, 0xf957164182efb87bULL, 0x67d6a97ffece1899ULL,
+ 0x76d99a4386ec35f0ULL, 0xe858257dfacd9512ULL, 0x75d89f478eea32fbULL,
+ 0xaa66e38517492fbdULL, 0x64d7ac7bf6c81f92ULL, 0x4e3ad2e8cd9ca683ULL,
+ 0x45c8cf070e8a424bULL, 0x443cccf0fd88b4b9ULL, 0x13fa35cf8326dc90ULL,
+ 0xa796f462c453c563ULL, 0xf4a701a651f552a5ULL, 0xb598c25ab477ef01ULL,
+ 0x29ec7b973352be1aULL, 0xd5b862daa9b70f7cULL, 0x54c7fc3b76a86f22ULL,
+ 0xefae2c8219c36df6ULL, 0xbb69d0b96f6b02d4ULL, 0xdd4b7a3162a7ecbfULL,
+ 0xe0ab3d9631dd76d1ULL, 0xe6a9379e21d178c7ULL, 0xa967e6811f4f28b6ULL,
+ 0x1e0a2228503c364eULL, 0xc9474601028fc8cbULL, 0x0bf21defc316e4c8ULL,
+ 0xc2b55beec1992c03ULL, 0x6622aa880dccee6bULL, 0x32e556b37b648149ULL,
+ 0x2fee719f235eb00cULL, 0xdfbe7cc299a31d46ULL, 0x7d2b87ac45fad138ULL,
+ 0x9e81bf3e7c21a0e2ULL, 0x36125a48906c7ea6ULL, 0x9883b5366c2daef4ULL,
+ 0x2d1b776cd85a41f5ULL, 0x120e363870242a62ULL, 0x6523af8c05cae960ULL,
+ 0x02f506f3fb04f1f9ULL, 0xcf454c091283c6ddULL, 0x6321a58415c6e776ULL,
+ 0x4fced11f3e9e5071ULL, 0xdb49703972abe2a9ULL, 0x742c9cb07de8c409ULL,
+ 0x16f93ac39b2cd58dULL, 0x37e659bf636e8854ULL, 0xc7b654e2d993251eULL,
+ 0x782888a05df0d825ULL, 0x39174b5cb8726581ULL, 0x9b82b032642ba9ffULL,
+ 0x2e1a7268d05c46feULL, 0x808b9d162c1d96acULL, 0x1ffe21dfa33ec0bcULL,
+ 0x838a9812241b91a7ULL, 0x1b092d2448363f53ULL, 0x46c9ca03068c4540ULL,
+ 0x9487a1264c35b2d8ULL, 0xd24e6b254ab9f798ULL, 0x3ee142a35b7c9d65ULL,
+ 0x722e96b86de4ca1fULL, 0x31e453b773628642ULL, 0x3de047a7537a9a6eULL,
+ 0x20eb608b0b40ab2bULL, 0xad90ea7af447d759ULL, 0xf1a40eaa49ff5bb8ULL,
+ 0x221e6678f0445ad2ULL, 0x9285ab2e5c39bcceULL, 0xa060fd9d275d3d87ULL,
+ 0x0000000000000000ULL, 0x6f25b19435defb5aULL, 0x01f403f7f302f6f2ULL,
+ 0x0ef112e3db1cedd5ULL, 0xa194fe6ad45fcb75ULL, 0x1d0b272c583a3145ULL,
+ 0x34e75cbb6b688f5fULL, 0x9f75bcc98f235610ULL, 0x2cef749b2b58b707ULL,
+ 0x5c34e4d0bdb88ce1ULL, 0x5331f5c495a697c6ULL, 0x61d4a377eec2168fULL,
+ 0x6dd0b767ceda0aa3ULL, 0x9786a4224433b5d3ULL, 0x827e9be5d7196755ULL,
+ 0xeaad238e01c964ebULL, 0x1afd2ed3bb34c9a1ULL, 0x7b298da455f6df2eULL,
+ 0x5030f0c09da090cdULL, 0x4d3bd7ecc59aa188ULL, 0xbc9fd9468c65fa30ULL,
+ 0x15f83fc7932ad286ULL, 0x57c6f93f7eae6829ULL, 0x35135f4c986a79adULL,
+ 0x0a061e183014123aULL, 0x0f051114281e1b27ULL, 0x52c5f63366a46134ULL,
+ 0x33115544886677bbULL, 0x9977b6c19f2f5806ULL, 0x847c91edc7156943ULL,
+ 0x8e7a8ff5f7017b79ULL, 0x887885fde70d756fULL, 0x5a36eed8adb482f7ULL,
+ 0x241c6c70e04854c4ULL, 0x4b39dde4d596af9eULL, 0xeb592079f2cb9219ULL,
+ 0x28187860c05048e8ULL, 0xfa5613458ae9bf70ULL, 0xc8b345f6f18d3e39ULL,
+ 0xcdb04afae9873724ULL, 0x6c24b4903dd8fc51ULL, 0x6020a0801dc0e07dULL,
+ 0xcbb240f2f98b3932ULL, 0xab92e072e44bd94fULL, 0xf8a315b671ed4e89ULL,
+ 0x5dc0e7274eba7a13ULL, 0xcc44490d1a85c1d6ULL, 0xa662f79537513391ULL,
+ 0x30105040806070b0ULL, 0xc1b45eeac99f2b08ULL, 0x9184ae2a543fbbc5ULL,
+ 0xc54352112297d4e7ULL, 0xa893e576ec4dde44ULL, 0x5bc2ed2f5eb67405ULL,
+ 0xde4a7f356aa1ebb4ULL, 0xdabd73ce81a9145bULL, 0x8c8f89060c058a80ULL,
+ 0x772d99b475eec302ULL, 0xd9bc76ca89af1350ULL, 0xb99cd64a946ff32dULL,
+ 0xbe6adfb577610bc9ULL, 0xc0405d1d3a9dddfaULL, 0x4ccfd41b3698577aULL,
+ 0xfba210b279eb4982ULL, 0x9d80ba3a7427a7e9ULL, 0xd14f6e2142bff093ULL,
+ 0x211f637cf8425dd9ULL, 0x43cac50f1e864c5dULL, 0xe3aa389239db71daULL,
+ 0xc64257152a91d3ecULL
+};
+
+static const u64 T2[256] = {
+ 0xd268bad36a01bbb9ULL, 0x4d1954fc66b1e59aULL, 0xbc932f7114cde265ULL,
+ 0xcdb9749c1b512587ULL, 0x510253f557a4f7a2ULL, 0x6bb8d368be03d0d6ULL,
+ 0x6fbdd26bb504d6deULL, 0x29644dd785feb352ULL, 0x5d0d50f04aadfdbaULL,
+ 0x8a26ace9e063cf09ULL, 0x0e838d8a9684091cULL, 0xc679bfdc4d1aa591ULL,
+ 0xddad7090374d3da7ULL, 0x550752f65ca3f1aaULL, 0x52c89ab317e17ba4ULL,
+ 0x2d614cd48ef9b55aULL, 0x8f65ea2320ac4603ULL, 0x73a6d5628411c4e6ULL,
+ 0x66f197a468c255ccULL, 0x63b2d16ea80ddcc6ULL, 0xccff3355d099aa85ULL,
+ 0x590851f341aafbb2ULL, 0x712a5bed0f9cc7e2ULL, 0xa204a6f7ae55f359ULL,
+ 0x5f81de7fc120febeULL, 0x3d7548d8a2e5ad7aULL, 0x9a32a8e5cc7fd729ULL,
+ 0x5ec799b60ae871bcULL, 0x4b90db70e63be096ULL, 0xc8fa3256db9eac8dULL,
+ 0xe651b7c4152295d1ULL, 0xd72bfc19aace32b3ULL, 0xab48e3387393704bULL,
+ 0x42dc9ebf3bfd6384ULL, 0x7eef91ae52d041fcULL, 0x56cd9bb01ce67dacULL,
+ 0xaf4de23b78947643ULL, 0xd66dbbd06106bdb1ULL, 0x195841c3f1da9b32ULL,
+ 0xa5cb6eb2e5177957ULL, 0xae0ba5f2b35cf941ULL, 0x0bc0cb40564b8016ULL,
+ 0xb1da6bbdc20c677fULL, 0x6efb95a27ecc59dcULL, 0xbe1fa1fe9f40e161ULL,
+ 0xeb18f308c3e310cbULL, 0xfe4fb1ce2f3081e1ULL, 0x080a0206160e0c10ULL,
+ 0x17dbcc49675e922eULL, 0x37f3c4513f66a26eULL, 0x74691d27cf534ee8ULL,
+ 0x5044143c9c6c78a0ULL, 0x2be8c3580e73b056ULL, 0x91f263a59a34573fULL,
+ 0x4f95da73ed3ce69eULL, 0x69345de7358ed3d2ULL, 0x613e5fe12380dfc2ULL,
+ 0x578bdc79d72ef2aeULL, 0xe9947d87486e13cfULL, 0x13decd4a6c599426ULL,
+ 0xe19e7f815e601fdfULL, 0x752f5aee049bc1eaULL, 0xadc16cb4f3197547ULL,
+ 0x6d315ce43e89d5daULL, 0xfb0cf704efff08ebULL, 0x98be266a47f2d42dULL,
+ 0xdb24ff1cb7c738abULL, 0x937eed2a11b9543bULL, 0x876fe82536a24a13ULL,
+ 0x4ed39dba26f4699cULL, 0xa1ce6fb1ee107f5fULL, 0x028c8e8f8b8d0304ULL,
+ 0x647d192be34f56c8ULL, 0xba1aa0fd9447e769ULL, 0xe717f00ddeea1ad3ULL,
+ 0x1e978986ba98113cULL, 0x3c330f11692d2278ULL, 0x1c1b070931151238ULL,
+ 0x8629afecfd6ac511ULL, 0xcb30fb109bdb208bULL, 0x2028081858383040ULL,
+ 0x5441153f976b7ea8ULL, 0x34390d177f232e68ULL, 0x1014040c2c1c1820ULL,
+ 0x040501030b070608ULL, 0x8de964acab214507ULL, 0x5b84df7cca27f8b6ULL,
+ 0xc5b3769a0d5f2997ULL, 0xf980798b64720befULL, 0x538edd7adc29f4a6ULL,
+ 0xf4c93d47b2b38ef5ULL, 0x584e163a8a6274b0ULL, 0xfcc33f41a4bd82e5ULL,
+ 0xdceb3759fc85b2a5ULL, 0xa9c46db7f81e734fULL, 0xe0d8384895a890ddULL,
+ 0xde67b9d67708b1a1ULL, 0xd1a273952a4437bfULL, 0x836ae9263da54c1bULL,
+ 0xd4e1355fea8bbeb5ULL, 0x491c55ff6db6e392ULL, 0xd9a871933c4a3bafULL,
+ 0xf18a7b8d727c07ffULL, 0x0a868c899d830f14ULL, 0xd5a77296214331b7ULL,
+ 0x1a928885b19f1734ULL, 0xff09f607e4f80ee3ULL, 0xa8822a7e33d6fc4dULL,
+ 0xf8c63e42afba84edULL, 0x653b5ee22887d9caULL, 0x9cbb27694cf5d225ULL,
+ 0x054346cac0cf890aULL, 0x303c0c1474242860ULL, 0x89ec65afa026430fULL,
+ 0xbdd568b8df056d67ULL, 0x99f861a38c3a5b2fULL, 0x0c0f03051d090a18ULL,
+ 0x23e2c15e187dbc46ULL, 0x411657f97bb8ef82ULL, 0x7fa9d6679918cefeULL,
+ 0x439ad976f035ec86ULL, 0x7d2558e81295cdfaULL, 0x479fd875fb32ea8eULL,
+ 0x85e366aabd2f4917ULL, 0x7bacd764921fc8f6ULL, 0xe8d23a4e83a69ccdULL,
+ 0x07cfc8454b428a0eULL, 0xf0cc3c44b9b488fdULL, 0xcf35fa1390dc2683ULL,
+ 0x62f496a763c553c4ULL, 0xa601a7f4a552f551ULL, 0x5ac298b501ef77b4ULL,
+ 0x977bec291abe5233ULL, 0xda62b8d57c0fb7a9ULL, 0x3bfcc754226fa876ULL,
+ 0x822caeeff66dc319ULL, 0xb9d069bbd4026b6fULL, 0x317a4bddbfeca762ULL,
+ 0x963dabe0d176dd31ULL, 0x9e37a9e6c778d121ULL, 0x81e667a9b6284f1fULL,
+ 0x28220a1e4e363c50ULL, 0x014647c9cbc88f02ULL, 0xef1df20bc8e416c3ULL,
+ 0xee5bb5c2032c99c1ULL, 0x88aa22666beecc0dULL, 0xb356e5324981647bULL,
+ 0x9f71ee2f0cb05e23ULL, 0xc27cbedf461da399ULL, 0xac872b7d38d1fa45ULL,
+ 0x3ebf819ee2a0217cULL, 0x485a1236a67e6c90ULL, 0x36b58398f4ae2d6cULL,
+ 0x6c771b2df5415ad8ULL, 0x38360e12622a2470ULL, 0x8caf236560e9ca05ULL,
+ 0xf306f502f9f104fbULL, 0x094c45cfddc68312ULL, 0x84a5216376e7c615ULL,
+ 0x1fd1ce4f71509e3eULL, 0x397049dba9e2ab72ULL, 0xb09c2c7409c4e87dULL,
+ 0xc33af9168dd52c9bULL, 0xbf59e63754886e63ULL, 0xe254b6c71e2593d9ULL,
+ 0xa088287825d8f05dULL, 0x5c4b1739816572b8ULL, 0x32b0829bffa92b64ULL,
+ 0x68721a2efe465cd0ULL, 0x169d8b80ac961d2cULL, 0xdf21fe1fbcc03ea3ULL,
+ 0x12988a83a7911b24ULL, 0x242d091b533f3648ULL, 0x03cac94640458c06ULL,
+ 0x26a18794d8b2354cULL, 0x256b4ed298f7b94aULL, 0xa342e13e659d7c5bULL,
+ 0xb8962e721fcae46dULL, 0xb753e43142866273ULL, 0xa747e03d6e9a7a53ULL,
+ 0x8b60eb202bab400bULL, 0x7aea90ad59d747f4ULL, 0xaa0ea4f1b85bff49ULL,
+ 0x78661e22d25a44f0ULL, 0x2eab8592cebc395cULL, 0x9dfd60a0873d5d27ULL,
+ 0x0000000000000000ULL, 0x94b1256f5afbde35ULL, 0xf703f401f2f602f3ULL,
+ 0xe312f10ed5ed1cdbULL, 0x6afe94a175cb5fd4ULL, 0x2c270b1d45313a58ULL,
+ 0xbb5ce7345f8f686bULL, 0xc9bc759f1056238fULL, 0x9b74ef2c07b7582bULL,
+ 0xd0e4345ce18cb8bdULL, 0xc4f53153c697a695ULL, 0x77a3d4618f16c2eeULL,
+ 0x67b7d06da30adaceULL, 0x22a48697d3b53344ULL, 0xe59b7e82556719d7ULL,
+ 0x8e23adeaeb64c901ULL, 0xd32efd1aa1c934bbULL, 0xa48d297b2edff655ULL,
+ 0xc0f03050cd90a09dULL, 0xecd73b4d88a19ac5ULL, 0x46d99fbc30fa658cULL,
+ 0xc73ff81586d22a93ULL, 0x3ff9c6572968ae7eULL, 0x4c5f1335ad796a98ULL,
+ 0x181e060a3a121430ULL, 0x1411050f271b1e28ULL, 0x33f6c5523461a466ULL,
+ 0x44551133bb776688ULL, 0xc1b6779906582f9fULL, 0xed917c84436915c7ULL,
+ 0xf58f7a8e797b01f7ULL, 0xfd8578886f750de7ULL, 0xd8ee365af782b4adULL,
+ 0x706c1c24c45448e0ULL, 0xe4dd394b9eaf96d5ULL, 0x792059eb1992cbf2ULL,
+ 0x60781828e84850c0ULL, 0x451356fa70bfe98aULL, 0xf645b3c8393e8df1ULL,
+ 0xfa4ab0cd243787e9ULL, 0x90b4246c51fcd83dULL, 0x80a020607de0c01dULL,
+ 0xf240b2cb32398bf9ULL, 0x72e092ab4fd94be4ULL, 0xb615a3f8894eed71ULL,
+ 0x27e7c05d137aba4eULL, 0x0d4944ccd6c1851aULL, 0x95f762a691335137ULL,
+ 0x40501030b0706080ULL, 0xea5eb4c1082b9fc9ULL, 0x2aae8491c5bb3f54ULL,
+ 0x115243c5e7d49722ULL, 0x76e593a844de4decULL, 0x2fedc25b0574b65eULL,
+ 0x357f4adeb4eba16aULL, 0xce73bdda5b14a981ULL, 0x06898f8c808a050cULL,
+ 0xb4992d7702c3ee75ULL, 0xca76bcd95013af89ULL, 0x4ad69cb92df36f94ULL,
+ 0xb5df6abec90b6177ULL, 0x1d5d40c0fadd9d3aULL, 0x1bd4cf4c7a579836ULL,
+ 0xb210a2fb8249eb79ULL, 0x3aba809de9a72774ULL, 0x216e4fd193f0bf42ULL,
+ 0x7c631f21d95d42f8ULL, 0x0fc5ca435d4c861eULL, 0x9238aae3da71db39ULL,
+ 0x155742c6ecd3912aULL
+};
+
+static const u64 T3[256] = {
+ 0x68d2d3ba016ab9bbULL, 0x194dfc54b1669ae5ULL, 0x93bc712fcd1465e2ULL,
+ 0xb9cd9c74511b8725ULL, 0x0251f553a457a2f7ULL, 0xb86b68d303bed6d0ULL,
+ 0xbd6f6bd204b5ded6ULL, 0x6429d74dfe8552b3ULL, 0x0d5df050ad4abafdULL,
+ 0x268ae9ac63e009cfULL, 0x830e8a8d84961c09ULL, 0x79c6dcbf1a4d91a5ULL,
+ 0xaddd90704d37a73dULL, 0x0755f652a35caaf1ULL, 0xc852b39ae117a47bULL,
+ 0x612dd44cf98e5ab5ULL, 0x658f23eaac200346ULL, 0xa67362d51184e6c4ULL,
+ 0xf166a497c268cc55ULL, 0xb2636ed10da8c6dcULL, 0xffcc553399d085aaULL,
+ 0x0859f351aa41b2fbULL, 0x2a71ed5b9c0fe2c7ULL, 0x04a2f7a655ae59f3ULL,
+ 0x815f7fde20c1befeULL, 0x753dd848e5a27aadULL, 0x329ae5a87fcc29d7ULL,
+ 0xc75eb699e80abc71ULL, 0x904b70db3be696e0ULL, 0xfac856329edb8dacULL,
+ 0x51e6c4b72215d195ULL, 0x2bd719fcceaab332ULL, 0x48ab38e393734b70ULL,
+ 0xdc42bf9efd3b8463ULL, 0xef7eae91d052fc41ULL, 0xcd56b09be61cac7dULL,
+ 0x4daf3be294784376ULL, 0x6dd6d0bb0661b1bdULL, 0x5819c341daf1329bULL,
+ 0xcba5b26e17e55779ULL, 0x0baef2a55cb341f9ULL, 0xc00b40cb4b561680ULL,
+ 0xdab1bd6b0cc27f67ULL, 0xfb6ea295cc7edc59ULL, 0x1fbefea1409f61e1ULL,
+ 0x18eb08f3e3c3cb10ULL, 0x4ffeceb1302fe181ULL, 0x0a0806020e16100cULL,
+ 0xdb1749cc5e672e92ULL, 0xf33751c4663f6ea2ULL, 0x6974271d53cfe84eULL,
+ 0x44503c146c9ca078ULL, 0xe82b58c3730e56b0ULL, 0xf291a563349a3f57ULL,
+ 0x954f73da3ced9ee6ULL, 0x3469e75d8e35d2d3ULL, 0x3e61e15f8023c2dfULL,
+ 0x8b5779dc2ed7aef2ULL, 0x94e9877d6e48cf13ULL, 0xde134acd596c2694ULL,
+ 0x9ee1817f605edf1fULL, 0x2f75ee5a9b04eac1ULL, 0xc1adb46c19f34775ULL,
+ 0x316de45c893edad5ULL, 0x0cfb04f7ffefeb08ULL, 0xbe986a26f2472dd4ULL,
+ 0x24db1cffc7b7ab38ULL, 0x7e932aedb9113b54ULL, 0x6f8725e8a236134aULL,
+ 0xd34eba9df4269c69ULL, 0xcea1b16f10ee5f7fULL, 0x8c028f8e8d8b0403ULL,
+ 0x7d642b194fe3c856ULL, 0x1abafda0479469e7ULL, 0x17e70df0eaded31aULL,
+ 0x971e868998ba3c11ULL, 0x333c110f2d697822ULL, 0x1b1c090715313812ULL,
+ 0x2986ecaf6afd11c5ULL, 0x30cb10fbdb9b8b20ULL, 0x2820180838584030ULL,
+ 0x41543f156b97a87eULL, 0x3934170d237f682eULL, 0x14100c041c2c2018ULL,
+ 0x05040301070b0806ULL, 0xe98dac6421ab0745ULL, 0x845b7cdf27cab6f8ULL,
+ 0xb3c59a765f0d9729ULL, 0x80f98b797264ef0bULL, 0x8e537add29dca6f4ULL,
+ 0xc9f4473db3b2f58eULL, 0x4e583a16628ab074ULL, 0xc3fc413fbda4e582ULL,
+ 0xebdc593785fca5b2ULL, 0xc4a9b76d1ef84f73ULL, 0xd8e04838a895dd90ULL,
+ 0x67ded6b90877a1b1ULL, 0xa2d19573442abf37ULL, 0x6a8326e9a53d1b4cULL,
+ 0xe1d45f358beab5beULL, 0x1c49ff55b66d92e3ULL, 0xa8d993714a3caf3bULL,
+ 0x8af18d7b7c72ff07ULL, 0x860a898c839d140fULL, 0xa7d596724321b731ULL,
+ 0x921a85889fb13417ULL, 0x09ff07f6f8e4e30eULL, 0x82a87e2ad6334dfcULL,
+ 0xc6f8423ebaafed84ULL, 0x3b65e25e8728cad9ULL, 0xbb9c6927f54c25d2ULL,
+ 0x4305ca46cfc00a89ULL, 0x3c30140c24746028ULL, 0xec89af6526a00f43ULL,
+ 0xd5bdb86805df676dULL, 0xf899a3613a8c2f5bULL, 0x0f0c0503091d180aULL,
+ 0xe2235ec17d1846bcULL, 0x1641f957b87b82efULL, 0xa97f67d61899feceULL,
+ 0x9a4376d935f086ecULL, 0x257de8589512facdULL, 0x9f4775d832fb8eeaULL,
+ 0xe385aa662fbd1749ULL, 0xac7b64d71f92f6c8ULL, 0xd2e84e3aa683cd9cULL,
+ 0xcf0745c8424b0e8aULL, 0xccf0443cb4b9fd88ULL, 0x35cf13fadc908326ULL,
+ 0xf462a796c563c453ULL, 0x01a6f4a752a551f5ULL, 0xc25ab598ef01b477ULL,
+ 0x7b9729ecbe1a3352ULL, 0x62dad5b80f7ca9b7ULL, 0xfc3b54c76f2276a8ULL,
+ 0x2c82efae6df619c3ULL, 0xd0b9bb6902d46f6bULL, 0x7a31dd4becbf62a7ULL,
+ 0x3d96e0ab76d131ddULL, 0x379ee6a978c721d1ULL, 0xe681a96728b61f4fULL,
+ 0x22281e0a364e503cULL, 0x4601c947c8cb028fULL, 0x1def0bf2e4c8c316ULL,
+ 0x5beec2b52c03c199ULL, 0xaa886622ee6b0dccULL, 0x56b332e581497b64ULL,
+ 0x719f2feeb00c235eULL, 0x7cc2dfbe1d4699a3ULL, 0x87ac7d2bd13845faULL,
+ 0xbf3e9e81a0e27c21ULL, 0x5a4836127ea6906cULL, 0xb5369883aef46c2dULL,
+ 0x776c2d1b41f5d85aULL, 0x3638120e2a627024ULL, 0xaf8c6523e96005caULL,
+ 0x06f302f5f1f9fb04ULL, 0x4c09cf45c6dd1283ULL, 0xa5846321e77615c6ULL,
+ 0xd11f4fce50713e9eULL, 0x7039db49e2a972abULL, 0x9cb0742cc4097de8ULL,
+ 0x3ac316f9d58d9b2cULL, 0x59bf37e68854636eULL, 0x54e2c7b6251ed993ULL,
+ 0x88a07828d8255df0ULL, 0x4b5c39176581b872ULL, 0xb0329b82a9ff642bULL,
+ 0x72682e1a46fed05cULL, 0x9d16808b96ac2c1dULL, 0x21df1ffec0bca33eULL,
+ 0x9812838a91a7241bULL, 0x2d241b093f534836ULL, 0xca0346c94540068cULL,
+ 0xa1269487b2d84c35ULL, 0x6b25d24ef7984ab9ULL, 0x42a33ee19d655b7cULL,
+ 0x96b8722eca1f6de4ULL, 0x53b731e486427362ULL, 0x47a73de09a6e537aULL,
+ 0x608b20ebab2b0b40ULL, 0xea7aad90d759f447ULL, 0x0eaaf1a45bb849ffULL,
+ 0x6678221e5ad2f044ULL, 0xab2e9285bcce5c39ULL, 0xfd9da0603d87275dULL,
+ 0x0000000000000000ULL, 0xb1946f25fb5a35deULL, 0x03f701f4f6f2f302ULL,
+ 0x12e30ef1edd5db1cULL, 0xfe6aa194cb75d45fULL, 0x272c1d0b3145583aULL,
+ 0x5cbb34e78f5f6b68ULL, 0xbcc99f7556108f23ULL, 0x749b2cefb7072b58ULL,
+ 0xe4d05c348ce1bdb8ULL, 0xf5c4533197c695a6ULL, 0xa37761d4168feec2ULL,
+ 0xb7676dd00aa3cedaULL, 0xa4229786b5d34433ULL, 0x9be5827e6755d719ULL,
+ 0x238eeaad64eb01c9ULL, 0x2ed31afdc9a1bb34ULL, 0x8da47b29df2e55f6ULL,
+ 0xf0c0503090cd9da0ULL, 0xd7ec4d3ba188c59aULL, 0xd946bc9ffa308c65ULL,
+ 0x3fc715f8d286932aULL, 0xf93f57c668297eaeULL, 0x5f4c351379ad986aULL,
+ 0x1e180a06123a3014ULL, 0x11140f051b27281eULL, 0xf63352c5613466a4ULL,
+ 0x5544331177bb8866ULL, 0xb6c1997758069f2fULL, 0x91ed847c6943c715ULL,
+ 0x8ff58e7a7b79f701ULL, 0x85fd8878756fe70dULL, 0xeed85a3682f7adb4ULL,
+ 0x6c70241c54c4e048ULL, 0xdde44b39af9ed596ULL, 0x2079eb599219f2cbULL,
+ 0x7860281848e8c050ULL, 0x1345fa56bf708ae9ULL, 0x45f6c8b33e39f18dULL,
+ 0x4afacdb03724e987ULL, 0xb4906c24fc513dd8ULL, 0xa0806020e07d1dc0ULL,
+ 0x40f2cbb23932f98bULL, 0xe072ab92d94fe44bULL, 0x15b6f8a34e8971edULL,
+ 0xe7275dc07a134ebaULL, 0x490dcc44c1d61a85ULL, 0xf795a66233913751ULL,
+ 0x5040301070b08060ULL, 0x5eeac1b42b08c99fULL, 0xae2a9184bbc5543fULL,
+ 0x5211c543d4e72297ULL, 0xe576a893de44ec4dULL, 0xed2f5bc274055eb6ULL,
+ 0x7f35de4aebb46aa1ULL, 0x73cedabd145b81a9ULL, 0x89068c8f8a800c05ULL,
+ 0x99b4772dc30275eeULL, 0x76cad9bc135089afULL, 0xd64ab99cf32d946fULL,
+ 0xdfb5be6a0bc97761ULL, 0x5d1dc040ddfa3a9dULL, 0xd41b4ccf577a3698ULL,
+ 0x10b2fba2498279ebULL, 0xba3a9d80a7e97427ULL, 0x6e21d14ff09342bfULL,
+ 0x637c211f5dd9f842ULL, 0xc50f43ca4c5d1e86ULL, 0x3892e3aa71da39dbULL,
+ 0x5715c642d3ec2a91ULL
+};
+
+static const u64 T4[256] = {
+ 0xbbb96a01bad3d268ULL, 0xe59a66b154fc4d19ULL, 0xe26514cd2f71bc93ULL,
+ 0x25871b51749ccdb9ULL, 0xf7a257a453f55102ULL, 0xd0d6be03d3686bb8ULL,
+ 0xd6deb504d26b6fbdULL, 0xb35285fe4dd72964ULL, 0xfdba4aad50f05d0dULL,
+ 0xcf09e063ace98a26ULL, 0x091c96848d8a0e83ULL, 0xa5914d1abfdcc679ULL,
+ 0x3da7374d7090ddadULL, 0xf1aa5ca352f65507ULL, 0x7ba417e19ab352c8ULL,
+ 0xb55a8ef94cd42d61ULL, 0x460320acea238f65ULL, 0xc4e68411d56273a6ULL,
+ 0x55cc68c297a466f1ULL, 0xdcc6a80dd16e63b2ULL, 0xaa85d0993355ccffULL,
+ 0xfbb241aa51f35908ULL, 0xc7e20f9c5bed712aULL, 0xf359ae55a6f7a204ULL,
+ 0xfebec120de7f5f81ULL, 0xad7aa2e548d83d75ULL, 0xd729cc7fa8e59a32ULL,
+ 0x71bc0ae899b65ec7ULL, 0xe096e63bdb704b90ULL, 0xac8ddb9e3256c8faULL,
+ 0x95d11522b7c4e651ULL, 0x32b3aacefc19d72bULL, 0x704b7393e338ab48ULL,
+ 0x63843bfd9ebf42dcULL, 0x41fc52d091ae7eefULL, 0x7dac1ce69bb056cdULL,
+ 0x76437894e23baf4dULL, 0xbdb16106bbd0d66dULL, 0x9b32f1da41c31958ULL,
+ 0x7957e5176eb2a5cbULL, 0xf941b35ca5f2ae0bULL, 0x8016564bcb400bc0ULL,
+ 0x677fc20c6bbdb1daULL, 0x59dc7ecc95a26efbULL, 0xe1619f40a1febe1fULL,
+ 0x10cbc3e3f308eb18ULL, 0x81e12f30b1cefe4fULL, 0x0c10160e0206080aULL,
+ 0x922e675ecc4917dbULL, 0xa26e3f66c45137f3ULL, 0x4ee8cf531d277469ULL,
+ 0x78a09c6c143c5044ULL, 0xb0560e73c3582be8ULL, 0x573f9a3463a591f2ULL,
+ 0xe69eed3cda734f95ULL, 0xd3d2358e5de76934ULL, 0xdfc223805fe1613eULL,
+ 0xf2aed72edc79578bULL, 0x13cf486e7d87e994ULL, 0x94266c59cd4a13deULL,
+ 0x1fdf5e607f81e19eULL, 0xc1ea049b5aee752fULL, 0x7547f3196cb4adc1ULL,
+ 0xd5da3e895ce46d31ULL, 0x08ebeffff704fb0cULL, 0xd42d47f2266a98beULL,
+ 0x38abb7c7ff1cdb24ULL, 0x543b11b9ed2a937eULL, 0x4a1336a2e825876fULL,
+ 0x699c26f49dba4ed3ULL, 0x7f5fee106fb1a1ceULL, 0x03048b8d8e8f028cULL,
+ 0x56c8e34f192b647dULL, 0xe7699447a0fdba1aULL, 0x1ad3deeaf00de717ULL,
+ 0x113cba9889861e97ULL, 0x2278692d0f113c33ULL, 0x1238311507091c1bULL,
+ 0xc511fd6aafec8629ULL, 0x208b9bdbfb10cb30ULL, 0x3040583808182028ULL,
+ 0x7ea8976b153f5441ULL, 0x2e687f230d173439ULL, 0x18202c1c040c1014ULL,
+ 0x06080b0701030405ULL, 0x4507ab2164ac8de9ULL, 0xf8b6ca27df7c5b84ULL,
+ 0x29970d5f769ac5b3ULL, 0x0bef6472798bf980ULL, 0xf4a6dc29dd7a538eULL,
+ 0x8ef5b2b33d47f4c9ULL, 0x74b08a62163a584eULL, 0x82e5a4bd3f41fcc3ULL,
+ 0xb2a5fc853759dcebULL, 0x734ff81e6db7a9c4ULL, 0x90dd95a83848e0d8ULL,
+ 0xb1a17708b9d6de67ULL, 0x37bf2a447395d1a2ULL, 0x4c1b3da5e926836aULL,
+ 0xbeb5ea8b355fd4e1ULL, 0xe3926db655ff491cULL, 0x3baf3c4a7193d9a8ULL,
+ 0x07ff727c7b8df18aULL, 0x0f149d838c890a86ULL, 0x31b721437296d5a7ULL,
+ 0x1734b19f88851a92ULL, 0x0ee3e4f8f607ff09ULL, 0xfc4d33d62a7ea882ULL,
+ 0x84edafba3e42f8c6ULL, 0xd9ca28875ee2653bULL, 0xd2254cf527699cbbULL,
+ 0x890ac0cf46ca0543ULL, 0x286074240c14303cULL, 0x430fa02665af89ecULL,
+ 0x6d67df0568b8bdd5ULL, 0x5b2f8c3a61a399f8ULL, 0x0a181d0903050c0fULL,
+ 0xbc46187dc15e23e2ULL, 0xef827bb857f94116ULL, 0xcefe9918d6677fa9ULL,
+ 0xec86f035d976439aULL, 0xcdfa129558e87d25ULL, 0xea8efb32d875479fULL,
+ 0x4917bd2f66aa85e3ULL, 0xc8f6921fd7647bacULL, 0x9ccd83a63a4ee8d2ULL,
+ 0x8a0e4b42c84507cfULL, 0x88fdb9b43c44f0ccULL, 0x268390dcfa13cf35ULL,
+ 0x53c463c596a762f4ULL, 0xf551a552a7f4a601ULL, 0x77b401ef98b55ac2ULL,
+ 0x52331abeec29977bULL, 0xb7a97c0fb8d5da62ULL, 0xa876226fc7543bfcULL,
+ 0xc319f66daeef822cULL, 0x6b6fd40269bbb9d0ULL, 0xa762bfec4bdd317aULL,
+ 0xdd31d176abe0963dULL, 0xd121c778a9e69e37ULL, 0x4f1fb62867a981e6ULL,
+ 0x3c504e360a1e2822ULL, 0x8f02cbc847c90146ULL, 0x16c3c8e4f20bef1dULL,
+ 0x99c1032cb5c2ee5bULL, 0xcc0d6bee226688aaULL, 0x647b4981e532b356ULL,
+ 0x5e230cb0ee2f9f71ULL, 0xa399461dbedfc27cULL, 0xfa4538d12b7dac87ULL,
+ 0x217ce2a0819e3ebfULL, 0x6c90a67e1236485aULL, 0x2d6cf4ae839836b5ULL,
+ 0x5ad8f5411b2d6c77ULL, 0x2470622a0e123836ULL, 0xca0560e923658cafULL,
+ 0x04fbf9f1f502f306ULL, 0x8312ddc645cf094cULL, 0xc61576e7216384a5ULL,
+ 0x9e3e7150ce4f1fd1ULL, 0xab72a9e249db3970ULL, 0xe87d09c42c74b09cULL,
+ 0x2c9b8dd5f916c33aULL, 0x6e635488e637bf59ULL, 0x93d91e25b6c7e254ULL,
+ 0xf05d25d82878a088ULL, 0x72b8816517395c4bULL, 0x2b64ffa9829b32b0ULL,
+ 0x5cd0fe461a2e6872ULL, 0x1d2cac968b80169dULL, 0x3ea3bcc0fe1fdf21ULL,
+ 0x1b24a7918a831298ULL, 0x3648533f091b242dULL, 0x8c064045c94603caULL,
+ 0x354cd8b2879426a1ULL, 0xb94a98f74ed2256bULL, 0x7c5b659de13ea342ULL,
+ 0xe46d1fca2e72b896ULL, 0x62734286e431b753ULL, 0x7a536e9ae03da747ULL,
+ 0x400b2babeb208b60ULL, 0x47f459d790ad7aeaULL, 0xff49b85ba4f1aa0eULL,
+ 0x44f0d25a1e227866ULL, 0x395ccebc85922eabULL, 0x5d27873d60a09dfdULL,
+ 0x0000000000000000ULL, 0xde355afb256f94b1ULL, 0x02f3f2f6f401f703ULL,
+ 0x1cdbd5edf10ee312ULL, 0x5fd475cb94a16afeULL, 0x3a5845310b1d2c27ULL,
+ 0x686b5f8fe734bb5cULL, 0x238f1056759fc9bcULL, 0x582b07b7ef2c9b74ULL,
+ 0xb8bde18c345cd0e4ULL, 0xa695c6973153c4f5ULL, 0xc2ee8f16d46177a3ULL,
+ 0xdacea30ad06d67b7ULL, 0x3344d3b5869722a4ULL, 0x19d755677e82e59bULL,
+ 0xc901eb64adea8e23ULL, 0x34bba1c9fd1ad32eULL, 0xf6552edf297ba48dULL,
+ 0xa09dcd903050c0f0ULL, 0x9ac588a13b4decd7ULL, 0x658c30fa9fbc46d9ULL,
+ 0x2a9386d2f815c73fULL, 0xae7e2968c6573ff9ULL, 0x6a98ad7913354c5fULL,
+ 0x14303a12060a181eULL, 0x1e28271b050f1411ULL, 0xa4663461c55233f6ULL,
+ 0x6688bb7711334455ULL, 0x2f9f06587799c1b6ULL, 0x15c743697c84ed91ULL,
+ 0x01f7797b7a8ef58fULL, 0x0de76f757888fd85ULL, 0xb4adf782365ad8eeULL,
+ 0x48e0c4541c24706cULL, 0x96d59eaf394be4ddULL, 0xcbf2199259eb7920ULL,
+ 0x50c0e84818286078ULL, 0xe98a70bf56fa4513ULL, 0x8df1393eb3c8f645ULL,
+ 0x87e92437b0cdfa4aULL, 0xd83d51fc246c90b4ULL, 0xc01d7de0206080a0ULL,
+ 0x8bf93239b2cbf240ULL, 0x4be44fd992ab72e0ULL, 0xed71894ea3f8b615ULL,
+ 0xba4e137ac05d27e7ULL, 0x851ad6c144cc0d49ULL, 0x5137913362a695f7ULL,
+ 0x6080b07010304050ULL, 0x9fc9082bb4c1ea5eULL, 0x3f54c5bb84912aaeULL,
+ 0x9722e7d443c51152ULL, 0x4dec44de93a876e5ULL, 0xb65e0574c25b2fedULL,
+ 0xa16ab4eb4ade357fULL, 0xa9815b14bddace73ULL, 0x050c808a8f8c0689ULL,
+ 0xee7502c32d77b499ULL, 0xaf895013bcd9ca76ULL, 0x6f942df39cb94ad6ULL,
+ 0x6177c90b6abeb5dfULL, 0x9d3afadd40c01d5dULL, 0x98367a57cf4c1bd4ULL,
+ 0xeb798249a2fbb210ULL, 0x2774e9a7809d3abaULL, 0xbf4293f04fd1216eULL,
+ 0x42f8d95d1f217c63ULL, 0x861e5d4cca430fc5ULL, 0xdb39da71aae39238ULL,
+ 0x912aecd342c61557ULL
+};
+
+static const u64 T5[256] = {
+ 0xb9bb016ad3ba68d2ULL, 0x9ae5b166fc54194dULL, 0x65e2cd14712f93bcULL,
+ 0x8725511b9c74b9cdULL, 0xa2f7a457f5530251ULL, 0xd6d003be68d3b86bULL,
+ 0xded604b56bd2bd6fULL, 0x52b3fe85d74d6429ULL, 0xbafdad4af0500d5dULL,
+ 0x09cf63e0e9ac268aULL, 0x1c0984968a8d830eULL, 0x91a51a4ddcbf79c6ULL,
+ 0xa73d4d379070adddULL, 0xaaf1a35cf6520755ULL, 0xa47be117b39ac852ULL,
+ 0x5ab5f98ed44c612dULL, 0x0346ac2023ea658fULL, 0xe6c4118462d5a673ULL,
+ 0xcc55c268a497f166ULL, 0xc6dc0da86ed1b263ULL, 0x85aa99d05533ffccULL,
+ 0xb2fbaa41f3510859ULL, 0xe2c79c0fed5b2a71ULL, 0x59f355aef7a604a2ULL,
+ 0xbefe20c17fde815fULL, 0x7aade5a2d848753dULL, 0x29d77fcce5a8329aULL,
+ 0xbc71e80ab699c75eULL, 0x96e03be670db904bULL, 0x8dac9edb5632fac8ULL,
+ 0xd1952215c4b751e6ULL, 0xb332ceaa19fc2bd7ULL, 0x4b70937338e348abULL,
+ 0x8463fd3bbf9edc42ULL, 0xfc41d052ae91ef7eULL, 0xac7de61cb09bcd56ULL,
+ 0x437694783be24dafULL, 0xb1bd0661d0bb6dd6ULL, 0x329bdaf1c3415819ULL,
+ 0x577917e5b26ecba5ULL, 0x41f95cb3f2a50baeULL, 0x16804b5640cbc00bULL,
+ 0x7f670cc2bd6bdab1ULL, 0xdc59cc7ea295fb6eULL, 0x61e1409ffea11fbeULL,
+ 0xcb10e3c308f318ebULL, 0xe181302fceb14ffeULL, 0x100c0e1606020a08ULL,
+ 0x2e925e6749ccdb17ULL, 0x6ea2663f51c4f337ULL, 0xe84e53cf271d6974ULL,
+ 0xa0786c9c3c144450ULL, 0x56b0730e58c3e82bULL, 0x3f57349aa563f291ULL,
+ 0x9ee63ced73da954fULL, 0xd2d38e35e75d3469ULL, 0xc2df8023e15f3e61ULL,
+ 0xaef22ed779dc8b57ULL, 0xcf136e48877d94e9ULL, 0x2694596c4acdde13ULL,
+ 0xdf1f605e817f9ee1ULL, 0xeac19b04ee5a2f75ULL, 0x477519f3b46cc1adULL,
+ 0xdad5893ee45c316dULL, 0xeb08ffef04f70cfbULL, 0x2dd4f2476a26be98ULL,
+ 0xab38c7b71cff24dbULL, 0x3b54b9112aed7e93ULL, 0x134aa23625e86f87ULL,
+ 0x9c69f426ba9dd34eULL, 0x5f7f10eeb16fcea1ULL, 0x04038d8b8f8e8c02ULL,
+ 0xc8564fe32b197d64ULL, 0x69e74794fda01abaULL, 0xd31aeade0df017e7ULL,
+ 0x3c1198ba8689971eULL, 0x78222d69110f333cULL, 0x3812153109071b1cULL,
+ 0x11c56afdecaf2986ULL, 0x8b20db9b10fb30cbULL, 0x4030385818082820ULL,
+ 0xa87e6b973f154154ULL, 0x682e237f170d3934ULL, 0x20181c2c0c041410ULL,
+ 0x0806070b03010504ULL, 0x074521abac64e98dULL, 0xb6f827ca7cdf845bULL,
+ 0x97295f0d9a76b3c5ULL, 0xef0b72648b7980f9ULL, 0xa6f429dc7add8e53ULL,
+ 0xf58eb3b2473dc9f4ULL, 0xb074628a3a164e58ULL, 0xe582bda4413fc3fcULL,
+ 0xa5b285fc5937ebdcULL, 0x4f731ef8b76dc4a9ULL, 0xdd90a8954838d8e0ULL,
+ 0xa1b10877d6b967deULL, 0xbf37442a9573a2d1ULL, 0x1b4ca53d26e96a83ULL,
+ 0xb5be8bea5f35e1d4ULL, 0x92e3b66dff551c49ULL, 0xaf3b4a3c9371a8d9ULL,
+ 0xff077c728d7b8af1ULL, 0x140f839d898c860aULL, 0xb73143219672a7d5ULL,
+ 0x34179fb18588921aULL, 0xe30ef8e407f609ffULL, 0x4dfcd6337e2a82a8ULL,
+ 0xed84baaf423ec6f8ULL, 0xcad98728e25e3b65ULL, 0x25d2f54c6927bb9cULL,
+ 0x0a89cfc0ca464305ULL, 0x60282474140c3c30ULL, 0x0f4326a0af65ec89ULL,
+ 0x676d05dfb868d5bdULL, 0x2f5b3a8ca361f899ULL, 0x180a091d05030f0cULL,
+ 0x46bc7d185ec1e223ULL, 0x82efb87bf9571641ULL, 0xfece189967d6a97fULL,
+ 0x86ec35f076d99a43ULL, 0xfacd9512e858257dULL, 0x8eea32fb75d89f47ULL,
+ 0x17492fbdaa66e385ULL, 0xf6c81f9264d7ac7bULL, 0xcd9ca6834e3ad2e8ULL,
+ 0x0e8a424b45c8cf07ULL, 0xfd88b4b9443cccf0ULL, 0x8326dc9013fa35cfULL,
+ 0xc453c563a796f462ULL, 0x51f552a5f4a701a6ULL, 0xb477ef01b598c25aULL,
+ 0x3352be1a29ec7b97ULL, 0xa9b70f7cd5b862daULL, 0x76a86f2254c7fc3bULL,
+ 0x19c36df6efae2c82ULL, 0x6f6b02d4bb69d0b9ULL, 0x62a7ecbfdd4b7a31ULL,
+ 0x31dd76d1e0ab3d96ULL, 0x21d178c7e6a9379eULL, 0x1f4f28b6a967e681ULL,
+ 0x503c364e1e0a2228ULL, 0x028fc8cbc9474601ULL, 0xc316e4c80bf21defULL,
+ 0xc1992c03c2b55beeULL, 0x0dccee6b6622aa88ULL, 0x7b64814932e556b3ULL,
+ 0x235eb00c2fee719fULL, 0x99a31d46dfbe7cc2ULL, 0x45fad1387d2b87acULL,
+ 0x7c21a0e29e81bf3eULL, 0x906c7ea636125a48ULL, 0x6c2daef49883b536ULL,
+ 0xd85a41f52d1b776cULL, 0x70242a62120e3638ULL, 0x05cae9606523af8cULL,
+ 0xfb04f1f902f506f3ULL, 0x1283c6ddcf454c09ULL, 0x15c6e7766321a584ULL,
+ 0x3e9e50714fced11fULL, 0x72abe2a9db497039ULL, 0x7de8c409742c9cb0ULL,
+ 0x9b2cd58d16f93ac3ULL, 0x636e885437e659bfULL, 0xd993251ec7b654e2ULL,
+ 0x5df0d825782888a0ULL, 0xb872658139174b5cULL, 0x642ba9ff9b82b032ULL,
+ 0xd05c46fe2e1a7268ULL, 0x2c1d96ac808b9d16ULL, 0xa33ec0bc1ffe21dfULL,
+ 0x241b91a7838a9812ULL, 0x48363f531b092d24ULL, 0x068c454046c9ca03ULL,
+ 0x4c35b2d89487a126ULL, 0x4ab9f798d24e6b25ULL, 0x5b7c9d653ee142a3ULL,
+ 0x6de4ca1f722e96b8ULL, 0x7362864231e453b7ULL, 0x537a9a6e3de047a7ULL,
+ 0x0b40ab2b20eb608bULL, 0xf447d759ad90ea7aULL, 0x49ff5bb8f1a40eaaULL,
+ 0xf0445ad2221e6678ULL, 0x5c39bcce9285ab2eULL, 0x275d3d87a060fd9dULL,
+ 0x0000000000000000ULL, 0x35defb5a6f25b194ULL, 0xf302f6f201f403f7ULL,
+ 0xdb1cedd50ef112e3ULL, 0xd45fcb75a194fe6aULL, 0x583a31451d0b272cULL,
+ 0x6b688f5f34e75cbbULL, 0x8f2356109f75bcc9ULL, 0x2b58b7072cef749bULL,
+ 0xbdb88ce15c34e4d0ULL, 0x95a697c65331f5c4ULL, 0xeec2168f61d4a377ULL,
+ 0xceda0aa36dd0b767ULL, 0x4433b5d39786a422ULL, 0xd7196755827e9be5ULL,
+ 0x01c964ebeaad238eULL, 0xbb34c9a11afd2ed3ULL, 0x55f6df2e7b298da4ULL,
+ 0x9da090cd5030f0c0ULL, 0xc59aa1884d3bd7ecULL, 0x8c65fa30bc9fd946ULL,
+ 0x932ad28615f83fc7ULL, 0x7eae682957c6f93fULL, 0x986a79ad35135f4cULL,
+ 0x3014123a0a061e18ULL, 0x281e1b270f051114ULL, 0x66a4613452c5f633ULL,
+ 0x886677bb33115544ULL, 0x9f2f58069977b6c1ULL, 0xc7156943847c91edULL,
+ 0xf7017b798e7a8ff5ULL, 0xe70d756f887885fdULL, 0xadb482f75a36eed8ULL,
+ 0xe04854c4241c6c70ULL, 0xd596af9e4b39dde4ULL, 0xf2cb9219eb592079ULL,
+ 0xc05048e828187860ULL, 0x8ae9bf70fa561345ULL, 0xf18d3e39c8b345f6ULL,
+ 0xe9873724cdb04afaULL, 0x3dd8fc516c24b490ULL, 0x1dc0e07d6020a080ULL,
+ 0xf98b3932cbb240f2ULL, 0xe44bd94fab92e072ULL, 0x71ed4e89f8a315b6ULL,
+ 0x4eba7a135dc0e727ULL, 0x1a85c1d6cc44490dULL, 0x37513391a662f795ULL,
+ 0x806070b030105040ULL, 0xc99f2b08c1b45eeaULL, 0x543fbbc59184ae2aULL,
+ 0x2297d4e7c5435211ULL, 0xec4dde44a893e576ULL, 0x5eb674055bc2ed2fULL,
+ 0x6aa1ebb4de4a7f35ULL, 0x81a9145bdabd73ceULL, 0x0c058a808c8f8906ULL,
+ 0x75eec302772d99b4ULL, 0x89af1350d9bc76caULL, 0x946ff32db99cd64aULL,
+ 0x77610bc9be6adfb5ULL, 0x3a9dddfac0405d1dULL, 0x3698577a4ccfd41bULL,
+ 0x79eb4982fba210b2ULL, 0x7427a7e99d80ba3aULL, 0x42bff093d14f6e21ULL,
+ 0xf8425dd9211f637cULL, 0x1e864c5d43cac50fULL, 0x39db71dae3aa3892ULL,
+ 0x2a91d3ecc6425715ULL
+};
+
+static const u64 T6[256] = {
+ 0x6a01bbb9d268bad3ULL, 0x66b1e59a4d1954fcULL, 0x14cde265bc932f71ULL,
+ 0x1b512587cdb9749cULL, 0x57a4f7a2510253f5ULL, 0xbe03d0d66bb8d368ULL,
+ 0xb504d6de6fbdd26bULL, 0x85feb35229644dd7ULL, 0x4aadfdba5d0d50f0ULL,
+ 0xe063cf098a26ace9ULL, 0x9684091c0e838d8aULL, 0x4d1aa591c679bfdcULL,
+ 0x374d3da7ddad7090ULL, 0x5ca3f1aa550752f6ULL, 0x17e17ba452c89ab3ULL,
+ 0x8ef9b55a2d614cd4ULL, 0x20ac46038f65ea23ULL, 0x8411c4e673a6d562ULL,
+ 0x68c255cc66f197a4ULL, 0xa80ddcc663b2d16eULL, 0xd099aa85ccff3355ULL,
+ 0x41aafbb2590851f3ULL, 0x0f9cc7e2712a5bedULL, 0xae55f359a204a6f7ULL,
+ 0xc120febe5f81de7fULL, 0xa2e5ad7a3d7548d8ULL, 0xcc7fd7299a32a8e5ULL,
+ 0x0ae871bc5ec799b6ULL, 0xe63be0964b90db70ULL, 0xdb9eac8dc8fa3256ULL,
+ 0x152295d1e651b7c4ULL, 0xaace32b3d72bfc19ULL, 0x7393704bab48e338ULL,
+ 0x3bfd638442dc9ebfULL, 0x52d041fc7eef91aeULL, 0x1ce67dac56cd9bb0ULL,
+ 0x78947643af4de23bULL, 0x6106bdb1d66dbbd0ULL, 0xf1da9b32195841c3ULL,
+ 0xe5177957a5cb6eb2ULL, 0xb35cf941ae0ba5f2ULL, 0x564b80160bc0cb40ULL,
+ 0xc20c677fb1da6bbdULL, 0x7ecc59dc6efb95a2ULL, 0x9f40e161be1fa1feULL,
+ 0xc3e310cbeb18f308ULL, 0x2f3081e1fe4fb1ceULL, 0x160e0c10080a0206ULL,
+ 0x675e922e17dbcc49ULL, 0x3f66a26e37f3c451ULL, 0xcf534ee874691d27ULL,
+ 0x9c6c78a05044143cULL, 0x0e73b0562be8c358ULL, 0x9a34573f91f263a5ULL,
+ 0xed3ce69e4f95da73ULL, 0x358ed3d269345de7ULL, 0x2380dfc2613e5fe1ULL,
+ 0xd72ef2ae578bdc79ULL, 0x486e13cfe9947d87ULL, 0x6c59942613decd4aULL,
+ 0x5e601fdfe19e7f81ULL, 0x049bc1ea752f5aeeULL, 0xf3197547adc16cb4ULL,
+ 0x3e89d5da6d315ce4ULL, 0xefff08ebfb0cf704ULL, 0x47f2d42d98be266aULL,
+ 0xb7c738abdb24ff1cULL, 0x11b9543b937eed2aULL, 0x36a24a13876fe825ULL,
+ 0x26f4699c4ed39dbaULL, 0xee107f5fa1ce6fb1ULL, 0x8b8d0304028c8e8fULL,
+ 0xe34f56c8647d192bULL, 0x9447e769ba1aa0fdULL, 0xdeea1ad3e717f00dULL,
+ 0xba98113c1e978986ULL, 0x692d22783c330f11ULL, 0x311512381c1b0709ULL,
+ 0xfd6ac5118629afecULL, 0x9bdb208bcb30fb10ULL, 0x5838304020280818ULL,
+ 0x976b7ea85441153fULL, 0x7f232e6834390d17ULL, 0x2c1c18201014040cULL,
+ 0x0b07060804050103ULL, 0xab2145078de964acULL, 0xca27f8b65b84df7cULL,
+ 0x0d5f2997c5b3769aULL, 0x64720beff980798bULL, 0xdc29f4a6538edd7aULL,
+ 0xb2b38ef5f4c93d47ULL, 0x8a6274b0584e163aULL, 0xa4bd82e5fcc33f41ULL,
+ 0xfc85b2a5dceb3759ULL, 0xf81e734fa9c46db7ULL, 0x95a890dde0d83848ULL,
+ 0x7708b1a1de67b9d6ULL, 0x2a4437bfd1a27395ULL, 0x3da54c1b836ae926ULL,
+ 0xea8bbeb5d4e1355fULL, 0x6db6e392491c55ffULL, 0x3c4a3bafd9a87193ULL,
+ 0x727c07fff18a7b8dULL, 0x9d830f140a868c89ULL, 0x214331b7d5a77296ULL,
+ 0xb19f17341a928885ULL, 0xe4f80ee3ff09f607ULL, 0x33d6fc4da8822a7eULL,
+ 0xafba84edf8c63e42ULL, 0x2887d9ca653b5ee2ULL, 0x4cf5d2259cbb2769ULL,
+ 0xc0cf890a054346caULL, 0x74242860303c0c14ULL, 0xa026430f89ec65afULL,
+ 0xdf056d67bdd568b8ULL, 0x8c3a5b2f99f861a3ULL, 0x1d090a180c0f0305ULL,
+ 0x187dbc4623e2c15eULL, 0x7bb8ef82411657f9ULL, 0x9918cefe7fa9d667ULL,
+ 0xf035ec86439ad976ULL, 0x1295cdfa7d2558e8ULL, 0xfb32ea8e479fd875ULL,
+ 0xbd2f491785e366aaULL, 0x921fc8f67bacd764ULL, 0x83a69ccde8d23a4eULL,
+ 0x4b428a0e07cfc845ULL, 0xb9b488fdf0cc3c44ULL, 0x90dc2683cf35fa13ULL,
+ 0x63c553c462f496a7ULL, 0xa552f551a601a7f4ULL, 0x01ef77b45ac298b5ULL,
+ 0x1abe5233977bec29ULL, 0x7c0fb7a9da62b8d5ULL, 0x226fa8763bfcc754ULL,
+ 0xf66dc319822caeefULL, 0xd4026b6fb9d069bbULL, 0xbfeca762317a4bddULL,
+ 0xd176dd31963dabe0ULL, 0xc778d1219e37a9e6ULL, 0xb6284f1f81e667a9ULL,
+ 0x4e363c5028220a1eULL, 0xcbc88f02014647c9ULL, 0xc8e416c3ef1df20bULL,
+ 0x032c99c1ee5bb5c2ULL, 0x6beecc0d88aa2266ULL, 0x4981647bb356e532ULL,
+ 0x0cb05e239f71ee2fULL, 0x461da399c27cbedfULL, 0x38d1fa45ac872b7dULL,
+ 0xe2a0217c3ebf819eULL, 0xa67e6c90485a1236ULL, 0xf4ae2d6c36b58398ULL,
+ 0xf5415ad86c771b2dULL, 0x622a247038360e12ULL, 0x60e9ca058caf2365ULL,
+ 0xf9f104fbf306f502ULL, 0xddc68312094c45cfULL, 0x76e7c61584a52163ULL,
+ 0x71509e3e1fd1ce4fULL, 0xa9e2ab72397049dbULL, 0x09c4e87db09c2c74ULL,
+ 0x8dd52c9bc33af916ULL, 0x54886e63bf59e637ULL, 0x1e2593d9e254b6c7ULL,
+ 0x25d8f05da0882878ULL, 0x816572b85c4b1739ULL, 0xffa92b6432b0829bULL,
+ 0xfe465cd068721a2eULL, 0xac961d2c169d8b80ULL, 0xbcc03ea3df21fe1fULL,
+ 0xa7911b2412988a83ULL, 0x533f3648242d091bULL, 0x40458c0603cac946ULL,
+ 0xd8b2354c26a18794ULL, 0x98f7b94a256b4ed2ULL, 0x659d7c5ba342e13eULL,
+ 0x1fcae46db8962e72ULL, 0x42866273b753e431ULL, 0x6e9a7a53a747e03dULL,
+ 0x2bab400b8b60eb20ULL, 0x59d747f47aea90adULL, 0xb85bff49aa0ea4f1ULL,
+ 0xd25a44f078661e22ULL, 0xcebc395c2eab8592ULL, 0x873d5d279dfd60a0ULL,
+ 0x0000000000000000ULL, 0x5afbde3594b1256fULL, 0xf2f602f3f703f401ULL,
+ 0xd5ed1cdbe312f10eULL, 0x75cb5fd46afe94a1ULL, 0x45313a582c270b1dULL,
+ 0x5f8f686bbb5ce734ULL, 0x1056238fc9bc759fULL, 0x07b7582b9b74ef2cULL,
+ 0xe18cb8bdd0e4345cULL, 0xc697a695c4f53153ULL, 0x8f16c2ee77a3d461ULL,
+ 0xa30adace67b7d06dULL, 0xd3b5334422a48697ULL, 0x556719d7e59b7e82ULL,
+ 0xeb64c9018e23adeaULL, 0xa1c934bbd32efd1aULL, 0x2edff655a48d297bULL,
+ 0xcd90a09dc0f03050ULL, 0x88a19ac5ecd73b4dULL, 0x30fa658c46d99fbcULL,
+ 0x86d22a93c73ff815ULL, 0x2968ae7e3ff9c657ULL, 0xad796a984c5f1335ULL,
+ 0x3a121430181e060aULL, 0x271b1e281411050fULL, 0x3461a46633f6c552ULL,
+ 0xbb77668844551133ULL, 0x06582f9fc1b67799ULL, 0x436915c7ed917c84ULL,
+ 0x797b01f7f58f7a8eULL, 0x6f750de7fd857888ULL, 0xf782b4add8ee365aULL,
+ 0xc45448e0706c1c24ULL, 0x9eaf96d5e4dd394bULL, 0x1992cbf2792059ebULL,
+ 0xe84850c060781828ULL, 0x70bfe98a451356faULL, 0x393e8df1f645b3c8ULL,
+ 0x243787e9fa4ab0cdULL, 0x51fcd83d90b4246cULL, 0x7de0c01d80a02060ULL,
+ 0x32398bf9f240b2cbULL, 0x4fd94be472e092abULL, 0x894eed71b615a3f8ULL,
+ 0x137aba4e27e7c05dULL, 0xd6c1851a0d4944ccULL, 0x9133513795f762a6ULL,
+ 0xb070608040501030ULL, 0x082b9fc9ea5eb4c1ULL, 0xc5bb3f542aae8491ULL,
+ 0xe7d49722115243c5ULL, 0x44de4dec76e593a8ULL, 0x0574b65e2fedc25bULL,
+ 0xb4eba16a357f4adeULL, 0x5b14a981ce73bddaULL, 0x808a050c06898f8cULL,
+ 0x02c3ee75b4992d77ULL, 0x5013af89ca76bcd9ULL, 0x2df36f944ad69cb9ULL,
+ 0xc90b6177b5df6abeULL, 0xfadd9d3a1d5d40c0ULL, 0x7a5798361bd4cf4cULL,
+ 0x8249eb79b210a2fbULL, 0xe9a727743aba809dULL, 0x93f0bf42216e4fd1ULL,
+ 0xd95d42f87c631f21ULL, 0x5d4c861e0fc5ca43ULL, 0xda71db399238aae3ULL,
+ 0xecd3912a155742c6ULL
+};
+
+static const u64 T7[256] = {
+ 0x016ab9bb68d2d3baULL, 0xb1669ae5194dfc54ULL, 0xcd1465e293bc712fULL,
+ 0x511b8725b9cd9c74ULL, 0xa457a2f70251f553ULL, 0x03bed6d0b86b68d3ULL,
+ 0x04b5ded6bd6f6bd2ULL, 0xfe8552b36429d74dULL, 0xad4abafd0d5df050ULL,
+ 0x63e009cf268ae9acULL, 0x84961c09830e8a8dULL, 0x1a4d91a579c6dcbfULL,
+ 0x4d37a73daddd9070ULL, 0xa35caaf10755f652ULL, 0xe117a47bc852b39aULL,
+ 0xf98e5ab5612dd44cULL, 0xac200346658f23eaULL, 0x1184e6c4a67362d5ULL,
+ 0xc268cc55f166a497ULL, 0x0da8c6dcb2636ed1ULL, 0x99d085aaffcc5533ULL,
+ 0xaa41b2fb0859f351ULL, 0x9c0fe2c72a71ed5bULL, 0x55ae59f304a2f7a6ULL,
+ 0x20c1befe815f7fdeULL, 0xe5a27aad753dd848ULL, 0x7fcc29d7329ae5a8ULL,
+ 0xe80abc71c75eb699ULL, 0x3be696e0904b70dbULL, 0x9edb8dacfac85632ULL,
+ 0x2215d19551e6c4b7ULL, 0xceaab3322bd719fcULL, 0x93734b7048ab38e3ULL,
+ 0xfd3b8463dc42bf9eULL, 0xd052fc41ef7eae91ULL, 0xe61cac7dcd56b09bULL,
+ 0x947843764daf3be2ULL, 0x0661b1bd6dd6d0bbULL, 0xdaf1329b5819c341ULL,
+ 0x17e55779cba5b26eULL, 0x5cb341f90baef2a5ULL, 0x4b561680c00b40cbULL,
+ 0x0cc27f67dab1bd6bULL, 0xcc7edc59fb6ea295ULL, 0x409f61e11fbefea1ULL,
+ 0xe3c3cb1018eb08f3ULL, 0x302fe1814ffeceb1ULL, 0x0e16100c0a080602ULL,
+ 0x5e672e92db1749ccULL, 0x663f6ea2f33751c4ULL, 0x53cfe84e6974271dULL,
+ 0x6c9ca07844503c14ULL, 0x730e56b0e82b58c3ULL, 0x349a3f57f291a563ULL,
+ 0x3ced9ee6954f73daULL, 0x8e35d2d33469e75dULL, 0x8023c2df3e61e15fULL,
+ 0x2ed7aef28b5779dcULL, 0x6e48cf1394e9877dULL, 0x596c2694de134acdULL,
+ 0x605edf1f9ee1817fULL, 0x9b04eac12f75ee5aULL, 0x19f34775c1adb46cULL,
+ 0x893edad5316de45cULL, 0xffefeb080cfb04f7ULL, 0xf2472dd4be986a26ULL,
+ 0xc7b7ab3824db1cffULL, 0xb9113b547e932aedULL, 0xa236134a6f8725e8ULL,
+ 0xf4269c69d34eba9dULL, 0x10ee5f7fcea1b16fULL, 0x8d8b04038c028f8eULL,
+ 0x4fe3c8567d642b19ULL, 0x479469e71abafda0ULL, 0xeaded31a17e70df0ULL,
+ 0x98ba3c11971e8689ULL, 0x2d697822333c110fULL, 0x153138121b1c0907ULL,
+ 0x6afd11c52986ecafULL, 0xdb9b8b2030cb10fbULL, 0x3858403028201808ULL,
+ 0x6b97a87e41543f15ULL, 0x237f682e3934170dULL, 0x1c2c201814100c04ULL,
+ 0x070b080605040301ULL, 0x21ab0745e98dac64ULL, 0x27cab6f8845b7cdfULL,
+ 0x5f0d9729b3c59a76ULL, 0x7264ef0b80f98b79ULL, 0x29dca6f48e537addULL,
+ 0xb3b2f58ec9f4473dULL, 0x628ab0744e583a16ULL, 0xbda4e582c3fc413fULL,
+ 0x85fca5b2ebdc5937ULL, 0x1ef84f73c4a9b76dULL, 0xa895dd90d8e04838ULL,
+ 0x0877a1b167ded6b9ULL, 0x442abf37a2d19573ULL, 0xa53d1b4c6a8326e9ULL,
+ 0x8beab5bee1d45f35ULL, 0xb66d92e31c49ff55ULL, 0x4a3caf3ba8d99371ULL,
+ 0x7c72ff078af18d7bULL, 0x839d140f860a898cULL, 0x4321b731a7d59672ULL,
+ 0x9fb13417921a8588ULL, 0xf8e4e30e09ff07f6ULL, 0xd6334dfc82a87e2aULL,
+ 0xbaafed84c6f8423eULL, 0x8728cad93b65e25eULL, 0xf54c25d2bb9c6927ULL,
+ 0xcfc00a894305ca46ULL, 0x247460283c30140cULL, 0x26a00f43ec89af65ULL,
+ 0x05df676dd5bdb868ULL, 0x3a8c2f5bf899a361ULL, 0x091d180a0f0c0503ULL,
+ 0x7d1846bce2235ec1ULL, 0xb87b82ef1641f957ULL, 0x1899fecea97f67d6ULL,
+ 0x35f086ec9a4376d9ULL, 0x9512facd257de858ULL, 0x32fb8eea9f4775d8ULL,
+ 0x2fbd1749e385aa66ULL, 0x1f92f6c8ac7b64d7ULL, 0xa683cd9cd2e84e3aULL,
+ 0x424b0e8acf0745c8ULL, 0xb4b9fd88ccf0443cULL, 0xdc90832635cf13faULL,
+ 0xc563c453f462a796ULL, 0x52a551f501a6f4a7ULL, 0xef01b477c25ab598ULL,
+ 0xbe1a33527b9729ecULL, 0x0f7ca9b762dad5b8ULL, 0x6f2276a8fc3b54c7ULL,
+ 0x6df619c32c82efaeULL, 0x02d46f6bd0b9bb69ULL, 0xecbf62a77a31dd4bULL,
+ 0x76d131dd3d96e0abULL, 0x78c721d1379ee6a9ULL, 0x28b61f4fe681a967ULL,
+ 0x364e503c22281e0aULL, 0xc8cb028f4601c947ULL, 0xe4c8c3161def0bf2ULL,
+ 0x2c03c1995beec2b5ULL, 0xee6b0dccaa886622ULL, 0x81497b6456b332e5ULL,
+ 0xb00c235e719f2feeULL, 0x1d4699a37cc2dfbeULL, 0xd13845fa87ac7d2bULL,
+ 0xa0e27c21bf3e9e81ULL, 0x7ea6906c5a483612ULL, 0xaef46c2db5369883ULL,
+ 0x41f5d85a776c2d1bULL, 0x2a6270243638120eULL, 0xe96005caaf8c6523ULL,
+ 0xf1f9fb0406f302f5ULL, 0xc6dd12834c09cf45ULL, 0xe77615c6a5846321ULL,
+ 0x50713e9ed11f4fceULL, 0xe2a972ab7039db49ULL, 0xc4097de89cb0742cULL,
+ 0xd58d9b2c3ac316f9ULL, 0x8854636e59bf37e6ULL, 0x251ed99354e2c7b6ULL,
+ 0xd8255df088a07828ULL, 0x6581b8724b5c3917ULL, 0xa9ff642bb0329b82ULL,
+ 0x46fed05c72682e1aULL, 0x96ac2c1d9d16808bULL, 0xc0bca33e21df1ffeULL,
+ 0x91a7241b9812838aULL, 0x3f5348362d241b09ULL, 0x4540068cca0346c9ULL,
+ 0xb2d84c35a1269487ULL, 0xf7984ab96b25d24eULL, 0x9d655b7c42a33ee1ULL,
+ 0xca1f6de496b8722eULL, 0x8642736253b731e4ULL, 0x9a6e537a47a73de0ULL,
+ 0xab2b0b40608b20ebULL, 0xd759f447ea7aad90ULL, 0x5bb849ff0eaaf1a4ULL,
+ 0x5ad2f0446678221eULL, 0xbcce5c39ab2e9285ULL, 0x3d87275dfd9da060ULL,
+ 0x0000000000000000ULL, 0xfb5a35deb1946f25ULL, 0xf6f2f30203f701f4ULL,
+ 0xedd5db1c12e30ef1ULL, 0xcb75d45ffe6aa194ULL, 0x3145583a272c1d0bULL,
+ 0x8f5f6b685cbb34e7ULL, 0x56108f23bcc99f75ULL, 0xb7072b58749b2cefULL,
+ 0x8ce1bdb8e4d05c34ULL, 0x97c695a6f5c45331ULL, 0x168feec2a37761d4ULL,
+ 0x0aa3cedab7676dd0ULL, 0xb5d34433a4229786ULL, 0x6755d7199be5827eULL,
+ 0x64eb01c9238eeaadULL, 0xc9a1bb342ed31afdULL, 0xdf2e55f68da47b29ULL,
+ 0x90cd9da0f0c05030ULL, 0xa188c59ad7ec4d3bULL, 0xfa308c65d946bc9fULL,
+ 0xd286932a3fc715f8ULL, 0x68297eaef93f57c6ULL, 0x79ad986a5f4c3513ULL,
+ 0x123a30141e180a06ULL, 0x1b27281e11140f05ULL, 0x613466a4f63352c5ULL,
+ 0x77bb886655443311ULL, 0x58069f2fb6c19977ULL, 0x6943c71591ed847cULL,
+ 0x7b79f7018ff58e7aULL, 0x756fe70d85fd8878ULL, 0x82f7adb4eed85a36ULL,
+ 0x54c4e0486c70241cULL, 0xaf9ed596dde44b39ULL, 0x9219f2cb2079eb59ULL,
+ 0x48e8c05078602818ULL, 0xbf708ae91345fa56ULL, 0x3e39f18d45f6c8b3ULL,
+ 0x3724e9874afacdb0ULL, 0xfc513dd8b4906c24ULL, 0xe07d1dc0a0806020ULL,
+ 0x3932f98b40f2cbb2ULL, 0xd94fe44be072ab92ULL, 0x4e8971ed15b6f8a3ULL,
+ 0x7a134ebae7275dc0ULL, 0xc1d61a85490dcc44ULL, 0x33913751f795a662ULL,
+ 0x70b0806050403010ULL, 0x2b08c99f5eeac1b4ULL, 0xbbc5543fae2a9184ULL,
+ 0xd4e722975211c543ULL, 0xde44ec4de576a893ULL, 0x74055eb6ed2f5bc2ULL,
+ 0xebb46aa17f35de4aULL, 0x145b81a973cedabdULL, 0x8a800c0589068c8fULL,
+ 0xc30275ee99b4772dULL, 0x135089af76cad9bcULL, 0xf32d946fd64ab99cULL,
+ 0x0bc97761dfb5be6aULL, 0xddfa3a9d5d1dc040ULL, 0x577a3698d41b4ccfULL,
+ 0x498279eb10b2fba2ULL, 0xa7e97427ba3a9d80ULL, 0xf09342bf6e21d14fULL,
+ 0x5dd9f842637c211fULL, 0x4c5d1e86c50f43caULL, 0x71da39db3892e3aaULL,
+ 0xd3ec2a915715c642ULL
+};
+
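+/*
+ * Round constants: byte j of c[r] equals S[8*r + j], the Khazad S-box of a
+ * running counter (compare with the low bytes of T7 above), so no separate
+ * constant generator is needed at runtime.
+ */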
+static const u64 c[KHAZAD_ROUNDS + 1] = {
+ 0xba542f7453d3d24dULL, 0x50ac8dbf70529a4cULL, 0xead597d133515ba6ULL,
+ 0xde48a899db32b7fcULL, 0xe39e919be2bb416eULL, 0xa5cb6b95a1f3b102ULL,
+ 0xccc41d14c363da5dULL, 0x5fdc7dcd7f5a6c5cULL, 0xf726ffede89d6f8eULL
+};
+
+static int khazad_setkey(void *ctx_arg, const u8 *in_key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct khazad_ctx *ctx = ctx_arg;
+ int r;
+ const u64 *S = T7;
+ u64 K2, K1;
+
+ if (key_len != 16)
+ {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
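+ /* load the 128-bit key big-endian as two 64-bit halves:
+ K2 = bytes 0-7, K1 = bytes 8-15 */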
+ K2 = ((u64)in_key[ 0] << 56) ^
+ ((u64)in_key[ 1] << 48) ^
+ ((u64)in_key[ 2] << 40) ^
+ ((u64)in_key[ 3] << 32) ^
+ ((u64)in_key[ 4] << 24) ^
+ ((u64)in_key[ 5] << 16) ^
+ ((u64)in_key[ 6] << 8) ^
+ ((u64)in_key[ 7] );
+ K1 = ((u64)in_key[ 8] << 56) ^
+ ((u64)in_key[ 9] << 48) ^
+ ((u64)in_key[10] << 40) ^
+ ((u64)in_key[11] << 32) ^
+ ((u64)in_key[12] << 24) ^
+ ((u64)in_key[13] << 16) ^
+ ((u64)in_key[14] << 8) ^
+ ((u64)in_key[15] );
+
+ /* setup the encrypt key */
+ for (r = 0; r <= KHAZAD_ROUNDS; r++) {
+ ctx->E[r] = T0[(int)(K1 >> 56) ] ^
+ T1[(int)(K1 >> 48) & 0xff] ^
+ T2[(int)(K1 >> 40) & 0xff] ^
+ T3[(int)(K1 >> 32) & 0xff] ^
+ T4[(int)(K1 >> 24) & 0xff] ^
+ T5[(int)(K1 >> 16) & 0xff] ^
+ T6[(int)(K1 >> 8) & 0xff] ^
+ T7[(int)(K1 ) & 0xff] ^
+ c[r] ^ K2;
+ K2 = K1;
+ K1 = ctx->E[r];
+ }
+ /* setup the decrypt key */
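+ /* (Khazad's S-box is an involution, so T_i[S[x] & 0xff] applies only
+ the linear layer theta to each inner encrypt round key) */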
+ ctx->D[0] = ctx->E[KHAZAD_ROUNDS];
+ for (r = 1; r < KHAZAD_ROUNDS; r++) {
+ K1 = ctx->E[KHAZAD_ROUNDS - r];
+ ctx->D[r] = T0[(int)S[(int)(K1 >> 56) ] & 0xff] ^
+ T1[(int)S[(int)(K1 >> 48) & 0xff] & 0xff] ^
+ T2[(int)S[(int)(K1 >> 40) & 0xff] & 0xff] ^
+ T3[(int)S[(int)(K1 >> 32) & 0xff] & 0xff] ^
+ T4[(int)S[(int)(K1 >> 24) & 0xff] & 0xff] ^
+ T5[(int)S[(int)(K1 >> 16) & 0xff] & 0xff] ^
+ T6[(int)S[(int)(K1 >> 8) & 0xff] & 0xff] ^
+ T7[(int)S[(int)(K1 ) & 0xff] & 0xff];
+ }
+ ctx->D[KHAZAD_ROUNDS] = ctx->E[0];
+
+ return 0;
+
+}
+
+static void khazad_crypt(const u64 roundKey[KHAZAD_ROUNDS + 1],
+ u8 *ciphertext, const u8 *plaintext)
+{
+
+ int r;
+ u64 state;
+
+ state = ((u64)plaintext[0] << 56) ^
+ ((u64)plaintext[1] << 48) ^
+ ((u64)plaintext[2] << 40) ^
+ ((u64)plaintext[3] << 32) ^
+ ((u64)plaintext[4] << 24) ^
+ ((u64)plaintext[5] << 16) ^
+ ((u64)plaintext[6] << 8) ^
+ ((u64)plaintext[7] ) ^
+ roundKey[0];
+
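+ /* roundKey[0] above is the input whitening; rounds 1..R-1 below apply
+ the full round function */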
+ for (r = 1; r < KHAZAD_ROUNDS; r++) {
+ state = T0[(int)(state >> 56) ] ^
+ T1[(int)(state >> 48) & 0xff] ^
+ T2[(int)(state >> 40) & 0xff] ^
+ T3[(int)(state >> 32) & 0xff] ^
+ T4[(int)(state >> 24) & 0xff] ^
+ T5[(int)(state >> 16) & 0xff] ^
+ T6[(int)(state >> 8) & 0xff] ^
+ T7[(int)(state ) & 0xff] ^
+ roundKey[r];
+ }
+
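+ /* final round: masking each T-table to a single byte lane drops the
+ linear layer, leaving only the S-box substitution and the last
+ round key */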
+ state = (T0[(int)(state >> 56) ] & 0xff00000000000000ULL) ^
+ (T1[(int)(state >> 48) & 0xff] & 0x00ff000000000000ULL) ^
+ (T2[(int)(state >> 40) & 0xff] & 0x0000ff0000000000ULL) ^
+ (T3[(int)(state >> 32) & 0xff] & 0x000000ff00000000ULL) ^
+ (T4[(int)(state >> 24) & 0xff] & 0x00000000ff000000ULL) ^
+ (T5[(int)(state >> 16) & 0xff] & 0x0000000000ff0000ULL) ^
+ (T6[(int)(state >> 8) & 0xff] & 0x000000000000ff00ULL) ^
+ (T7[(int)(state ) & 0xff] & 0x00000000000000ffULL) ^
+ roundKey[KHAZAD_ROUNDS];
+
+ ciphertext[0] = (u8)(state >> 56);
+ ciphertext[1] = (u8)(state >> 48);
+ ciphertext[2] = (u8)(state >> 40);
+ ciphertext[3] = (u8)(state >> 32);
+ ciphertext[4] = (u8)(state >> 24);
+ ciphertext[5] = (u8)(state >> 16);
+ ciphertext[6] = (u8)(state >> 8);
+ ciphertext[7] = (u8)(state );
+
+}
+
+static void khazad_encrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ struct khazad_ctx *ctx = ctx_arg;
+ khazad_crypt(ctx->E, dst, src);
+}
+
+static void khazad_decrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ struct khazad_ctx *ctx = ctx_arg;
+ khazad_crypt(ctx->D, dst, src);
+}
+
+static struct crypto_alg khazad_alg = {
+ .cra_name = "khazad",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = KHAZAD_BLOCK_SIZE,
+ .cra_ctxsize = sizeof (struct khazad_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(khazad_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = KHAZAD_KEY_SIZE,
+ .cia_max_keysize = KHAZAD_KEY_SIZE,
+ .cia_setkey = khazad_setkey,
+ .cia_encrypt = khazad_encrypt,
+ .cia_decrypt = khazad_decrypt } }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = crypto_register_alg(&khazad_alg);
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&khazad_alg);
+}
+
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Khazad Cryptographic Algorithm");
--- /dev/null
+/*
+ * Cryptographic API.
+ *
+ * TEA and Xtended TEA Algorithms
+ *
+ * The TEA and Xtended TEA algorithms were developed by David Wheeler
+ * and Roger Needham at the Computer Laboratory of Cambridge University.
+ *
+ * Copyright (c) 2004 Aaron Grothe ajgrothe@yahoo.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define TEA_KEY_SIZE 16
+#define TEA_BLOCK_SIZE 8
+#define TEA_ROUNDS 32
+#define TEA_DELTA 0x9e3779b9
+
+#define XTEA_KEY_SIZE 16
+#define XTEA_BLOCK_SIZE 8
+#define XTEA_ROUNDS 32
+#define XTEA_DELTA 0x9e3779b9
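+/* 0x9e3779b9 is 2^32 divided by the golden ratio, the "magic" schedule
+ * constant from the TEA papers */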
+
+#define u32_in(x) le32_to_cpu(*(const u32 *)(x))
+#define u32_out(to, from) (*(u32 *)(to) = cpu_to_le32(from))
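+/* the raw casts above read/write 32-bit words little-endian and assume
+ * the block pointers are sufficiently aligned for u32 access */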
+
+struct tea_ctx {
+ u32 KEY[4];
+};
+
+struct xtea_ctx {
+ u32 KEY[4];
+};
+
+static int tea_setkey(void *ctx_arg, const u8 *in_key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ if (key_len != 16)
+ {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->KEY[0] = u32_in (in_key);
+ ctx->KEY[1] = u32_in (in_key + 4);
+ ctx->KEY[2] = u32_in (in_key + 8);
+ ctx->KEY[3] = u32_in (in_key + 12);
+
+ return 0;
+
+}
+
+static void tea_encrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ u32 y, z, n, sum = 0;
+ u32 k0, k1, k2, k3;
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ k0 = ctx->KEY[0];
+ k1 = ctx->KEY[1];
+ k2 = ctx->KEY[2];
+ k3 = ctx->KEY[3];
+
+ n = TEA_ROUNDS;
+
+ while (n-- > 0) {
+ sum += TEA_DELTA;
+ y += ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
+ z += ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+}
+
+static void tea_decrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ u32 y, z, n, sum;
+ u32 k0, k1, k2, k3;
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ k0 = ctx->KEY[0];
+ k1 = ctx->KEY[1];
+ k2 = ctx->KEY[2];
+ k3 = ctx->KEY[3];
+
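+ /* start from the final encryption sum:
+ TEA_DELTA << 5 == TEA_DELTA * TEA_ROUNDS (32) */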
+ sum = TEA_DELTA << 5;
+
+ n = TEA_ROUNDS;
+
+ while (n-- > 0) {
+ z -= ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
+ y -= ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
+ sum -= TEA_DELTA;
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static int xtea_setkey(void *ctx_arg, const u8 *in_key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct xtea_ctx *ctx = ctx_arg;
+
+ if (key_len != 16)
+ {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->KEY[0] = u32_in (in_key);
+ ctx->KEY[1] = u32_in (in_key + 4);
+ ctx->KEY[2] = u32_in (in_key + 8);
+ ctx->KEY[3] = u32_in (in_key + 12);
+
+ return 0;
+
+}
+
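+/*
+ * Note: the grouping "(y ^ sum) + KEY[...]" used below differs from the
+ * published XTEA reference, which computes "y ^ (sum + k[...])"; the two
+ * ciphers are therefore not interoperable.
+ */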
+static void xtea_encrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+
+ u32 y, z, sum = 0;
+ u32 limit = XTEA_DELTA * XTEA_ROUNDS;
+
+ struct xtea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ while (sum != limit) {
+ y += (z << 4 ^ z >> 5) + (z ^ sum) + ctx->KEY[sum & 3];
+ sum += XTEA_DELTA;
+ z += (y << 4 ^ y >> 5) + (y ^ sum) + ctx->KEY[sum>>11 & 3];
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static void xtea_decrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+
+ u32 y, z, sum;
+ struct xtea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ sum = XTEA_DELTA * XTEA_ROUNDS;
+
+ while (sum) {
+ z -= (y << 4 ^ y >> 5) + (y ^ sum) + ctx->KEY[sum>>11 & 3];
+ sum -= XTEA_DELTA;
+ y -= (z << 4 ^ z >> 5) + (z ^ sum) + ctx->KEY[sum & 3];
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static struct crypto_alg tea_alg = {
+ .cra_name = "tea",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = TEA_BLOCK_SIZE,
+ .cra_ctxsize = sizeof (struct tea_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(tea_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = TEA_KEY_SIZE,
+ .cia_max_keysize = TEA_KEY_SIZE,
+ .cia_setkey = tea_setkey,
+ .cia_encrypt = tea_encrypt,
+ .cia_decrypt = tea_decrypt } }
+};
+
+static struct crypto_alg xtea_alg = {
+ .cra_name = "xtea",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = XTEA_BLOCK_SIZE,
+ .cra_ctxsize = sizeof (struct xtea_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(xtea_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = XTEA_KEY_SIZE,
+ .cia_max_keysize = XTEA_KEY_SIZE,
+ .cia_setkey = xtea_setkey,
+ .cia_encrypt = xtea_encrypt,
+ .cia_decrypt = xtea_decrypt } }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = crypto_register_alg(&tea_alg);
+ if (ret < 0)
+ goto out;
+
+ ret = crypto_register_alg(&xtea_alg);
+ if (ret < 0) {
+ crypto_unregister_alg(&tea_alg);
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&tea_alg);
+ crypto_unregister_alg(&xtea_alg);
+}
+
+MODULE_ALIAS("xtea");
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("TEA & XTEA Cryptographic Algorithms");
--- /dev/null
+/*
+ * sx8.c: Driver for Promise SATA SX8 looks-like-I2O hardware
+ *
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * Author/maintainer: Jeff Garzik <jgarzik@pobox.com>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/sched.h>
+#include <linux/devfs_fs_kernel.h>
+#include <linux/interrupt.h>
+#include <linux/compiler.h>
+#include <linux/workqueue.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/time.h>
+#include <linux/hdreg.h>
+#include <asm/io.h>
+#include <asm/semaphore.h>
+#include <asm/uaccess.h>
+
+MODULE_AUTHOR("Jeff Garzik");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Promise SATA SX8 block driver");
+
+#if 0
+#define CARM_DEBUG
+#define CARM_VERBOSE_DEBUG
+#else
+#undef CARM_DEBUG
+#undef CARM_VERBOSE_DEBUG
+#endif
+#undef CARM_NDEBUG
+
+#define DRV_NAME "sx8"
+#define DRV_VERSION "0.8"
+#define PFX DRV_NAME ": "
+
+#define NEXT_RESP(idx) ((idx + 1) % RMSG_Q_LEN)
+
+/* 0xf is just arbitrary, non-zero noise; this is sorta like poisoning */
+#define TAG_ENCODE(tag) (((tag) << 16) | 0xf)
+#define TAG_DECODE(tag) (((tag) >> 16) & 0x1f)
+#define TAG_VALID(tag) ((((tag) & 0xf) == 0xf) && (TAG_DECODE(tag) < 32))
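+/* e.g. TAG_ENCODE(5) == 0x0005000f; TAG_DECODE then recovers 5, and
+ * TAG_VALID checks both the 0xf marker and that the tag fits in 0..31 */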
+
+/* note: prints function name for you */
+#ifdef CARM_DEBUG
+#define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#ifdef CARM_VERBOSE_DEBUG
+#define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#else
+#define VPRINTK(fmt, args...)
+#endif /* CARM_VERBOSE_DEBUG */
+#else
+#define DPRINTK(fmt, args...)
+#define VPRINTK(fmt, args...)
+#endif /* CARM_DEBUG */
+
+#ifdef CARM_NDEBUG
+#define assert(expr)
+#else
+#define assert(expr) \
+ if(unlikely(!(expr))) { \
+ printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ }
+#endif
+
+/* defines only for the constants which don't work well as enums */
+struct carm_host;
+
+enum {
+ /* adapter-wide limits */
+ CARM_MAX_PORTS = 8,
+ CARM_SHM_SIZE = (4096 << 7),
+ CARM_MINORS_PER_MAJOR = 256 / CARM_MAX_PORTS,
+ CARM_MAX_WAIT_Q = CARM_MAX_PORTS + 1,
+
+ /* command message queue limits */
+ CARM_MAX_REQ = 64, /* max command msgs per host */
+ CARM_MAX_Q = 1, /* one command at a time */
+ CARM_MSG_LOW_WATER = (CARM_MAX_REQ / 4), /* refill mark */
+
+ /* S/G limits, host-wide and per-request */
+ CARM_MAX_REQ_SG = 32, /* max s/g entries per request */
+ CARM_SG_BOUNDARY = 0xffffUL, /* s/g segment boundary */
+ CARM_MAX_HOST_SG = 600, /* max s/g entries per host */
+ CARM_SG_LOW_WATER = (CARM_MAX_HOST_SG / 4), /* refill mark */
+
+ /* hardware registers */
+ CARM_IHQP = 0x1c,
+ CARM_INT_STAT = 0x10, /* interrupt status */
+ CARM_INT_MASK = 0x14, /* interrupt mask */
+ CARM_HMUC = 0x18, /* host message unit control */
+ RBUF_ADDR_LO = 0x20, /* response msg DMA buf low 32 bits */
+ RBUF_ADDR_HI = 0x24, /* response msg DMA buf high 32 bits */
+ RBUF_BYTE_SZ = 0x28,
+ CARM_RESP_IDX = 0x2c,
+ CARM_CMS0 = 0x30, /* command message size reg 0 */
+ CARM_LMUC = 0x48,
+ CARM_HMPHA = 0x6c,
+ CARM_INITC = 0xb5,
+
+ /* bits in CARM_INT_{STAT,MASK} */
+ INT_RESERVED = 0xfffffff0,
+ INT_WATCHDOG = (1 << 3), /* watchdog timer */
+ INT_Q_OVERFLOW = (1 << 2), /* cmd msg q overflow */
+ INT_Q_AVAILABLE = (1 << 1), /* cmd msg q has free space */
+ INT_RESPONSE = (1 << 0), /* response msg available */
+ INT_ACK_MASK = INT_WATCHDOG | INT_Q_OVERFLOW,
+ INT_DEF_MASK = INT_RESERVED | INT_Q_OVERFLOW |
+ INT_RESPONSE,
+
+ /* command messages, and related register bits */
+ CARM_HAVE_RESP = 0x01,
+ CARM_MSG_READ = 1,
+ CARM_MSG_WRITE = 2,
+ CARM_MSG_VERIFY = 3,
+ CARM_MSG_GET_CAPACITY = 4,
+ CARM_MSG_FLUSH = 5,
+ CARM_MSG_IOCTL = 6,
+ CARM_MSG_ARRAY = 8,
+ CARM_MSG_MISC = 9,
+ CARM_CME = (1 << 2),
+ CARM_RME = (1 << 1),
+ CARM_WZBC = (1 << 0),
+ CARM_RMI = (1 << 0),
+ CARM_Q_FULL = (1 << 3),
+ CARM_MSG_SIZE = 288,
+ CARM_Q_LEN = 48,
+
+ /* CARM_MSG_IOCTL messages */
+ CARM_IOC_SCAN_CHAN = 5, /* scan channels for devices */
+ CARM_IOC_GET_TCQ = 13, /* get tcq/ncq depth */
+ CARM_IOC_SET_TCQ = 14, /* set tcq/ncq depth */
+
+ IOC_SCAN_CHAN_NODEV = 0x1f,
+ IOC_SCAN_CHAN_OFFSET = 0x40,
+
+ /* CARM_MSG_ARRAY messages */
+ CARM_ARRAY_INFO = 0,
+
+ ARRAY_NO_EXIST = (1 << 31),
+
+ /* response messages */
+ RMSG_SZ = 8, /* sizeof(struct carm_response) */
+ RMSG_Q_LEN = 48, /* resp. msg list length */
+ RMSG_OK = 1, /* bit indicating msg was successful */
+ /* length of entire resp. msg buffer */
+ RBUF_LEN = RMSG_SZ * RMSG_Q_LEN,
+
+ PDC_SHM_SIZE = (4096 << 7), /* length of entire h/w buffer */
+
+ /* CARM_MSG_MISC messages */
+ MISC_GET_FW_VER = 2,
+ MISC_ALLOC_MEM = 3,
+ MISC_SET_TIME = 5,
+
+ /* MISC_GET_FW_VER feature bits */
+ FW_VER_4PORT = (1 << 2), /* 1=4 ports, 0=8 ports */
+ FW_VER_NON_RAID = (1 << 1), /* 1=non-RAID firmware, 0=RAID */
+ FW_VER_ZCR = (1 << 0), /* zero channel RAID (whatever that is) */
+
+ /* carm_host flags */
+ FL_NON_RAID = FW_VER_NON_RAID,
+ FL_4PORT = FW_VER_4PORT,
+ FL_FW_VER_MASK = (FW_VER_NON_RAID | FW_VER_4PORT),
+ FL_DAC = (1 << 16),
+ FL_DYN_MAJOR = (1 << 17),
+};
+
+enum scatter_gather_types {
+ SGT_32BIT = 0,
+ SGT_64BIT = 1,
+};
+
+enum host_states {
+ HST_INVALID, /* invalid state; never used */
+ HST_ALLOC_BUF, /* setting up master SHM area */
+ HST_ERROR, /* we never leave here */
+ HST_PORT_SCAN, /* start dev scan */
+ HST_DEV_SCAN_START, /* start per-device probe */
+ HST_DEV_SCAN, /* continue per-device probe */
+ HST_DEV_ACTIVATE, /* activate devices we found */
+ HST_PROBE_FINISHED, /* probe is complete */
+ HST_PROBE_START, /* initiate probe */
+ HST_SYNC_TIME, /* tell firmware what time it is */
+ HST_GET_FW_VER, /* get firmware version, adapter port cnt */
+};
+
+#ifdef CARM_DEBUG
+static const char *state_name[] = {
+ "HST_INVALID",
+ "HST_ALLOC_BUF",
+ "HST_ERROR",
+ "HST_PORT_SCAN",
+ "HST_DEV_SCAN_START",
+ "HST_DEV_SCAN",
+ "HST_DEV_ACTIVATE",
+ "HST_PROBE_FINISHED",
+ "HST_PROBE_START",
+ "HST_SYNC_TIME",
+ "HST_GET_FW_VER",
+};
+#endif
+
+struct carm_port {
+ unsigned int port_no;
+ unsigned int n_queued;
+ struct gendisk *disk;
+ struct carm_host *host;
+
+ /* attached device characteristics */
+ u64 capacity;
+ char name[41];
+ u16 dev_geom_head;
+ u16 dev_geom_sect;
+ u16 dev_geom_cyl;
+};
+
+struct carm_request {
+ unsigned int tag;
+ int n_elem;
+ unsigned int msg_type;
+ unsigned int msg_subtype;
+ unsigned int msg_bucket;
+ struct request *rq;
+ struct carm_port *port;
+ struct scatterlist sg[CARM_MAX_REQ_SG];
+};
+
+struct carm_host {
+ unsigned long flags;
+ void *mmio;
+ void *shm;
+ dma_addr_t shm_dma;
+
+ int major;
+ int id;
+ char name[32];
+
+ spinlock_t lock;
+ struct pci_dev *pdev;
+ unsigned int state;
+ u32 fw_ver;
+
+ request_queue_t *oob_q;
+ unsigned int n_oob;
+
+ unsigned int hw_sg_used;
+
+ unsigned int resp_idx;
+
+ unsigned int wait_q_prod;
+ unsigned int wait_q_cons;
+ request_queue_t *wait_q[CARM_MAX_WAIT_Q];
+
+ unsigned int n_msgs;
+ u64 msg_alloc;
+ struct carm_request req[CARM_MAX_REQ];
+ void *msg_base;
+ dma_addr_t msg_dma;
+
+ int cur_scan_dev;
+ unsigned long dev_active;
+ unsigned long dev_present;
+ struct carm_port port[CARM_MAX_PORTS];
+
+ struct work_struct fsm_task;
+
+ struct semaphore probe_sem;
+};
+
+struct carm_response {
+ u32 ret_handle;
+ u32 status;
+} __attribute__((packed));
+
+struct carm_msg_sg {
+ u32 start;
+ u32 len;
+} __attribute__((packed));
+
+struct carm_msg_rw {
+ u8 type;
+ u8 id;
+ u8 sg_count;
+ u8 sg_type;
+ u32 handle;
+ u32 lba;
+ u16 lba_count;
+ u16 lba_high;
+ struct carm_msg_sg sg[32];
+} __attribute__((packed));
+
+struct carm_msg_allocbuf {
+ u8 type;
+ u8 subtype;
+ u8 n_sg;
+ u8 sg_type;
+ u32 handle;
+ u32 addr;
+ u32 len;
+ u32 evt_pool;
+ u32 n_evt;
+ u32 rbuf_pool;
+ u32 n_rbuf;
+ u32 msg_pool;
+ u32 n_msg;
+ struct carm_msg_sg sg[8];
+} __attribute__((packed));
+
+struct carm_msg_ioctl {
+ u8 type;
+ u8 subtype;
+ u8 array_id;
+ u8 reserved1;
+ u32 handle;
+ u32 data_addr;
+ u32 reserved2;
+} __attribute__((packed));
+
+struct carm_msg_sync_time {
+ u8 type;
+ u8 subtype;
+ u16 reserved1;
+ u32 handle;
+ u32 reserved2;
+ u32 timestamp;
+} __attribute__((packed));
+
+struct carm_msg_get_fw_ver {
+ u8 type;
+ u8 subtype;
+ u16 reserved1;
+ u32 handle;
+ u32 data_addr;
+ u32 reserved2;
+} __attribute__((packed));
+
+struct carm_fw_ver {
+ u32 version;
+ u8 features;
+ u8 reserved1;
+ u16 reserved2;
+} __attribute__((packed));
+
+struct carm_array_info {
+ u32 size;
+
+ u16 size_hi;
+ u16 stripe_size;
+
+ u32 mode;
+
+ u16 stripe_blk_sz;
+ u16 reserved1;
+
+ u16 cyl;
+ u16 head;
+
+ u16 sect;
+ u8 array_id;
+ u8 reserved2;
+
+ char name[40];
+
+ u32 array_status;
+
+ /* device list continues beyond this point? */
+} __attribute__((packed));
+
+static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static void carm_remove_one (struct pci_dev *pdev);
+static int carm_bdev_ioctl(struct inode *ino, struct file *fil,
+ unsigned int cmd, unsigned long arg);
+
+static struct pci_device_id carm_pci_tbl[] = {
+ { PCI_VENDOR_ID_PROMISE, 0x8000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+ { PCI_VENDOR_ID_PROMISE, 0x8002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+ { } /* terminate list */
+};
+MODULE_DEVICE_TABLE(pci, carm_pci_tbl);
+
+static struct pci_driver carm_driver = {
+ .name = DRV_NAME,
+ .id_table = carm_pci_tbl,
+ .probe = carm_init_one,
+ .remove = carm_remove_one,
+};
+
+static struct block_device_operations carm_bd_ops = {
+ .owner = THIS_MODULE,
+ .ioctl = carm_bdev_ioctl,
+};
+
+static unsigned int carm_host_id;
+static unsigned long carm_major_alloc;
+
+
+
+static int carm_bdev_ioctl(struct inode *ino, struct file *fil,
+ unsigned int cmd, unsigned long arg)
+{
+ void __user *usermem = (void __user *) arg;
+ struct carm_port *port = ino->i_bdev->bd_disk->private_data;
+ struct hd_geometry geom;
+
+ switch (cmd) {
+ case HDIO_GETGEO:
+ if (!usermem)
+ return -EINVAL;
+
+ geom.heads = (u8) port->dev_geom_head;
+ geom.sectors = (u8) port->dev_geom_sect;
+ geom.cylinders = port->dev_geom_cyl;
+ geom.start = get_start_sect(ino->i_bdev);
+
+ if (copy_to_user(usermem, &geom, sizeof(geom)))
+ return -EFAULT;
+ return 0;
+
+ default:
+ break;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+static const u32 msg_sizes[] = { 32, 64, 128, CARM_MSG_SIZE };
+
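+/*
+ * Bucket i corresponds to the message size written to CARM_CMS0 + 4*i by
+ * carm_init_buckets(); carm_lookup_bucket() picks the smallest bucket that
+ * fits, e.g. a 40-byte message lands in bucket 1 (64 bytes).
+ */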
+static inline int carm_lookup_bucket(u32 msg_size)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+ if (msg_size <= msg_sizes[i])
+ return i;
+
+ return -ENOENT;
+}
+
+static void carm_init_buckets(void *mmio)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+ writel(msg_sizes[i], mmio + CARM_CMS0 + (4 * i));
+}
+
+static inline void *carm_ref_msg(struct carm_host *host,
+ unsigned int msg_idx)
+{
+ return host->msg_base + (msg_idx * CARM_MSG_SIZE);
+}
+
+static inline dma_addr_t carm_ref_msg_dma(struct carm_host *host,
+ unsigned int msg_idx)
+{
+ return host->msg_dma + (msg_idx * CARM_MSG_SIZE);
+}
+
+static int carm_send_msg(struct carm_host *host,
+ struct carm_request *crq)
+{
+ void *mmio = host->mmio;
+ u32 msg = (u32) carm_ref_msg_dma(host, crq->tag);
+ u32 cm_bucket = crq->msg_bucket;
+ u32 tmp;
+ int rc = 0;
+
+ VPRINTK("ENTER\n");
+
+ tmp = readl(mmio + CARM_HMUC);
+ if (tmp & CARM_Q_FULL) {
+#if 0
+ tmp = readl(mmio + CARM_INT_MASK);
+ tmp |= INT_Q_AVAILABLE;
+ writel(tmp, mmio + CARM_INT_MASK);
+ readl(mmio + CARM_INT_MASK); /* flush */
+#endif
+ DPRINTK("host msg queue full\n");
+ rc = -EBUSY;
+ } else {
+ writel(msg | (cm_bucket << 1), mmio + CARM_IHQP);
+ readl(mmio + CARM_IHQP); /* flush */
+ }
+
+ return rc;
+}
+
+static struct carm_request *carm_get_request(struct carm_host *host)
+{
+ unsigned int i;
+
+ /* obey global hardware limit on S/G entries */
+ if (host->hw_sg_used >= (CARM_MAX_HOST_SG - CARM_MAX_REQ_SG))
+ return NULL;
+
+ for (i = 0; i < CARM_MAX_Q; i++)
+ if ((host->msg_alloc & (1ULL << i)) == 0) {
+ struct carm_request *crq = &host->req[i];
+ crq->port = NULL;
+ crq->n_elem = 0;
+
+ host->msg_alloc |= (1ULL << i);
+ host->n_msgs++;
+
+ assert(host->n_msgs <= CARM_MAX_REQ);
+ return crq;
+ }
+
+ DPRINTK("no request available, returning NULL\n");
+ return NULL;
+}
+
+static int carm_put_request(struct carm_host *host, struct carm_request *crq)
+{
+ assert(crq->tag < CARM_MAX_Q);
+
+ if (unlikely((host->msg_alloc & (1ULL << crq->tag)) == 0))
+ return -EINVAL; /* tried to clear a tag that was not active */
+
+ assert(host->hw_sg_used >= crq->n_elem);
+
+ host->msg_alloc &= ~(1ULL << crq->tag);
+ host->hw_sg_used -= crq->n_elem;
+ host->n_msgs--;
+
+ return 0;
+}
+
+static struct carm_request *carm_get_special(struct carm_host *host)
+{
+ unsigned long flags;
+ struct carm_request *crq = NULL;
+ struct request *rq;
+ int tries = 5000;
+
+ while (tries-- > 0) {
+ spin_lock_irqsave(&host->lock, flags);
+ crq = carm_get_request(host);
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ if (crq)
+ break;
+ msleep(10);
+ }
+
+ if (!crq)
+ return NULL;
+
+ rq = blk_get_request(host->oob_q, WRITE /* bogus */, GFP_KERNEL);
+ if (!rq) {
+ spin_lock_irqsave(&host->lock, flags);
+ carm_put_request(host, crq);
+ spin_unlock_irqrestore(&host->lock, flags);
+ return NULL;
+ }
+
+ crq->rq = rq;
+ return crq;
+}
+
+static int carm_array_info (struct carm_host *host, unsigned int array_idx)
+{
+ struct carm_msg_ioctl *ioc;
+ unsigned int idx;
+ u32 msg_data;
+ dma_addr_t msg_dma;
+ struct carm_request *crq;
+ int rc;
+
+ crq = carm_get_special(host);
+ if (!crq) {
+ rc = -ENOMEM;
+ goto err_out;
+ }
+
+ idx = crq->tag;
+
+ ioc = carm_ref_msg(host, idx);
+ msg_dma = carm_ref_msg_dma(host, idx);
+ msg_data = (u32) (msg_dma + sizeof(struct carm_array_info));
+
+ crq->msg_type = CARM_MSG_ARRAY;
+ crq->msg_subtype = CARM_ARRAY_INFO;
+ rc = carm_lookup_bucket(sizeof(struct carm_msg_ioctl) +
+ sizeof(struct carm_array_info));
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_ARRAY;
+ ioc->subtype = CARM_ARRAY_INFO;
+ ioc->array_id = (u8) array_idx;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ spin_lock_irq(&host->lock);
+ assert(host->state == HST_DEV_SCAN_START ||
+ host->state == HST_DEV_SCAN);
+ spin_unlock_irq(&host->lock);
+
+ DPRINTK("blk_insert_request, tag == %u\n", idx);
+ blk_insert_request(host->oob_q, crq->rq, 1, crq, 0);
+
+ return 0;
+
+err_out:
+ spin_lock_irq(&host->lock);
+ host->state = HST_ERROR;
+ spin_unlock_irq(&host->lock);
+ return rc;
+}
+
+typedef unsigned int (*carm_sspc_t)(struct carm_host *, unsigned int, void *);
+
+static int carm_send_special (struct carm_host *host, carm_sspc_t func)
+{
+ struct carm_request *crq;
+ struct carm_msg_ioctl *ioc;
+ void *mem;
+ unsigned int idx, msg_size;
+ int rc;
+
+ crq = carm_get_special(host);
+ if (!crq)
+ return -ENOMEM;
+
+ idx = crq->tag;
+
+ mem = carm_ref_msg(host, idx);
+
+ msg_size = func(host, idx, mem);
+
+ ioc = mem;
+ crq->msg_type = ioc->type;
+ crq->msg_subtype = ioc->subtype;
+ rc = carm_lookup_bucket(msg_size);
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ DPRINTK("blk_insert_request, tag == %u\n", idx);
+ blk_insert_request(host->oob_q, crq->rq, 1, crq, 0);
+
+ return 0;
+}
+
+static unsigned int carm_fill_sync_time(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct timeval tv;
+ struct carm_msg_sync_time *st = mem;
+
+ do_gettimeofday(&tv);
+
+ memset(st, 0, sizeof(*st));
+ st->type = CARM_MSG_MISC;
+ st->subtype = MISC_SET_TIME;
+ st->handle = cpu_to_le32(TAG_ENCODE(idx));
+ st->timestamp = cpu_to_le32(tv.tv_sec);
+
+ return sizeof(struct carm_msg_sync_time);
+}
+
+static unsigned int carm_fill_alloc_buf(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_allocbuf *ab = mem;
+
+ memset(ab, 0, sizeof(*ab));
+ ab->type = CARM_MSG_MISC;
+ ab->subtype = MISC_ALLOC_MEM;
+ ab->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ab->n_sg = 1;
+ ab->sg_type = SGT_32BIT;
+ ab->addr = cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+ ab->len = cpu_to_le32(PDC_SHM_SIZE >> 1);
+ ab->evt_pool = cpu_to_le32(host->shm_dma + (16 * 1024));
+ ab->n_evt = cpu_to_le32(1024);
+ ab->rbuf_pool = cpu_to_le32(host->shm_dma);
+ ab->n_rbuf = cpu_to_le32(RMSG_Q_LEN);
+ ab->msg_pool = cpu_to_le32(host->shm_dma + RBUF_LEN);
+ ab->n_msg = cpu_to_le32(CARM_Q_LEN);
+ ab->sg[0].start = cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+ ab->sg[0].len = cpu_to_le32(65536);
+
+ return sizeof(struct carm_msg_allocbuf);
+}
+
+static unsigned int carm_fill_scan_channels(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_ioctl *ioc = mem;
+ u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) +
+ IOC_SCAN_CHAN_OFFSET);
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_IOCTL;
+ ioc->subtype = CARM_IOC_SCAN_CHAN;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ /* fill output data area with "no device" default values */
+ mem += IOC_SCAN_CHAN_OFFSET;
+ memset(mem, IOC_SCAN_CHAN_NODEV, CARM_MAX_PORTS);
+
+ return IOC_SCAN_CHAN_OFFSET + CARM_MAX_PORTS;
+}
+
+static unsigned int carm_fill_get_fw_ver(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_get_fw_ver *ioc = mem;
+ u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) + sizeof(*ioc));
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_MISC;
+ ioc->subtype = MISC_GET_FW_VER;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ return sizeof(struct carm_msg_get_fw_ver) +
+ sizeof(struct carm_fw_ver);
+}
+
+static inline void carm_end_request_queued(struct carm_host *host,
+ struct carm_request *crq,
+ int uptodate)
+{
+ struct request *req = crq->rq;
+ int rc;
+
+ rc = end_that_request_first(req, uptodate, req->hard_nr_sectors);
+ assert(rc == 0);
+
+ end_that_request_last(req);
+
+ rc = carm_put_request(host, crq);
+ assert(rc == 0);
+}
+
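+/*
+ * wait_q is a ring of stopped request queues awaiting resources; prod and
+ * cons are free-running counters reduced mod CARM_MAX_WAIT_Q on access.
+ * At most one queue per port plus the OOB queue can be parked at once,
+ * which is exactly what CARM_MAX_WAIT_Q allows for.
+ */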
+static inline void carm_push_q (struct carm_host *host, request_queue_t *q)
+{
+ unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q;
+
+ blk_stop_queue(q);
+ VPRINTK("STOPPED QUEUE %p\n", q);
+
+ host->wait_q[idx] = q;
+ host->wait_q_prod++;
+ BUG_ON(host->wait_q_prod == host->wait_q_cons); /* overrun */
+}
+
+static inline request_queue_t *carm_pop_q(struct carm_host *host)
+{
+ unsigned int idx;
+
+ if (host->wait_q_prod == host->wait_q_cons)
+ return NULL;
+
+ idx = host->wait_q_cons % CARM_MAX_WAIT_Q;
+ host->wait_q_cons++;
+
+ return host->wait_q[idx];
+}
+
+static inline void carm_round_robin(struct carm_host *host)
+{
+ request_queue_t *q = carm_pop_q(host);
+ if (q) {
+ blk_start_queue(q);
+ VPRINTK("STARTED QUEUE %p\n", q);
+ }
+}
+
+static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq,
+ int is_ok)
+{
+ carm_end_request_queued(host, crq, is_ok);
+ if (CARM_MAX_Q == 1)
+ carm_round_robin(host);
+ else if ((host->n_msgs <= CARM_MSG_LOW_WATER) &&
+ (host->hw_sg_used <= CARM_SG_LOW_WATER)) {
+ carm_round_robin(host);
+ }
+}
+
+static void carm_oob_rq_fn(request_queue_t *q)
+{
+ struct carm_host *host = q->queuedata;
+ struct carm_request *crq;
+ struct request *rq;
+ int rc;
+
+ while (1) {
+ DPRINTK("get req\n");
+ rq = elv_next_request(q);
+ if (!rq)
+ break;
+
+ blkdev_dequeue_request(rq);
+
+ crq = rq->special;
+ assert(crq != NULL);
+ assert(crq->rq == rq);
+
+ crq->n_elem = 0;
+
+ DPRINTK("send req\n");
+ rc = carm_send_msg(host, crq);
+ if (rc) {
+ blk_requeue_request(q, rq);
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+ }
+}
+
+static void carm_rq_fn(request_queue_t *q)
+{
+ struct carm_port *port = q->queuedata;
+ struct carm_host *host = port->host;
+ struct carm_msg_rw *msg;
+ struct carm_request *crq;
+ struct request *rq;
+ struct scatterlist *sg;
+ int writing = 0, pci_dir, i, n_elem, rc;
+ u32 tmp;
+ unsigned int msg_size;
+
+queue_one_request:
+ VPRINTK("get req\n");
+ rq = elv_next_request(q);
+ if (!rq)
+ return;
+
+ crq = carm_get_request(host);
+ if (!crq) {
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+ crq->rq = rq;
+
+ blkdev_dequeue_request(rq);
+
+ if (rq_data_dir(rq) == WRITE) {
+ writing = 1;
+ pci_dir = PCI_DMA_TODEVICE;
+ } else {
+ pci_dir = PCI_DMA_FROMDEVICE;
+ }
+
+ /* get scatterlist from block layer */
+ sg = &crq->sg[0];
+ n_elem = blk_rq_map_sg(q, rq, sg);
+ if (n_elem <= 0) {
+ carm_end_rq(host, crq, 0);
+ return; /* request with no s/g entries? */
+ }
+
+ /* map scatterlist to PCI bus addresses */
+ n_elem = pci_map_sg(host->pdev, sg, n_elem, pci_dir);
+ if (n_elem <= 0) {
+ carm_end_rq(host, crq, 0);
+ return; /* request with no s/g entries? */
+ }
+ crq->n_elem = n_elem;
+ crq->port = port;
+ host->hw_sg_used += n_elem;
+
+ /*
+ * build read/write message
+ */
+
+ VPRINTK("build msg\n");
+ msg = (struct carm_msg_rw *) carm_ref_msg(host, crq->tag);
+
+ if (writing) {
+ msg->type = CARM_MSG_WRITE;
+ crq->msg_type = CARM_MSG_WRITE;
+ } else {
+ msg->type = CARM_MSG_READ;
+ crq->msg_type = CARM_MSG_READ;
+ }
+
+ msg->id = port->port_no;
+ msg->sg_count = n_elem;
+ msg->sg_type = SGT_32BIT;
+ msg->handle = cpu_to_le32(TAG_ENCODE(crq->tag));
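+ /* split the sector number: two 16-bit shifts keep the shift count
+ valid even when sector_t is only 32 bits wide */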
+ msg->lba = cpu_to_le32(rq->sector & 0xffffffff);
+ tmp = (rq->sector >> 16) >> 16;
+ msg->lba_high = cpu_to_le16( (u16) tmp );
+ msg->lba_count = cpu_to_le16(rq->nr_sectors);
+
+ msg_size = sizeof(struct carm_msg_rw) - sizeof(msg->sg);
+ for (i = 0; i < n_elem; i++) {
+ struct carm_msg_sg *carm_sg = &msg->sg[i];
+ carm_sg->start = cpu_to_le32(sg_dma_address(&crq->sg[i]));
+ carm_sg->len = cpu_to_le32(sg_dma_len(&crq->sg[i]));
+ msg_size += sizeof(struct carm_msg_sg);
+ }
+
+ rc = carm_lookup_bucket(msg_size);
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ /*
+ * queue read/write message to hardware
+ */
+
+ VPRINTK("send msg, tag == %u\n", crq->tag);
+ rc = carm_send_msg(host, crq);
+ if (rc) {
+ carm_put_request(host, crq);
+ blk_requeue_request(q, rq);
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+
+ goto queue_one_request;
+}
+
+static void carm_handle_array_info(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+ int is_ok)
+{
+ struct carm_port *port;
+ u8 *msg_data = mem + sizeof(struct carm_array_info);
+ struct carm_array_info *desc = (struct carm_array_info *) msg_data;
+ u64 lo, hi;
+ int cur_port;
+ size_t slen;
+
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ if (!is_ok)
+ goto out;
+ if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST)
+ goto out;
+
+ cur_port = host->cur_scan_dev;
+
+ /* should never occur */
+ if ((cur_port < 0) || (cur_port >= CARM_MAX_PORTS)) {
+ printk(KERN_ERR PFX "BUG: cur_scan_dev==%d, array_id==%d\n",
+ cur_port, (int) desc->array_id);
+ goto out;
+ }
+
+ port = &host->port[cur_port];
+
+ lo = (u64) le32_to_cpu(desc->size);
+ hi = (u64) le32_to_cpu(desc->size_hi);
+
+ port->capacity = lo | (hi << 32);
+ port->dev_geom_head = le16_to_cpu(desc->head);
+ port->dev_geom_sect = le16_to_cpu(desc->sect);
+ port->dev_geom_cyl = le16_to_cpu(desc->cyl);
+
+ host->dev_active |= (1 << cur_port);
+
+ strncpy(port->name, desc->name, sizeof(port->name));
+ port->name[sizeof(port->name) - 1] = 0;
+ slen = strlen(port->name);
+ while (slen && (port->name[slen - 1] == ' ')) {
+ port->name[slen - 1] = 0;
+ slen--;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): port %u device %Lu sectors\n",
+ pci_name(host->pdev), port->port_no,
+ (unsigned long long) port->capacity);
+ printk(KERN_INFO DRV_NAME "(%s): port %u device \"%s\"\n",
+ pci_name(host->pdev), port->port_no, port->name);
+
+out:
+ assert(host->state == HST_DEV_SCAN);
+ schedule_work(&host->fsm_task);
+}
+
+static void carm_handle_scan_chan(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+ int is_ok)
+{
+ u8 *msg_data = mem + IOC_SCAN_CHAN_OFFSET;
+ unsigned int i, dev_count = 0;
+ int new_state = HST_DEV_SCAN_START;
+
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ if (!is_ok) {
+ new_state = HST_ERROR;
+ goto out;
+ }
+
+ /* TODO: scan and support non-disk devices */
+ for (i = 0; i < CARM_MAX_PORTS; i++)
+ if (msg_data[i] == 0) { /* direct-access device (disk) */
+ host->dev_present |= (1 << i);
+ dev_count++;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): found %u interesting devices\n",
+ pci_name(host->pdev), dev_count);
+
+out:
+ assert(host->state == HST_PORT_SCAN);
+ host->state = new_state;
+ schedule_work(&host->fsm_task);
+}
+
+static void carm_handle_generic(struct carm_host *host,
+ struct carm_request *crq, int is_ok,
+ int cur_state, int next_state)
+{
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ assert(host->state == cur_state);
+ if (is_ok)
+ host->state = next_state;
+ else
+ host->state = HST_ERROR;
+ schedule_work(&host->fsm_task);
+}
+
+static inline void carm_handle_rw(struct carm_host *host,
+ struct carm_request *crq, int is_ok)
+{
+ int pci_dir;
+
+ VPRINTK("ENTER\n");
+
+ if (rq_data_dir(crq->rq) == WRITE)
+ pci_dir = PCI_DMA_TODEVICE;
+ else
+ pci_dir = PCI_DMA_FROMDEVICE;
+
+ pci_unmap_sg(host->pdev, &crq->sg[0], crq->n_elem, pci_dir);
+
+ carm_end_rq(host, crq, is_ok);
+}
+
+static inline void carm_handle_resp(struct carm_host *host,
+ u32 ret_handle_le, u32 status)
+{
+ u32 handle = le32_to_cpu(ret_handle_le);
+ unsigned int msg_idx;
+ struct carm_request *crq;
+ int is_ok = (status == RMSG_OK);
+ u8 *mem;
+
+ VPRINTK("ENTER, handle == 0x%x\n", handle);
+
+ if (unlikely(!TAG_VALID(handle))) {
+ printk(KERN_ERR DRV_NAME "(%s): BUG: invalid tag 0x%x\n",
+ pci_name(host->pdev), handle);
+ return;
+ }
+
+ msg_idx = TAG_DECODE(handle);
+ VPRINTK("tag == %u\n", msg_idx);
+
+ crq = &host->req[msg_idx];
+
+ /* fast path */
+ if (likely(crq->msg_type == CARM_MSG_READ ||
+ crq->msg_type == CARM_MSG_WRITE)) {
+ carm_handle_rw(host, crq, is_ok);
+ return;
+ }
+
+ mem = carm_ref_msg(host, msg_idx);
+
+ switch (crq->msg_type) {
+ case CARM_MSG_IOCTL: {
+ switch (crq->msg_subtype) {
+ case CARM_IOC_SCAN_CHAN:
+ carm_handle_scan_chan(host, crq, mem, is_ok);
+ break;
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ case CARM_MSG_MISC: {
+ switch (crq->msg_subtype) {
+ case MISC_ALLOC_MEM:
+ carm_handle_generic(host, crq, is_ok,
+ HST_ALLOC_BUF, HST_SYNC_TIME);
+ break;
+ case MISC_SET_TIME:
+ carm_handle_generic(host, crq, is_ok,
+ HST_SYNC_TIME, HST_GET_FW_VER);
+ break;
+ case MISC_GET_FW_VER: {
+ struct carm_fw_ver *ver = (struct carm_fw_ver *)
+ (mem + sizeof(struct carm_msg_get_fw_ver));
+ if (is_ok) {
+ host->fw_ver = le32_to_cpu(ver->version);
+ host->flags |= (ver->features & FL_FW_VER_MASK);
+ }
+ carm_handle_generic(host, crq, is_ok,
+ HST_GET_FW_VER, HST_PORT_SCAN);
+ break;
+ }
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ case CARM_MSG_ARRAY: {
+ switch (crq->msg_subtype) {
+ case CARM_ARRAY_INFO:
+ carm_handle_array_info(host, crq, mem, is_ok);
+ break;
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+
+ return;
+
+err_out:
+ printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n",
+ pci_name(host->pdev), crq->msg_type, crq->msg_subtype);
+ carm_end_rq(host, crq, 0);
+}
+
+static inline void carm_handle_responses(struct carm_host *host)
+{
+ void *mmio = host->mmio;
+ struct carm_response *resp = (struct carm_response *) host->shm;
+ unsigned int work = 0;
+ unsigned int idx = host->resp_idx % RMSG_Q_LEN;
+
+ while (1) {
+ u32 status = le32_to_cpu(resp[idx].status);
+
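+ /* 0xffffffff is the "slot not yet filled" sentinel; consumed slots
+ are reset to it below and the read index is handed back to the
+ hardware via CARM_RESP_IDX */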
+ if (status == 0xffffffff) {
+ VPRINTK("ending response on index %u\n", idx);
+ writel(idx << 3, mmio + CARM_RESP_IDX);
+ break;
+ }
+
+ /* response to a message we sent */
+ else if ((status & (1 << 31)) == 0) {
+ VPRINTK("handling msg response on index %u\n", idx);
+ carm_handle_resp(host, resp[idx].ret_handle, status);
+ resp[idx].status = 0xffffffff;
+ }
+
+ /* asynchronous events the hardware throws our way */
+ else if ((status & 0xff000000) == (1 << 31)) {
+ u8 *evt_type_ptr = (u8 *) &resp[idx];
+ u8 evt_type = *evt_type_ptr;
+ printk(KERN_WARNING DRV_NAME "(%s): unhandled event type %d\n",
+ pci_name(host->pdev), (int) evt_type);
+ resp[idx].status = 0xffffffff;
+ }
+
+ idx = NEXT_RESP(idx);
+ work++;
+ }
+
+ VPRINTK("EXIT, work==%u\n", work);
+ host->resp_idx += work;
+}
+
+static irqreturn_t carm_interrupt(int irq, void *__host, struct pt_regs *regs)
+{
+ struct carm_host *host = __host;
+ void *mmio;
+ u32 mask;
+ int handled = 0;
+ unsigned long flags;
+
+ if (!host) {
+ VPRINTK("no host\n");
+ return IRQ_NONE;
+ }
+
+ spin_lock_irqsave(&host->lock, flags);
+
+ mmio = host->mmio;
+
+ /* reading should also clear interrupts */
+ mask = readl(mmio + CARM_INT_STAT);
+
+ if (mask == 0 || mask == 0xffffffff) {
+ VPRINTK("no work, mask == 0x%x\n", mask);
+ goto out;
+ }
+
+ if (mask & INT_ACK_MASK)
+ writel(mask, mmio + CARM_INT_STAT);
+
+ if (unlikely(host->state == HST_INVALID)) {
+ VPRINTK("not initialized yet, mask = 0x%x\n", mask);
+ goto out;
+ }
+
+ if (mask & CARM_HAVE_RESP) {
+ handled = 1;
+ carm_handle_responses(host);
+ }
+
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+ VPRINTK("EXIT\n");
+ return IRQ_RETVAL(handled);
+}
+
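+/*
+ * Probe-time state machine, scheduled with schedule_work().  The normal
+ * path walks roughly
+ * HST_PROBE_START -> HST_ALLOC_BUF -> HST_SYNC_TIME -> HST_GET_FW_VER ->
+ * HST_PORT_SCAN -> HST_DEV_SCAN_START -> HST_DEV_SCAN -> HST_DEV_ACTIVATE ->
+ * HST_PROBE_FINISHED, with the transitions between the command states
+ * driven by the completions handled in carm_handle_resp().
+ */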
+static void carm_fsm_task (void *_data)
+{
+ struct carm_host *host = _data;
+ unsigned long flags;
+ unsigned int state;
+ int rc, i, next_dev;
+ int reschedule = 0;
+ int new_state = HST_INVALID;
+
+ spin_lock_irqsave(&host->lock, flags);
+ state = host->state;
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ DPRINTK("ENTER, state == %s\n", state_name[state]);
+
+ switch (state) {
+ case HST_PROBE_START:
+ new_state = HST_ALLOC_BUF;
+ reschedule = 1;
+ break;
+
+ case HST_ALLOC_BUF:
+ rc = carm_send_special(host, carm_fill_alloc_buf);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_SYNC_TIME:
+ rc = carm_send_special(host, carm_fill_sync_time);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_GET_FW_VER:
+ rc = carm_send_special(host, carm_fill_get_fw_ver);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_PORT_SCAN:
+ rc = carm_send_special(host, carm_fill_scan_channels);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_DEV_SCAN_START:
+ host->cur_scan_dev = -1;
+ new_state = HST_DEV_SCAN;
+ reschedule = 1;
+ break;
+
+ case HST_DEV_SCAN:
+ next_dev = -1;
+ for (i = host->cur_scan_dev + 1; i < CARM_MAX_PORTS; i++)
+ if (host->dev_present & (1 << i)) {
+ next_dev = i;
+ break;
+ }
+
+ if (next_dev >= 0) {
+ host->cur_scan_dev = next_dev;
+ rc = carm_array_info(host, next_dev);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ } else {
+ new_state = HST_DEV_ACTIVATE;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_DEV_ACTIVATE: {
+ int activated = 0;
+ for (i = 0; i < CARM_MAX_PORTS; i++)
+ if (host->dev_active & (1 << i)) {
+ struct carm_port *port = &host->port[i];
+ struct gendisk *disk = port->disk;
+
+ set_capacity(disk, port->capacity);
+ add_disk(disk);
+ activated++;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): %d ports activated\n",
+ pci_name(host->pdev), activated);
+
+ new_state = HST_PROBE_FINISHED;
+ reschedule = 1;
+ break;
+ }
+
+ case HST_PROBE_FINISHED:
+ up(&host->probe_sem);
+ break;
+
+ case HST_ERROR:
+ /* FIXME: TODO */
+ break;
+
+ default:
+ /* should never occur */
+ printk(KERN_ERR PFX "BUG: unknown state %d\n", state);
+ assert(0);
+ break;
+ }
+
+ if (new_state != HST_INVALID) {
+ spin_lock_irqsave(&host->lock, flags);
+ host->state = new_state;
+ spin_unlock_irqrestore(&host->lock, flags);
+ }
+ if (reschedule)
+ schedule_work(&host->fsm_task);
+}
+
+static int carm_init_wait(void *mmio, u32 bits, unsigned int test_bit)
+{
+ unsigned int i;
+
+ for (i = 0; i < 50000; i++) {
+ u32 tmp = readl(mmio + CARM_LMUC);
+ udelay(100);
+
+ if (test_bit) {
+ if ((tmp & bits) == bits)
+ return 0;
+ } else {
+ if ((tmp & bits) == 0)
+ return 0;
+ }
+
+ cond_resched();
+ }
+
+ printk(KERN_ERR PFX "carm_init_wait timeout, bits == 0x%x, test_bit == %s\n",
+ bits, test_bit ? "yes" : "no");
+ return -EBUSY;
+}
+
+static void carm_init_responses(struct carm_host *host)
+{
+ void *mmio = host->mmio;
+ unsigned int i;
+ struct carm_response *resp = (struct carm_response *) host->shm;
+
+ for (i = 0; i < RMSG_Q_LEN; i++)
+ resp[i].status = 0xffffffff;
+
+ writel(0, mmio + CARM_RESP_IDX);
+}
+
+static int carm_init_host(struct carm_host *host)
+{
+ void *mmio = host->mmio;
+ u32 tmp;
+ u8 tmp8;
+ int rc;
+
+ DPRINTK("ENTER\n");
+
+ writel(0, mmio + CARM_INT_MASK);
+
+ tmp8 = readb(mmio + CARM_INITC);
+ if (tmp8 & 0x01) {
+ tmp8 &= ~0x01;
+ writeb(tmp8, mmio + CARM_INITC);
+ readb(mmio + CARM_INITC); /* flush */
+
+ DPRINTK("snooze...\n");
+ msleep(5000);
+ }
+
+ tmp = readl(mmio + CARM_HMUC);
+ if (tmp & CARM_CME) {
+ DPRINTK("CME bit present, waiting\n");
+ rc = carm_init_wait(mmio, CARM_CME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 1 failed\n");
+ return rc;
+ }
+ }
+ if (tmp & CARM_RME) {
+ DPRINTK("RME bit present, waiting\n");
+ rc = carm_init_wait(mmio, CARM_RME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 2 failed\n");
+ return rc;
+ }
+ }
+
+ tmp &= ~(CARM_RME | CARM_CME);
+ writel(tmp, mmio + CARM_HMUC);
+ readl(mmio + CARM_HMUC); /* flush */
+
+ rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 0);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 3 failed\n");
+ return rc;
+ }
+
+ carm_init_buckets(mmio);
+
+ writel(host->shm_dma & 0xffffffff, mmio + RBUF_ADDR_LO);
+ writel((host->shm_dma >> 16) >> 16, mmio + RBUF_ADDR_HI);
+ writel(RBUF_LEN, mmio + RBUF_BYTE_SZ);
+
+ tmp = readl(mmio + CARM_HMUC);
+ tmp |= (CARM_RME | CARM_CME | CARM_WZBC);
+ writel(tmp, mmio + CARM_HMUC);
+ readl(mmio + CARM_HMUC); /* flush */
+
+ rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 4 failed\n");
+ return rc;
+ }
+
+ writel(0, mmio + CARM_HMPHA);
+ writel(INT_DEF_MASK, mmio + CARM_INT_MASK);
+
+ carm_init_responses(host);
+
+ /* start initialization, probing state machine */
+ spin_lock_irq(&host->lock);
+ assert(host->state == HST_INVALID);
+ host->state = HST_PROBE_START;
+ spin_unlock_irq(&host->lock);
+ schedule_work(&host->fsm_task);
+
+ DPRINTK("EXIT\n");
+ return 0;
+}
+
+static int carm_init_disks(struct carm_host *host)
+{
+ unsigned int i;
+ int rc = 0;
+
+ for (i = 0; i < CARM_MAX_PORTS; i++) {
+ struct gendisk *disk;
+ request_queue_t *q;
+ struct carm_port *port;
+
+ port = &host->port[i];
+ port->host = host;
+ port->port_no = i;
+
+ disk = alloc_disk(CARM_MINORS_PER_MAJOR);
+ if (!disk) {
+ rc = -ENOMEM;
+ break;
+ }
+
+ port->disk = disk;
+ sprintf(disk->disk_name, DRV_NAME "%u_%u", host->id, i);
+ sprintf(disk->devfs_name, DRV_NAME "/%u_%u", host->id, i);
+ disk->major = host->major;
+ disk->first_minor = i * CARM_MINORS_PER_MAJOR;
+ disk->fops = &carm_bd_ops;
+ disk->private_data = port;
+
+ q = blk_init_queue(carm_rq_fn, &host->lock);
+ if (!q) {
+ rc = -ENOMEM;
+ break;
+ }
+ disk->queue = q;
+ blk_queue_max_hw_segments(q, CARM_MAX_REQ_SG);
+ blk_queue_max_phys_segments(q, CARM_MAX_REQ_SG);
+ blk_queue_segment_boundary(q, CARM_SG_BOUNDARY);
+
+ q->queuedata = port;
+ }
+
+ return rc;
+}
+
+static void carm_free_disks(struct carm_host *host)
+{
+ unsigned int i;
+
+ for (i = 0; i < CARM_MAX_PORTS; i++) {
+ struct gendisk *disk = host->port[i].disk;
+ if (disk) {
+ request_queue_t *q = disk->queue;
+
+ if (disk->flags & GENHD_FL_UP)
+ del_gendisk(disk);
+ if (q)
+ blk_cleanup_queue(q);
+ put_disk(disk);
+ }
+ }
+}
+
+static int carm_init_shm(struct carm_host *host)
+{
+ host->shm = pci_alloc_consistent(host->pdev, CARM_SHM_SIZE,
+ &host->shm_dma);
+ if (!host->shm)
+ return -ENOMEM;
+
+ host->msg_base = host->shm + RBUF_LEN;
+ host->msg_dma = host->shm_dma + RBUF_LEN;
+
+ memset(host->shm, 0xff, RBUF_LEN);
+ memset(host->msg_base, 0, CARM_SHM_SIZE - RBUF_LEN);
+
+ return 0;
+}
+
+static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static unsigned int printed_version;
+ struct carm_host *host;
+ unsigned int pci_dac;
+ int rc;
+ request_queue_t *q;
+ unsigned int i;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+#if IF_64BIT_DMA_IS_POSSIBLE /* grrrr... */
+ rc = pci_set_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (!rc) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): consistent DMA mask failure\n",
+ pci_name(pdev));
+ goto err_out_regions;
+ }
+ pci_dac = 1;
+ } else {
+#endif
+ rc = pci_set_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): DMA mask failure\n",
+ pci_name(pdev));
+ goto err_out_regions;
+ }
+ pci_dac = 0;
+#if IF_64BIT_DMA_IS_POSSIBLE /* grrrr... */
+ }
+#endif
+
+ host = kmalloc(sizeof(*host), GFP_KERNEL);
+ if (!host) {
+ printk(KERN_ERR DRV_NAME "(%s): memory alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ memset(host, 0, sizeof(*host));
+ host->pdev = pdev;
+ host->flags = pci_dac ? FL_DAC : 0;
+ spin_lock_init(&host->lock);
+ INIT_WORK(&host->fsm_task, carm_fsm_task, host);
+ init_MUTEX_LOCKED(&host->probe_sem);
+
+ for (i = 0; i < ARRAY_SIZE(host->req); i++)
+ host->req[i].tag = i;
+
+ host->mmio = ioremap(pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ if (!host->mmio) {
+ printk(KERN_ERR DRV_NAME "(%s): MMIO alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_kfree;
+ }
+
+ rc = carm_init_shm(host);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): DMA SHM alloc failure\n",
+ pci_name(pdev));
+ goto err_out_iounmap;
+ }
+
+ q = blk_init_queue(carm_oob_rq_fn, &host->lock);
+ if (!q) {
+ printk(KERN_ERR DRV_NAME "(%s): OOB queue alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_pci_free;
+ }
+ host->oob_q = q;
+ q->queuedata = host;
+
+ /*
+ * Figure out which major to use: 160, 161, or dynamic
+ */
+ if (!test_and_set_bit(0, &carm_major_alloc))
+ host->major = 160;
+ else if (!test_and_set_bit(1, &carm_major_alloc))
+ host->major = 161;
+ else
+ host->flags |= FL_DYN_MAJOR;
+
+ host->id = carm_host_id;
+ sprintf(host->name, DRV_NAME "%d", carm_host_id);
+
+ rc = register_blkdev(host->major, host->name);
+ if (rc < 0)
+ goto err_out_free_majors;
+ if (host->flags & FL_DYN_MAJOR)
+ host->major = rc;
+
+ devfs_mk_dir(DRV_NAME);
+
+ rc = carm_init_disks(host);
+ if (rc)
+ goto err_out_blkdev_disks;
+
+ pci_set_master(pdev);
+
+ rc = request_irq(pdev->irq, carm_interrupt, SA_SHIRQ, DRV_NAME, host);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): irq alloc failure\n",
+ pci_name(pdev));
+ goto err_out_blkdev_disks;
+ }
+
+ rc = carm_init_host(host);
+ if (rc)
+ goto err_out_free_irq;
+
+ DPRINTK("waiting for probe_sem\n");
+ down(&host->probe_sem);
+
+ printk(KERN_INFO "%s: pci %s, ports %d, io %lx, irq %u, major %d\n",
+ host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
+ pci_resource_start(pdev, 0), pdev->irq, host->major);
+
+ carm_host_id++;
+ pci_set_drvdata(pdev, host);
+ return 0;
+
+err_out_free_irq:
+ free_irq(pdev->irq, host);
+err_out_blkdev_disks:
+ carm_free_disks(host);
+ unregister_blkdev(host->major, host->name);
+err_out_free_majors:
+ if (host->major == 160)
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+ blk_cleanup_queue(host->oob_q);
+err_out_pci_free:
+ pci_free_consistent(pdev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+err_out_iounmap:
+ iounmap(host->mmio);
+err_out_kfree:
+ kfree(host);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
+static void carm_remove_one (struct pci_dev *pdev)
+{
+ struct carm_host *host = pci_get_drvdata(pdev);
+
+ if (!host) {
+ printk(KERN_ERR PFX "BUG: no host data for PCI(%s)\n",
+ pci_name(pdev));
+ return;
+ }
+
+ free_irq(pdev->irq, host);
+ carm_free_disks(host);
+ devfs_remove(DRV_NAME);
+ unregister_blkdev(host->major, host->name);
+ if (host->major == 160)
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+ blk_cleanup_queue(host->oob_q);
+ pci_free_consistent(pdev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+ iounmap(host->mmio);
+ kfree(host);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static int __init carm_init(void)
+{
+ return pci_module_init(&carm_driver);
+}
+
+static void __exit carm_exit(void)
+{
+ pci_unregister_driver(&carm_driver);
+}
+
+module_init(carm_init);
+module_exit(carm_exit);
+
+
--- /dev/null
+/**
+ * \file drm_irq.h
+ * IRQ support
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+#ifndef __HAVE_SHARED_IRQ
+#define __HAVE_SHARED_IRQ 0
+#endif
+
+#if __HAVE_SHARED_IRQ
+#define DRM_IRQ_TYPE SA_SHIRQ
+#else
+#define DRM_IRQ_TYPE 0
+#endif
+
+/**
+ * Get interrupt from bus id.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_irq_busid structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Finds the PCI device with the specified bus id and gets its IRQ number.
+ * This IOCTL is deprecated, and will now return EINVAL for any busid not
+ * equal to that of the device this DRM instance is attached to.
+ */
+int DRM(irq_by_busid)(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_irq_busid_t p;
+
+ if (copy_from_user(&p, (drm_irq_busid_t *)arg, sizeof(p)))
+ return -EFAULT;
+
+ if ((p.busnum >> 8) != dev->pci_domain ||
+ (p.busnum & 0xff) != dev->pci_bus ||
+ p.devnum != dev->pci_slot ||
+ p.funcnum != dev->pci_func)
+ return -EINVAL;
+
+ p.irq = dev->irq;
+
+ DRM_DEBUG("%d:%d:%d => IRQ %d\n",
+ p.busnum, p.devnum, p.funcnum, p.irq);
+ if (copy_to_user((drm_irq_busid_t *)arg, &p, sizeof(p)))
+ return -EFAULT;
+ return 0;
+}
+
+#if __HAVE_IRQ
+
+/**
+ * Install IRQ handler.
+ *
+ * \param dev DRM device.
+ * \param irq IRQ number.
+ *
+ * Initializes the IRQ-related data and sets up drm_device::vbl_queue.
+ * Installs the handler, calling the driver's \c DRM(driver_irq_preinstall)()
+ * and \c DRM(driver_irq_postinstall)() functions before and after the
+ * installation.
+ */
+int DRM(irq_install)( drm_device_t *dev )
+{
+ int ret;
+
+ if ( dev->irq == 0 )
+ return -EINVAL;
+
+ down( &dev->struct_sem );
+
+ /* Driver must have been initialized */
+ if ( !dev->dev_private ) {
+ up( &dev->struct_sem );
+ return -EINVAL;
+ }
+
+ if ( dev->irq_enabled ) {
+ up( &dev->struct_sem );
+ return -EBUSY;
+ }
+ dev->irq_enabled = 1;
+ up( &dev->struct_sem );
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+#if __HAVE_DMA
+ dev->dma->next_buffer = NULL;
+ dev->dma->next_queue = NULL;
+ dev->dma->this_buffer = NULL;
+#endif
+
+#if __HAVE_IRQ_BH
+ INIT_WORK(&dev->work, DRM(irq_immediate_bh), dev);
+#endif
+
+#if __HAVE_VBL_IRQ
+ init_waitqueue_head(&dev->vbl_queue);
+
+ spin_lock_init( &dev->vbl_lock );
+
+ INIT_LIST_HEAD( &dev->vbl_sigs.head );
+
+ dev->vbl_pending = 0;
+#endif
+
+ /* Before installing handler */
+ DRM(driver_irq_preinstall)(dev);
+
+ /* Install handler */
+ ret = request_irq( dev->irq, DRM(irq_handler),
+ DRM_IRQ_TYPE, dev->devname, dev );
+ if ( ret < 0 ) {
+ down( &dev->struct_sem );
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+ return ret;
+ }
+
+ /* After installing handler */
+ DRM(driver_irq_postinstall)(dev);
+
+ return 0;
+}
+
+/**
+ * Uninstall the IRQ handler.
+ *
+ * \param dev DRM device.
+ *
+ * Calls the driver's \c DRM(driver_irq_uninstall)() function, and stops the irq.
+ */
+int DRM(irq_uninstall)( drm_device_t *dev )
+{
+ int irq_enabled;
+
+ down( &dev->struct_sem );
+ irq_enabled = dev->irq_enabled;
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+
+ if ( !irq_enabled )
+ return -EINVAL;
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+ DRM(driver_irq_uninstall)( dev );
+
+ free_irq( dev->irq, dev );
+
+ return 0;
+}
+
+/**
+ * IRQ control ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_control structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Calls irq_install() or irq_uninstall() according to \p arg.
+ */
+int DRM(control)( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+
+ if ( copy_from_user( &ctl, (drm_control_t *)arg, sizeof(ctl) ) )
+ return -EFAULT;
+
+ switch ( ctl.func ) {
+ case DRM_INST_HANDLER:
+ if (dev->if_version < DRM_IF_VERSION(1, 2) &&
+ ctl.irq != dev->irq)
+ return -EINVAL;
+ return DRM(irq_install)( dev );
+ case DRM_UNINST_HANDLER:
+ return DRM(irq_uninstall)( dev );
+ default:
+ return -EINVAL;
+ }
+}
+
+#if __HAVE_VBL_IRQ
+
+/**
+ * Wait for VBLANK.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param data user argument, pointing to a drm_wait_vblank structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the IRQ is installed.
+ *
+ * If a signal is requested, checks whether this task has already scheduled
+ * the same signal for the same vblank sequence number; nothing needs to be
+ * done in that case. If the number of pending signals reaches 100, the
+ * function fails. Otherwise adds a new entry to drm_device::vbl_sigs for
+ * this task.
+ *
+ * If a signal is not requested, then calls vblank_wait().
+ */
+int DRM(wait_vblank)( DRM_IOCTL_ARGS )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_wait_vblank_t vblwait;
+ struct timeval now;
+ int ret = 0;
+ unsigned int flags;
+
+ if (!dev->irq)
+ return -EINVAL;
+
+ DRM_COPY_FROM_USER_IOCTL( vblwait, (drm_wait_vblank_t *)data,
+ sizeof(vblwait) );
+
+ switch ( vblwait.request.type & ~_DRM_VBLANK_FLAGS_MASK ) {
+ case _DRM_VBLANK_RELATIVE:
+ vblwait.request.sequence += atomic_read( &dev->vbl_received );
+ vblwait.request.type &= ~_DRM_VBLANK_RELATIVE;
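+ /* fall through */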
+ case _DRM_VBLANK_ABSOLUTE:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ flags = vblwait.request.type & _DRM_VBLANK_FLAGS_MASK;
+
+ if ( flags & _DRM_VBLANK_SIGNAL ) {
+ unsigned long irqflags;
+ drm_vbl_sig_t *vbl_sig;
+
+ vblwait.reply.sequence = atomic_read( &dev->vbl_received );
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ /* Check if this task has already scheduled the same signal
+ * for the same vblank sequence number; nothing to be done in
+ * that case
+ */
+ list_for_each_entry( vbl_sig, &dev->vbl_sigs.head, head ) {
+ if (vbl_sig->sequence == vblwait.request.sequence
+ && vbl_sig->info.si_signo == vblwait.request.signal
+ && vbl_sig->task == current)
+ {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ goto done;
+ }
+ }
+
+ if ( dev->vbl_pending >= 100 ) {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ return -EBUSY;
+ }
+
+ dev->vbl_pending++;
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+
+ if ( !( vbl_sig = DRM_MALLOC( sizeof( drm_vbl_sig_t ) ) ) ) {
+ return -ENOMEM;
+ }
+
+ memset( (void *)vbl_sig, 0, sizeof(*vbl_sig) );
+
+ vbl_sig->sequence = vblwait.request.sequence;
+ vbl_sig->info.si_signo = vblwait.request.signal;
+ vbl_sig->task = current;
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ list_add_tail( (struct list_head *) vbl_sig, &dev->vbl_sigs.head );
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ } else {
+ ret = DRM(vblank_wait)( dev, &vblwait.request.sequence );
+
+ do_gettimeofday( &now );
+ vblwait.reply.tval_sec = now.tv_sec;
+ vblwait.reply.tval_usec = now.tv_usec;
+ }
+
+done:
+ DRM_COPY_TO_USER_IOCTL( (drm_wait_vblank_t *)data, vblwait,
+ sizeof(vblwait) );
+
+ return ret;
+}
+
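+/*
+ * Rough user-space sketch of the blocking path above (field names per
+ * the drm_wait_vblank_t usage in DRM(wait_vblank); the
+ * DRM_IOCTL_WAIT_VBLANK request number is assumed from the drm.h of
+ * this era):
+ *
+ *	drm_wait_vblank_t vbl;
+ *	vbl.request.type = _DRM_VBLANK_RELATIVE;
+ *	vbl.request.sequence = 1;	// wait for the next vblank
+ *	ioctl(fd, DRM_IOCTL_WAIT_VBLANK, &vbl);
+ *	// vbl.reply.sequence and tval_sec/tval_usec are then filled in
+ */
+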
+/**
+ * Send the VBLANK signals.
+ *
+ * \param dev DRM device.
+ *
+ * Sends a signal for each task in drm_device::vbl_sigs and empties the list.
+ */
+void DRM(vbl_send_signals)( drm_device_t *dev )
+{
+ struct list_head *list, *tmp;
+ drm_vbl_sig_t *vbl_sig;
+ unsigned int vbl_seq = atomic_read( &dev->vbl_received );
+ unsigned long flags;
+
+ spin_lock_irqsave( &dev->vbl_lock, flags );
+
+ list_for_each_safe( list, tmp, &dev->vbl_sigs.head ) {
+ vbl_sig = list_entry( list, drm_vbl_sig_t, head );
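+ /*
+ * Fire once the requested sequence has been reached; the unsigned
+ * subtraction with a 2^23 window tolerates wrap of the 32-bit
+ * vblank counter.
+ */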
+ if ( ( vbl_seq - vbl_sig->sequence ) <= (1<<23) ) {
+ vbl_sig->info.si_code = vbl_seq;
+ send_sig_info( vbl_sig->info.si_signo, &vbl_sig->info, vbl_sig->task );
+
+ list_del( list );
+
+ DRM_FREE( vbl_sig, sizeof(*vbl_sig) );
+
+ dev->vbl_pending--;
+ }
+ }
+
+ spin_unlock_irqrestore( &dev->vbl_lock, flags );
+}
+
+#endif /* __HAVE_VBL_IRQ */
+
+#endif /* __HAVE_IRQ */
--- /dev/null
+/*
+ This file is auto-generated from the drm_pciids.txt in the DRM CVS
+ Please contact dri-devel@lists.sf.net to add new cards to this list
+*/
+#define radeon_PCI_IDS \
+ {0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4137, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4965, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4966, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4967, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C58, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C65, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5158, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5159, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x515A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5836, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5960, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5961, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5962, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5963, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5968, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define r128_PCI_IDS \
+ {0x1002, 0x4c45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4d46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4d4c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5041, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5042, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5043, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5044, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5045, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5046, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5047, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5048, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5049, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5052, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5245, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5246, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5247, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x524b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x524c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x534d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5446, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x544C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5452, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define mga_PCI_IDS \
+ {0x102b, 0x0521, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x102b, 0x0525, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x102b, 0x2527, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define mach64_PCI_IDS \
+ {0x1002, 0x4749, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4750, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4751, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4742, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4744, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c49, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c51, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c42, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4752, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4753, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c52, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c53, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c4d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c4e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define sisdrv_PCI_IDS \
+ {0x1039, 0x0300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x5300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x6300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x7300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define tdfx_PCI_IDS \
+ {0x121a, 0x0003, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0007, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x000b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define viadrv_PCI_IDS \
+ {0x1106, 0x3022, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x3122, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x7205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x7204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define i810_PCI_IDS \
+ {0x8086, 0x7121, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x7123, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x7125, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x1132, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define i830_PCI_IDS \
+ {0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define gamma_PCI_IDS \
+ {0x3d3d, 0x0008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define savage_PCI_IDS \
+ {0x5333, 0x8a22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a23, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c10, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c11, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c13, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c20, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c24, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a25, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d04, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define ffb_PCI_IDS \
+ {0, 0, 0}
+
--- /dev/null
+/*
+ * DS1286 Real Time Clock interface for Linux
+ *
+ * Copyright (C) 1998, 1999, 2000 Ralf Baechle
+ *
+ * Based on code written by Paul Gortmaker.
+ *
+ * This driver allows use of the real time clock (built into nearly all
+ * computers) from user space. It exports the /dev/rtc interface supporting
+ * various ioctl()s and also the /proc/driver/rtc pseudo-file for status
+ * information.
+ *
+ * The ioctls can be used to set the interrupt behaviour and generation rate
+ * from the RTC via IRQ 8. Then the /dev/rtc interface can be used to make
+ * use of these timer interrupts, be they interval or alarm based.
+ *
+ * The /dev/rtc interface will block on reads until an interrupt has been
+ * received. If an RTC interrupt has already happened, it will output an
+ * unsigned long and then block. The output value contains the interrupt
+ * status in the low byte and the number of interrupts since the last read
+ * in the remaining high bytes. The /dev/rtc interface can also be used with
+ * the select(2) call.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
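+/*
+ * Minimal user-space sketch of the ioctl interface implemented below
+ * (RTC_RD_TIME and RTC_ALM_READ are handled by this driver; the open()
+ * path enforces a single user at a time):
+ *
+ *	int fd = open("/dev/rtc", O_RDONLY);
+ *	struct rtc_time tm;
+ *	ioctl(fd, RTC_RD_TIME, &tm);	// current time/date
+ *	ioctl(fd, RTC_ALM_READ, &tm);	// alarm time (tm_hour/tm_min)
+ *	close(fd);
+ */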
+#include <linux/module.h>
+#include <linux/ds1286.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/miscdevice.h>
+#include <linux/slab.h>
+#include <linux/ioport.h>
+#include <linux/fcntl.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/rtc.h>
+#include <linux/spinlock.h>
+#include <linux/bcd.h>
+#include <linux/proc_fs.h>
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+
+#define DS1286_VERSION "1.0"
+
+/*
+ * We sponge a minor off of the misc major. No need to slurp
+ * up another valuable major dev number for this. If you add
+ * an ioctl, make sure you don't conflict with SPARC's RTC
+ * ioctls.
+ */
+
+static DECLARE_WAIT_QUEUE_HEAD(ds1286_wait);
+
+static ssize_t ds1286_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos);
+
+static int ds1286_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg);
+
+static unsigned int ds1286_poll(struct file *file, poll_table *wait);
+
+static void ds1286_get_alm_time (struct rtc_time *alm_tm);
+static void ds1286_get_time(struct rtc_time *rtc_tm);
+static int ds1286_set_time(struct rtc_time *rtc_tm);
+
+static inline unsigned char ds1286_is_updating(void);
+
+static spinlock_t ds1286_lock = SPIN_LOCK_UNLOCKED;
+
+static int ds1286_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data);
+
+/*
+ * Bits in rtc_status. (7 bits of room for future expansion)
+ */
+
+#define RTC_IS_OPEN 0x01 /* means /dev/rtc is in use */
+#define RTC_TIMER_ON 0x02 /* missed irq timer active */
+
+static unsigned char ds1286_status; /* bitmapped status byte. */
+
+static unsigned char days_in_mo[] = {
+ 0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31
+};
+
+/*
+ * Now all the various file operations that we export.
+ */
+
+static ssize_t ds1286_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos)
+{
+ return -EIO;
+}
+
+static int ds1286_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ struct rtc_time wtime;
+
+ switch (cmd) {
+ case RTC_AIE_OFF: /* Mask alarm int. enab. bit */
+ {
+ unsigned long flags;
+ unsigned char val;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ spin_lock_irqsave(&ds1286_lock, flags);
+ val = rtc_read(RTC_CMD);
+ val |= RTC_TDM;
+ rtc_write(val, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ return 0;
+ }
+ case RTC_AIE_ON: /* Allow alarm interrupts. */
+ {
+ unsigned long flags;
+ unsigned char val;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ spin_lock_irqsave(&ds1286_lock, flags);
+ val = rtc_read(RTC_CMD);
+ val &= ~RTC_TDM;
+ rtc_write(val, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ return 0;
+ }
+ case RTC_WIE_OFF: /* Mask watchdog int. enab. bit */
+ {
+ unsigned long flags;
+ unsigned char val;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ spin_lock_irqsave(&ds1286_lock, flags);
+ val = rtc_read(RTC_CMD);
+ val |= RTC_WAM;
+ rtc_write(val, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ return 0;
+ }
+ case RTC_WIE_ON: /* Allow watchdog interrupts. */
+ {
+ unsigned long flags;
+ unsigned char val;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ spin_lock_irqsave(&ds1286_lock, flags);
+ val = rtc_read(RTC_CMD);
+ val &= ~RTC_WAM;
+ rtc_write(val, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ return 0;
+ }
+ case RTC_ALM_READ: /* Read the present alarm time */
+ {
+ /*
+ * This returns a struct rtc_time. Reading >= 0xc0
+ * means "don't care" or "match all". Only the tm_hour,
+ * tm_min, and tm_sec values are filled in.
+ */
+
+ memset(&wtime, 0, sizeof(wtime));
+ ds1286_get_alm_time(&wtime);
+ break;
+ }
+ case RTC_ALM_SET: /* Store a time into the alarm */
+ {
+ /*
+ * This expects a struct rtc_time. Writing 0xff means
+ * "don't care" or "match all". Only the tm_hour,
+ * tm_min and tm_sec are used.
+ */
+ unsigned char hrs, min;
+ struct rtc_time alm_tm;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ if (copy_from_user(&alm_tm, (struct rtc_time*)arg,
+ sizeof(struct rtc_time)))
+ return -EFAULT;
+
+ hrs = alm_tm.tm_hour;
+ min = alm_tm.tm_min;
+
+ if (hrs >= 24)
+ hrs = 0xff;
+
+ if (min >= 60)
+ min = 0xff;
+
+ BIN_TO_BCD(min);
+ BIN_TO_BCD(hrs);
+
+ spin_lock(&ds1286_lock);
+ rtc_write(hrs, RTC_HOURS_ALARM);
+ rtc_write(min, RTC_MINUTES_ALARM);
+ spin_unlock(&ds1286_lock);
+
+ return 0;
+ }
+ case RTC_RD_TIME: /* Read the time/date from RTC */
+ {
+ memset(&wtime, 0, sizeof(wtime));
+ ds1286_get_time(&wtime);
+ break;
+ }
+ case RTC_SET_TIME: /* Set the RTC */
+ {
+ struct rtc_time rtc_tm;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ if (copy_from_user(&rtc_tm, (struct rtc_time*)arg,
+ sizeof(struct rtc_time)))
+ return -EFAULT;
+
+ return ds1286_set_time(&rtc_tm);
+ }
+ default:
+ return -EINVAL;
+ }
+ return copy_to_user((void *)arg, &wtime, sizeof wtime) ? -EFAULT : 0;
+}
+
+/*
+ * We enforce only one user at a time here with the open/close.
+ * Also clear the previous interrupt data on an open, and clean
+ * up things on a close.
+ */
+
+static int ds1286_open(struct inode *inode, struct file *file)
+{
+ spin_lock_irq(&ds1286_lock);
+
+ if (ds1286_status & RTC_IS_OPEN)
+ goto out_busy;
+
+ ds1286_status |= RTC_IS_OPEN;
+
+ spin_unlock_irq(&ds1286_lock);
+ return 0;
+
+out_busy:
+ spin_unlock_irq(&ds1286_lock);
+ return -EBUSY;
+}
+
+static int ds1286_release(struct inode *inode, struct file *file)
+{
+ ds1286_status &= ~RTC_IS_OPEN;
+
+ return 0;
+}
+
+static unsigned int ds1286_poll(struct file *file, poll_table *wait)
+{
+ poll_wait(file, &ds1286_wait, wait);
+
+ return 0;
+}
+
+/*
+ * The various file operations we support.
+ */
+
+static struct file_operations ds1286_fops = {
+ .llseek = no_llseek,
+ .read = ds1286_read,
+ .poll = ds1286_poll,
+ .ioctl = ds1286_ioctl,
+ .open = ds1286_open,
+ .release = ds1286_release,
+};
+
+static struct miscdevice ds1286_dev=
+{
+ .minor = RTC_MINOR,
+ .name = "rtc",
+ .fops = &ds1286_fops,
+};
+
+static int __init ds1286_init(void)
+{
+ int err;
+
+ printk(KERN_INFO "DS1286 Real Time Clock Driver v%s\n", DS1286_VERSION);
+
+ err = misc_register(&ds1286_dev);
+ if (err)
+ goto out;
+
+ if (!create_proc_read_entry("driver/rtc", 0, 0, ds1286_read_proc, NULL)) {
+ err = -ENOMEM;
+
+ goto out_deregister;
+ }
+
+ return 0;
+
+out_deregister:
+ misc_deregister(&ds1286_dev);
+
+out:
+ return err;
+}
+
+static void __exit ds1286_exit(void)
+{
+ remove_proc_entry("driver/rtc", NULL);
+ misc_deregister(&ds1286_dev);
+}
+
+static char *days[] = {
+ "***", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
+};
+
+/*
+ * Info exported via "/proc/rtc".
+ */
+static int ds1286_proc_output(char *buf)
+{
+ char *p, *s;
+ struct rtc_time tm;
+ unsigned char hundredth, month, cmd, amode;
+
+ p = buf;
+
+ ds1286_get_time(&tm);
+ hundredth = rtc_read(RTC_HUNDREDTH_SECOND);
+ BCD_TO_BIN(hundredth);
+
+ p += sprintf(p,
+ "rtc_time\t: %02d:%02d:%02d.%02d\n"
+ "rtc_date\t: %04d-%02d-%02d\n",
+ tm.tm_hour, tm.tm_min, tm.tm_sec, hundredth,
+ tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday);
+
+ /*
+ * We implicitly assume 24hr mode here. Alarm values >= 0xc0 will
+ * match any value for that particular field. Values that are
+ * greater than a valid time, but less than 0xc0 shouldn't appear.
+ */
+ ds1286_get_alm_time(&tm);
+ p += sprintf(p, "alarm\t\t: %s ", days[tm.tm_wday]);
+ if (tm.tm_hour <= 24)
+ p += sprintf(p, "%02d:", tm.tm_hour);
+ else
+ p += sprintf(p, "**:");
+
+ if (tm.tm_min <= 59)
+ p += sprintf(p, "%02d\n", tm.tm_min);
+ else
+ p += sprintf(p, "**\n");
+
+ month = rtc_read(RTC_MONTH);
+ p += sprintf(p,
+ "oscillator\t: %s\n"
+ "square_wave\t: %s\n",
+ (month & RTC_EOSC) ? "disabled" : "enabled",
+ (month & RTC_ESQW) ? "disabled" : "enabled");
+
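+ /*
+ * Bit 7 of each alarm register is its "don't care" mask bit; packing
+ * the minutes/hours/day mask bits into amode tells us which fields
+ * must match for the alarm to fire, decoded below.
+ */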
+ amode = ((rtc_read(RTC_MINUTES_ALARM) & 0x80) >> 5) |
+ ((rtc_read(RTC_HOURS_ALARM) & 0x80) >> 6) |
+ ((rtc_read(RTC_DAY_ALARM) & 0x80) >> 7);
+ if (amode == 7) s = "each minute";
+ else if (amode == 3) s = "minutes match";
+ else if (amode == 1) s = "hours and minutes match";
+ else if (amode == 0) s = "days, hours and minutes match";
+ else s = "invalid";
+ p += sprintf(p, "alarm_mode\t: %s\n", s);
+
+ cmd = rtc_read(RTC_CMD);
+ p += sprintf(p,
+ "alarm_enable\t: %s\n"
+ "wdog_alarm\t: %s\n"
+ "alarm_mask\t: %s\n"
+ "wdog_alarm_mask\t: %s\n"
+ "interrupt_mode\t: %s\n"
+ "INTB_mode\t: %s_active\n"
+ "interrupt_pins\t: %s\n",
+ (cmd & RTC_TDF) ? "yes" : "no",
+ (cmd & RTC_WAF) ? "yes" : "no",
+ (cmd & RTC_TDM) ? "disabled" : "enabled",
+ (cmd & RTC_WAM) ? "disabled" : "enabled",
+ (cmd & RTC_PU_LVL) ? "pulse" : "level",
+ (cmd & RTC_IBH_LO) ? "low" : "high",
+ (cmd & RTC_IPSW) ? "unswapped" : "swapped");
+
+ return p - buf;
+}
+
+static int ds1286_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ int len = ds1286_proc_output (page);
+ if (len <= off+count) *eof = 1;
+ *start = page + off;
+ len -= off;
+ if (len>count)
+ len = count;
+ if (len<0)
+ len = 0;
+
+ return len;
+}
+
+/*
+ * Returns true if a clock update is in progress
+ */
+static inline unsigned char ds1286_is_updating(void)
+{
+ return rtc_read(RTC_CMD) & RTC_TE;
+}
+
+
+static void ds1286_get_time(struct rtc_time *rtc_tm)
+{
+ unsigned char save_control;
+ unsigned long flags;
+ unsigned long uip_watchdog = jiffies;
+
+ /*
+ * read RTC once any update in progress is done. The update
+ * can take just over 2ms. We wait 10 to 20ms. There is no need
+ * to poll-wait (up to 1s - eeccch) for the falling edge of RTC_UIP.
+ * If you need to know *exactly* when a second has started, enable
+ * periodic update complete interrupts, (via ioctl) and then
+ * immediately read /dev/rtc which will block until you get the IRQ.
+ * Once the read clears, read the RTC time (again via ioctl). Easy.
+ */
+
+ if (ds1286_is_updating() != 0)
+ while (jiffies - uip_watchdog < 2*HZ/100)
+ barrier();
+
+ /*
+ * Only the values that we read from the RTC are set. We leave
+ * tm_wday, tm_yday and tm_isdst untouched. Even though the
+ * RTC has RTC_DAY_OF_WEEK, we ignore it, as it is only updated
+ * by the RTC when initially set to a non-zero value.
+ */
+ spin_lock_irqsave(&ds1286_lock, flags);
+ save_control = rtc_read(RTC_CMD);
+ rtc_write((save_control|RTC_TE), RTC_CMD);
+
+ rtc_tm->tm_sec = rtc_read(RTC_SECONDS);
+ rtc_tm->tm_min = rtc_read(RTC_MINUTES);
+ rtc_tm->tm_hour = rtc_read(RTC_HOURS) & 0x3f;
+ rtc_tm->tm_mday = rtc_read(RTC_DATE);
+ rtc_tm->tm_mon = rtc_read(RTC_MONTH) & 0x1f;
+ rtc_tm->tm_year = rtc_read(RTC_YEAR);
+
+ rtc_write(save_control, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ BCD_TO_BIN(rtc_tm->tm_sec);
+ BCD_TO_BIN(rtc_tm->tm_min);
+ BCD_TO_BIN(rtc_tm->tm_hour);
+ BCD_TO_BIN(rtc_tm->tm_mday);
+ BCD_TO_BIN(rtc_tm->tm_mon);
+ BCD_TO_BIN(rtc_tm->tm_year);
+
+ /*
+ * Account for differences between how the RTC uses the values
+ * and how they are defined in a struct rtc_time;
+ */
+ if (rtc_tm->tm_year < 45)
+ rtc_tm->tm_year += 30;
+ if ((rtc_tm->tm_year += 40) < 70)
+ rtc_tm->tm_year += 100;
+
+ rtc_tm->tm_mon--;
+}
+
+static int ds1286_set_time(struct rtc_time *rtc_tm)
+{
+ unsigned char mon, day, hrs, min, sec, leap_yr;
+ unsigned char save_control;
+ unsigned int yrs;
+ unsigned long flags;
+
+
+ yrs = rtc_tm->tm_year + 1900;
+ mon = rtc_tm->tm_mon + 1; /* tm_mon starts at zero */
+ day = rtc_tm->tm_mday;
+ hrs = rtc_tm->tm_hour;
+ min = rtc_tm->tm_min;
+ sec = rtc_tm->tm_sec;
+
+ if (yrs < 1970)
+ return -EINVAL;
+
+ leap_yr = ((!(yrs % 4) && (yrs % 100)) || !(yrs % 400));
+
+ if ((mon > 12) || (day == 0))
+ return -EINVAL;
+
+ if (day > (days_in_mo[mon] + ((mon == 2) && leap_yr)))
+ return -EINVAL;
+
+ if ((hrs >= 24) || (min >= 60) || (sec >= 60))
+ return -EINVAL;
+
+ if ((yrs -= 1940) > 255) /* They are unsigned */
+ return -EINVAL;
+
+ if (yrs >= 100)
+ yrs -= 100;
+
+ BIN_TO_BCD(sec);
+ BIN_TO_BCD(min);
+ BIN_TO_BCD(hrs);
+ BIN_TO_BCD(day);
+ BIN_TO_BCD(mon);
+ BIN_TO_BCD(yrs);
+
+ spin_lock_irqsave(&ds1286_lock, flags);
+ save_control = rtc_read(RTC_CMD);
+ rtc_write((save_control|RTC_TE), RTC_CMD);
+
+ rtc_write(yrs, RTC_YEAR);
+ rtc_write(mon, RTC_MONTH);
+ rtc_write(day, RTC_DATE);
+ rtc_write(hrs, RTC_HOURS);
+ rtc_write(min, RTC_MINUTES);
+ rtc_write(sec, RTC_SECONDS);
+ rtc_write(0, RTC_HUNDREDTH_SECOND);
+
+ rtc_write(save_control, RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ return 0;
+}
+
+static void ds1286_get_alm_time(struct rtc_time *alm_tm)
+{
+ unsigned char cmd;
+ unsigned long flags;
+
+ /*
+ * Only the values that we read from the RTC are set. That
+ * means only tm_wday, tm_hour, tm_min.
+ */
+ spin_lock_irqsave(&ds1286_lock, flags);
+ alm_tm->tm_min = rtc_read(RTC_MINUTES_ALARM) & 0x7f;
+ alm_tm->tm_hour = rtc_read(RTC_HOURS_ALARM) & 0x1f;
+ alm_tm->tm_wday = rtc_read(RTC_DAY_ALARM) & 0x07;
+ cmd = rtc_read(RTC_CMD);
+ spin_unlock_irqrestore(&ds1286_lock, flags);
+
+ BCD_TO_BIN(alm_tm->tm_min);
+ BCD_TO_BIN(alm_tm->tm_hour);
+ alm_tm->tm_sec = 0;
+}
+
+module_init(ds1286_init);
+module_exit(ds1286_exit);
+
+MODULE_AUTHOR("Ralf Baechle");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_MISCDEV(RTC_MINOR);
--- /dev/null
+/*
+ * Intel & MS High Precision Event Timer Implementation.
+ * Contributors:
+ * Venki Pallipadi
+ * Bob Picco
+ */
+
+#include <linux/config.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/ioport.h>
+#include <linux/fcntl.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/proc_fs.h>
+#include <linux/spinlock.h>
+#include <linux/sysctl.h>
+#include <linux/wait.h>
+#include <linux/bcd.h>
+#include <linux/seq_file.h>
+
+#include <asm/current.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/bitops.h>
+#include <asm/div64.h>
+
+#include <linux/acpi.h>
+#include <acpi/acpi_bus.h>
+#include <linux/hpet.h>
+
+/*
+ * The High Precision Event Timer driver.
+ * This driver is closely modelled after the rtc.c driver.
+ * http://www.intel.com/labs/platcomp/hpet/hpetspec.htm
+ */
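+/*
+ * Typical user-space usage, sketched (the HPET_* ioctls are defined in
+ * <linux/hpet.h>; the /dev/hpet node name is assumed):
+ *
+ *	int fd = open("/dev/hpet", O_RDONLY);
+ *	ioctl(fd, HPET_IRQFREQ, 64);	// interrupts per second, power of two
+ *	ioctl(fd, HPET_IE_ON, 0);	// arm the timer interrupt
+ *	unsigned long data;
+ *	read(fd, &data, sizeof(data));	// blocks until an interrupt arrives
+ */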
+#define HPET_USER_FREQ (64)
+#define HPET_DRIFT (500)
+
+static u32 hpet_ntimer, hpet_nhpet, hpet_max_freq = HPET_USER_FREQ;
+
+/* A lock for concurrent access by app and isr hpet activity. */
+static spinlock_t hpet_lock = SPIN_LOCK_UNLOCKED;
+/* A lock for concurrent intermodule access to hpet and isr hpet activity. */
+static spinlock_t hpet_task_lock = SPIN_LOCK_UNLOCKED;
+
+struct hpet_dev {
+ struct hpets *hd_hpets;
+ struct hpet *hd_hpet;
+ struct hpet_timer *hd_timer;
+ unsigned long hd_ireqfreq;
+ unsigned long hd_irqdata;
+ wait_queue_head_t hd_waitqueue;
+ struct fasync_struct *hd_async_queue;
+ struct hpet_task *hd_task;
+ unsigned int hd_flags;
+ unsigned int hd_irq;
+ unsigned int hd_hdwirq;
+};
+
+struct hpets {
+ struct hpets *hp_next;
+ struct hpet *hp_hpet;
+ unsigned long hp_period;
+ unsigned long hp_delta;
+ unsigned int hp_ntimer;
+ unsigned int hp_which;
+ struct hpet_dev hp_dev[1];
+};
+
+static struct hpets *hpets;
+
+#define HPET_OPEN 0x0001
+#define HPET_IE 0x0002 /* interrupt enabled */
+#define HPET_PERIODIC 0x0004
+
+#if BITS_PER_LONG == 64
+#define write_counter(V, MC) writeq(V, MC)
+#define read_counter(MC) readq(MC)
+#else
+#define write_counter(V, MC) writel(V, MC)
+#define read_counter(MC) readl(MC)
+#endif
+
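+/*
+ * Fallbacks for 32-bit builds: a 64-bit HPET register is accessed as
+ * two 32-bit halves.  Such an access is not atomic, so readers of the
+ * free-running main counter serialize via hpet_lock (or disabled
+ * interrupts) to avoid acting on a torn value.
+ */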
+#ifndef readq
+static inline unsigned long long readq(void *addr)
+{
+ return readl(addr) | (((unsigned long long)readl(addr + 4)) << 32LL);
+}
+#endif
+
+#ifndef writeq
+static inline void writeq(unsigned long long v, void *addr)
+{
+ writel(v & 0xffffffff, addr);
+ writel(v >> 32, addr + 4);
+}
+#endif
+
+static irqreturn_t hpet_interrupt(int irq, void *data, struct pt_regs *regs)
+{
+ struct hpet_dev *devp;
+ unsigned long isr;
+
+ devp = data;
+
+ spin_lock(&hpet_lock);
+ devp->hd_irqdata++;
+
+ /*
+ * For non-periodic timers, increment the accumulator: re-arm the
+ * comparator one interval past the current main counter reading.
+ * This has the effect of treating non-periodic like periodic.
+ */
+ if ((devp->hd_flags & (HPET_IE | HPET_PERIODIC)) == HPET_IE) {
+ unsigned long m, t;
+
+ t = devp->hd_ireqfreq;
+ m = read_counter(&devp->hd_hpet->hpet_mc);
+ write_counter(t + m + devp->hd_hpets->hp_delta,
+ &devp->hd_timer->hpet_compare);
+ }
+
+ isr = (1 << (devp - devp->hd_hpets->hp_dev));
+ writeq(isr, &devp->hd_hpet->hpet_isr);
+ spin_unlock(&hpet_lock);
+
+ spin_lock(&hpet_task_lock);
+ if (devp->hd_task)
+ devp->hd_task->ht_func(devp->hd_task->ht_data);
+ spin_unlock(&hpet_task_lock);
+
+ wake_up_interruptible(&devp->hd_waitqueue);
+
+ kill_fasync(&devp->hd_async_queue, SIGIO, POLL_IN);
+
+ return IRQ_HANDLED;
+}
+
+static int hpet_open(struct inode *inode, struct file *file)
+{
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+ int i;
+
+ spin_lock_irq(&hpet_lock);
+
+ for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
+ for (i = 0; i < hpetp->hp_ntimer; i++)
+ if (hpetp->hp_dev[i].hd_flags & HPET_OPEN
+ || hpetp->hp_dev[i].hd_task)
+ continue;
+ else {
+ devp = &hpetp->hp_dev[i];
+ break;
+ }
+
+ if (!devp) {
+ spin_unlock_irq(&hpet_lock);
+ return -EBUSY;
+ }
+
+ file->private_data = devp;
+ devp->hd_irqdata = 0;
+ devp->hd_flags |= HPET_OPEN;
+ spin_unlock_irq(&hpet_lock);
+
+ return 0;
+}
+
+static ssize_t
+hpet_read(struct file *file, char *buf, size_t count, loff_t * ppos)
+{
+ DECLARE_WAITQUEUE(wait, current);
+ unsigned long data;
+ ssize_t retval;
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+ if (!devp->hd_ireqfreq)
+ return -EIO;
+
+ if (count < sizeof(unsigned long))
+ return -EINVAL;
+
+ add_wait_queue(&devp->hd_waitqueue, &wait);
+
+ do {
+ __set_current_state(TASK_INTERRUPTIBLE);
+
+ spin_lock_irq(&hpet_lock);
+ data = devp->hd_irqdata;
+ devp->hd_irqdata = 0;
+ spin_unlock_irq(&hpet_lock);
+
+ if (data)
+ break;
+ else if (file->f_flags & O_NONBLOCK) {
+ retval = -EAGAIN;
+ goto out;
+ } else if (signal_pending(current)) {
+ retval = -ERESTARTSYS;
+ goto out;
+ }
+
+ schedule();
+
+ } while (1);
+
+ retval = put_user(data, (unsigned long *)buf);
+ if (!retval)
+ retval = sizeof(unsigned long);
+ out:
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&devp->hd_waitqueue, &wait);
+
+ return retval;
+}
+
+static unsigned int hpet_poll(struct file *file, poll_table * wait)
+{
+ unsigned long v;
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+
+ if (!devp->hd_ireqfreq)
+ return 0;
+
+ poll_wait(file, &devp->hd_waitqueue, wait);
+
+ spin_lock_irq(&hpet_lock);
+ v = devp->hd_irqdata;
+ spin_unlock_irq(&hpet_lock);
+
+ if (v != 0)
+ return POLLIN | POLLRDNORM;
+
+ return 0;
+}
+
+static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+{
+#ifdef CONFIG_HPET_NOMMAP
+ return -ENOSYS;
+#else
+ struct hpet_dev *devp;
+ unsigned long addr;
+
+ if (((vma->vm_end - vma->vm_start) != PAGE_SIZE) || vma->vm_pgoff)
+ return -EINVAL;
+
+ if (vma->vm_flags & VM_WRITE)
+ return -EPERM;
+
+ devp = file->private_data;
+ addr = (unsigned long)devp->hd_hpet;
+
+ if (addr & (PAGE_SIZE - 1))
+ return -ENOSYS;
+
+ vma->vm_flags |= VM_IO;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ addr = __pa(addr);
+
+ if (remap_page_range
+ (vma, vma->vm_start, addr, PAGE_SIZE, vma->vm_page_prot)) {
+ printk(KERN_ERR "remap_page_range failed in hpet.c\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+#endif
+}
+
+static int hpet_fasync(int fd, struct file *file, int on)
+{
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+
+ if (fasync_helper(fd, file, on, &devp->hd_async_queue) >= 0)
+ return 0;
+ else
+ return -EIO;
+}
+
+static int hpet_release(struct inode *inode, struct file *file)
+{
+ struct hpet_dev *devp;
+ struct hpet_timer *timer;
+ int irq = 0;
+
+ devp = file->private_data;
+ timer = devp->hd_timer;
+
+ spin_lock_irq(&hpet_lock);
+
+ writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
+ &timer->hpet_config);
+
+ irq = devp->hd_irq;
+ devp->hd_irq = 0;
+
+ devp->hd_ireqfreq = 0;
+
+ if (devp->hd_flags & HPET_PERIODIC
+ && readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
+ unsigned long v;
+
+ v = readq(&timer->hpet_config);
+ v ^= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ }
+
+ devp->hd_flags &= ~(HPET_OPEN | HPET_IE | HPET_PERIODIC);
+ spin_unlock_irq(&hpet_lock);
+
+ if (irq)
+ free_irq(irq, devp);
+
+ if (file->f_flags & FASYNC)
+ hpet_fasync(-1, file, 0);
+
+ file->private_data = 0;
+ return 0;
+}
+
+static int hpet_ioctl_common(struct hpet_dev *, int, unsigned long, int);
+
+static int
+hpet_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+ return hpet_ioctl_common(devp, cmd, arg, 0);
+}
+
+static int hpet_ioctl_ieon(struct hpet_dev *devp)
+{
+ struct hpet_timer *timer;
+ struct hpet *hpet;
+ struct hpets *hpetp;
+ int irq;
+ unsigned long g, v, t, m;
+ unsigned long flags, isr;
+
+ timer = devp->hd_timer;
+ hpet = devp->hd_hpet;
+ hpetp = devp->hd_hpets;
+
+ v = readq(&timer->hpet_config);
+ spin_lock_irq(&hpet_lock);
+
+ if (devp->hd_flags & HPET_IE) {
+ spin_unlock_irq(&hpet_lock);
+ return -EBUSY;
+ }
+
+ devp->hd_flags |= HPET_IE;
+ spin_unlock_irq(&hpet_lock);
+
+ t = readq(&timer->hpet_config);
+ irq = devp->hd_hdwirq;
+
+ if (irq) {
+ char name[7];
+
+ sprintf(name, "hpet%d", (int)(devp - hpetp->hp_dev));
+
+ if (request_irq
+ (irq, hpet_interrupt, SA_INTERRUPT, name, (void *)devp)) {
+ printk(KERN_ERR "hpet: IRQ %d is not free\n", irq);
+ irq = 0;
+ }
+ }
+
+ if (irq == 0) {
+ spin_lock_irq(&hpet_lock);
+ devp->hd_flags ^= HPET_IE;
+ spin_unlock_irq(&hpet_lock);
+ return -EIO;
+ }
+
+ devp->hd_irq = irq;
+ t = devp->hd_ireqfreq;
+ v = readq(&timer->hpet_config);
+ g = v | Tn_INT_ENB_CNF_MASK;
+
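+ /*
+ * Periodic mode: Tn_TYPE_CNF selects periodic operation; per the HPET
+ * spec, setting Tn_VAL_SET_CNF makes the next comparator write load
+ * the period accumulator directly instead of just the match value.
+ */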
+ if (devp->hd_flags & HPET_PERIODIC) {
+ write_counter(t, &timer->hpet_compare);
+ g |= Tn_TYPE_CNF_MASK;
+ v |= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ v |= Tn_VAL_SET_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ local_irq_save(flags);
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ } else {
+ local_irq_save(flags);
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ }
+
+ isr = (1 << (devp - hpets->hp_dev));
+ writeq(isr, &hpet->hpet_isr);
+ writeq(g, &timer->hpet_config);
+ local_irq_restore(flags);
+
+ return 0;
+}
+
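+/*
+ * hp_period is the main counter tick period in femtoseconds; with
+ * 10^15 femtoseconds per second, dividing 10^15 by (period * Hz)
+ * yields ticks per interrupt, and the same division converts a tick
+ * count back into Hz (as in the HPET_INFO path).
+ */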
+static inline unsigned long hpet_time_div(unsigned long dis)
+{
+ unsigned long long m = 1000000000000000ULL;
+
+ do_div(m, dis);
+
+ return (unsigned long)m;
+}
+
+static int
+hpet_ioctl_common(struct hpet_dev *devp, int cmd, unsigned long arg, int kernel)
+{
+ struct hpet_timer *timer;
+ struct hpet *hpet;
+ struct hpets *hpetp;
+ int err;
+ unsigned long v;
+
+ switch (cmd) {
+ case HPET_IE_OFF:
+ case HPET_INFO:
+ case HPET_EPI:
+ case HPET_DPI:
+ case HPET_IRQFREQ:
+ timer = devp->hd_timer;
+ hpet = devp->hd_hpet;
+ hpetp = devp->hd_hpets;
+ break;
+ case HPET_IE_ON:
+ return hpet_ioctl_ieon(devp);
+ default:
+ return -EINVAL;
+ }
+
+ err = 0;
+
+ switch (cmd) {
+ case HPET_IE_OFF:
+ if ((devp->hd_flags & HPET_IE) == 0)
+ break;
+ v = readq(&timer->hpet_config);
+ v &= ~Tn_INT_ENB_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ if (devp->hd_irq) {
+ free_irq(devp->hd_irq, devp);
+ devp->hd_irq = 0;
+ }
+ devp->hd_flags ^= HPET_IE;
+ break;
+ case HPET_INFO:
+ {
+ struct hpet_info info;
+
+ info.hi_ireqfreq = hpet_time_div(hpetp->hp_period *
+ devp->hd_ireqfreq);
+ info.hi_flags =
+ readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK;
+ info.hi_hpet = devp->hd_hpets->hp_which;
+ info.hi_timer = devp - devp->hd_hpets->hp_dev;
+ if (copy_to_user((void *)arg, &info, sizeof(info)))
+ err = -EFAULT;
+ break;
+ }
+ case HPET_EPI:
+ v = readq(&timer->hpet_config);
+ if ((v & Tn_PER_INT_CAP_MASK) == 0) {
+ err = -ENXIO;
+ break;
+ }
+ devp->hd_flags |= HPET_PERIODIC;
+ break;
+ case HPET_DPI:
+ v = readq(&timer->hpet_config);
+ if ((v & Tn_PER_INT_CAP_MASK) == 0) {
+ err = -ENXIO;
+ break;
+ }
+ if (devp->hd_flags & HPET_PERIODIC &&
+ readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
+ v = readq(&timer->hpet_config);
+ v ^= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ }
+ devp->hd_flags &= ~HPET_PERIODIC;
+ break;
+ case HPET_IRQFREQ:
+ if (!kernel && (arg > hpet_max_freq) &&
+ !capable(CAP_SYS_RESOURCE)) {
+ err = -EACCES;
+ break;
+ }
+
+ if (arg & (arg - 1)) {
+ err = -EINVAL;
+ break;
+ }
+
+ devp->hd_ireqfreq = hpet_time_div(hpetp->hp_period * arg);
+ }
+
+ return err;
+}
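+
+/*
+ * Illustrative user space sketch (editorial addition, not part of this
+ * driver): a client typically opens the character device, requests a
+ * power-of-two frequency, and enables the timer interrupt.  Error
+ * handling is omitted and the device path is the conventional one.
+ *
+ *	int fd = open("/dev/hpet", O_RDONLY);
+ *
+ *	ioctl(fd, HPET_IRQFREQ, 64);	// must be a power of two (see above)
+ *	ioctl(fd, HPET_EPI, 0);		// periodic mode, if the timer is capable
+ *	ioctl(fd, HPET_IE_ON, 0);	// request_irq() + arm the comparator
+ *
+ * The descriptor can then be read or polled for interrupt events (see the
+ * read/poll entry points in hpet_fops below).
+ */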
+
+static struct file_operations hpet_fops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .read = hpet_read,
+ .poll = hpet_poll,
+ .ioctl = hpet_ioctl,
+ .open = hpet_open,
+ .release = hpet_release,
+ .fasync = hpet_fasync,
+ .mmap = hpet_mmap,
+};
+
+EXPORT_SYMBOL(hpet_alloc);
+EXPORT_SYMBOL(hpet_register);
+EXPORT_SYMBOL(hpet_unregister);
+EXPORT_SYMBOL(hpet_control);
+
+int hpet_register(struct hpet_task *tp, int periodic)
+{
+ unsigned int i;
+ u64 mask;
+ struct hpet_timer *timer;
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+
+ switch (periodic) {
+ case 1:
+ mask = Tn_PER_INT_CAP_MASK;
+ break;
+ case 0:
+ mask = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ spin_lock_irq(&hpet_task_lock);
+ spin_lock(&hpet_lock);
+
+ for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
+ for (timer = hpetp->hp_hpet->hpet_timers, i = 0;
+ i < hpetp->hp_ntimer; i++, timer++) {
+ if ((readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK)
+ != mask)
+ continue;
+
+ devp = &hpetp->hp_dev[i];
+
+ if (devp->hd_flags & HPET_OPEN || devp->hd_task) {
+ devp = NULL;
+ continue;
+ }
+
+ tp->ht_opaque = devp;
+ devp->hd_task = tp;
+ break;
+ }
+
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+
+ if (tp->ht_opaque)
+ return 0;
+ else
+ return -EBUSY;
+}
+
+static inline int hpet_tpcheck(struct hpet_task *tp)
+{
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+
+ devp = tp->ht_opaque;
+
+ if (!devp)
+ return -ENXIO;
+
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (devp >= hpetp->hp_dev
+ && devp < (hpetp->hp_dev + hpetp->hp_ntimer)
+ && devp->hd_hpet == hpetp->hp_hpet)
+ return 0;
+
+ return -ENXIO;
+}
+
+int hpet_unregister(struct hpet_task *tp)
+{
+ struct hpet_dev *devp;
+ struct hpet_timer *timer;
+ int err;
+
+ if ((err = hpet_tpcheck(tp)))
+ return err;
+
+ spin_lock_irq(&hpet_task_lock);
+ spin_lock(&hpet_lock);
+
+ devp = tp->ht_opaque;
+ if (devp->hd_task != tp) {
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+ return -ENXIO;
+ }
+
+ timer = devp->hd_timer;
+ writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
+ &timer->hpet_config);
+ devp->hd_flags &= ~(HPET_IE | HPET_PERIODIC);
+ devp->hd_task = NULL;
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+
+ return 0;
+}
+
+int hpet_control(struct hpet_task *tp, unsigned int cmd, unsigned long arg)
+{
+ struct hpet_dev *devp;
+ int err;
+
+ if ((err = hpet_tpcheck(tp)))
+ return err;
+
+ spin_lock_irq(&hpet_lock);
+ devp = tp->ht_opaque;
+ if (devp->hd_task != tp) {
+ spin_unlock_irq(&hpet_lock);
+ return -ENXIO;
+ }
+ spin_unlock_irq(&hpet_lock);
+ return hpet_ioctl_common(devp, cmd, arg, 1);
+}
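+
+/*
+ * In-kernel usage sketch (editorial addition): hpet_register(),
+ * hpet_control() and hpet_unregister() form the exported kernel-client
+ * API.  A hypothetical client could claim a periodic-capable timer with:
+ *
+ *	static struct hpet_task my_task;	// fields per include/linux/hpet.h
+ *
+ *	if (!hpet_register(&my_task, 1))
+ *		hpet_control(&my_task, HPET_IRQFREQ, 64);
+ *	...
+ *	hpet_unregister(&my_task);
+ */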
+
+#ifdef CONFIG_TIME_INTERPOLATION
+
+static unsigned long hpet_offset, last_wall_hpet;
+static long hpet_nsecs_per_cycle, hpet_cycles_per_sec;
+
+static unsigned long hpet_getoffset(void)
+{
+ return hpet_offset + (read_counter(&hpets->hp_hpet->hpet_mc) -
+ last_wall_hpet) * hpet_nsecs_per_cycle;
+}
+
+static void hpet_update(long delta)
+{
+ unsigned long mc;
+ unsigned long offset;
+
+ mc = read_counter(&hpets->hp_hpet->hpet_mc);
+ offset = hpet_offset + (mc - last_wall_hpet) * hpet_nsecs_per_cycle;
+
+ if (delta < 0 || (unsigned long)delta < offset)
+ hpet_offset = offset - delta;
+ else
+ hpet_offset = 0;
+ last_wall_hpet = mc;
+}
+
+static void hpet_reset(void)
+{
+ hpet_offset = 0;
+ last_wall_hpet = read_counter(&hpets->hp_hpet->hpet_mc);
+}
+
+static struct time_interpolator hpet_interpolator = {
+ .get_offset = hpet_getoffset,
+ .update = hpet_update,
+ .reset = hpet_reset
+};
+
+#endif
+
+static ctl_table hpet_table[] = {
+ {
+ .ctl_name = 1,
+ .procname = "max-user-freq",
+ .data = &hpet_max_freq,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+ {.ctl_name = 0}
+};
+
+static ctl_table hpet_root[] = {
+ {
+ .ctl_name = 1,
+ .procname = "hpet",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = hpet_table,
+ },
+ {.ctl_name = 0}
+};
+
+static ctl_table dev_root[] = {
+ {
+ .ctl_name = CTL_DEV,
+ .procname = "dev",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = hpet_root,
+ },
+ {.ctl_name = 0}
+};
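+
+/*
+ * Editorial note: the nested tables above surface as
+ * /proc/sys/dev/hpet/max-user-freq.  For example, an administrator could
+ * raise the cap on unprivileged interrupt frequencies with:
+ *
+ *	echo 64 > /proc/sys/dev/hpet/max-user-freq
+ */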
+
+static struct ctl_table_header *sysctl_header;
+
+static void *hpet_start(struct seq_file *s, loff_t * pos)
+{
+ struct hpets *hpetp;
+ loff_t n;
+
+ for (n = *pos, hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (!n--)
+ return hpetp;
+
+ return NULL;
+}
+
+static void *hpet_next(struct seq_file *s, void *v, loff_t * pos)
+{
+ struct hpets *hpetp;
+
+ hpetp = v;
+ ++*pos;
+ return hpetp->hp_next;
+}
+
+static void hpet_stop(struct seq_file *s, void *v)
+{
+ return;
+}
+
+static int hpet_show(struct seq_file *s, void *v)
+{
+ struct hpets *hpetp;
+ struct hpet *hpet;
+ u64 cap, vendor, period;
+
+ hpetp = v;
+ hpet = hpetp->hp_hpet;
+
+ cap = readq(&hpet->hpet_cap);
+ period = (cap & HPET_COUNTER_CLK_PERIOD_MASK) >>
+ HPET_COUNTER_CLK_PERIOD_SHIFT;
+ vendor = (cap & HPET_VENDOR_ID_MASK) >> HPET_VENDOR_ID_SHIFT;
+
+ seq_printf(s,
+ "HPET%d period = %d 10**-15 vendor = 0x%x number timer = %d\n",
+ hpetp->hp_which, (u32) period, (u32) vendor,
+ hpetp->hp_ntimer);
+
+ return 0;
+}
+
+static struct seq_operations hpet_seq_ops = {
+ .start = hpet_start,
+ .next = hpet_next,
+ .stop = hpet_stop,
+ .show = hpet_show
+};
+
+static int hpet_proc_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &hpet_seq_ops);
+}
+
+static struct file_operations hpet_proc_fops = {
+ .open = hpet_proc_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release
+};
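+
+/*
+ * Editorial note: the seq_file above backs /proc/driver/hpet (hooked up
+ * in hpet_init()).  Sample output, with illustrative values:
+ *
+ *	HPET0 period = 69841279 10**-15 vendor = 0x8086 number timer = 3
+ */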
+
+/*
+ * Adjustment for when arming the timer with
+ * initial conditions. That is, main counter
+ * ticks expired before interrupts are enabled.
+ */
+#define TICK_CALIBRATE (1000UL)
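+
+/*
+ * Editorial note: in hpet_calibrate() below,
+ * hpet_time_div(hp_period * TICK_CALIBRATE) is the number of main counter
+ * ticks in 1/TICK_CALIBRATE seconds, so the loop exercises the
+ * read-counter/write-comparator sequence for roughly one millisecond and
+ * returns the average tick cost per iteration, which becomes hp_delta.
+ */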
+
+static unsigned long __init hpet_calibrate(struct hpets *hpetp)
+{
+ struct hpet_timer *timer;
+ unsigned long t, m, count, i, flags, start;
+ struct hpet_dev *devp;
+ int j;
+ struct hpet *hpet;
+
+ for (timer = NULL, j = 0, devp = hpetp->hp_dev; j < hpetp->hp_ntimer;
+ j++, devp++)
+ if ((devp->hd_flags & HPET_OPEN) == 0) {
+ timer = devp->hd_timer;
+ break;
+ }
+
+ if (!timer)
+ return 0;
+
+ hpet = hpets->hp_hpet;
+ t = read_counter(&timer->hpet_compare);
+
+ i = 0;
+ count = hpet_time_div(hpetp->hp_period * TICK_CALIBRATE);
+
+ local_irq_save(flags);
+
+ start = read_counter(&hpet->hpet_mc);
+
+ do {
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ } while (i++, (m - start) < count);
+
+ local_irq_restore(flags);
+
+ return (m - start) / i;
+}
+
+int __init hpet_alloc(struct hpet_data *hdp)
+{
+ u64 cap, mcfg;
+ struct hpet_dev *devp;
+ u32 i, ntimer;
+ struct hpets *hpetp;
+ size_t siz;
+ struct hpet *hpet;
+ static struct hpets *last __initdata = NULL;
+
+ /*
+ * hpet_alloc can be called by platform dependent code.
+ * If platform dependent code has already allocated the hpet and
+ * ACPI also reports the hpet, then we catch the duplicate here.
+ */
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (hpetp->hp_hpet == (struct hpet *)(hdp->hd_address))
+ return 0;
+
+ siz = sizeof(struct hpets) + ((hdp->hd_nirqs - 1) *
+ sizeof(struct hpet_dev));
+
+ hpetp = kmalloc(siz, GFP_KERNEL);
+
+ if (!hpetp)
+ return -ENOMEM;
+
+ memset(hpetp, 0, siz);
+
+ hpetp->hp_which = hpet_nhpet++;
+ hpetp->hp_hpet = (struct hpet *)hdp->hd_address;
+
+ hpetp->hp_ntimer = hdp->hd_nirqs;
+
+ for (i = 0; i < hdp->hd_nirqs; i++)
+ hpetp->hp_dev[i].hd_hdwirq = hdp->hd_irq[i];
+
+ hpet = hpetp->hp_hpet;
+
+ cap = readq(&hpet->hpet_cap);
+
+ ntimer = ((cap & HPET_NUM_TIM_CAP_MASK) >> HPET_NUM_TIM_CAP_SHIFT) + 1;
+
+ if (hpetp->hp_ntimer != ntimer) {
+ printk(KERN_WARNING "hpet: number of irqs doesn't agree"
+ " with number of timers\n");
+ kfree(hpetp);
+ return -ENODEV;
+ }
+
+ if (last)
+ last->hp_next = hpetp;
+ else
+ hpets = hpetp;
+
+ last = hpetp;
+
+ hpetp->hp_period = (cap & HPET_COUNTER_CLK_PERIOD_MASK) >>
+ HPET_COUNTER_CLK_PERIOD_SHIFT;
+
+ mcfg = readq(&hpet->hpet_config);
+ if ((mcfg & HPET_ENABLE_CNF_MASK) == 0) {
+ write_counter(0L, &hpet->hpet_mc);
+ mcfg |= HPET_ENABLE_CNF_MASK;
+ writeq(mcfg, &hpet->hpet_config);
+ }
+
+ for (i = 0, devp = hpetp->hp_dev; i < hpetp->hp_ntimer;
+ i++, hpet_ntimer++, devp++) {
+ unsigned long v;
+ struct hpet_timer *timer;
+
+ timer = &hpet->hpet_timers[devp - hpetp->hp_dev];
+ v = readq(&timer->hpet_config);
+
+ devp->hd_hpets = hpetp;
+ devp->hd_hpet = hpet;
+ devp->hd_timer = timer;
+
+ /*
+ * If the timer was reserved by platform code,
+ * then make timer unavailable for opens.
+ */
+ if (hdp->hd_state & (1 << i)) {
+ devp->hd_flags = HPET_OPEN;
+ continue;
+ }
+
+ init_waitqueue_head(&devp->hd_waitqueue);
+ }
+
+ hpetp->hp_delta = hpet_calibrate(hpetp);
+
+ return 0;
+}
+
+static acpi_status __init hpet_resources(struct acpi_resource *res, void *data)
+{
+ struct hpet_data *hdp;
+ acpi_status status;
+ struct acpi_resource_address64 addr;
+ struct hpets *hpetp;
+
+ hdp = data;
+
+ status = acpi_resource_to_address64(res, &addr);
+
+ if (ACPI_SUCCESS(status)) {
+ unsigned long size;
+
+ size = addr.max_address_range - addr.min_address_range + 1;
+ hdp->hd_address =
+ (unsigned long)ioremap(addr.min_address_range, size);
+
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (hpetp->hp_hpet == (struct hpet *)(hdp->hd_address))
+ return -EBUSY;
+ } else if (res->id == ACPI_RSTYPE_EXT_IRQ) {
+ struct acpi_resource_ext_irq *irqp;
+ int i;
+
+ irqp = &res->data.extended_irq;
+
+ if (irqp->number_of_interrupts > 0) {
+ hdp->hd_nirqs = irqp->number_of_interrupts;
+
+ for (i = 0; i < hdp->hd_nirqs; i++)
+#ifdef CONFIG_IA64
+ hdp->hd_irq[i] =
+ acpi_register_gsi(irqp->interrupts[i],
+ irqp->edge_level,
+ irqp->active_high_low);
+#else
+ hdp->hd_irq[i] = irqp->interrupts[i];
+#endif
+ }
+ }
+
+ return AE_OK;
+}
+
+static int __init hpet_acpi_add(struct acpi_device *device)
+{
+ acpi_status result;
+ struct hpet_data data;
+
+ memset(&data, 0, sizeof(data));
+
+ result =
+ acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+ hpet_resources, &data);
+
+ if (ACPI_FAILURE(result))
+ return -ENODEV;
+
+ if (!data.hd_address || !data.hd_nirqs) {
+ printk(KERN_ERR "%s: no address or irqs in _CRS\n", __FUNCTION__);
+ return -ENODEV;
+ }
+
+ return hpet_alloc(&data);
+}
+
+static int __init hpet_acpi_remove(struct acpi_device *device, int type)
+{
+ return 0;
+}
+
+static struct acpi_driver hpet_acpi_driver __initdata = {
+ .name = "hpet",
+ .class = "",
+ .ids = "PNP0103",
+ .ops = {
+ .add = hpet_acpi_add,
+ .remove = hpet_acpi_remove,
+ },
+};
+
+static struct miscdevice hpet_misc = { HPET_MINOR, "hpet", &hpet_fops };
+
+static int __init hpet_init(void)
+{
+ struct proc_dir_entry *entry;
+
+ (void)acpi_bus_register_driver(&hpet_acpi_driver);
+
+ if (hpets) {
+ if (misc_register(&hpet_misc))
+ return -ENODEV;
+
+ entry = create_proc_entry("driver/hpet", 0, 0);
+
+ if (entry)
+ entry->proc_fops = &hpet_proc_fops;
+
+ sysctl_header = register_sysctl_table(dev_root, 0);
+
+#ifdef CONFIG_TIME_INTERPOLATION
+ {
+ struct hpet *hpet;
+
+ hpet = hpets->hp_hpet;
+ hpet_cycles_per_sec = hpet_time_div(hpets->hp_period);
+ hpet_interpolator.frequency = hpet_cycles_per_sec;
+ hpet_interpolator.drift = hpet_cycles_per_sec *
+ HPET_DRIFT / 1000000;
+ hpet_nsecs_per_cycle = 1000000000 / hpet_cycles_per_sec;
+ register_time_interpolator(&hpet_interpolator);
+ }
+#endif
+ return 0;
+ } else
+ return -ENODEV;
+}
+
+static void __exit hpet_exit(void)
+{
+ acpi_bus_unregister_driver(&hpet_acpi_driver);
+
+ if (hpets) {
+ unregister_sysctl_table(sysctl_header);
+ remove_proc_entry("driver/hpet", NULL);
+ }
+
+ return;
+}
+
+module_init(hpet_init);
+module_exit(hpet_exit);
+MODULE_AUTHOR("Bob Picco <Robert.Picco@hp.com>");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * IBM eServer Hypervisor Virtual Console Server Device Driver
+ * Copyright (C) 2003, 2004 IBM Corp.
+ * Ryan S. Arnold (rsa@us.ibm.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author(s) : Ryan S. Arnold <rsa@us.ibm.com>
+ *
+ * This is the device driver for the IBM Hypervisor Virtual Console Server,
+ * "hvcs". The IBM hvcs provides a tty driver interface to allow Linux
+ * user space applications access to the system consoles of logically
+ * partitioned operating systems, e.g. Linux, running on the same partitioned
+ * Power5 ppc64 system. Physical hardware consoles per partition are not
+ * practical on this hardware so system consoles are accessed by this driver
+ * using inter-partition firmware interfaces to virtual terminal devices.
+ *
+ * A vty is known to the HMC as a "virtual serial server adapter". It is a
+ * virtual terminal device that is created by firmware upon partition creation
+ * to act as a partitioned OS's console device.
+ *
+ * Firmware dynamically (via hotplug) exposes vty-servers to a running ppc64
+ * Linux system upon their creation by the HMC or their exposure during boot.
+ * The non-user interactive backend of this driver is implemented as a vio
+ * device driver so that it can receive notification of vty-server lifetimes
+ * after it registers with the vio bus to handle vty-server probe and remove
+ * callbacks.
+ *
+ * Many vty-servers can be configured to connect to one vty, but a vty can
+ * only be actively connected to by a single vty-server, in any manner, at one
+ * time. If the HMC is currently hosting the console for a target Linux
+ * partition, attempts to open the tty device to the partition's console using
+ * the hvcs on any partition will return -EBUSY with every open attempt until
+ * the HMC frees the connection between its vty-server and the desired
+ * partition's vty device. Conversely, a vty-server may only be connected to
+ * a single vty at one time even though it may have several configured vty
+ * partner possibilities.
+ *
+ * Firmware does not provide notification of vty partner changes to this
+ * driver. This means that an HMC Super Admin may add or remove partner vtys
+ * from a vty-server's partner list but the changes will not be signaled to
+ * the vty-server. Firmware only notifies the driver when a vty-server is
+ * added or removed from the system. To compensate for this deficiency, this
+ * driver implements a sysfs update attribute which provides a method for
+ * rescanning partner information upon a user's request.
+ *
+ * Each vty-server, prior to being exposed to this driver, is reference counted
+ * using the 2.6 Linux kernel kobject construct. This kobject is also used by
+ * the vio bus to provide a vio device sysfs entry that this driver attaches
+ * device specific attributes to, including partner information. The vio bus
+ * framework also provides a sysfs entry for each vio driver. The hvcs driver
+ * provides driver attributes in this entry.
+ *
+ * For directions on installation and usage of this driver please reference
+ * Documentation/powerpc/hvcs.txt.
+ */
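+
+/*
+ * Illustrative usage sketch (editorial addition; see
+ * Documentation/powerpc/hvcs.txt for the authoritative directions).  Once
+ * a vty-server is bound to /dev/hvcs<N>, a console session is just a
+ * matter of pointing a tty-aware program at the node, e.g.:
+ *
+ *	screen /dev/hvcs0
+ *
+ * Partner information can be refreshed through the driver's sysfs
+ * "rescan" attribute mentioned above.
+ */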
+
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/kobject.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/major.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/stat.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <asm/hvconsole.h>
+#include <asm/hvcserver.h>
+#include <asm/uaccess.h>
+#include <asm/vio.h>
+
+/*
+ * 1.0.0 -> 1.1.0 Added kernel_thread scheduling methodology to driver to
+ * replace wait_task constructs.
+ *
+ * 1.1.0 -> 1.2.0 Moved pi_buff initialization out of arch code into driver code
+ * and added locking to share this buffer between hvcs_struct instances. This
+ * is because the page_size kmalloc can't be done with a spin_lock held.
+ *
+ * Also added a sysfs attribute to manually disconnect the vty-server from the
+ * vty because of firmware behavior in which opening the connection, sending
+ * data, and then quickly closing the connection would cause data loss on the
+ * receiving side. This required some reordering of the termination code.
+ *
+ * Fixed the hangup scenario and fixed memory leaks on module_exit.
+ *
+ * 1.2.0 -> 1.3.0 Moved from manual kernel thread creation & execution to
+ * kthread construct which replaced in-kernel IPC for thread termination with
+ * kthread_stop and kthread_should_stop. Explicit wait_queue handling was
+ * removed because kthread handles this. Minor bug fix to postpone partner_info
+ * clearing on hvcs_close until adapter removal to preserve context data for
+ * printk on partner connection free. Added lock to protect hvcs_structs so
+ * that hvcs_struct instances aren't added or removed during list traversal.
+ * Cleaned up comment style, added spaces after commas, and broke function
+ * declaration lines to be under 80 columns.
+ */
+#define HVCS_DRIVER_VERSION "1.3.0"
+
+MODULE_AUTHOR("Ryan S. Arnold <rsa@us.ibm.com>");
+MODULE_DESCRIPTION("IBM hvcs (Hypervisor Virtual Console Server) Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HVCS_DRIVER_VERSION);
+
+/*
+ * Since the Linux TTY code does not currently (2-04-2004) support dynamic
+ * addition of tty derived devices, and we shouldn't allocate thousands of
+ * tty_device pointers when the number of vty-server & vty partner connections
+ * will most often be much lower than this, we'll arbitrarily allocate
+ * HVCS_DEFAULT_SERVER_ADAPTERS tty_structs and cdev's by default when we
+ * register the tty_driver. This can be overridden using an insmod parameter.
+ */
+#define HVCS_DEFAULT_SERVER_ADAPTERS 64
+
+/*
+ * The user can't insmod with more than HVCS_MAX_SERVER_ADAPTERS hvcs device
+ * nodes as a sanity check. Theoretically there can be over 1 billion
+ * vty-server & vty partner connections.
+ */
+#define HVCS_MAX_SERVER_ADAPTERS 1024
+
+/*
+ * We let Linux assign us a major number and we start the minors at zero. There
+ * is no intuitive mapping between minor number and the target partition. The
+ * mapping of minor number is related to the order the vty-servers are exposed
+ * to this driver via the hvcs_probe function.
+ */
+#define HVCS_MINOR_START 0
+
+/*
+ * The hcall interface involves putting 8 chars into each of two registers.
+ * We load up those 2 registers (in arch/ppc64/hvconsole.c) by casting char[16]
+ * to long[2]. It would work without __ALIGNED__, but a little (tiny) bit
+ * slower because an unaligned load is slower than aligned load.
+ */
+#define __ALIGNED__ __attribute__((__aligned__(8)))
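+
+/*
+ * Editorial sketch of the cast described above; the real transfer code
+ * lives in arch/ppc64/hvconsole.c:
+ *
+ *	char buf[16] __ALIGNED__;
+ *	unsigned long *lbuf = (unsigned long *)buf;
+ *	// lbuf[0] and lbuf[1] now load as two aligned 8-byte register images
+ */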
+
+/* Converged location code string length + 1 null terminator */
+#define CLC_LENGTH 80
+
+/*
+ * How much data can firmware send with each hvc_put_chars()? Maybe this
+ * should be moved into an architecture specific area.
+ */
+#define HVCS_BUFF_LEN 16
+
+/*
+ * This is the maximum amount of data we'll let the user send us (hvcs_write) at
+ * once in a chunk as a sanity check.
+ */
+#define HVCS_MAX_FROM_USER 4096
+
+/*
+ * Be careful when adding flags to this line discipline. Don't add anything
+ * that will cause echoing or we'll go into recursive loop echoing chars back
+ * and forth with the console drivers.
+ */
+static struct termios hvcs_tty_termios = {
+ .c_iflag = IGNBRK | IGNPAR,
+ .c_oflag = OPOST,
+ .c_cflag = B38400 | CS8 | CREAD | HUPCL,
+ .c_cc = INIT_C_CC
+};
+
+/*
+ * This value is used to take the place of a command line parameter when the
+ * module is inserted. It starts as -1 and stays as such if the user doesn't
+ * specify a module insmod parameter. If they DO specify one then it is set to
+ * the value of the integer passed in.
+ */
+static int hvcs_parm_num_devs = -1;
+module_param(hvcs_parm_num_devs, int, 0);
+
+char hvcs_driver_name[] = "hvcs";
+char hvcs_device_node[] = "hvcs";
+char hvcs_driver_string[]
+ = "IBM hvcs (Hypervisor Virtual Console Server) Driver";
+
+/* Status of partner info rescan triggered via sysfs. */
+static int hvcs_rescan_status = 0;
+
+static struct tty_driver *hvcs_tty_driver;
+
+/*
+ * This is used to associate a vty-server, as it is exposed to this driver, with
+ * a preallocated tty_struct.index. The dev node and hvcs index numbers are not
+ * re-used after device removal otherwise removing and adding a new one would
+ * link a /dev/hvcs* entry to a different vty-server than it did before the
+ * removal. Incidentally, a newly exposed vty-server will always map to an
+ * incrementally higher /dev/hvcs* entry than the last exposed vty-server.
+ */
+static int hvcs_struct_count = -1;
+
+/*
+ * Used by the khvcsd to pick up I/O operations when the kernel_thread is
+ * already awake but potentially shifted to TASK_INTERRUPTIBLE state.
+ */
+static int hvcs_kicked = 0;
+
+/* Used by the kthread construct for task operations */
+static struct task_struct *hvcs_task;
+
+/*
+ * We allocate this for the use of all of the hvcs_structs when they fetch
+ * partner info.
+ */
+static unsigned long *hvcs_pi_buff;
+
+static spinlock_t hvcs_pi_lock;
+
+/* One vty-server per hvcs_struct */
+struct hvcs_struct {
+ spinlock_t lock;
+
+ /*
+ * This index identifies this hvcs device as the complement to a
+ * specific tty index.
+ */
+ unsigned int index;
+
+ struct tty_struct *tty;
+ unsigned int open_count;
+
+ /*
+ * Used to tell the driver kernel_thread what operations need to take
+ * place upon this hvcs_struct instance.
+ */
+ int todo_mask;
+
+ /*
+ * This buffer is required so that when hvcs_write_room() reports that
+ * it can send HVCS_BUFF_LEN characters that it will buffer the full
+ * HVCS_BUFF_LEN characters if need be. This is essential for opost
+ * writes since they do not do high level buffering and expect to be
+ * able to send what the driver has committed to buffering
+ * [e.g. tab to space conversions in n_tty.c opost()].
+ */
+ char buffer[HVCS_BUFF_LEN];
+ int chars_in_buffer;
+
+ /*
+ * Any variable below the kobject is valid before a tty is connected and
+ * stays valid after the tty is disconnected. These shouldn't be
+ * whacked until the kobject refcount reaches zero, though some entries
+ * may be changed via sysfs initiatives.
+ */
+ struct kobject kobj; /* ref count & hvcs_struct lifetime */
+ int connected; /* is the vty-server currently connected to a vty? */
+ unsigned int p_unit_address; /* partner unit address */
+ unsigned int p_partition_ID; /* partner partition ID */
+ char p_location_code[CLC_LENGTH];
+ struct list_head next; /* list management */
+ struct vio_dev *vdev;
+};
+
+/* Required to back map a kobject to its containing object */
+#define from_kobj(kobj) container_of(kobj, struct hvcs_struct, kobj)
+
+static struct list_head hvcs_structs = LIST_HEAD_INIT(hvcs_structs);
+static spinlock_t hvcs_structs_lock;
+
+static void hvcs_unthrottle(struct tty_struct *tty);
+static void hvcs_throttle(struct tty_struct *tty);
+static irqreturn_t hvcs_handle_interrupt(int irq, void *dev_instance,
+ struct pt_regs *regs);
+
+static int hvcs_write(struct tty_struct *tty, int from_user,
+ const unsigned char *buf, int count);
+static int hvcs_write_room(struct tty_struct *tty);
+static int hvcs_chars_in_buffer(struct tty_struct *tty);
+
+static int hvcs_has_pi(struct hvcs_struct *hvcsd);
+static void hvcs_set_pi(struct hvcs_partner_info *pi,
+ struct hvcs_struct *hvcsd);
+static int hvcs_get_pi(struct hvcs_struct *hvcsd);
+static int hvcs_rescan_devices_list(void);
+
+static int hvcs_partner_connect(struct hvcs_struct *hvcsd);
+static void hvcs_partner_free(struct hvcs_struct *hvcsd);
+
+static int hvcs_enable_device(struct hvcs_struct *hvcsd,
+ uint32_t unit_address, unsigned int irq, struct vio_dev *dev);
+static void hvcs_final_close(struct hvcs_struct *hvcsd);
+
+static void destroy_hvcs_struct(struct kobject *kobj);
+static int hvcs_open(struct tty_struct *tty, struct file *filp);
+static void hvcs_close(struct tty_struct *tty, struct file *filp);
+static void hvcs_hangup(struct tty_struct * tty);
+
+static void hvcs_create_device_attrs(struct hvcs_struct *hvcsd);
+static void hvcs_remove_device_attrs(struct vio_dev *vdev);
+static void hvcs_create_driver_attrs(void);
+static void hvcs_remove_driver_attrs(void);
+
+static int __devinit hvcs_probe(struct vio_dev *dev,
+ const struct vio_device_id *id);
+static int __devexit hvcs_remove(struct vio_dev *dev);
+static int __init hvcs_module_init(void);
+static void __exit hvcs_module_exit(void);
+
+#define HVCS_SCHED_READ 0x00000001
+#define HVCS_QUICK_READ 0x00000002
+#define HVCS_TRY_WRITE 0x00000004
+#define HVCS_READ_MASK (HVCS_SCHED_READ | HVCS_QUICK_READ)
+
+static void hvcs_kick(void)
+{
+ hvcs_kicked = 1;
+ wmb();
+ wake_up_process(hvcs_task);
+}
+
+static void hvcs_unthrottle(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ hvcs_kick();
+}
+
+static void hvcs_throttle(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ vio_disable_interrupts(hvcsd->vdev);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+}
+
+/*
+ * If the device is being removed we don't have to worry about this interrupt
+ * handler taking any further interrupts because they are disabled which means
+ * the hvcs_struct will always be valid in this handler.
+ */
+static irqreturn_t hvcs_handle_interrupt(int irq, void *dev_instance,
+ struct pt_regs *regs)
+{
+ struct hvcs_struct *hvcsd = dev_instance;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ vio_disable_interrupts(hvcsd->vdev);
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ hvcs_kick();
+
+ return IRQ_HANDLED;
+}
+
+/* This function must be called with the hvcsd->lock held */
+static void hvcs_try_write(struct hvcs_struct *hvcsd)
+{
+ unsigned int unit_address = hvcsd->vdev->unit_address;
+ struct tty_struct *tty = hvcsd->tty;
+ int sent;
+
+ if (hvcsd->todo_mask & HVCS_TRY_WRITE) {
+ /* won't send partial writes */
+ sent = hvc_put_chars(unit_address,
+ &hvcsd->buffer[0],
+ hvcsd->chars_in_buffer );
+ if (sent > 0) {
+ hvcsd->chars_in_buffer = 0;
+ wmb();
+ hvcsd->todo_mask &= ~(HVCS_TRY_WRITE);
+ wmb();
+
+ /*
+ * We are still obligated to deliver the data to the
+ * hypervisor even if the tty has been closed because
+ * we committed to delivering it. But don't try to wake
+ * a non-existent tty.
+ */
+ if (tty) {
+ if ((tty->flags & (1 << TTY_DO_WRITE_WAKEUP))
+ && tty->ldisc.write_wakeup)
+ (tty->ldisc.write_wakeup) (tty);
+ wake_up_interruptible(&tty->write_wait);
+ }
+ }
+ }
+}
+
+static int hvcs_io(struct hvcs_struct *hvcsd)
+{
+ unsigned int unit_address;
+ struct tty_struct *tty;
+ char buf[HVCS_BUFF_LEN] __ALIGNED__;
+ unsigned long flags;
+ int got;
+ int i;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ unit_address = hvcsd->vdev->unit_address;
+ tty = hvcsd->tty;
+
+ hvcs_try_write(hvcsd);
+
+ if (!tty || test_bit(TTY_THROTTLED, &tty->flags)) {
+ hvcsd->todo_mask &= ~(HVCS_READ_MASK);
+ goto bail;
+ } else if (!(hvcsd->todo_mask & (HVCS_READ_MASK)))
+ goto bail;
+
+ /* remove the read masks */
+ hvcsd->todo_mask &= ~(HVCS_READ_MASK);
+
+ if ((tty->flip.count + HVCS_BUFF_LEN) < TTY_FLIPBUF_SIZE) {
+ got = hvc_get_chars(unit_address,
+ &buf[0],
+ HVCS_BUFF_LEN);
+ for (i = 0; got && i < got; i++)
+ tty_insert_flip_char(tty, buf[i], TTY_NORMAL);
+ }
+
+ /* Give the TTY time to process the data we just sent. */
+ if (got)
+ hvcsd->todo_mask |= HVCS_QUICK_READ;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ if (tty->flip.count) {
+ /* This is synch because tty->low_latency == 1 */
+ tty_flip_buffer_push(tty);
+ }
+
+ if (!got) {
+ /* Do this _after_ the flip_buffer_push */
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ vio_enable_interrupts(hvcsd->vdev);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+
+ return hvcsd->todo_mask;
+
+ bail:
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return hvcsd->todo_mask;
+}
+
+static int khvcsd(void *unused)
+{
+ struct hvcs_struct *hvcsd = NULL;
+ struct list_head *element;
+ struct list_head *safe_temp;
+ int hvcs_todo_mask;
+ unsigned long structs_flags;
+
+ __set_current_state(TASK_RUNNING);
+
+ do {
+ hvcs_todo_mask = 0;
+ hvcs_kicked = 0;
+ wmb();
+
+ spin_lock_irqsave(&hvcs_structs_lock, structs_flags);
+ list_for_each_safe(element, safe_temp, &hvcs_structs) {
+ hvcsd = list_entry(element, struct hvcs_struct, next);
+ hvcs_todo_mask |= hvcs_io(hvcsd);
+ }
+ spin_unlock_irqrestore(&hvcs_structs_lock, structs_flags);
+
+ /*
+ * If any of the hvcs adapters want to try a write or quick read
+ * don't schedule(), yield a smidgen then execute the hvcs_io
+ * thread again for those that want the write.
+ */
+ if (hvcs_todo_mask & (HVCS_TRY_WRITE | HVCS_QUICK_READ)) {
+ yield();
+ continue;
+ }
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (!hvcs_kicked)
+ schedule();
+ __set_current_state(TASK_RUNNING);
+ } while (!kthread_should_stop());
+
+ return 0;
+}
+
+static struct vio_device_id hvcs_driver_table[] __devinitdata = {
+ {"serial-server", "hvterm2"},
+ { 0, }
+};
+MODULE_DEVICE_TABLE(vio, hvcs_driver_table);
+
+/* Callback when the kobject ref count reaches zero */
+static void destroy_hvcs_struct(struct kobject *kobj)
+{
+ struct hvcs_struct *hvcsd = from_kobj(kobj);
+ struct vio_dev *vdev;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ /* the list_del poisons the pointers */
+ list_del(&(hvcsd->next));
+
+ if (hvcsd->connected == 1) {
+ hvcs_partner_free(hvcsd);
+ printk(KERN_INFO "HVCS: Closed vty-server@%X and"
+ " partner vty@%X:%d connection.\n",
+ hvcsd->vdev->unit_address,
+ hvcsd->p_unit_address,
+ (unsigned int)hvcsd->p_partition_ID);
+ }
+ printk(KERN_INFO "HVCS: Destroyed hvcs_struct for vty-server@%X.\n",
+ hvcsd->vdev->unit_address);
+
+ vdev = hvcsd->vdev;
+ hvcsd->vdev = NULL;
+
+ hvcsd->p_unit_address = 0;
+ hvcsd->p_partition_ID = 0;
+ memset(&hvcsd->p_location_code[0], 0x00, CLC_LENGTH);
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ hvcs_remove_device_attrs(vdev);
+
+ kfree(hvcsd);
+}
+
+/* This function must be called with hvcsd->lock held. */
+static void hvcs_final_close(struct hvcs_struct *hvcsd)
+{
+ vio_disable_interrupts(hvcsd->vdev);
+ free_irq(hvcsd->vdev->irq, hvcsd);
+
+ hvcsd->todo_mask = 0;
+
+ /* These two may be redundant if the operation was a close. */
+ if (hvcsd->tty) {
+ hvcsd->tty->driver_data = NULL;
+ hvcsd->tty = NULL;
+ }
+
+ hvcsd->open_count = 0;
+
+ memset(&hvcsd->buffer[0], 0x00, HVCS_BUFF_LEN);
+ hvcsd->chars_in_buffer = 0;
+}
+
+static struct kobj_type hvcs_kobj_type = {
+ .release = destroy_hvcs_struct,
+};
+
+static int __devinit hvcs_probe(
+ struct vio_dev *dev,
+ const struct vio_device_id *id)
+{
+ struct hvcs_struct *hvcsd;
+ unsigned long structs_flags;
+
+ if (!dev || !id) {
+ printk(KERN_ERR "HVCS: probed with invalid parameter.\n");
+ return -EPERM;
+ }
+
+ hvcsd = kmalloc(sizeof(*hvcsd), GFP_KERNEL);
+ if (!hvcsd) {
+ return -ENOMEM;
+ }
+
+ /* hvcsd->tty is zeroed out with the memset */
+ memset(hvcsd, 0x00, sizeof(*hvcsd));
+
+ hvcsd->lock = SPIN_LOCK_UNLOCKED;
+ /* Automatically incs the refcount the first time */
+ kobject_init(&hvcsd->kobj);
+ /* Set up the callback for terminating the hvcs_struct's life */
+ hvcsd->kobj.ktype = &hvcs_kobj_type;
+
+ hvcsd->vdev = dev;
+ dev->dev.driver_data = hvcsd;
+
+ hvcsd->index = ++hvcs_struct_count;
+ hvcsd->chars_in_buffer = 0;
+ hvcsd->todo_mask = 0;
+ hvcsd->connected = 0;
+
+ /*
+ * This will populate the hvcs_struct's partner info fields for the
+ * first time.
+ */
+ if (hvcs_get_pi(hvcsd)) {
+ printk(KERN_ERR "HVCS: Failed to fetch partner"
+ " info for vty-server@%X on device probe.\n",
+ hvcsd->vdev->unit_address);
+ }
+
+ /*
+ * If a user app opens a tty that corresponds to this vty-server before
+ * the hvcs_struct has been added to the devices list then the user app
+ * will get -ENODEV.
+ */
+
+ spin_lock_irqsave(&hvcs_structs_lock, structs_flags);
+
+ list_add_tail(&(hvcsd->next), &hvcs_structs);
+
+ spin_unlock_irqrestore(&hvcs_structs_lock, structs_flags);
+
+ hvcs_create_device_attrs(hvcsd);
+
+ printk(KERN_INFO "HVCS: Added vty-server@%X.\n", dev->unit_address);
+
+ /*
+ * DON'T enable interrupts here because there is no user to receive the
+ * data.
+ */
+ return 0;
+}
+
+static int __devexit hvcs_remove(struct vio_dev *dev)
+{
+ struct hvcs_struct *hvcsd = dev->dev.driver_data;
+ unsigned long flags;
+ struct kobject *kobjp;
+ struct tty_struct *tty;
+
+ if (!hvcsd)
+ return -ENODEV;
+
+ /* By this time the vty-server won't be getting any more interrupts */
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ tty = hvcsd->tty;
+
+ kobjp = &hvcsd->kobj;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * Let the last holder of this object cause it to be removed, which
+ * would probably be tty_hangup below.
+ */
+ kobject_put (kobjp);
+
+ /*
+ * The hangup is a scheduled function which will auto chain call
+ * hvcs_hangup. The tty should always be valid at this time unless a
+ * simultaneous tty close already cleaned up the hvcs_struct.
+ */
+ if (tty)
+ tty_hangup(tty);
+
+ printk(KERN_INFO "HVCS: vty-server@%X removed from the"
+ " vio bus.\n", dev->unit_address);
+ return 0;
+}
+
+static struct vio_driver hvcs_vio_driver = {
+ .name = hvcs_driver_name,
+ .id_table = hvcs_driver_table,
+ .probe = hvcs_probe,
+ .remove = hvcs_remove,
+};
+
+/* Only called from hvcs_get_pi please */
+static void hvcs_set_pi(struct hvcs_partner_info *pi, struct hvcs_struct *hvcsd)
+{
+ int clclength;
+
+ hvcsd->p_unit_address = pi->unit_address;
+ hvcsd->p_partition_ID = pi->partition_ID;
+ clclength = strlen(&pi->location_code[0]);
+ if (clclength > CLC_LENGTH - 1)
+ clclength = CLC_LENGTH - 1;
+
+ /* copy the null-term char too */
+ strncpy(&hvcsd->p_location_code[0],
+ &pi->location_code[0], clclength + 1);
+}
+
+/*
+ * Traverse the list and add the partner info that is found to the hvcs_struct
+ * struct entry. NOTE: At this time I know that partner info will return a
+ * single entry but in the future there may be multiple partner info entries per
+ * vty-server and you'll want to zero out that list and reset it. If for some
+ * reason you have an old version of this driver but there IS more than one
+ * partner info then hvcsd->p_* will hold the last partner info data from the
+ * firmware query. A good way to update this code would be to replace the three
+ * partner info fields in hvcs_struct with a list of hvcs_partner_info
+ * instances.
+ *
+ * This function must be called with the hvcsd->lock held.
+ */
+static int hvcs_get_pi(struct hvcs_struct *hvcsd)
+{
+ /* struct hvcs_partner_info *head_pi = NULL; */
+ struct hvcs_partner_info *pi = NULL;
+ unsigned int unit_address = hvcsd->vdev->unit_address;
+ struct list_head head;
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcs_pi_lock, flags);
+ if (!hvcs_pi_buff) {
+ spin_unlock_irqrestore(&hvcs_pi_lock, flags);
+ return -EFAULT;
+ }
+ retval = hvcs_get_partner_info(unit_address, &head, hvcs_pi_buff);
+ spin_unlock_irqrestore(&hvcs_pi_lock, flags);
+ if (retval) {
+ printk(KERN_ERR "HVCS: Failed to fetch partner"
+ " info for vty-server@%x.\n", unit_address);
+ return retval;
+ }
+
+ /* nixes the values if the partner vty went away */
+ hvcsd->p_unit_address = 0;
+ hvcsd->p_partition_ID = 0;
+
+ list_for_each_entry(pi, &head, node)
+ hvcs_set_pi(pi, hvcsd);
+
+ hvcs_free_partner_info(&head);
+ return 0;
+}
+
+/*
+ * This function is executed by the driver "rescan" sysfs entry. It shouldn't
+ * be executed elsewhere, in order to prevent deadlock issues.
+ */
+static int hvcs_rescan_devices_list(void)
+{
+ struct hvcs_struct *hvcsd = NULL;
+ unsigned long flags;
+ unsigned long structs_flags;
+
+ spin_lock_irqsave(&hvcs_structs_lock, structs_flags);
+
+ list_for_each_entry(hvcsd, &hvcs_structs, next) {
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcs_get_pi(hvcsd);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+
+ spin_unlock_irqrestore(&hvcs_structs_lock, structs_flags);
+
+ return 0;
+}
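+
+/*
+ * Editorial note: the sysfs trigger for the rescan above is the driver
+ * attribute created by hvcs_create_driver_attrs() below.  An illustrative
+ * invocation, with a path typical of the vio bus layout:
+ *
+ *	echo 1 > /sys/bus/vio/drivers/hvcs/rescan
+ */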
+
+/*
+ * Farm this off into its own function because it could be more complex once
+ * multiple partners support is added. This function should be called with
+ * the hvcsd->lock held.
+ */
+static int hvcs_has_pi(struct hvcs_struct *hvcsd)
+{
+ if ((!hvcsd->p_unit_address) || (!hvcsd->p_partition_ID))
+ return 0;
+ return 1;
+}
+
+/*
+ * NOTE: It is possible that the super admin removed a partner vty and then
+ * added a different vty as the new partner.
+ *
+ * This function must be called with the hvcsd->lock held.
+ */
+static int hvcs_partner_connect(struct hvcs_struct *hvcsd)
+{
+ int retval;
+ unsigned int unit_address = hvcsd->vdev->unit_address;
+
+ /*
+ * If there wasn't any pi when the device was added it doesn't mean
+ * there isn't any now. This driver isn't notified when a new partner
+ * vty is added to a vty-server so we discover changes on our own.
+ * Please see comments in hvcs_register_connection() for justification
+ * of this bizarre code.
+ */
+ retval = hvcs_register_connection(unit_address,
+ hvcsd->p_partition_ID,
+ hvcsd->p_unit_address);
+ if (!retval) {
+ hvcsd->connected = 1;
+ return 0;
+ } else if (retval != -EINVAL)
+ return retval;
+
+ /*
+ * As per the spec re-get the pi and try again if -EINVAL after the
+ * first connection attempt.
+ */
+ if (hvcs_get_pi(hvcsd))
+ return -ENOMEM;
+
+ if (!hvcs_has_pi(hvcsd))
+ return -ENODEV;
+
+ retval = hvcs_register_connection(unit_address,
+ hvcsd->p_partition_ID,
+ hvcsd->p_unit_address);
+ if (retval != -EINVAL) {
+ hvcsd->connected = 1;
+ return retval;
+ }
+
+ /*
+ * EBUSY is the most likely scenario though the vty could have been
+ * removed or there really could be an hcall error due to the parameter
+ * data but thanks to ambiguous firmware return codes we can't really
+ * tell.
+ */
+ printk(KERN_INFO "HVCS: vty-server or partner"
+ " vty is busy. Try again later.\n");
+ return -EBUSY;
+}
+
+/* This function must be called with the hvcsd->lock held */
+static void hvcs_partner_free(struct hvcs_struct *hvcsd)
+{
+ int retval;
+ do {
+ retval = hvcs_free_connection(hvcsd->vdev->unit_address);
+ } while (retval == -EBUSY);
+ hvcsd->connected = 0;
+}
+
+/* This helper function must be called WITHOUT the hvcsd->lock held */
+static int hvcs_enable_device(struct hvcs_struct *hvcsd, uint32_t unit_address,
+ unsigned int irq, struct vio_dev *vdev)
+{
+ unsigned long flags;
+
+ /*
+ * It is possible that the vty-server was removed between the time that
+ * the conn was registered and now.
+ */
+ if (!request_irq(irq, &hvcs_handle_interrupt,
+ SA_INTERRUPT, "ibmhvcs", hvcsd)) {
+ /*
+ * It is possible the vty-server was removed after the irq was
+ * requested but before we have time to enable interrupts.
+ */
+ if (vio_enable_interrupts(vdev) == H_Success)
+ return 0;
+ else {
+ printk(KERN_ERR "HVCS: int enable failed for"
+ " vty-server@%X.\n", unit_address);
+ free_irq(irq, hvcsd);
+ }
+ } else
+ printk(KERN_ERR "HVCS: irq req failed for"
+ " vty-server@%X.\n", unit_address);
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcs_partner_free(hvcsd);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ return -ENODEV;
+
+}
+
+/*
+ * This always increments the kobject ref count if the call is successful.
+ * Please remember to dec when you are done with the instance.
+ *
+ * NOTICE: Do NOT hold either the hvcs_struct.lock or hvcs_structs_lock when
+ * calling this function or you will get deadlock.
+ */
+struct hvcs_struct *hvcs_get_by_index(int index)
+{
+ struct hvcs_struct *hvcsd = NULL;
+ struct list_head *element;
+ struct list_head *safe_temp;
+ unsigned long flags;
+ unsigned long structs_flags;
+
+ spin_lock_irqsave(&hvcs_structs_lock, structs_flags);
+ /* We can immediately discard OOB requests */
+ if (index >= 0 && index < HVCS_MAX_SERVER_ADAPTERS) {
+ list_for_each_safe(element, safe_temp, &hvcs_structs) {
+ hvcsd = list_entry(element, struct hvcs_struct, next);
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (hvcsd->index == index) {
+ kobject_get(&hvcsd->kobj);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ spin_unlock_irqrestore(&hvcs_structs_lock,
+ structs_flags);
+ return hvcsd;
+ }
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+ hvcsd = NULL;
+ }
+
+ spin_unlock_irqrestore(&hvcs_structs_lock, structs_flags);
+ return hvcsd;
+}
+
+/*
+ * This is invoked via the tty_open interface when a user app connects to the
+ * /dev node.
+ */
+static int hvcs_open(struct tty_struct *tty, struct file *filp)
+{
+ struct hvcs_struct *hvcsd = NULL;
+ int retval = 0;
+ unsigned long flags;
+ unsigned int irq;
+ struct vio_dev *vdev;
+ unsigned long unit_address;
+
+ if (tty->driver_data)
+ goto fast_open;
+
+ /*
+ * Is there a vty-server that shares the same index?
+ * This function increments the kobject reference count.
+ */
+ if (!(hvcsd = hvcs_get_by_index(tty->index))) {
+ printk(KERN_WARNING "HVCS: open failed, no index.\n");
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ if (hvcsd->connected == 0)
+ if ((retval = hvcs_partner_connect(hvcsd)))
+ goto error_release;
+
+ hvcsd->open_count = 1;
+ hvcsd->tty = tty;
+ tty->driver_data = hvcsd;
+
+ /*
+ * Set this driver to low latency so that we actually have a chance at
+ * catching a throttled TTY after we flip_buffer_push. Otherwise the
+ * flush_to_async may not execute until after the kernel_thread has
+ * yielded and resumed the next flip_buffer_push resulting in data
+ * loss.
+ */
+ tty->low_latency = 1;
+
+ memset(&hvcsd->buffer[0], 0x3F, HVCS_BUFF_LEN);
+
+ /*
+ * Save these in the spinlock for the enable operations that need them
+ * outside of the spinlock.
+ */
+ irq = hvcsd->vdev->irq;
+ vdev = hvcsd->vdev;
+ unit_address = hvcsd->vdev->unit_address;
+
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * This must be done outside of the spinlock because it requests irqs
+ * and will grab the spinlock and free the connection if it fails.
+ */
+ if ((hvcs_enable_device(hvcsd, unit_address, irq, vdev))) {
+ kobject_put(&hvcsd->kobj);
+ printk(KERN_WARNING "HVCS: enable device failed.\n");
+ return -ENODEV;
+ }
+
+ goto open_success;
+
+fast_open:
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (!kobject_get(&hvcsd->kobj)) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_ERR "HVCS: Kobject of open"
+ " hvcs doesn't exist.\n");
+ return -EFAULT; /* Is this the right return value? */
+ }
+
+ hvcsd->open_count++;
+
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+open_success:
+ hvcs_kick();
+
+ printk(KERN_INFO "HVCS: vty-server@%X opened.\n",
+ hvcsd->vdev->unit_address );
+
+ return 0;
+
+error_release:
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ kobject_put(&hvcsd->kobj);
+
+ printk(KERN_WARNING "HVCS: HVCS partner connect failed.\n");
+ return retval;
+}
+
+static void hvcs_close(struct tty_struct *tty, struct file *filp)
+{
+ struct hvcs_struct *hvcsd;
+ unsigned long flags;
+ struct kobject *kobjp;
+
+ /*
+ * Is someone trying to close the file associated with this device after
+ * we have hung up? If so tty->driver_data wouldn't be valid.
+ */
+ if (tty_hung_up_p(filp))
+ return;
+
+ /*
+ * No driver_data means that this close was probably issued after a
+ * failed hvcs_open by the tty layer's release_dev() api and we can just
+ * exit cleanly.
+ */
+ if (!tty->driver_data)
+ return;
+
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (--hvcsd->open_count == 0) {
+
+ /*
+ * This line is important because it tells hvcs_open that this
+ * device needs to be re-configured the next time hvcs_open is
+ * called.
+ */
+ hvcsd->tty->driver_data = NULL;
+
+ /*
+ * NULL this early so that the kernel_thread doesn't try to
+ * execute any operations on the TTY even though it is obligated
+ * to deliver any pending I/O to the hypervisor.
+ */
+ hvcsd->tty = NULL;
+
+ /*
+ * Block the close until all the buffered data has been
+ * delivered.
+ */
+ while (hvcsd->chars_in_buffer) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * Give the kernel thread the hvcs_struct so that it can
+ * try to deliver the remaining data but block the close
+ * operation by spinning in this function so that other
+ * tty operations have to wait.
+ */
+ yield();
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ }
+
+ hvcs_final_close(hvcsd);
+
+ } else if (hvcsd->open_count < 0) {
+ printk(KERN_ERR "HVCS: vty-server@%X open_count: %d"
+ " is mismanaged.\n",
+ hvcsd->vdev->unit_address, hvcsd->open_count);
+ }
+ kobjp = &hvcsd->kobj;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ kobject_put(kobjp);
+}
+
+static void hvcs_hangup(struct tty_struct * tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+ int temp_open_count;
+ struct kobject *kobjp;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ /* Preserve this so that we know how many kobject refs to put */
+ temp_open_count = hvcsd->open_count;
+
+ /*
+ * Don't kobject put inside the spinlock because the destruction
+ * callback may use the spinlock and it may get called before the
+ * spinlock has been released. Get a pointer to the kobject and
+ * kobject_put on that instead.
+ */
+ kobjp = &hvcsd->kobj;
+
+ /* Calling this will drop any buffered data on the floor. */
+ hvcs_final_close(hvcsd);
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * We need to kobject_put() for every open_count we have since the
+ * tty_hangup() function doesn't invoke a close per open connection on a
+ * non-console device.
+ */
+ while (temp_open_count) {
+ --temp_open_count;
+ /*
+ * The final put will trigger destruction of the hvcs_struct.
+ * NOTE: If this hangup was signaled from user space then the
+ * final put will never happen.
+ */
+ kobject_put(kobjp);
+ }
+}
+
+/*
+ * NOTE: This is almost always from_user since user level apps interact with the
+ * /dev nodes. I'm trusting that if hvcs_write gets called and interrupted by
+ * hvcs_remove (which removes the target device and executes tty_hangup()) that
+ * tty_hangup will allow hvcs_write time to complete execution before it
+ * terminates our device.
+ */
+static int hvcs_write(struct tty_struct *tty, int from_user,
+ const unsigned char *buf, int count)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned int unit_address;
+ unsigned char *charbuf;
+ unsigned long flags;
+ int total_sent = 0;
+ int tosend = 0;
+ int result = 0;
+
+ /*
+ * If they don't check the return code off of their open they may
+ * attempt this even if there is no connected device.
+ */
+ if (!hvcsd)
+ return -ENODEV;
+
+ /* Reasonable size to prevent user level flooding */
+ if (count > HVCS_MAX_FROM_USER) {
+ printk(KERN_WARNING "HVCS write: count being truncated to"
+ " HVCS_MAX_FROM_USER.\n");
+ count = HVCS_MAX_FROM_USER;
+ }
+
+ if (!from_user)
+ charbuf = (unsigned char *)buf;
+ else {
+ charbuf = kmalloc(count, GFP_KERNEL);
+ if (!charbuf) {
+ printk(KERN_WARNING "HVCS: write -ENOMEM.\n");
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(charbuf, buf, count)) {
+ kfree(charbuf);
+ printk(KERN_WARNING "HVCS: write -EFAULT.\n");
+ return -EFAULT;
+ }
+ }
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ /*
+ * Somehow an open succeeded but the device was removed or the
+ * connection terminated between the vty-server and partner vty during
+ * the middle of a write operation? This is a crummy place to do this
+ * but we want to keep it all in the spinlock.
+ */
+ if (hvcsd->open_count <= 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ if (from_user)
+ kfree(charbuf);
+ return -ENODEV;
+ }
+
+ unit_address = hvcsd->vdev->unit_address;
+
+ while (count > 0) {
+ tosend = min(count, (HVCS_BUFF_LEN - hvcsd->chars_in_buffer));
+ /*
+ * No more space, this probably means that the last call to
+ * hvcs_write() didn't succeed and the buffer was filled up.
+ */
+ if (!tosend)
+ break;
+
+ memcpy(&hvcsd->buffer[hvcsd->chars_in_buffer],
+ &charbuf[total_sent],
+ tosend);
+
+ hvcsd->chars_in_buffer += tosend;
+
+ result = 0;
+
+ /*
+ * If this is true then we don't want to try writing to the
+ * hypervisor because that is the kernel_thread's job now. We'll
+ * just add to the buffer.
+ */
+ if (!(hvcsd->todo_mask & HVCS_TRY_WRITE))
+ /* won't send partial writes */
+ result = hvc_put_chars(unit_address,
+ &hvcsd->buffer[0],
+ hvcsd->chars_in_buffer);
+
+ /*
+ * Since we know we have enough room in hvcsd->buffer for
+ * tosend we record that it was sent regardless of whether the
+ * hypervisor actually took it because we have it buffered.
+ */
+ total_sent += tosend;
+ count -= tosend;
+ if (result == 0) {
+ hvcsd->todo_mask |= HVCS_TRY_WRITE;
+ hvcs_kick();
+ break;
+ }
+
+ hvcsd->chars_in_buffer = 0;
+ /*
+ * Test after the chars_in_buffer reset otherwise this could
+ * deadlock our writes if hvc_put_chars fails.
+ */
+ if (result < 0)
+ break;
+ }
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ if (from_user)
+ kfree(charbuf);
+
+ if (result == -1)
+ return -EIO;
+ else
+ return total_sent;
+}
+
+/*
+ * This is really asking how much we can guarantee that we can send or that we
+ * absolutely WILL BUFFER if we can't send it. This driver MUST honor the
+ * return value, hence the reason for hvcs_struct buffering.
+ */
+static int hvcs_write_room(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+ int retval;
+
+ if (!hvcsd || hvcsd->open_count <= 0)
+ return 0;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = HVCS_BUFF_LEN - hvcsd->chars_in_buffer;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+
+static int hvcs_chars_in_buffer(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = hvcsd->chars_in_buffer;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+
+static struct tty_operations hvcs_ops = {
+ .open = hvcs_open,
+ .close = hvcs_close,
+ .hangup = hvcs_hangup,
+ .write = hvcs_write,
+ .write_room = hvcs_write_room,
+ .chars_in_buffer = hvcs_chars_in_buffer,
+ .unthrottle = hvcs_unthrottle,
+ .throttle = hvcs_throttle,
+};
+
+static int __init hvcs_module_init(void)
+{
+ int rc;
+ int num_ttys_to_alloc;
+
+ printk(KERN_INFO "Initializing %s\n", hvcs_driver_string);
+
+ /* Has the user specified an overload with an insmod param? */
+ if (hvcs_parm_num_devs <= 0 ||
+ (hvcs_parm_num_devs > HVCS_MAX_SERVER_ADAPTERS)) {
+ num_ttys_to_alloc = HVCS_DEFAULT_SERVER_ADAPTERS;
+ } else
+ num_ttys_to_alloc = hvcs_parm_num_devs;
+
+ hvcs_tty_driver = alloc_tty_driver(num_ttys_to_alloc);
+ if (!hvcs_tty_driver)
+ return -ENOMEM;
+
+ hvcs_tty_driver->owner = THIS_MODULE;
+
+ hvcs_tty_driver->driver_name = hvcs_driver_name;
+ hvcs_tty_driver->name = hvcs_device_node;
+
+ /*
+ * We'll let the system assign us a major number, indicated by leaving
+ * it blank.
+ */
+
+ hvcs_tty_driver->minor_start = HVCS_MINOR_START;
+ hvcs_tty_driver->type = TTY_DRIVER_TYPE_SYSTEM;
+
+ /*
+ * We roll our own so that we DON'T ECHO. We can't echo because the
+ * device we are connecting to already echoes by default and this would
+ * throw us into a horrible recursive echo-echo-echo loop.
+ */
+ hvcs_tty_driver->init_termios = hvcs_tty_termios;
+ hvcs_tty_driver->flags = TTY_DRIVER_REAL_RAW;
+
+ tty_set_operations(hvcs_tty_driver, &hvcs_ops);
+
+ /*
+ * The following call will result in sysfs entries that denote the
+ * dynamically assigned major and minor numbers for our devices.
+ */
+ if (tty_register_driver(hvcs_tty_driver)) {
+ printk(KERN_ERR "HVCS: registration"
+ " as a tty driver failed.\n");
+ put_tty_driver(hvcs_tty_driver);
+ return -EIO;
+ }
+
+ hvcs_structs_lock = SPIN_LOCK_UNLOCKED;
+
+ hvcs_pi_lock = SPIN_LOCK_UNLOCKED;
+ hvcs_pi_buff = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!hvcs_pi_buff) {
+ put_tty_driver(hvcs_tty_driver);
+ return -ENOMEM;
+ }
+
+ hvcs_task = kthread_run(khvcsd, NULL, "khvcsd");
+ if (IS_ERR(hvcs_task)) {
+ printk(KERN_ERR "HVCS: khvcsd creation failed. Driver not loaded.\n");
+ kfree(hvcs_pi_buff);
+ put_tty_driver(hvcs_tty_driver);
+ return -EIO;
+ }
+
+ rc = vio_register_driver(&hvcs_vio_driver);
+
+ /*
+ * This needs to be done AFTER the vio_register_driver() call or else
+ * the kobjects won't be initialized properly.
+ */
+ hvcs_create_driver_attrs();
+
+ printk(KERN_INFO "HVCS: driver module inserted.\n");
+
+ return rc;
+}
+
+static void __exit hvcs_module_exit(void)
+{
+ unsigned long flags;
+
+ /*
+ * This driver receives hvcs_remove callbacks for each device upon
+ * module removal.
+ */
+
+ /*
+ * This synchronous operation will wake the khvcsd kthread if it is
+ * asleep and will return when khvcsd has terminated.
+ */
+ kthread_stop(hvcs_task);
+
+ spin_lock_irqsave(&hvcs_pi_lock, flags);
+ kfree(hvcs_pi_buff);
+ hvcs_pi_buff = NULL;
+ spin_unlock_irqrestore(&hvcs_pi_lock, flags);
+
+ hvcs_remove_driver_attrs();
+
+ vio_unregister_driver(&hvcs_vio_driver);
+
+ tty_unregister_driver(hvcs_tty_driver);
+
+ put_tty_driver(hvcs_tty_driver);
+
+ printk(KERN_INFO "HVCS: driver module removed.\n");
+}
+
+module_init(hvcs_module_init);
+module_exit(hvcs_module_exit);
+
+static inline struct hvcs_struct *from_vio_dev(struct vio_dev *viod)
+{
+ return viod->dev.driver_data;
+}
+/* The sysfs interface for the driver and devices */
+
+static ssize_t hvcs_partner_vtys_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%X\n", hvcsd->p_unit_address);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(partner_vtys, S_IRUGO, hvcs_partner_vtys_show, NULL);
+
+static ssize_t hvcs_partner_clcs_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%s\n", &hvcsd->p_location_code[0]);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(partner_clcs, S_IRUGO, hvcs_partner_clcs_show, NULL);
+
+static ssize_t hvcs_current_vty_store(struct device *dev, const char * buf,
+ size_t count)
+{
+ /*
+ * Don't need this feature at the present time because firmware doesn't
+ * yet support multiple partners.
+ */
+ printk(KERN_INFO "HVCS: Denied current_vty change: -EPERM.\n");
+ return -EPERM;
+}
+
+static ssize_t hvcs_current_vty_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%s\n", &hvcsd->p_location_code[0]);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+
+static DEVICE_ATTR(current_vty,
+ S_IRUGO | S_IWUSR, hvcs_current_vty_show, hvcs_current_vty_store);
+
+static ssize_t hvcs_vterm_state_store(struct device *dev, const char *buf,
+ size_t count)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+
+ /* writing a '0' to this sysfs entry will result in the disconnect. */
+ if (simple_strtol(buf, NULL, 0) != 0)
+ return -EINVAL;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ if (hvcsd->open_count > 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_INFO "HVCS: vterm state unchanged. "
+ "The hvcs device node is still in use.\n");
+ return -EPERM;
+ }
+
+ if (hvcsd->connected == 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_INFO "HVCS: vterm state unchanged. The"
+ " vty-server is not connected to a vty.\n");
+ return -EPERM;
+ }
+
+ hvcs_partner_free(hvcsd);
+ printk(KERN_INFO "HVCS: Closed vty-server@%X and"
+ " partner vty@%X:%d connection.\n",
+ hvcsd->vdev->unit_address,
+ hvcsd->p_unit_address,
+ (unsigned int)hvcsd->p_partition_ID);
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return count;
+}
+
+static ssize_t hvcs_vterm_state_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%d\n", hvcsd->connected);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(vterm_state, S_IRUGO | S_IWUSR,
+ hvcs_vterm_state_show, hvcs_vterm_state_store);
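+
+/*
+ * For illustration only: user space can force a vty-server disconnect
+ * through this attribute. The exact sysfs path depends on the vio bus
+ * layout and the vty-server's unit address (the address below is
+ * hypothetical):
+ *
+ *	echo 0 > /sys/bus/vio/devices/30000003/vterm_state
+ *	cat /sys/bus/vio/devices/30000003/vterm_state
+ */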
+
+static struct attribute *hvcs_attrs[] = {
+ &dev_attr_partner_vtys.attr,
+ &dev_attr_partner_clcs.attr,
+ &dev_attr_current_vty.attr,
+ &dev_attr_vterm_state.attr,
+ NULL,
+};
+
+static struct attribute_group hvcs_attr_group = {
+ .attrs = hvcs_attrs,
+};
+
+static void hvcs_create_device_attrs(struct hvcs_struct *hvcsd)
+{
+ struct vio_dev *vdev = hvcsd->vdev;
+ sysfs_create_group(&vdev->dev.kobj, &hvcs_attr_group);
+}
+
+static void hvcs_remove_device_attrs(struct vio_dev *vdev)
+{
+ sysfs_remove_group(&vdev->dev.kobj, &hvcs_attr_group);
+}
+
+static ssize_t hvcs_rescan_show(struct device_driver *ddp, char *buf)
+{
+ /* A 1 means it is updating, a 0 means it is done updating */
+ return snprintf(buf, PAGE_SIZE, "%d\n", hvcs_rescan_status);
+}
+
+static ssize_t hvcs_rescan_store(struct device_driver *ddp, const char * buf,
+ size_t count)
+{
+ if ((simple_strtol(buf, NULL, 0) != 1)
+ || (hvcs_rescan_status != 0))
+ return -EINVAL;
+
+ hvcs_rescan_status = 1;
+ printk(KERN_INFO "HVCS: rescanning partner info for all"
+ " vty-servers.\n");
+ hvcs_rescan_devices_list();
+ hvcs_rescan_status = 0;
+ return count;
+}
+static DRIVER_ATTR(rescan,
+ S_IRUGO | S_IWUSR, hvcs_rescan_show, hvcs_rescan_store);
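+
+/*
+ * A sketch of the intended usage (the driver sysfs path is assumed, not
+ * derived from this file): writing a 1 rescans partner info for every
+ * vty-server, and reading back reports whether a rescan is in progress:
+ *
+ *	echo 1 > /sys/bus/vio/drivers/hvcs/rescan
+ *	cat /sys/bus/vio/drivers/hvcs/rescan
+ */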
+
+static void hvcs_create_driver_attrs(void)
+{
+ struct device_driver *driverfs = &(hvcs_vio_driver.driver);
+ driver_create_file(driverfs, &driver_attr_rescan);
+}
+
+static void hvcs_remove_driver_attrs(void)
+{
+ struct device_driver *driverfs = &(hvcs_vio_driver.driver);
+ driver_remove_file(driverfs, &driver_attr_rescan);
+}
--- /dev/null
+/*
+ * Driver for the SGS-Thomson M48T35 Timekeeper RAM chip
+ *
+ * Real Time Clock interface for Linux
+ *
+ * TODO: Implement periodic interrupts.
+ *
+ * Copyright (C) 2000 Silicon Graphics, Inc.
+ * Written by Ulf Carlsson (ulfc@engr.sgi.com)
+ *
+ * Based on code written by Paul Gortmaker.
+ *
+ * This driver allows use of the real time clock (built into
+ * nearly all computers) from user space. It exports the /dev/rtc
+ * interface supporting various ioctl()s and also the /proc/driver/rtc
+ * pseudo-file for status information.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ */
+
+#define RTC_VERSION "1.09b"
+
+#include <linux/bcd.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <linux/ioport.h>
+#include <linux/fcntl.h>
+#include <linux/rtc.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/proc_fs.h>
+#include <linux/smp_lock.h>
+
+#include <asm/m48t35.h>
+#include <asm/sn/ioc3.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn0/ip27.h>
+#include <asm/sn/sn0/hub.h>
+#include <asm/sn/sn_private.h>
+
+static int rtc_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg);
+
+static int rtc_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data);
+
+static void get_rtc_time(struct rtc_time *rtc_tm);
+
+/*
+ * Bits in rtc_status. (6 bits of room for future expansion)
+ */
+
+#define RTC_IS_OPEN 0x01 /* means /dev/rtc is in use */
+#define RTC_TIMER_ON 0x02 /* missed irq timer active */
+
+static unsigned char rtc_status; /* bitmapped status byte. */
+static unsigned long rtc_freq; /* Current periodic IRQ rate */
+static struct m48t35_rtc *rtc;
+
+/*
+ * If this driver ever becomes modularised, it will be really nice
+ * to make the epoch retain its value across module reload...
+ */
+
+static unsigned long epoch = 1970; /* year corresponding to 0x00 */
+
+static const unsigned char days_in_mo[] =
+{0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
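+
+/*
+ * The M48T35 stores every time field in BCD, one decimal digit per
+ * nibble. For example, 59 seconds is stored as 0x59 rather than 0x3B:
+ * BIN2BCD(59) = ((59 / 10) << 4) + (59 % 10) = 0x59, and BCD2BIN
+ * reverses the conversion.
+ */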
+
+static int rtc_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+
+ struct rtc_time wtime;
+
+ switch (cmd) {
+ case RTC_RD_TIME: /* Read the time/date from RTC */
+ {
+ get_rtc_time(&wtime);
+ break;
+ }
+ case RTC_SET_TIME: /* Set the RTC */
+ {
+ struct rtc_time rtc_tm;
+ unsigned char mon, day, hrs, min, sec, leap_yr;
+ unsigned int yrs;
+
+ if (!capable(CAP_SYS_TIME))
+ return -EACCES;
+
+ if (copy_from_user(&rtc_tm, (struct rtc_time*)arg,
+ sizeof(struct rtc_time)))
+ return -EFAULT;
+
+ yrs = rtc_tm.tm_year + 1900;
+ mon = rtc_tm.tm_mon + 1; /* tm_mon starts at zero */
+ day = rtc_tm.tm_mday;
+ hrs = rtc_tm.tm_hour;
+ min = rtc_tm.tm_min;
+ sec = rtc_tm.tm_sec;
+
+ if (yrs < 1970)
+ return -EINVAL;
+
+ leap_yr = ((!(yrs % 4) && (yrs % 100)) || !(yrs % 400));
+
+ if ((mon > 12) || (day == 0))
+ return -EINVAL;
+
+ if (day > (days_in_mo[mon] + ((mon == 2) && leap_yr)))
+ return -EINVAL;
+
+ if ((hrs >= 24) || (min >= 60) || (sec >= 60))
+ return -EINVAL;
+
+ if ((yrs -= epoch) > 255) /* They are unsigned */
+ return -EINVAL;
+
+ if (yrs > 169)
+ return -EINVAL;
+
+ if (yrs >= 100)
+ yrs -= 100;
+
+ sec = BIN2BCD(sec);
+ min = BIN2BCD(min);
+ hrs = BIN2BCD(hrs);
+ day = BIN2BCD(day);
+ mon = BIN2BCD(mon);
+ yrs = BIN2BCD(yrs);
+
+ spin_lock_irq(&rtc_lock);
+ rtc->control |= M48T35_RTC_SET;
+ rtc->year = yrs;
+ rtc->month = mon;
+ rtc->date = day;
+ rtc->hour = hrs;
+ rtc->min = min;
+ rtc->sec = sec;
+ rtc->control &= ~M48T35_RTC_SET;
+ spin_unlock_irq(&rtc_lock);
+
+ return 0;
+ }
+ default:
+ return -EINVAL;
+ }
+ return copy_to_user((void *)arg, &wtime, sizeof wtime) ? -EFAULT : 0;
+}
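+
+/*
+ * A minimal user-space sketch of the ioctl interface above (error
+ * handling omitted; /dev/rtc is the misc device registered below):
+ *
+ *	struct rtc_time tm;
+ *	int fd = open("/dev/rtc", O_RDONLY);
+ *	ioctl(fd, RTC_RD_TIME, &tm);
+ *	printf("%02d:%02d:%02d\n", tm.tm_hour, tm.tm_min, tm.tm_sec);
+ *	close(fd);
+ */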
+
+/*
+ * We enforce only one user at a time here with the open/close.
+ * Also clear the previous interrupt data on an open, and clean
+ * up things on a close.
+ */
+
+static int rtc_open(struct inode *inode, struct file *file)
+{
+ spin_lock_irq(&rtc_lock);
+
+ if (rtc_status & RTC_IS_OPEN) {
+ spin_unlock_irq(&rtc_lock);
+ return -EBUSY;
+ }
+
+ rtc_status |= RTC_IS_OPEN;
+ spin_unlock_irq(&rtc_lock);
+
+ return 0;
+}
+
+static int rtc_release(struct inode *inode, struct file *file)
+{
+ /*
+ * Turn off all interrupts once the device is no longer
+ * in use, and clear the data.
+ */
+
+ spin_lock_irq(&rtc_lock);
+ rtc_status &= ~RTC_IS_OPEN;
+ spin_unlock_irq(&rtc_lock);
+
+ return 0;
+}
+
+/*
+ * The various file operations we support.
+ */
+
+static struct file_operations rtc_fops = {
+ .owner = THIS_MODULE,
+ .ioctl = rtc_ioctl,
+ .open = rtc_open,
+ .release = rtc_release,
+};
+
+static struct miscdevice rtc_dev=
+{
+ RTC_MINOR,
+ "rtc",
+ &rtc_fops
+};
+
+static int __init rtc_init(void)
+{
+ rtc = (struct m48t35_rtc *)
+ (KL_CONFIG_CH_CONS_INFO(master_nasid)->memory_base + IOC3_BYTEBUS_DEV0);
+
+ printk(KERN_INFO "Real Time Clock Driver v%s\n", RTC_VERSION);
+ if (misc_register(&rtc_dev)) {
+ printk(KERN_ERR "rtc: cannot register misc device.\n");
+ return -ENODEV;
+ }
+ if (!create_proc_read_entry("driver/rtc", 0, NULL, rtc_read_proc, NULL)) {
+ printk(KERN_ERR "rtc: cannot create /proc/rtc.\n");
+ misc_deregister(&rtc_dev);
+ return -ENOENT;
+ }
+
+ rtc_freq = 1024;
+
+ return 0;
+}
+
+static void __exit rtc_exit (void)
+{
+ /* interrupts and timer disabled at this point by rtc_release */
+
+ remove_proc_entry("driver/rtc", NULL);
+ misc_deregister(&rtc_dev);
+}
+
+module_init(rtc_init);
+module_exit(rtc_exit);
+
+/*
+ * Info exported via "/proc/rtc".
+ */
+
+static int rtc_get_status(char *buf)
+{
+ char *p;
+ struct rtc_time tm;
+
+ /*
+ * Just emulate the standard /proc/rtc
+ */
+
+ p = buf;
+
+ get_rtc_time(&tm);
+
+ /*
+ * There is no way to tell if the user has the RTC set for local
+ * time or for Coordinated Universal Time (UTC). Probably local though.
+ */
+ p += sprintf(p,
+ "rtc_time\t: %02d:%02d:%02d\n"
+ "rtc_date\t: %04d-%02d-%02d\n"
+ "rtc_epoch\t: %04lu\n"
+ "24hr\t\t: yes\n",
+ tm.tm_hour, tm.tm_min, tm.tm_sec,
+ tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, epoch);
+
+ return p - buf;
+}
+
+static int rtc_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+{
+ int len = rtc_get_status(page);
+
+ if (len <= off + count)
+ *eof = 1;
+ *start = page + off;
+ len -= off;
+ if (len > count)
+ len = count;
+ if (len < 0)
+ len = 0;
+ return len;
+}
+
+static void get_rtc_time(struct rtc_time *rtc_tm)
+{
+ /*
+ * Do we need to wait for the last update to finish?
+ */
+
+ /*
+ * Only the values that we read from the RTC are set. We leave
+ * tm_wday, tm_yday and tm_isdst untouched. Even though the
+ * RTC has RTC_DAY_OF_WEEK, we ignore it, as it is only updated
+ * by the RTC when initially set to a non-zero value.
+ */
+ spin_lock_irq(&rtc_lock);
+ rtc->control |= M48T35_RTC_READ;
+ rtc_tm->tm_sec = rtc->sec;
+ rtc_tm->tm_min = rtc->min;
+ rtc_tm->tm_hour = rtc->hour;
+ rtc_tm->tm_mday = rtc->date;
+ rtc_tm->tm_mon = rtc->month;
+ rtc_tm->tm_year = rtc->year;
+ rtc->control &= ~M48T35_RTC_READ;
+ spin_unlock_irq(&rtc_lock);
+
+ rtc_tm->tm_sec = BCD2BIN(rtc_tm->tm_sec);
+ rtc_tm->tm_min = BCD2BIN(rtc_tm->tm_min);
+ rtc_tm->tm_hour = BCD2BIN(rtc_tm->tm_hour);
+ rtc_tm->tm_mday = BCD2BIN(rtc_tm->tm_mday);
+ rtc_tm->tm_mon = BCD2BIN(rtc_tm->tm_mon);
+ rtc_tm->tm_year = BCD2BIN(rtc_tm->tm_year);
+
+ /*
+ * Account for differences between how the RTC uses the values
+ * and how they are defined in a struct rtc_time;
+ */
+ if ((rtc_tm->tm_year += (epoch - 1900)) <= 69)
+ rtc_tm->tm_year += 100;
+
+ rtc_tm->tm_mon--;
+}
--- /dev/null
+/*
+ * LED, LCD and Button panel driver for Cobalt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1996, 1997 by Andrew Bose
+ *
+ * Linux kernel version history:
+ * March 2001: Ported from 2.0.34 by Liam Davies
+ *
+ */
+
+// function prototypes
+
+static int dqpoll(volatile unsigned long, volatile unsigned char);
+static int timeout(volatile unsigned long);
+
+#define LCD_CHARS_PER_LINE 40
+#define FLASH_SIZE 524288
+#define MAX_IDLE_TIME 120
+
+struct lcd_display {
+ unsigned long buttons;
+ int size1;
+ int size2;
+ unsigned char line1[LCD_CHARS_PER_LINE];
+ unsigned char line2[LCD_CHARS_PER_LINE];
+ unsigned char cursor_address;
+ unsigned char character;
+ unsigned char leds;
+ unsigned char *RomImage;
+};
+
+
+
+#define LCD_DRIVER "Cobalt LCD Driver v2.10"
+
+#define kLCD_IR 0x0F000000
+#define kLCD_DR 0x0F000010
+#define kGPI 0x0D000000
+#define kLED 0x0C000000
+
+#define kDD_R00 0x00
+#define kDD_R01 0x27
+#define kDD_R10 0x40
+#define kDD_R11 0x67
+
+#define kLCD_Addr 0x00000080
+
+#define LCDTimeoutValue 0xfff
+
+
+// Flash definitions AMD 29F040
+#define kFlashBase 0x0FC00000
+
+#define kFlash_Addr1 0x5555
+#define kFlash_Addr2 0x2AAA
+#define kFlash_Data1 0xAA
+#define kFlash_Data2 0x55
+#define kFlash_Prog 0xA0
+#define kFlash_Erase3 0x80
+#define kFlash_Erase6 0x10
+#define kFlash_Read 0xF0
+
+#define kFlash_ID 0x90
+#define kFlash_VenAddr 0x00
+#define kFlash_DevAddr 0x01
+#define kFlash_VenID 0x01
+#define kFlash_DevID 0xA4 // 29F040
+//#define kFlash_DevID 0xAD // 29F016
+
+
+// Macros
+
+#define LCDWriteData(x) outl((x << 24), kLCD_DR)
+#define LCDWriteInst(x) outl((x << 24), kLCD_IR)
+
+#define LCDReadData (inl(kLCD_DR) >> 24)
+#define LCDReadInst (inl(kLCD_IR) >> 24)
+
+#define GPIRead (inl(kGPI) >> 24)
+
+#define LEDSet(x) outb((char)x, kLED)
+
+#define WRITE_GAL(x,y) outl(y, 0x04000000 | (x))
+#define BusyCheck() while ((LCDReadInst & 0x80) == 0x80)
+
+#define WRITE_FLASH(x,y) outb((char)y, kFlashBase | (x))
+#define READ_FLASH(x) (inb(kFlashBase | (x)))
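+
+// A sketch of how the macros above combine for the AMD 29F040 byte
+// program command (the standard JEDEC unlock sequence; addr and byte
+// are placeholders):
+//
+//	WRITE_FLASH(kFlash_Addr1, kFlash_Data1);	// 0x5555 <- 0xAA
+//	WRITE_FLASH(kFlash_Addr2, kFlash_Data2);	// 0x2AAA <- 0x55
+//	WRITE_FLASH(kFlash_Addr1, kFlash_Prog);		// 0x5555 <- 0xA0
+//	WRITE_FLASH(addr, byte);			// program one byte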
+
+
+
+/*
+ * Function command codes for io_ctl.
+ */
+#define LCD_On 1
+#define LCD_Off 2
+#define LCD_Clear 3
+#define LCD_Reset 4
+#define LCD_Cursor_Left 5
+#define LCD_Cursor_Right 6
+#define LCD_Disp_Left 7
+#define LCD_Disp_Right 8
+#define LCD_Get_Cursor 9
+#define LCD_Set_Cursor 10
+#define LCD_Home 11
+#define LCD_Read 12
+#define LCD_Write 13
+#define LCD_Cursor_Off 14
+#define LCD_Cursor_On 15
+#define LCD_Get_Cursor_Pos 16
+#define LCD_Set_Cursor_Pos 17
+#define LCD_Blink_Off 18
+
+#define LED_Set 40
+#define LED_Bit_Set 41
+#define LED_Bit_Clear 42
+
+
+// Button defs
+#define BUTTON_Read 50
+
+// Flash command codes
+#define FLASH_Erase 60
+#define FLASH_Burn 61
+#define FLASH_Read 62
+
+
+// Ethernet LINK check hackaroo
+#define LINK_Check 90
+#define LINK_Check_2 91
+
+// Button patterns _B - single layer lcd boards
+
+#define BUTTON_NONE 0x3F
+#define BUTTON_NONE_B 0xFE
+
+#define BUTTON_Left 0x3B
+#define BUTTON_Left_B 0xFA
+
+#define BUTTON_Right 0x37
+#define BUTTON_Right_B 0xDE
+
+#define BUTTON_Up 0x2F
+#define BUTTON_Up_B 0xF6
+
+#define BUTTON_Down 0x1F
+#define BUTTON_Down_B 0xEE
+
+#define BUTTON_Next 0x3D
+#define BUTTON_Next_B 0x7E
+
+#define BUTTON_Enter 0x3E
+#define BUTTON_Enter_B 0xBE
+
+#define BUTTON_Reset_B 0xFC
+
+
+// debounce constants
+
+#define BUTTON_SENSE 160000
+#define BUTTON_DEBOUNCE 5000
+
+
+// Galileo register stuff
+
+#define kGal_DevBank2Cfg 0x1466DB33
+#define kGal_DevBank2PReg 0x464
+#define kGal_DevBank3Cfg 0x146FDFFB
+#define kGal_DevBank3PReg 0x468
+
+// Network
+
+#define kIPADDR 1
+#define kNETMASK 2
+#define kGATEWAY 3
+#define kDNS 4
+
+#define kClassA 5
+#define kClassB 6
+#define kClassC 7
+
--- /dev/null
+/*
+ * drivers/watchdog/ixp2000_wdt.c
+ *
+ * Watchdog driver for Intel IXP2000 network processors
+ *
+ * Adapted from the IXP4xx watchdog driver by Lennert Buytenhek.
+ * The original version carries these notices:
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (c) 2004 MontaVista Software, Inc.
+ * Based on sa1100 driver, Copyright (C) 2000 Oleg Drokin <green@crimea.edu>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_WATCHDOG_NOWAYOUT
+static int nowayout = 1;
+#else
+static int nowayout = 0;
+#endif
+static int heartbeat = 60; /* (secs) Default is 1 minute */
+static unsigned long wdt_status;
+
+#define WDT_IN_USE 0
+#define WDT_OK_TO_CLOSE 1
+
+static unsigned long wdt_tick_rate;
+
+static void
+wdt_enable(void)
+{
+ ixp2000_reg_write(IXP2000_RESET0, *(IXP2000_RESET0) | WDT_RESET_ENABLE);
+ ixp2000_reg_write(IXP2000_TWDE, WDT_ENABLE);
+ ixp2000_reg_write(IXP2000_T4_CLD, heartbeat * wdt_tick_rate);
+ ixp2000_reg_write(IXP2000_T4_CTL, TIMER_DIVIDER_256 | TIMER_ENABLE);
+}
+
+static void
+wdt_disable(void)
+{
+ ixp2000_reg_write(IXP2000_T4_CTL, 0);
+}
+
+static void
+wdt_keepalive(void)
+{
+ ixp2000_reg_write(IXP2000_T4_CLD, heartbeat * wdt_tick_rate);
+}
+
+static int
+ixp2000_wdt_open(struct inode *inode, struct file *file)
+{
+ if (test_and_set_bit(WDT_IN_USE, &wdt_status))
+ return -EBUSY;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ wdt_enable();
+
+ return nonseekable_open(inode, file);
+}
+
+static ssize_t
+ixp2000_wdt_write(struct file *file, const char *data, size_t len, loff_t *ppos)
+{
+ if (len) {
+ if (!nowayout) {
+ size_t i;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ for (i = 0; i != len; i++) {
+ char c;
+
+ if (get_user(c, data + i))
+ return -EFAULT;
+ if (c == 'V')
+ set_bit(WDT_OK_TO_CLOSE, &wdt_status);
+ }
+ }
+ wdt_keepalive();
+ }
+
+ return len;
+}
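+
+/*
+ * A user-space sketch of the magic-close protocol implemented above:
+ * any write pings the timer, and a write containing 'V' before close
+ * marks it safe to stop (ignored when nowayout is set). The device
+ * node is assumed to be /dev/watchdog:
+ *
+ *	int fd = open("/dev/watchdog", O_WRONLY);
+ *	write(fd, "\0", 1);	// keepalive ping
+ *	write(fd, "V", 1);	// allow the close below to stop the timer
+ *	close(fd);
+ */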
+
+
+static struct watchdog_info ident = {
+ .options = WDIOF_MAGICCLOSE | WDIOF_SETTIMEOUT |
+ WDIOF_KEEPALIVEPING,
+ .identity = "IXP2000 Watchdog",
+};
+
+static int
+ixp2000_wdt_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = -ENOIOCTLCMD;
+ int time;
+
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ ret = copy_to_user((struct watchdog_info *)arg, &ident,
+ sizeof(ident)) ? -EFAULT : 0;
+ break;
+
+ case WDIOC_GETSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_GETBOOTSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_SETTIMEOUT:
+ ret = get_user(time, (int *)arg);
+ if (ret)
+ break;
+
+ if (time <= 0 || time > 60) {
+ ret = -EINVAL;
+ break;
+ }
+
+ heartbeat = time;
+ wdt_keepalive();
+ /* Fall through */
+
+ case WDIOC_GETTIMEOUT:
+ ret = put_user(heartbeat, (int *)arg);
+ break;
+
+ case WDIOC_KEEPALIVE:
+ wdt_enable();
+ ret = 0;
+ break;
+ }
+
+ return ret;
+}
+
+static int
+ixp2000_wdt_release(struct inode *inode, struct file *file)
+{
+ if (test_bit(WDT_OK_TO_CLOSE, &wdt_status)) {
+ wdt_disable();
+ } else {
+ printk(KERN_CRIT "WATCHDOG: Device closed unexpectdly - "
+ "timer will not stop\n");
+ }
+
+ clear_bit(WDT_IN_USE, &wdt_status);
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ return 0;
+}
+
+
+static struct file_operations ixp2000_wdt_fops =
+{
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = ixp2000_wdt_write,
+ .ioctl = ixp2000_wdt_ioctl,
+ .open = ixp2000_wdt_open,
+ .release = ixp2000_wdt_release,
+};
+
+static struct miscdevice ixp2000_wdt_miscdev =
+{
+ .minor = WATCHDOG_MINOR,
+ .name = "IXP2000 Watchdog",
+ .fops = &ixp2000_wdt_fops,
+};
+
+static int __init ixp2000_wdt_init(void)
+{
+ wdt_tick_rate = (*IXP2000_T1_CLD * HZ) / 256;
+
+ return misc_register(&ixp2000_wdt_miscdev);
+}
+
+static void __exit ixp2000_wdt_exit(void)
+{
+ misc_deregister(&ixp2000_wdt_miscdev);
+}
+
+module_init(ixp2000_wdt_init);
+module_exit(ixp2000_wdt_exit);
+
+MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net">);
+MODULE_DESCRIPTION("IXP2000 Network Processor Watchdog");
+
+module_param(heartbeat, int, 0);
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 60s)");
+
+module_param(nowayout, int, 0);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started");
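+
+/*
+ * For illustration, both parameters can be set at module load time
+ * (module name assumed from the file name, values arbitrary):
+ *
+ *	modprobe ixp2000_wdt heartbeat=30 nowayout=1
+ */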
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+
--- /dev/null
+/*
+ * drivers/watchdog/ixp4xx_wdt.c
+ *
+ * Watchdog driver for Intel IXP4xx network processors
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (c) 2004 MontaVista Software, Inc.
+ * Based on sa1100 driver, Copyright (C) 2000 Oleg Drokin <green@crimea.edu>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_WATCHDOG_NOWAYOUT
+static int nowayout = 1;
+#else
+static int nowayout = 0;
+#endif
+static int heartbeat = 60; /* (secs) Default is 1 minute */
+static unsigned long wdt_status;
+static unsigned long boot_status;
+
+#define WDT_TICK_RATE (IXP4XX_PERIPHERAL_BUS_CLOCK * 1000000UL)
+
+#define WDT_IN_USE 0
+#define WDT_OK_TO_CLOSE 1
+
+static void
+wdt_enable(void)
+{
+ *IXP4XX_OSWK = IXP4XX_WDT_KEY;
+ *IXP4XX_OSWE = 0;
+ *IXP4XX_OSWT = WDT_TICK_RATE * heartbeat;
+ *IXP4XX_OSWE = IXP4XX_WDT_COUNT_ENABLE | IXP4XX_WDT_RESET_ENABLE;
+ *IXP4XX_OSWK = 0;
+}
+
+static void
+wdt_disable(void)
+{
+ *IXP4XX_OSWK = IXP4XX_WDT_KEY;
+ *IXP4XX_OSWE = 0;
+ *IXP4XX_OSWK = 0;
+}
+
+static int
+ixp4xx_wdt_open(struct inode *inode, struct file *file)
+{
+ if (test_and_set_bit(WDT_IN_USE, &wdt_status))
+ return -EBUSY;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ wdt_enable();
+
+ return 0;
+}
+
+static ssize_t
+ixp4xx_wdt_write(struct file *file, const char *data, size_t len, loff_t *ppos)
+{
+ /* Can't seek (pwrite) on this device */
+ if (ppos != &file->f_pos)
+ return -ESPIPE;
+
+ if (len) {
+ if (!nowayout) {
+ size_t i;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ for (i = 0; i != len; i++) {
+ char c;
+
+ if (get_user(c, data + i))
+ return -EFAULT;
+ if (c == 'V')
+ set_bit(WDT_OK_TO_CLOSE, &wdt_status);
+ }
+ }
+ wdt_enable();
+ }
+
+ return len;
+}
+
+static struct watchdog_info ident = {
+ .options = WDIOF_CARDRESET | WDIOF_MAGICCLOSE |
+ WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING,
+ .identity = "IXP4xx Watchdog",
+};
+
+
+static int
+ixp4xx_wdt_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = -ENOIOCTLCMD;
+ int time;
+
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ ret = copy_to_user((struct watchdog_info *)arg, &ident,
+ sizeof(ident)) ? -EFAULT : 0;
+ break;
+
+ case WDIOC_GETSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_GETBOOTSTATUS:
+ ret = put_user(boot_status, (int *)arg);
+ break;
+
+ case WDIOC_SETTIMEOUT:
+ ret = get_user(time, (int *)arg);
+ if (ret)
+ break;
+
+ if (time <= 0 || time > 60) {
+ ret = -EINVAL;
+ break;
+ }
+
+ heartbeat = time;
+ wdt_enable();
+ /* Fall through */
+
+ case WDIOC_GETTIMEOUT:
+ ret = put_user(heartbeat, (int *)arg);
+ break;
+
+ case WDIOC_KEEPALIVE:
+ wdt_enable();
+ ret = 0;
+ break;
+ }
+ return ret;
+}
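+
+/*
+ * A minimal user-space sketch of the timeout handling above (fd is an
+ * open /dev/watchdog descriptor; 30 is an arbitrary value within the
+ * 1-60 second range enforced by WDIOC_SETTIMEOUT, which also restarts
+ * the timer and falls through to report the new value):
+ *
+ *	int timeout = 30;
+ *	ioctl(fd, WDIOC_SETTIMEOUT, &timeout);
+ *	ioctl(fd, WDIOC_GETTIMEOUT, &timeout);	// timeout is now 30
+ */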
+
+static int
+ixp4xx_wdt_release(struct inode *inode, struct file *file)
+{
+ if (test_bit(WDT_OK_TO_CLOSE, &wdt_status)) {
+ wdt_disable();
+ } else {
+ printk(KERN_CRIT "WATCHDOG: Device closed unexpectdly - "
+ "timer will not stop\n");
+ }
+
+ clear_bit(WDT_IN_USE, &wdt_status);
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ return 0;
+}
+
+
+static struct file_operations ixp4xx_wdt_fops =
+{
+ .owner = THIS_MODULE,
+ .write = ixp4xx_wdt_write,
+ .ioctl = ixp4xx_wdt_ioctl,
+ .open = ixp4xx_wdt_open,
+ .release = ixp4xx_wdt_release,
+};
+
+static struct miscdevice ixp4xx_wdt_miscdev =
+{
+ .minor = WATCHDOG_MINOR,
+ .name = "IXP4xx Watchdog",
+ .fops = &ixp4xx_wdt_fops,
+};
+
+static int __init ixp4xx_wdt_init(void)
+{
+ int ret;
+ unsigned long processor_id;
+
+ asm("mrc p15, 0, %0, cr0, cr0, 0;" : "=r"(processor_id) :);
+ if (!(processor_id & 0xf)) {
+ printk("IXP4XXX Watchdog: Rev. A0 CPU detected - "
+ "watchdog disabled\n");
+
+ return -ENODEV;
+ }
+
+ ret = misc_register(&ixp4xx_wdt_miscdev);
+ if (ret == 0)
+ printk("IXP4xx Watchdog Timer: heartbeat %d sec\n", heartbeat);
+
+ boot_status = (*IXP4XX_OSST & IXP4XX_OSST_TIMER_WARM_RESET) ?
+ WDIOF_CARDRESET : 0;
+
+ return ret;
+}
+
+static void __exit ixp4xx_wdt_exit(void)
+{
+ misc_deregister(&ixp4xx_wdt_miscdev);
+}
+
+
+module_init(ixp4xx_wdt_init);
+module_exit(ixp4xx_wdt_exit);
+
+MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net">);
+MODULE_DESCRIPTION("IXP4xx Network Processor Watchdog");
+
+module_param(heartbeat, int, 0);
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 60s)");
+
+module_param(nowayout, int, 0);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started");
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+
--- /dev/null
+/*
+ * Copyright (C) 2002, 2003, 2004 Hewlett-Packard Co.
+ * Khalid Aziz <khalid_aziz@hp.com>
+ * Alex Williamson <alex.williamson@hp.com>
+ * Bjorn Helgaas <bjorn.helgaas@hp.com>
+ *
+ * Parse the EFI PCDP table to locate the console device.
+ */
+
+#include <linux/acpi.h>
+#include <linux/console.h>
+#include <linux/efi.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <asm/io.h>
+#include <asm/serial.h>
+#include "pcdp.h"
+
+static inline int
+uart_irq_supported(int rev, struct pcdp_uart *uart)
+{
+ if (rev < 3)
+ return uart->pci_func & PCDP_UART_IRQ;
+ return uart->flags & PCDP_UART_IRQ;
+}
+
+static inline int
+uart_pci(int rev, struct pcdp_uart *uart)
+{
+ if (rev < 3)
+ return uart->pci_func & PCDP_UART_PCI;
+ return uart->flags & PCDP_UART_PCI;
+}
+
+static inline int
+uart_active_high_low(int rev, struct pcdp_uart *uart)
+{
+ if (uart_pci(rev, uart) || uart->flags & PCDP_UART_ACTIVE_LOW)
+ return ACPI_ACTIVE_LOW;
+ return ACPI_ACTIVE_HIGH;
+}
+
+static inline int
+uart_edge_level(int rev, struct pcdp_uart *uart)
+{
+ if (uart_pci(rev, uart))
+ return ACPI_LEVEL_SENSITIVE;
+ if (rev < 3 || uart->flags & PCDP_UART_EDGE_SENSITIVE)
+ return ACPI_EDGE_SENSITIVE;
+ return ACPI_LEVEL_SENSITIVE;
+}
+
+static void __init
+setup_serial_console(int rev, struct pcdp_uart *uart)
+{
+#ifdef CONFIG_SERIAL_8250_CONSOLE
+ struct uart_port port;
+ static char options[16];
+ int mapsize = 64;
+
+ memset(&port, 0, sizeof(port));
+ port.uartclk = uart->clock_rate;
+ if (!port.uartclk) /* some FW doesn't supply this */
+ port.uartclk = BASE_BAUD * 16;
+
+ if (uart->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ port.mapbase = uart->addr.address;
+ port.membase = ioremap(port.mapbase, mapsize);
+ if (!port.membase) {
+ printk(KERN_ERR "%s: couldn't ioremap 0x%lx-0x%lx\n",
+ __FUNCTION__, port.mapbase, port.mapbase + mapsize);
+ return;
+ }
+ port.iotype = UPIO_MEM;
+ } else if (uart->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
+ port.iobase = uart->addr.address;
+ port.iotype = UPIO_PORT;
+ } else
+ return;
+
+ switch (uart->pci_prog_intfc) {
+ case 0x0: port.type = PORT_8250; break;
+ case 0x1: port.type = PORT_16450; break;
+ case 0x2: port.type = PORT_16550; break;
+ case 0x3: port.type = PORT_16650; break;
+ case 0x4: port.type = PORT_16750; break;
+ case 0x5: port.type = PORT_16850; break;
+ case 0x6: port.type = PORT_16C950; break;
+ default: port.type = PORT_UNKNOWN; break;
+ }
+
+ port.flags = UPF_SKIP_TEST | UPF_BOOT_AUTOCONF;
+
+ if (uart_irq_supported(rev, uart)) {
+ port.irq = acpi_register_gsi(uart->gsi,
+ uart_active_high_low(rev, uart),
+ uart_edge_level(rev, uart));
+ port.flags |= UPF_AUTO_IRQ; /* some FW reported wrong GSI */
+ if (uart_pci(rev, uart))
+ port.flags |= UPF_SHARE_IRQ;
+ }
+
+ if (early_serial_setup(&port) < 0)
+ return;
+
+ snprintf(options, sizeof(options), "%lun%d", uart->baud,
+ uart->bits ? uart->bits : 8);
+ add_preferred_console("ttyS", port.line, options);
+
+ printk(KERN_INFO "PCDP: serial console at %s 0x%lx (ttyS%d, options %s)\n",
+ port.iotype == UPIO_MEM ? "MMIO" : "I/O",
+ uart->addr.address, port.line, options);
+#endif
+}
+
+static void __init
+setup_vga_console(struct pcdp_vga *vga)
+{
+#ifdef CONFIG_VT
+#ifdef CONFIG_VGA_CONSOLE
+ if (efi_mem_type(0xA0000) == EFI_CONVENTIONAL_MEMORY) {
+ printk(KERN_ERR "PCDP: VGA selected, but frame buffer is not MMIO!\n");
+ return;
+ }
+
+ conswitchp = &vga_con;
+ printk(KERN_INFO "PCDP: VGA console\n");
+#endif
+#endif
+}
+
+void __init
+efi_setup_pcdp_console(char *cmdline)
+{
+ struct pcdp *pcdp;
+ struct pcdp_uart *uart;
+ struct pcdp_device *dev, *end;
+ int i, serial = 0;
+
+ pcdp = efi.hcdp;
+ if (!pcdp)
+ return;
+
+ printk(KERN_INFO "PCDP: v%d at 0x%p\n", pcdp->rev, pcdp);
+
+ if (pcdp->rev < 3) {
+ if (strstr(cmdline, "console=ttyS0") || efi_uart_console_only())
+ serial = 1;
+ }
+
+ for (i = 0, uart = pcdp->uart; i < pcdp->num_uarts; i++, uart++) {
+ if (uart->flags & PCDP_UART_PRIMARY_CONSOLE || serial) {
+ if (uart->type == PCDP_CONSOLE_UART) {
+ setup_serial_console(pcdp->rev, uart);
+ return;
+ }
+ }
+ }
+
+ end = (struct pcdp_device *) ((u8 *) pcdp + pcdp->length);
+ for (dev = (struct pcdp_device *) (pcdp->uart + pcdp->num_uarts);
+ dev < end;
+ dev = (struct pcdp_device *) ((u8 *) dev + dev->length)) {
+ if (dev->flags & PCDP_PRIMARY_CONSOLE) {
+ if (dev->type == PCDP_CONSOLE_VGA) {
+ setup_vga_console((struct pcdp_vga *) dev);
+ return;
+ }
+ }
+ }
+}
+
+#ifdef CONFIG_IA64_EARLY_PRINTK_UART
+unsigned long
+hcdp_early_uart (void)
+{
+ efi_system_table_t *systab;
+ efi_config_table_t *config_tables;
+ unsigned long addr = 0;
+ struct pcdp *pcdp = NULL;
+ struct pcdp_uart *uart;
+ int i;
+
+ systab = (efi_system_table_t *) ia64_boot_param->efi_systab;
+ if (!systab)
+ return 0;
+ systab = __va(systab);
+
+ config_tables = (efi_config_table_t *) systab->tables;
+ if (!config_tables)
+ return 0;
+ config_tables = __va(config_tables);
+
+ for (i = 0; i < systab->nr_tables; i++) {
+ if (efi_guidcmp(config_tables[i].guid, HCDP_TABLE_GUID) == 0) {
+ pcdp = (struct pcdp *) config_tables[i].table;
+ break;
+ }
+ }
+ if (!pcdp)
+ return 0;
+ pcdp = __va(pcdp);
+
+ for (i = 0, uart = pcdp->uart; i < pcdp->num_uarts; i++, uart++) {
+ if (uart->type == PCDP_CONSOLE_UART) {
+ addr = uart->addr.address;
+ break;
+ }
+ }
+ return addr;
+}
+#endif /* CONFIG_IA64_EARLY_PRINTK_UART */
--- /dev/null
+/*
+ * Copyright (C) 2002, 2004 Hewlett-Packard Co.
+ * Khalid Aziz <khalid_aziz@hp.com>
+ * Bjorn Helgaas <bjorn.helgaas@hp.com>
+ *
+ * Definitions for PCDP-defined console devices
+ *
+ * v1.0a: http://www.dig64.org/specifications/DIG64_HCDPv10a_01.pdf
+ * v2.0: http://www.dig64.org/specifications/DIG64_HCDPv20_042804.pdf
+ */
+
+#define PCDP_CONSOLE 0
+#define PCDP_DEBUG 1
+#define PCDP_CONSOLE_OUTPUT 2
+#define PCDP_CONSOLE_INPUT 3
+
+#define PCDP_UART (0 << 3)
+#define PCDP_VGA (1 << 3)
+#define PCDP_USB (2 << 3)
+
+/* pcdp_uart.type and pcdp_device.type */
+#define PCDP_CONSOLE_UART (PCDP_UART | PCDP_CONSOLE)
+#define PCDP_DEBUG_UART (PCDP_UART | PCDP_DEBUG)
+#define PCDP_CONSOLE_VGA (PCDP_VGA | PCDP_CONSOLE_OUTPUT)
+#define PCDP_CONSOLE_USB (PCDP_USB | PCDP_CONSOLE_INPUT)
+
+/* pcdp_uart.flags */
+#define PCDP_UART_EDGE_SENSITIVE (1 << 0)
+#define PCDP_UART_ACTIVE_LOW (1 << 1)
+#define PCDP_UART_PRIMARY_CONSOLE (1 << 2)
+#define PCDP_UART_IRQ (1 << 6) /* in pci_func for rev < 3 */
+#define PCDP_UART_PCI (1 << 7) /* in pci_func for rev < 3 */
+
+struct pcdp_uart {
+ u8 type;
+ u8 bits;
+ u8 parity;
+ u8 stop_bits;
+ u8 pci_seg;
+ u8 pci_bus;
+ u8 pci_dev;
+ u8 pci_func;
+ u64 baud;
+ struct acpi_generic_address addr;
+ u16 pci_dev_id;
+ u16 pci_vendor_id;
+ u32 gsi;
+ u32 clock_rate;
+ u8 pci_prog_intfc;
+ u8 flags;
+};
+
+struct pcdp_vga {
+ u8 count; /* address space descriptors */
+};
+
+/* pcdp_device.flags */
+#define PCDP_PRIMARY_CONSOLE 1
+
+struct pcdp_device {
+ u8 type;
+ u8 flags;
+ u16 length;
+ u16 efi_index;
+};
+
+struct pcdp {
+ u8 signature[4];
+ u32 length;
+ u8 rev; /* PCDP v2.0 is rev 3 */
+ u8 chksum;
+ u8 oemid[6];
+ u8 oem_tabid[8];
+ u32 oem_rev;
+ u8 creator_id[4];
+ u32 creator_rev;
+ u32 num_uarts;
+ struct pcdp_uart uart[0]; /* actual size is num_uarts */
+ /* remainder of table is pcdp_device structures */
+};
--- /dev/null
+/*
+ * drivers/i2c/i2c-adap-ixp4xx.c
+ *
+ * Intel's IXP4xx XScale NPU chipsets (IXP420, 421, 422, 425) do not have
+ * an on board I2C controller but provide 16 GPIO pins that are often
+ * used to create an I2C bus. This driver provides an i2c_adapter
+ * interface that plugs in under algo_bit and drives the GPIO pins
+ * as instructed by the algorithm driver.
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (c) 2003-2004 MontaVista Software Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ *
+ * NOTE: Since different platforms will use different GPIO pins for
+ * I2C, this driver uses an IXP4xx-specific platform_data
+ * pointer to pass the GPIO numbers to the driver. This
+ * allows us to support all the different IXP4xx platforms
+ * w/o having to put #ifdefs in this driver.
+ *
+ * See arch/arm/mach-ixp4xx/ixdp425.c for an example of building a
+ * device list and filling in the ixp4xx_i2c_pins data structure
+ * that is passed as the platform_data to this driver.
+ */
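+
+/*
+ * For reference, a board support file might register this bus roughly
+ * as follows (a sketch only; the GPIO numbers are hypothetical):
+ *
+ *	static struct ixp4xx_i2c_pins my_i2c_gpio_pins = {
+ *		.sda_pin	= 7,
+ *		.scl_pin	= 6,
+ *	};
+ *
+ *	static struct platform_device my_i2c_controller = {
+ *		.name	= "IXP4XX-I2C",
+ *		.id	= 0,
+ *		.dev	= { .platform_data = &my_i2c_gpio_pins },
+ *	};
+ */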
+
+#include <linux/config.h>
+#ifdef CONFIG_I2C_DEBUG_BUS
+#define DEBUG 1
+#endif
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/i2c-algo-bit.h>
+
+#include <asm/hardware.h> /* Pick up IXP4xx-specific bits */
+
+static inline int ixp4xx_scl_pin(void *data)
+{
+ return ((struct ixp4xx_i2c_pins*)data)->scl_pin;
+}
+
+static inline int ixp4xx_sda_pin(void *data)
+{
+ return ((struct ixp4xx_i2c_pins*)data)->sda_pin;
+}
+
+static void ixp4xx_bit_setscl(void *data, int val)
+{
+ gpio_line_set(ixp4xx_scl_pin(data), 0);
+ gpio_line_config(ixp4xx_scl_pin(data),
+ val ? IXP4XX_GPIO_IN : IXP4XX_GPIO_OUT);
+}
+
+static void ixp4xx_bit_setsda(void *data, int val)
+{
+ gpio_line_set(ixp4xx_sda_pin(data), 0);
+ gpio_line_config(ixp4xx_sda_pin(data),
+ val ? IXP4XX_GPIO_IN : IXP4XX_GPIO_OUT);
+}
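+
+/*
+ * The two functions above emulate an open-drain output with a GPIO
+ * that is only ever driven low: the output latch is pre-set to 0, so
+ * configuring the pin as an output pulls the line low, while
+ * configuring it as an input releases the line and lets the external
+ * pull-up raise it. The get functions below read the line back through
+ * the same input path.
+ */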
+
+static int ixp4xx_bit_getscl(void *data)
+{
+ int scl;
+
+ gpio_line_config(ixp4xx_scl_pin(data), IXP4XX_GPIO_IN);
+ gpio_line_get(ixp4xx_scl_pin(data), &scl);
+
+ return scl;
+}
+
+static int ixp4xx_bit_getsda(void *data)
+{
+ int sda;
+
+ gpio_line_config(ixp4xx_sda_pin(data), IXP4XX_GPIO_IN);
+ gpio_line_get(ixp4xx_sda_pin(data), &sda);
+
+ return sda;
+}
+
+struct ixp4xx_i2c_data {
+ struct ixp4xx_i2c_pins *gpio_pins;
+ struct i2c_adapter adapter;
+ struct i2c_algo_bit_data algo_data;
+};
+
+static int ixp4xx_i2c_remove(struct device *dev)
+{
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct ixp4xx_i2c_data *drv_data = dev_get_drvdata(&plat_dev->dev);
+
+ dev_set_drvdata(&plat_dev->dev, NULL);
+
+ i2c_bit_del_bus(&drv_data->adapter);
+
+ kfree(drv_data);
+
+ return 0;
+}
+
+static int ixp4xx_i2c_probe(struct device *dev)
+{
+ int err;
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct ixp4xx_i2c_pins *gpio = plat_dev->dev.platform_data;
+ struct ixp4xx_i2c_data *drv_data =
+ kmalloc(sizeof(struct ixp4xx_i2c_data), GFP_KERNEL);
+
+ if(!drv_data)
+ return -ENOMEM;
+
+ memzero(drv_data, sizeof(struct ixp4xx_i2c_data));
+ drv_data->gpio_pins = gpio;
+
+ /*
+ * We could make a lot of these structures static, but
+ * certain platforms may have multiple GPIO-based I2C
+ * buses for various device domains, so we need per-device
+ * algo_data->data.
+ */
+ drv_data->algo_data.data = gpio;
+ drv_data->algo_data.setsda = ixp4xx_bit_setsda;
+ drv_data->algo_data.setscl = ixp4xx_bit_setscl;
+ drv_data->algo_data.getsda = ixp4xx_bit_getsda;
+ drv_data->algo_data.getscl = ixp4xx_bit_getscl;
+ drv_data->algo_data.udelay = 10;
+ drv_data->algo_data.mdelay = 10;
+ drv_data->algo_data.timeout = 100;
+
+ drv_data->adapter.id = I2C_HW_B_IXP4XX;
+ drv_data->adapter.algo_data = &drv_data->algo_data;
+
+ drv_data->adapter.dev.parent = &plat_dev->dev;
+
+ gpio_line_config(gpio->scl_pin, IXP4XX_GPIO_IN);
+ gpio_line_config(gpio->sda_pin, IXP4XX_GPIO_IN);
+ gpio_line_set(gpio->scl_pin, 0);
+ gpio_line_set(gpio->sda_pin, 0);
+
+ if ((err = i2c_bit_add_bus(&drv_data->adapter)) != 0) {
+ printk(KERN_ERR "ERROR: Could not install %s\n", dev->bus_id);
+
+ kfree(drv_data);
+ return err;
+ }
+
+ dev_set_drvdata(&plat_dev->dev, drv_data);
+
+ return 0;
+}
+
+static struct device_driver ixp4xx_i2c_driver = {
+ .name = "IXP4XX-I2C",
+ .bus = &platform_bus_type,
+ .probe = ixp4xx_i2c_probe,
+ .remove = ixp4xx_i2c_remove,
+};
+
+static int __init ixp4xx_i2c_init(void)
+{
+ return driver_register(&ixp4xx_i2c_driver);
+}
+
+static void __exit ixp4xx_i2c_exit(void)
+{
+ driver_unregister(&ixp4xx_i2c_driver);
+}
+
+module_init(ixp4xx_i2c_init);
+module_exit(ixp4xx_i2c_exit);
+
+MODULE_DESCRIPTION("GPIO-based I2C adapter for IXP4xx systems");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net>");
+
--- /dev/null
+/*
+ * adm1025.c
+ *
+ * Copyright (C) 2000 Chen-Yuan Wu <gwu@esoft.com>
+ * Copyright (C) 2003-2004 Jean Delvare <khali@linux-fr.org>
+ *
+ * The ADM1025 is a sensor chip made by Analog Devices. It reports up to 6
+ * voltages (including its own power source) and up to two temperatures
+ * (its own plus up to one external one). Voltages are scaled internally
+ * (which is not the common way) with ratios such that the nominal value
+ * of each voltage corresponds to a register value of 192 (which means a
+ * resolution of about 0.5% of the nominal value). Temperature values are
+ * reported with a 1 deg resolution and a 3 deg accuracy. Complete
+ * datasheet can be obtained from Analog's website at:
+ * http://www.analog.com/Analog_Root/productPage/productHome/0,2121,ADM1025,00.html
+ *
+ * This driver also supports the ADM1025A, which differs from the ADM1025
+ * only in that it has "open-drain VID inputs while the ADM1025 has
+ * on-chip 100k pull-ups on the VID inputs". It doesn't make any
+ * difference for us.
+ *
+ * This driver also supports the NE1619, a sensor chip made by Philips.
+ * That chip is similar to the ADM1025A, with a few differences. The only
+ * difference that matters to us is that the NE1619 has only two possible
+ * addresses while the ADM1025A has a third one. Complete datasheet can be
+ * obtained from Philips's website at:
+ * http://www.semiconductors.philips.com/pip/NE1619DS.html
+ *
+ * Since the ADM1025 was the first chipset supported by this driver, most
+ * comments will refer to this chipset, but are actually general and
+ * concern all supported chipsets, unless mentioned otherwise.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+#include <linux/i2c-vid.h>
+
+/*
+ * Addresses to scan
+ * ADM1025 and ADM1025A have three possible addresses: 0x2c, 0x2d and 0x2e.
+ * NE1619 has two possible addresses: 0x2c and 0x2d.
+ */
+
+static unsigned short normal_i2c[] = { I2C_CLIENT_END };
+static unsigned short normal_i2c_range[] = { 0x2c, 0x2e, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+static unsigned int normal_isa_range[] = { I2C_CLIENT_ISA_END };
+
+/*
+ * Insmod parameters
+ */
+
+SENSORS_INSMOD_2(adm1025, ne1619);
+
+/*
+ * The ADM1025 registers
+ */
+
+#define ADM1025_REG_MAN_ID 0x3E
+#define ADM1025_REG_CHIP_ID 0x3F
+#define ADM1025_REG_CONFIG 0x40
+#define ADM1025_REG_STATUS1 0x41
+#define ADM1025_REG_STATUS2 0x42
+#define ADM1025_REG_IN(nr) (0x20 + (nr))
+#define ADM1025_REG_IN_MAX(nr) (0x2B + (nr) * 2)
+#define ADM1025_REG_IN_MIN(nr) (0x2C + (nr) * 2)
+#define ADM1025_REG_TEMP(nr) (0x26 + (nr))
+#define ADM1025_REG_TEMP_HIGH(nr) (0x37 + (nr) * 2)
+#define ADM1025_REG_TEMP_LOW(nr) (0x38 + (nr) * 2)
+#define ADM1025_REG_VID 0x47
+#define ADM1025_REG_VID4 0x49
+
+/*
+ * Conversions and various macros
+ * The ADM1025 uses signed 8-bit values for temperatures.
+ */
+
+static int in_scale[6] = { 2500, 2250, 3300, 5000, 12000, 3300 };
+
+#define IN_FROM_REG(reg,scale) (((reg) * (scale) + 96) / 192)
+#define IN_TO_REG(val,scale) ((val) <= 0 ? 0 : \
+ (val) * 192 >= (scale) * 255 ? 255 : \
+ ((val) * 192 + (scale)/2) / (scale))
+
+#define TEMP_FROM_REG(reg) ((reg) * 1000)
+#define TEMP_TO_REG(val) ((val) <= -127500 ? -128 : \
+ (val) >= 126500 ? 127 : \
+ (((val) < 0 ? (val)-500 : (val)+500) / 1000))
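+
+/*
+ * Worked example of the scaling above: the nominal value of an input
+ * maps to a register value of 192, so for in0 (nominal 2500 mV) a
+ * reading of 192 yields IN_FROM_REG(192, 2500) = (192 * 2500 + 96) /
+ * 192 = 2500 mV, and the full-scale reading of 255 yields 3320 mV.
+ */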
+
+/*
+ * Functions declaration
+ */
+
+static int adm1025_attach_adapter(struct i2c_adapter *adapter);
+static int adm1025_detect(struct i2c_adapter *adapter, int address, int kind);
+static void adm1025_init_client(struct i2c_client *client);
+static int adm1025_detach_client(struct i2c_client *client);
+static struct adm1025_data *adm1025_update_device(struct device *dev);
+
+/*
+ * Driver data (common to all clients)
+ */
+
+static struct i2c_driver adm1025_driver = {
+ .owner = THIS_MODULE,
+ .name = "adm1025",
+ .id = I2C_DRIVERID_ADM1025,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = adm1025_attach_adapter,
+ .detach_client = adm1025_detach_client,
+};
+
+/*
+ * Client data (each client gets its own)
+ */
+
+struct adm1025_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid; /* zero until following fields are valid */
+ unsigned long last_updated; /* in jiffies */
+
+ u8 in[6]; /* register value */
+ u8 in_max[6]; /* register value */
+ u8 in_min[6]; /* register value */
+ s8 temp[2]; /* register value */
+ s8 temp_min[2]; /* register value */
+ s8 temp_max[2]; /* register value */
+ u16 alarms; /* register values, combined */
+ u8 vid; /* register values, combined */
+ u8 vrm;
+};
+
+/*
+ * Internal variables
+ */
+
+static int adm1025_id = 0;
+
+/*
+ * Sysfs stuff
+ */
+
+#define show_in(offset) \
+static ssize_t show_in##offset(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in[offset], \
+ in_scale[offset])); \
+} \
+static ssize_t show_in##offset##_min(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in_min[offset], \
+ in_scale[offset])); \
+} \
+static ssize_t show_in##offset##_max(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in_max[offset], \
+ in_scale[offset])); \
+} \
+static DEVICE_ATTR(in##offset##_input, S_IRUGO, show_in##offset, NULL);
+show_in(0);
+show_in(1);
+show_in(2);
+show_in(3);
+show_in(4);
+show_in(5);
+
+#define show_temp(offset) \
+static ssize_t show_temp##offset(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp[offset-1])); \
+} \
+static ssize_t show_temp##offset##_min(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_min[offset-1])); \
+} \
+static ssize_t show_temp##offset##_max(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_max[offset-1])); \
+}\
+static DEVICE_ATTR(temp##offset##_input, S_IRUGO, show_temp##offset, NULL);
+show_temp(1);
+show_temp(2);
+
+#define set_in(offset) \
+static ssize_t set_in##offset##_min(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ data->in_min[offset] = IN_TO_REG(simple_strtol(buf, NULL, 10), \
+ in_scale[offset]); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_IN_MIN(offset), \
+ data->in_min[offset]); \
+ return count; \
+} \
+static ssize_t set_in##offset##_max(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ data->in_max[offset] = IN_TO_REG(simple_strtol(buf, NULL, 10), \
+ in_scale[offset]); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_IN_MAX(offset), \
+ data->in_max[offset]); \
+ return count; \
+} \
+static DEVICE_ATTR(in##offset##_min, S_IWUSR | S_IRUGO, \
+ show_in##offset##_min, set_in##offset##_min); \
+static DEVICE_ATTR(in##offset##_max, S_IWUSR | S_IRUGO, \
+ show_in##offset##_max, set_in##offset##_max);
+set_in(0);
+set_in(1);
+set_in(2);
+set_in(3);
+set_in(4);
+set_in(5);
+
+#define set_temp(offset) \
+static ssize_t set_temp##offset##_min(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ data->temp_min[offset-1] = TEMP_TO_REG(simple_strtol(buf, NULL, 10)); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_TEMP_LOW(offset-1), \
+ data->temp_min[offset-1]); \
+ return count; \
+} \
+static ssize_t set_temp##offset##_max(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ data->temp_max[offset-1] = TEMP_TO_REG(simple_strtol(buf, NULL, 10)); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_TEMP_HIGH(offset-1), \
+ data->temp_max[offset-1]); \
+ return count; \
+} \
+static DEVICE_ATTR(temp##offset##_min, S_IWUSR | S_IRUGO, \
+ show_temp##offset##_min, set_temp##offset##_min); \
+static DEVICE_ATTR(temp##offset##_max, S_IWUSR | S_IRUGO, \
+ show_temp##offset##_max, set_temp##offset##_max);
+set_temp(1);
+set_temp(2);
+
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", data->alarms);
+}
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+static ssize_t show_vid(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", vid_from_reg(data->vid, data->vrm));
+}
+static DEVICE_ATTR(in1_ref, S_IRUGO, show_vid, NULL);
+
+static ssize_t show_vrm(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", data->vrm);
+}
+static ssize_t set_vrm(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1025_data *data = i2c_get_clientdata(client);
+ data->vrm = simple_strtoul(buf, NULL, 10);
+ return count;
+}
+static DEVICE_ATTR(vrm, S_IRUGO | S_IWUSR, show_vrm, set_vrm);
+
+/*
+ * Real code
+ */
+
+static int adm1025_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, adm1025_detect);
+}
+
+/*
+ * The following function does more than just detection. If detection
+ * succeeds, it also registers the new chip.
+ */
+static int adm1025_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct adm1025_data *data;
+ int err = 0;
+ const char *name = "";
+ u8 config;
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct adm1025_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct adm1025_data));
+
+ /* The common I2C client data is placed right before the
+ ADM1025-specific data. */
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &adm1025_driver;
+ new_client->flags = 0;
+
+ /*
+ * Now we do the remaining detection. A negative kind means that
+ * the driver was loaded with no force parameter (default), so we
+ * must both detect and identify the chip. A zero kind means that
+ * the driver was loaded with the force parameter, the detection
+ * step shall be skipped. A positive kind means that the driver
+ * was loaded with the force parameter and a given kind of chip is
+ * requested, so both the detection and the identification steps
+ * are skipped.
+ */
+ config = i2c_smbus_read_byte_data(new_client, ADM1025_REG_CONFIG);
+ if (kind < 0) { /* detection */
+ if ((config & 0x80) != 0x00
+ || (i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_STATUS1) & 0xC0) != 0x00
+ || (i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_STATUS2) & 0xBC) != 0x00) {
+ dev_dbg(&adapter->dev,
+ "ADM1025 detection failed at 0x%02x.\n",
+ address);
+ goto exit_free;
+ }
+ }
+
+ if (kind <= 0) { /* identification */
+ u8 man_id, chip_id;
+
+ man_id = i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_MAN_ID);
+ chip_id = i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_CHIP_ID);
+
+ if (man_id == 0x41) { /* Analog Devices */
+ if ((chip_id & 0xF0) == 0x20) { /* ADM1025/ADM1025A */
+ kind = adm1025;
+ }
+ } else
+ if (man_id == 0xA1) { /* Philips */
+ if (address != 0x2E
+ && (chip_id & 0xF0) == 0x20) { /* NE1619 */
+ kind = ne1619;
+ }
+ }
+
+ if (kind <= 0) { /* identification failed */
+ dev_info(&adapter->dev,
+ "Unsupported chip (man_id=0x%02X, "
+ "chip_id=0x%02X).\n", man_id, chip_id);
+ goto exit_free;
+ }
+ }
+
+ if (kind == adm1025) {
+ name = "adm1025";
+ } else if (kind == ne1619) {
+ name = "ne1619";
+ }
+
+ /* We can fill in the remaining client fields */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+ new_client->id = adm1025_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the ADM1025 chip */
+ adm1025_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_in0_input);
+ device_create_file(&new_client->dev, &dev_attr_in1_input);
+ device_create_file(&new_client->dev, &dev_attr_in2_input);
+ device_create_file(&new_client->dev, &dev_attr_in3_input);
+ device_create_file(&new_client->dev, &dev_attr_in5_input);
+ device_create_file(&new_client->dev, &dev_attr_in0_min);
+ device_create_file(&new_client->dev, &dev_attr_in1_min);
+ device_create_file(&new_client->dev, &dev_attr_in2_min);
+ device_create_file(&new_client->dev, &dev_attr_in3_min);
+ device_create_file(&new_client->dev, &dev_attr_in5_min);
+ device_create_file(&new_client->dev, &dev_attr_in0_max);
+ device_create_file(&new_client->dev, &dev_attr_in1_max);
+ device_create_file(&new_client->dev, &dev_attr_in2_max);
+ device_create_file(&new_client->dev, &dev_attr_in3_max);
+ device_create_file(&new_client->dev, &dev_attr_in5_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+ device_create_file(&new_client->dev, &dev_attr_in1_ref);
+ device_create_file(&new_client->dev, &dev_attr_vrm);
+
+ /* Pin 11 is either in4 (+12V) or VID4 */
+ if (!(config & 0x20)) {
+ device_create_file(&new_client->dev, &dev_attr_in4_input);
+ device_create_file(&new_client->dev, &dev_attr_in4_min);
+ device_create_file(&new_client->dev, &dev_attr_in4_max);
+ }
+
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static void adm1025_init_client(struct i2c_client *client)
+{
+ u8 reg;
+ struct adm1025_data *data = i2c_get_clientdata(client);
+ int i;
+
+ data->vrm = 82;
+
+ /*
+ * Set high limits
+ * Usually we avoid setting limits on driver init, but it happens
+ * that the ADM1025 comes with stupid default limits (all registers
+ * set to 0). In case the chip has not gone through any limit
+ * setting yet, we better set the high limits to the max so that
+ * no alarm triggers.
+ */
+ for (i=0; i<6; i++) {
+ reg = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MAX(i));
+ if (reg == 0)
+ i2c_smbus_write_byte_data(client,
+ ADM1025_REG_IN_MAX(i),
+ 0xFF);
+ }
+ for (i=0; i<2; i++) {
+ reg = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i));
+ if (reg == 0)
+ i2c_smbus_write_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i),
+ 0x7F);
+ }
+
+ /*
+ * Start the conversions
+ */
+ reg = i2c_smbus_read_byte_data(client, ADM1025_REG_CONFIG);
+ if (!(reg & 0x01))
+ i2c_smbus_write_byte_data(client, ADM1025_REG_CONFIG,
+ (reg&0x7E)|0x01);
+}
+
+static int adm1025_detach_client(struct i2c_client *client)
+{
+ int err;
+
+ if ((err = i2c_detach_client(client))) {
+ dev_err(&client->dev, "Client deregistration failed, "
+ "client not detached.\n");
+ return err;
+ }
+
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static struct adm1025_data *adm1025_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1025_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ * 2) ||
+ (jiffies < data->last_updated) ||
+ !data->valid) {
+ int i;
+
+ dev_dbg(&client->dev, "Updating data.\n");
+ for (i=0; i<6; i++) {
+ data->in[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN(i));
+ data->in_min[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MIN(i));
+ data->in_max[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MAX(i));
+ }
+ for (i=0; i<2; i++) {
+ data->temp[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP(i));
+ data->temp_min[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_LOW(i));
+ data->temp_max[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i));
+ }
+ data->alarms = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_STATUS1)
+ | (i2c_smbus_read_byte_data(client,
+ ADM1025_REG_STATUS2) << 8);
+ data->vid = (i2c_smbus_read_byte_data(client,
+ ADM1025_REG_VID) & 0x0f)
+ | ((i2c_smbus_read_byte_data(client,
+ ADM1025_REG_VID4) & 0x01) << 4);
+
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_adm1025_init(void)
+{
+ return i2c_add_driver(&adm1025_driver);
+}
+
+static void __exit sensors_adm1025_exit(void)
+{
+ i2c_del_driver(&adm1025_driver);
+}
+
+MODULE_AUTHOR("Jean Delvare <khali@linux-fr.org>");
+MODULE_DESCRIPTION("ADM1025 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_adm1025_init);
+module_exit(sensors_adm1025_exit);
--- /dev/null
+/*
+ adm1031.c - Part of lm_sensors, Linux kernel modules for hardware
+ monitoring
+ Based on lm75.c and lm85.c
+ Supports adm1030 / adm1031
+ Copyright (C) 2004 Alexandre d'Alton <alex@alexdalton.org>
+ Reworked by Jean Delvare <khali@linux-fr.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+/* The following macros take a channel parameter ranging from 0 to 2 */
+#define ADM1031_REG_FAN_SPEED(nr) (0x08 + (nr))
+#define ADM1031_REG_FAN_DIV(nr) (0x20 + (nr))
+#define ADM1031_REG_PWM (0x22)
+#define ADM1031_REG_FAN_MIN(nr) (0x10 + (nr))
+
+#define ADM1031_REG_TEMP_MAX(nr) (0x14 + 4*(nr))
+#define ADM1031_REG_TEMP_MIN(nr) (0x15 + 4*(nr))
+#define ADM1031_REG_TEMP_CRIT(nr) (0x16 + 4*(nr))
+
+#define ADM1031_REG_TEMP(nr) (0xa + (nr))
+#define ADM1031_REG_AUTO_TEMP(nr) (0x24 + (nr))
+
+#define ADM1031_REG_STATUS(nr) (0x2 + (nr))
+
+#define ADM1031_REG_CONF1 0x0
+#define ADM1031_REG_CONF2 0x1
+#define ADM1031_REG_EXT_TEMP 0x6
+
+#define ADM1031_CONF1_MONITOR_ENABLE 0x01 /* Monitoring enable */
+#define ADM1031_CONF1_PWM_INVERT 0x08 /* PWM Invert */
+#define ADM1031_CONF1_AUTO_MODE 0x80 /* Auto FAN */
+
+#define ADM1031_CONF2_PWM1_ENABLE 0x01
+#define ADM1031_CONF2_PWM2_ENABLE 0x02
+#define ADM1031_CONF2_TACH1_ENABLE 0x04
+#define ADM1031_CONF2_TACH2_ENABLE 0x08
+#define ADM1031_CONF2_TEMP_ENABLE(chan) (0x10 << (chan))
+
+/* Addresses to scan */
+static unsigned short normal_i2c[] = { I2C_CLIENT_END };
+static unsigned short normal_i2c_range[] = { 0x2c, 0x2e, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+static unsigned int normal_isa_range[] = { I2C_CLIENT_ISA_END };
+
+/* Insmod parameters */
+SENSORS_INSMOD_2(adm1030, adm1031);
+
+typedef u8 auto_chan_table_t[8][2];
+
+/* Each client has this additional data */
+struct adm1031_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ int chip_type;
+ char valid; /* !=0 if following fields are valid */
+ unsigned long last_updated; /* In jiffies */
+ /* The chan_select_table contains the possible configurations for
+ * auto fan control.
+ */
+ auto_chan_table_t *chan_select_table;
+ u16 alarm;
+ u8 conf1;
+ u8 conf2;
+ u8 fan[2];
+ u8 fan_div[2];
+ u8 fan_min[2];
+ u8 pwm[2];
+ u8 old_pwm[2];
+ s8 temp[3];
+ u8 ext_temp[3];
+ u8 auto_temp[3];
+ u8 auto_temp_min[3];
+ u8 auto_temp_off[3];
+ u8 auto_temp_max[3];
+ s8 temp_min[3];
+ s8 temp_max[3];
+ s8 temp_crit[3];
+};
+
+static int adm1031_attach_adapter(struct i2c_adapter *adapter);
+static int adm1031_detect(struct i2c_adapter *adapter, int address, int kind);
+static void adm1031_init_client(struct i2c_client *client);
+static int adm1031_detach_client(struct i2c_client *client);
+static struct adm1031_data *adm1031_update_device(struct device *dev);
+
+/* This is the driver that will be inserted */
+static struct i2c_driver adm1031_driver = {
+ .owner = THIS_MODULE,
+ .name = "adm1031",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = adm1031_attach_adapter,
+ .detach_client = adm1031_detach_client,
+};
+
+static int adm1031_id;
+
+static inline u8 adm1031_read_value(struct i2c_client *client, u8 reg)
+{
+ return i2c_smbus_read_byte_data(client, reg);
+}
+
+static inline int
+adm1031_write_value(struct i2c_client *client, u8 reg, unsigned int value)
+{
+ return i2c_smbus_write_byte_data(client, reg, value);
+}
+
+
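+/* Temperature conversions. The sysfs interface uses millidegrees
+ * Celsius while the chip registers hold whole degrees, so the macros
+ * below scale by 1000 and round to the nearest degree. The extended
+ * resolution bits add steps of 0.125 degrees. */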
+#define TEMP_TO_REG(val) (((val) < 0 ? ((val - 500) / 1000) : \
+ ((val + 500) / 1000)))
+
+#define TEMP_FROM_REG(val) ((val) * 1000)
+
+#define TEMP_FROM_REG_EXT(val, ext) (TEMP_FROM_REG(val) + (ext) * 125)
+
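+/* Fan speed conversion: the register holds a tach count, and the
+ * driver computes RPM as (11250 * 60) / (count * divisor); a count
+ * of 0 reads as 0 RPM. */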
+#define FAN_FROM_REG(reg, div) ((reg) ? (11250 * 60) / ((reg) * (div)) : 0)
+
+static int FAN_TO_REG(int reg, int div)
+{
+ int tmp;
+ tmp = FAN_FROM_REG(SENSORS_LIMIT(reg, 0, 65535), div);
+ return tmp > 255 ? 255 : tmp;
+}
+
+#define FAN_DIV_FROM_REG(reg) (1<<(((reg)&0xc0)>>6))
+
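+/* The PWM duty cycle is stored in 4 bits on the chip (0-15), while
+ * the sysfs interface uses a 0-255 scale, hence the shift by 4. */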
+#define PWM_TO_REG(val) (SENSORS_LIMIT((val), 0, 255) >> 4)
+#define PWM_FROM_REG(val) ((val) << 4)
+
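+/* Bits 7-5 of CONF1 select which temperature channel drives each fan
+ * in automatic mode; see the chan_select tables below. */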
+#define FAN_CHAN_FROM_REG(reg) (((reg) >> 5) & 7)
+#define FAN_CHAN_TO_REG(val, reg) \
+ (((reg) & 0x1F) | (((val) << 5) & 0xe0))
+
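+/* Auto temperature register layout: bits 7-3 hold T_min in units of
+ * 4 degrees C, and bits 2-0 encode the temperature range as
+ * 5 * 2^n degrees C. */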
+#define AUTO_TEMP_MIN_TO_REG(val, reg) \
+ ((((val)/500) & 0xf8)|((reg) & 0x7))
+#define AUTO_TEMP_RANGE_FROM_REG(reg) (5000 * (1<< ((reg)&0x7)))
+#define AUTO_TEMP_MIN_FROM_REG(reg) (1000 * ((((reg) >> 3) & 0x1f) << 2))
+
+#define AUTO_TEMP_MIN_FROM_REG_DEG(reg) ((((reg) >> 3) & 0x1f) << 2)
+
+#define AUTO_TEMP_OFF_FROM_REG(reg) \
+ (AUTO_TEMP_MIN_FROM_REG(reg) - 5000)
+
+#define AUTO_TEMP_MAX_FROM_REG(reg) \
+ (AUTO_TEMP_RANGE_FROM_REG(reg) + \
+ AUTO_TEMP_MIN_FROM_REG(reg))
+
+static int AUTO_TEMP_MAX_TO_REG(int val, int reg, int pwm)
+{
+	int ret;
+	int range;
+
+	range = ((val - AUTO_TEMP_MIN_FROM_REG(reg)) * 10) / (16 - pwm);
+ ret = ((reg & 0xf8) |
+ (range < 10000 ? 0 :
+ range < 20000 ? 1 :
+ range < 40000 ? 2 : range < 80000 ? 3 : 4));
+ return ret;
+}
+
+/* FAN auto control */
+#define GET_FAN_AUTO_BITFIELD(data, idx) \
+ (*(data)->chan_select_table)[FAN_CHAN_FROM_REG((data)->conf1)][idx%2]
+
+/* The tables below contain the possible values for the auto fan
+ * control bitfields. The index in the table is the register value.
+ * MSb is the auto fan control enable bit, so the first four entries
+ * in the table disable auto fan control when both bitfields are zero.
+ */
+static auto_chan_table_t auto_channel_select_table_adm1031 = {
+ {0, 0}, {0, 0}, {0, 0}, {0, 0},
+ {2 /*0b010 */ , 4 /*0b100 */ },
+ {2 /*0b010 */ , 2 /*0b010 */ },
+ {4 /*0b100 */ , 4 /*0b100 */ },
+ {7 /*0b111 */ , 7 /*0b111 */ },
+};
+
+static auto_chan_table_t auto_channel_select_table_adm1030 = {
+ {0, 0}, {0, 0}, {0, 0}, {0, 0},
+ {2 /*0b10 */ , 0},
+ {0xff /*invalid */ , 0},
+ {0xff /*invalid */ , 0},
+ {3 /*0b11 */ , 0},
+};
+
+/* This function checks whether a bitfield value is valid, taking the
+ * other bitfield into account, and returns the nearest match if no
+ * exact match is found.
+ */
+static int
+get_fan_auto_nearest(struct adm1031_data *data,
+ int chan, u8 val, u8 reg, u8 * new_reg)
+{
+ int i;
+ int first_match = -1, exact_match = -1;
+ u8 other_reg_val =
+ (*data->chan_select_table)[FAN_CHAN_FROM_REG(reg)][chan ? 0 : 1];
+
+ if (val == 0) {
+ *new_reg = 0;
+ return 0;
+ }
+
+ for (i = 0; i < 8; i++) {
+ if ((val == (*data->chan_select_table)[i][chan]) &&
+ ((*data->chan_select_table)[i][chan ? 0 : 1] ==
+ other_reg_val)) {
+ /* We found an exact match */
+ exact_match = i;
+ break;
+ } else if (val == (*data->chan_select_table)[i][chan] &&
+ first_match == -1) {
+			/* Save the first match in case an exact
+			 * match is not found
+			 */
+ first_match = i;
+ }
+ }
+
+ if (exact_match >= 0) {
+ *new_reg = exact_match;
+ } else if (first_match >= 0) {
+ *new_reg = first_match;
+ } else {
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static ssize_t show_fan_auto_channel(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", GET_FAN_AUTO_BITFIELD(data, nr));
+}
+
+static ssize_t
+set_fan_auto_channel(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ u8 reg;
+ int ret;
+ u8 old_fan_mode;
+
+ old_fan_mode = data->conf1;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+
+	if ((ret = get_fan_auto_nearest(data, nr, val, data->conf1, &reg))) {
+ up(&data->update_lock);
+ return ret;
+ }
+ if (((data->conf1 = FAN_CHAN_TO_REG(reg, data->conf1)) & ADM1031_CONF1_AUTO_MODE) ^
+ (old_fan_mode & ADM1031_CONF1_AUTO_MODE)) {
+		if (data->conf1 & ADM1031_CONF1_AUTO_MODE) {
+ /* Switch to Auto Fan Mode
+ * Save PWM registers
+ * Set PWM registers to 33% Both */
+ data->old_pwm[0] = data->pwm[0];
+ data->old_pwm[1] = data->pwm[1];
+ adm1031_write_value(client, ADM1031_REG_PWM, 0x55);
+ } else {
+ /* Switch to Manual Mode */
+ data->pwm[0] = data->old_pwm[0];
+ data->pwm[1] = data->old_pwm[1];
+ /* Restore PWM registers */
+ adm1031_write_value(client, ADM1031_REG_PWM,
+ data->pwm[0] | (data->pwm[1] << 4));
+ }
+ }
+ data->conf1 = FAN_CHAN_TO_REG(reg, data->conf1);
+ adm1031_write_value(client, ADM1031_REG_CONF1, data->conf1);
+ up(&data->update_lock);
+ return count;
+}
+
+#define fan_auto_channel_offset(offset) \
+static ssize_t show_fan_auto_channel_##offset (struct device *dev, char *buf) \
+{ \
+ return show_fan_auto_channel(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t set_fan_auto_channel_##offset (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_auto_channel(dev, buf, count, 0x##offset - 1); \
+} \
+static DEVICE_ATTR(auto_fan##offset##_channel, S_IRUGO | S_IWUSR, \
+ show_fan_auto_channel_##offset, \
+ set_fan_auto_channel_##offset)
+
+fan_auto_channel_offset(1);
+fan_auto_channel_offset(2);
+
+/* Auto Temps */
+static ssize_t show_auto_temp_off(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_OFF_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t show_auto_temp_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_MIN_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t
+set_auto_temp_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]);
+ adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr),
+ data->auto_temp[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t show_auto_temp_max(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_MAX_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t
+set_auto_temp_max(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], data->pwm[nr]);
+ adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr),
+ data->temp_max[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define auto_temp_reg(offset) \
+static ssize_t show_auto_temp_##offset##_off (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_off(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_auto_temp_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_min(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_auto_temp_##offset##_max (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_max(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t set_auto_temp_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_auto_temp_min(dev, buf, count, 0x##offset - 1); \
+} \
+static ssize_t set_auto_temp_##offset##_max (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_auto_temp_max(dev, buf, count, 0x##offset - 1); \
+} \
+static DEVICE_ATTR(auto_temp##offset##_off, S_IRUGO, \
+ show_auto_temp_##offset##_off, NULL); \
+static DEVICE_ATTR(auto_temp##offset##_min, S_IRUGO | S_IWUSR, \
+ show_auto_temp_##offset##_min, set_auto_temp_##offset##_min);\
+static DEVICE_ATTR(auto_temp##offset##_max, S_IRUGO | S_IWUSR, \
+ show_auto_temp_##offset##_max, set_auto_temp_##offset##_max)
+
+auto_temp_reg(1);
+auto_temp_reg(2);
+auto_temp_reg(3);
+
+/* pwm */
+static ssize_t show_pwm(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", PWM_FROM_REG(data->pwm[nr]));
+}
+static ssize_t
+set_pwm(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ int reg;
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ if ((data->conf1 & ADM1031_CONF1_AUTO_MODE) &&
+ (((val>>4) & 0xf) != 5)) {
+ /* In automatic mode, the only PWM accepted is 33% */
+ up(&data->update_lock);
+ return -EINVAL;
+ }
+ data->pwm[nr] = PWM_TO_REG(val);
+ reg = adm1031_read_value(client, ADM1031_REG_PWM);
+ adm1031_write_value(client, ADM1031_REG_PWM,
+ nr ? ((data->pwm[nr] << 4) & 0xf0) | (reg & 0xf)
+ : (data->pwm[nr] & 0xf) | (reg & 0xf0));
+ up(&data->update_lock);
+ return count;
+}
+
+#define pwm_reg(offset) \
+static ssize_t show_pwm_##offset (struct device *dev, char *buf) \
+{ \
+ return show_pwm(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t set_pwm_##offset (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_pwm(dev, buf, count, 0x##offset - 1); \
+} \
+static DEVICE_ATTR(fan##offset##_pwm, S_IRUGO | S_IWUSR, \
+ show_pwm_##offset, set_pwm_##offset)
+
+pwm_reg(1);
+pwm_reg(2);
+
+/* Fans */
+
+/*
+ * This function checks the cases where the fan reading is not
+ * relevant. It is used to report a fan speed of 0 when the fan is
+ * not supposed to run.
+ */
+static int trust_fan_readings(struct adm1031_data *data, int chan)
+{
+ int res = 0;
+
+ if (data->conf1 & ADM1031_CONF1_AUTO_MODE) {
+ switch (data->conf1 & 0x60) {
+		case 0x00: /* remote temp1 controls fan1, remote temp2 controls fan2 */
+ res = data->temp[chan+1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[chan+1]);
+ break;
+ case 0x20: /* remote temp1 controls both fans */
+ res =
+ data->temp[1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[1]);
+ break;
+ case 0x40: /* remote temp2 controls both fans */
+ res =
+ data->temp[2] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[2]);
+ break;
+ case 0x60: /* max controls both fans */
+ res =
+ data->temp[0] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[0])
+ || data->temp[1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[1])
+ || (data->chip_type == adm1031
+ && data->temp[2] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[2]));
+ break;
+ }
+ } else {
+ res = data->pwm[chan] > 0;
+ }
+ return res;
+}
+
+
+static ssize_t show_fan(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ int value;
+
+ value = trust_fan_readings(data, nr) ? FAN_FROM_REG(data->fan[nr],
+ FAN_DIV_FROM_REG(data->fan_div[nr])) : 0;
+ return sprintf(buf, "%d\n", value);
+}
+
+static ssize_t show_fan_div(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", FAN_DIV_FROM_REG(data->fan_div[nr]));
+}
+static ssize_t show_fan_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ FAN_FROM_REG(data->fan_min[nr],
+ FAN_DIV_FROM_REG(data->fan_div[nr])));
+}
+static ssize_t
+set_fan_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ if (val) {
+ data->fan_min[nr] =
+ FAN_TO_REG(val, FAN_DIV_FROM_REG(data->fan_div[nr]));
+ } else {
+ data->fan_min[nr] = 0xff;
+ }
+ adm1031_write_value(client, ADM1031_REG_FAN_MIN(nr), data->fan_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_fan_div(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ u8 tmp;
+ int old_div = FAN_DIV_FROM_REG(data->fan_div[nr]);
+ int new_min;
+
+ val = simple_strtol(buf, NULL, 10);
+ tmp = val == 8 ? 0xc0 :
+ val == 4 ? 0x80 :
+ val == 2 ? 0x40 :
+ val == 1 ? 0x00 :
+ 0xff;
+ if (tmp == 0xff)
+ return -EINVAL;
+ down(&data->update_lock);
+ data->fan_div[nr] = (tmp & 0xC0) | (0x3f & data->fan_div[nr]);
+ new_min = data->fan_min[nr] * old_div /
+ FAN_DIV_FROM_REG(data->fan_div[nr]);
+ data->fan_min[nr] = new_min > 0xff ? 0xff : new_min;
+ data->fan[nr] = data->fan[nr] * old_div /
+ FAN_DIV_FROM_REG(data->fan_div[nr]);
+
+ adm1031_write_value(client, ADM1031_REG_FAN_DIV(nr),
+ data->fan_div[nr]);
+ adm1031_write_value(client, ADM1031_REG_FAN_MIN(nr),
+ data->fan_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define fan_offset(offset) \
+static ssize_t show_fan_##offset (struct device *dev, char *buf) \
+{ \
+ return show_fan(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_fan_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_fan_min(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_fan_##offset##_div (struct device *dev, char *buf) \
+{ \
+ return show_fan_div(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t set_fan_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_min(dev, buf, count, 0x##offset - 1); \
+} \
+static ssize_t set_fan_##offset##_div (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_div(dev, buf, count, 0x##offset - 1); \
+} \
+static DEVICE_ATTR(fan##offset##_input, S_IRUGO, show_fan_##offset, \
+ NULL); \
+static DEVICE_ATTR(fan##offset##_min, S_IRUGO | S_IWUSR, \
+ show_fan_##offset##_min, set_fan_##offset##_min); \
+static DEVICE_ATTR(fan##offset##_div, S_IRUGO | S_IWUSR, \
+ show_fan_##offset##_div, set_fan_##offset##_div); \
+static DEVICE_ATTR(auto_fan##offset##_min_pwm, S_IRUGO | S_IWUSR, \
+ show_pwm_##offset, set_pwm_##offset)
+
+fan_offset(1);
+fan_offset(2);
+
+
+/* Temps */
+static ssize_t show_temp(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ int ext;
+ ext = nr == 0 ?
+ ((data->ext_temp[nr] >> 6) & 0x3) * 2 :
+ (((data->ext_temp[nr] >> ((nr - 1) * 3)) & 7));
+ return sprintf(buf, "%d\n", TEMP_FROM_REG_EXT(data->temp[nr], ext));
+}
+static ssize_t show_temp_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_min[nr]));
+}
+static ssize_t show_temp_max(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_max[nr]));
+}
+static ssize_t show_temp_crit(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_crit[nr]));
+}
+static ssize_t
+set_temp_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_min[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr),
+ data->temp_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_temp_max(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_max[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr),
+ data->temp_max[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_temp_crit(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_crit[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
+ data->temp_crit[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define temp_reg(offset) \
+static ssize_t show_temp_##offset (struct device *dev, char *buf) \
+{ \
+ return show_temp(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_temp_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_temp_min(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_temp_##offset##_max (struct device *dev, char *buf) \
+{ \
+ return show_temp_max(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t show_temp_##offset##_crit (struct device *dev, char *buf) \
+{ \
+ return show_temp_crit(dev, buf, 0x##offset - 1); \
+} \
+static ssize_t set_temp_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_min(dev, buf, count, 0x##offset - 1); \
+} \
+static ssize_t set_temp_##offset##_max (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_max(dev, buf, count, 0x##offset - 1); \
+} \
+static ssize_t set_temp_##offset##_crit (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_crit(dev, buf, count, 0x##offset - 1); \
+} \
+static DEVICE_ATTR(temp##offset##_input, S_IRUGO, show_temp_##offset, \
+ NULL); \
+static DEVICE_ATTR(temp##offset##_min, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_min, set_temp_##offset##_min); \
+static DEVICE_ATTR(temp##offset##_max, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_max, set_temp_##offset##_max); \
+static DEVICE_ATTR(temp##offset##_crit, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_crit, set_temp_##offset##_crit)
+
+temp_reg(1);
+temp_reg(2);
+temp_reg(3);
+
+/* Alarms */
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", data->alarm);
+}
+
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+
+static int adm1031_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, adm1031_detect);
+}
+
+/* This function is called by i2c_detect */
+static int adm1031_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct adm1031_data *data;
+ int err = 0;
+ const char *name = "";
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct adm1031_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct adm1031_data));
+
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &adm1031_driver;
+ new_client->flags = 0;
+
+ if (kind < 0) {
+ int id, co;
+ id = i2c_smbus_read_byte_data(new_client, 0x3d);
+ co = i2c_smbus_read_byte_data(new_client, 0x3e);
+
+ if (!((id == 0x31 || id == 0x30) && co == 0x41))
+ goto exit_free;
+ kind = (id == 0x30) ? adm1030 : adm1031;
+ }
+
+ if (kind <= 0)
+ kind = adm1031;
+
+ /* Given the detected chip type, set the chip name and the
+ * auto fan control helper table. */
+ if (kind == adm1030) {
+ name = "adm1030";
+ data->chan_select_table = &auto_channel_select_table_adm1030;
+ } else if (kind == adm1031) {
+ name = "adm1031";
+ data->chan_select_table = &auto_channel_select_table_adm1031;
+ }
+ data->chip_type = kind;
+
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+
+ new_client->id = adm1031_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the ADM1031 chip */
+ adm1031_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_fan1_input);
+ device_create_file(&new_client->dev, &dev_attr_fan1_div);
+ device_create_file(&new_client->dev, &dev_attr_fan1_min);
+ device_create_file(&new_client->dev, &dev_attr_fan1_pwm);
+ device_create_file(&new_client->dev, &dev_attr_auto_fan1_channel);
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_max);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_max);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_fan1_min_pwm);
+
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+
+ if (kind == adm1031) {
+ device_create_file(&new_client->dev, &dev_attr_fan2_input);
+ device_create_file(&new_client->dev, &dev_attr_fan2_div);
+ device_create_file(&new_client->dev, &dev_attr_fan2_min);
+ device_create_file(&new_client->dev, &dev_attr_fan2_pwm);
+ device_create_file(&new_client->dev,
+ &dev_attr_auto_fan2_channel);
+ device_create_file(&new_client->dev, &dev_attr_temp3_input);
+ device_create_file(&new_client->dev, &dev_attr_temp3_min);
+ device_create_file(&new_client->dev, &dev_attr_temp3_max);
+ device_create_file(&new_client->dev, &dev_attr_temp3_crit);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_max);
+ device_create_file(&new_client->dev, &dev_attr_auto_fan2_min_pwm);
+ }
+
+ return 0;
+
+exit_free:
+	kfree(data);
+exit:
+ return err;
+}
+
+static int adm1031_detach_client(struct i2c_client *client)
+{
+ int ret;
+ if ((ret = i2c_detach_client(client)) != 0) {
+ return ret;
+ }
+	kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static void adm1031_init_client(struct i2c_client *client)
+{
+ unsigned int read_val;
+ unsigned int mask;
+ struct adm1031_data *data = i2c_get_clientdata(client);
+
+ mask = (ADM1031_CONF2_PWM1_ENABLE | ADM1031_CONF2_TACH1_ENABLE);
+ if (data->chip_type == adm1031) {
+ mask |= (ADM1031_CONF2_PWM2_ENABLE |
+ ADM1031_CONF2_TACH2_ENABLE);
+ }
+	/* Initialize the ADM1031 chip (enables fan speed reading) */
+ read_val = adm1031_read_value(client, ADM1031_REG_CONF2);
+ if ((read_val | mask) != read_val) {
+ adm1031_write_value(client, ADM1031_REG_CONF2, read_val | mask);
+ }
+
+ read_val = adm1031_read_value(client, ADM1031_REG_CONF1);
+ if ((read_val | ADM1031_CONF1_MONITOR_ENABLE) != read_val) {
+ adm1031_write_value(client, ADM1031_REG_CONF1, read_val |
+ ADM1031_CONF1_MONITOR_ENABLE);
+ }
+
+}
+
+static struct adm1031_data *adm1031_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int chan;
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ + HZ / 2) ||
+ (jiffies < data->last_updated) || !data->valid) {
+
+ dev_dbg(&client->dev, "Starting adm1031 update\n");
+ for (chan = 0;
+ chan < ((data->chip_type == adm1031) ? 3 : 2); chan++) {
+ u8 oldh, newh;
+
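+			/* The 8-bit temperature register and the shared
+			 * extended-resolution register cannot be read
+			 * atomically, so read the main register on both
+			 * sides of the extended read and re-read if it
+			 * changed in between. */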
+ oldh =
+ adm1031_read_value(client, ADM1031_REG_TEMP(chan));
+ data->ext_temp[chan] =
+ adm1031_read_value(client, ADM1031_REG_EXT_TEMP);
+ newh =
+ adm1031_read_value(client, ADM1031_REG_TEMP(chan));
+ if (newh != oldh) {
+ data->ext_temp[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_EXT_TEMP);
+#ifdef DEBUG
+ oldh =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP(chan));
+
+ /* oldh is actually newer */
+ if (newh != oldh)
+ dev_warn(&client->dev,
+ "Remote temperature may be "
+ "wrong.\n");
+#endif
+ }
+ data->temp[chan] = newh;
+
+ data->temp_min[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_MIN(chan));
+ data->temp_max[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_MAX(chan));
+ data->temp_crit[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_CRIT(chan));
+ data->auto_temp[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_AUTO_TEMP(chan));
+
+ }
+
+ data->conf1 = adm1031_read_value(client, ADM1031_REG_CONF1);
+ data->conf2 = adm1031_read_value(client, ADM1031_REG_CONF2);
+
+ data->alarm = adm1031_read_value(client, ADM1031_REG_STATUS(0))
+ | (adm1031_read_value(client, ADM1031_REG_STATUS(1))
+ << 8);
+ if (data->chip_type == adm1030) {
+ data->alarm &= 0xc0ff;
+ }
+
+		for (chan = 0; chan < (data->chip_type == adm1030 ? 1 : 2); chan++) {
+ data->fan_div[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_DIV(chan));
+ data->fan_min[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_MIN(chan));
+ data->fan[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_SPEED(chan));
+ data->pwm[chan] =
+ 0xf & (adm1031_read_value(client, ADM1031_REG_PWM) >>
+ (4*chan));
+ }
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_adm1031_init(void)
+{
+ return i2c_add_driver(&adm1031_driver);
+}
+
+static void __exit sensors_adm1031_exit(void)
+{
+ i2c_del_driver(&adm1031_driver);
+}
+
+MODULE_AUTHOR("Alexandre d'Alton <alex@alexdalton.org>");
+MODULE_DESCRIPTION("ADM1031/ADM1030 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_adm1031_init);
+module_exit(sensors_adm1031_exit);
--- /dev/null
+/*
+ lm77.c - Part of lm_sensors, Linux kernel modules for hardware
+ monitoring
+
+ Copyright (c) 2004 Andras BALI <drewie@freemail.hu>
+
+ Heavily based on lm75.c by Frodo Looijaard <frodol@dds.nl>. The LM77
+ is a temperature sensor and thermal window comparator with 0.5 deg
+ resolution made by National Semiconductor. Complete datasheet can be
+ obtained at their site:
+ http://www.national.com/pf/LM/LM77.html
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+
+/* Addresses to scan */
+static unsigned short normal_i2c[] = { I2C_CLIENT_END };
+static unsigned short normal_i2c_range[] = { 0x48, 0x4b, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+static unsigned int normal_isa_range[] = { I2C_CLIENT_ISA_END };
+
+/* Insmod parameters */
+SENSORS_INSMOD_1(lm77);
+
+/* The LM77 registers */
+#define LM77_REG_TEMP 0x00
+#define LM77_REG_CONF 0x01
+#define LM77_REG_TEMP_HYST 0x02
+#define LM77_REG_TEMP_CRIT 0x03
+#define LM77_REG_TEMP_MIN 0x04
+#define LM77_REG_TEMP_MAX 0x05
+
+/* Each client has this additional data */
+struct lm77_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid;
+ unsigned long last_updated; /* In jiffies */
+ int temp_input; /* Temperatures */
+ int temp_crit;
+ int temp_min;
+ int temp_max;
+ int temp_hyst;
+ u8 alarms;
+};
+
+static int lm77_attach_adapter(struct i2c_adapter *adapter);
+static int lm77_detect(struct i2c_adapter *adapter, int address, int kind);
+static void lm77_init_client(struct i2c_client *client);
+static int lm77_detach_client(struct i2c_client *client);
+static u16 lm77_read_value(struct i2c_client *client, u8 reg);
+static int lm77_write_value(struct i2c_client *client, u8 reg, u16 value);
+
+static struct lm77_data *lm77_update_device(struct device *dev);
+
+
+/* This is the driver that will be inserted */
+static struct i2c_driver lm77_driver = {
+ .owner = THIS_MODULE,
+ .name = "lm77",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = lm77_attach_adapter,
+ .detach_client = lm77_detach_client,
+};
+
+static int lm77_id = 0;
+
+/* straight from the datasheet */
+#define LM77_TEMP_MIN (-55000)
+#define LM77_TEMP_MAX 125000
+
+/* In the temperature registers, the low 3 bits are not part of the
+ temperature values; they are the status bits. */
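+/* For example, 25500 (25.5 degrees C) maps to (25500 / 500) * 8 = 0x198,
+   and 0x198 maps back to (0x198 / 8) * 500 = 25500. */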
+static inline u16 LM77_TEMP_TO_REG(int temp)
+{
+ int ntemp = SENSORS_LIMIT(temp, LM77_TEMP_MIN, LM77_TEMP_MAX);
+ return (u16)((ntemp / 500) * 8);
+}
+
+static inline int LM77_TEMP_FROM_REG(u16 reg)
+{
+ return ((int)reg / 8) * 500;
+}
+
+/* sysfs stuff */
+
+/* read routines for temperature limits */
+#define show(value) \
+static ssize_t show_##value(struct device *dev, char *buf) \
+{ \
+ struct lm77_data *data = lm77_update_device(dev); \
+ return sprintf(buf, "%d\n", data->value); \
+}
+
+show(temp_input);
+show(temp_crit);
+show(temp_min);
+show(temp_max);
+show(alarms);
+
+/* read routines for hysteresis values */
+static ssize_t show_temp_crit_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_crit - data->temp_hyst);
+}
+static ssize_t show_temp_min_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_min + data->temp_hyst);
+}
+static ssize_t show_temp_max_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_max - data->temp_hyst);
+}
+
+/* write routines */
+#define set(value, reg) \
+static ssize_t set_##value(struct device *dev, const char *buf, size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct lm77_data *data = i2c_get_clientdata(client); \
+ data->value = simple_strtoul(buf, NULL, 10); \
+ lm77_write_value(client, reg, LM77_TEMP_TO_REG(data->value)); \
+ return count; \
+}
+
+set(temp_min, LM77_REG_TEMP_MIN);
+set(temp_max, LM77_REG_TEMP_MAX);
+
+/* hysteresis is stored as a relative value on the chip, so it has to be
+ converted first */
+static ssize_t set_temp_crit_hyst(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+ data->temp_hyst = data->temp_crit - simple_strtoul(buf, NULL, 10);
+ lm77_write_value(client, LM77_REG_TEMP_HYST,
+ LM77_TEMP_TO_REG(data->temp_hyst));
+ return count;
+}
+
+/* preserve hysteresis when setting T_crit */
+static ssize_t set_temp_crit(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+ int oldcrithyst = data->temp_crit - data->temp_hyst;
+ data->temp_crit = simple_strtoul(buf, NULL, 10);
+ data->temp_hyst = data->temp_crit - oldcrithyst;
+ lm77_write_value(client, LM77_REG_TEMP_CRIT,
+ LM77_TEMP_TO_REG(data->temp_crit));
+ lm77_write_value(client, LM77_REG_TEMP_HYST,
+ LM77_TEMP_TO_REG(data->temp_hyst));
+ return count;
+}
+
+static DEVICE_ATTR(temp1_input, S_IRUGO,
+ show_temp_input, NULL);
+static DEVICE_ATTR(temp1_crit, S_IWUSR | S_IRUGO,
+ show_temp_crit, set_temp_crit);
+static DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO,
+ show_temp_min, set_temp_min);
+static DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO,
+ show_temp_max, set_temp_max);
+
+static DEVICE_ATTR(temp1_crit_hyst, S_IWUSR | S_IRUGO,
+ show_temp_crit_hyst, set_temp_crit_hyst);
+static DEVICE_ATTR(temp1_min_hyst, S_IRUGO,
+ show_temp_min_hyst, NULL);
+static DEVICE_ATTR(temp1_max_hyst, S_IRUGO,
+ show_temp_max_hyst, NULL);
+
+static DEVICE_ATTR(alarms, S_IRUGO,
+ show_alarms, NULL);
+
+static int lm77_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, lm77_detect);
+}
+
+/* This function is called by i2c_detect */
+static int lm77_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct lm77_data *data;
+ int err = 0;
+ const char *name = "";
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+ I2C_FUNC_SMBUS_WORD_DATA))
+ goto exit;
+
+ /* OK. For now, we presume we have a valid client. We now create the
+ client structure, even though we cannot fill it completely yet.
+ But it allows us to access lm77_{read,write}_value. */
+ if (!(data = kmalloc(sizeof(struct lm77_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct lm77_data));
+
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &lm77_driver;
+ new_client->flags = 0;
+
+ /* Here comes the remaining detection. Since the LM77 has no
+ register dedicated to identification, we have to rely on the
+ following tricks:
+
+ 1. the high 4 bits represent the sign and thus they should
+ always be the same
+ 2. the high 3 bits are unused in the configuration register
+ 3. addresses 0x06 and 0x07 return the last read value
+	   4. the registers cycle over 8-address boundaries
+
+ Word-sized registers are high-byte first. */
+ if (kind < 0) {
+ int i, cur, conf, hyst, crit, min, max;
+
+ /* addresses cycling */
+ cur = i2c_smbus_read_word_data(new_client, 0);
+ conf = i2c_smbus_read_byte_data(new_client, 1);
+ hyst = i2c_smbus_read_word_data(new_client, 2);
+ crit = i2c_smbus_read_word_data(new_client, 3);
+ min = i2c_smbus_read_word_data(new_client, 4);
+ max = i2c_smbus_read_word_data(new_client, 5);
+ for (i = 8; i <= 0xff; i += 8)
+ if (i2c_smbus_read_byte_data(new_client, i + 1) != conf
+ || i2c_smbus_read_word_data(new_client, i + 2) != hyst
+ || i2c_smbus_read_word_data(new_client, i + 3) != crit
+ || i2c_smbus_read_word_data(new_client, i + 4) != min
+ || i2c_smbus_read_word_data(new_client, i + 5) != max)
+ goto exit_free;
+
+ /* sign bits */
+ if (((cur & 0x00f0) != 0xf0 && (cur & 0x00f0) != 0x0)
+ || ((hyst & 0x00f0) != 0xf0 && (hyst & 0x00f0) != 0x0)
+ || ((crit & 0x00f0) != 0xf0 && (crit & 0x00f0) != 0x0)
+ || ((min & 0x00f0) != 0xf0 && (min & 0x00f0) != 0x0)
+ || ((max & 0x00f0) != 0xf0 && (max & 0x00f0) != 0x0))
+ goto exit_free;
+
+ /* unused bits */
+ if (conf & 0xe0)
+ goto exit_free;
+
+ /* 0x06 and 0x07 return the last read value */
+ cur = i2c_smbus_read_word_data(new_client, 0);
+ if (i2c_smbus_read_word_data(new_client, 6) != cur
+ || i2c_smbus_read_word_data(new_client, 7) != cur)
+ goto exit_free;
+ hyst = i2c_smbus_read_word_data(new_client, 2);
+ if (i2c_smbus_read_word_data(new_client, 6) != hyst
+ || i2c_smbus_read_word_data(new_client, 7) != hyst)
+ goto exit_free;
+ min = i2c_smbus_read_word_data(new_client, 4);
+ if (i2c_smbus_read_word_data(new_client, 6) != min
+ || i2c_smbus_read_word_data(new_client, 7) != min)
+ goto exit_free;
+
+ }
+
+ /* Determine the chip type - only one kind supported! */
+ if (kind <= 0)
+ kind = lm77;
+
+ if (kind == lm77) {
+ name = "lm77";
+ }
+
+ /* Fill in the remaining client fields and put it into the global list */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+
+ new_client->id = lm77_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the LM77 chip */
+ lm77_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit_hyst);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min_hyst);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max_hyst);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static int lm77_detach_client(struct i2c_client *client)
+{
+ i2c_detach_client(client);
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+/* All registers are word-sized, except for the configuration register.
+ The LM77 uses the high-byte first convention. */
+static u16 lm77_read_value(struct i2c_client *client, u8 reg)
+{
+ if (reg == LM77_REG_CONF)
+ return i2c_smbus_read_byte_data(client, reg);
+ else
+ return swab16(i2c_smbus_read_word_data(client, reg));
+}
+
+static int lm77_write_value(struct i2c_client *client, u8 reg, u16 value)
+{
+ if (reg == LM77_REG_CONF)
+ return i2c_smbus_write_byte_data(client, reg, value);
+ else
+ return i2c_smbus_write_word_data(client, reg, swab16(value));
+}
+
+static void lm77_init_client(struct i2c_client *client)
+{
+ /* Initialize the LM77 chip - turn off shutdown mode */
+ int conf = lm77_read_value(client, LM77_REG_CONF);
+ if (conf & 1)
+ lm77_write_value(client, LM77_REG_CONF, conf & 0xfe);
+}
+
+static struct lm77_data *lm77_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ + HZ / 2) ||
+ (jiffies < data->last_updated) || !data->valid) {
+ dev_dbg(&client->dev, "Starting lm77 update\n");
+ data->temp_input =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP));
+ data->temp_hyst =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_HYST));
+ data->temp_crit =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_CRIT));
+ data->temp_min =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_MIN));
+ data->temp_max =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_MAX));
+ data->alarms =
+ lm77_read_value(client, LM77_REG_TEMP) & 0x0007;
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_lm77_init(void)
+{
+ return i2c_add_driver(&lm77_driver);
+}
+
+static void __exit sensors_lm77_exit(void)
+{
+ i2c_del_driver(&lm77_driver);
+}
+
+MODULE_AUTHOR("Andras BALI <drewie@freemail.hu>");
+MODULE_DESCRIPTION("LM77 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_lm77_init);
+module_exit(sensors_lm77_exit);
--- /dev/null
+/*
+ * max1619.c - Part of lm_sensors, Linux kernel modules for hardware
+ * monitoring
+ * Copyright (C) 2003-2004 Alexey Fisher <fishor@mail.ru>
+ * Jean Delvare <khali@linux-fr.org>
+ *
+ * Based on the lm90 driver. The MAX1619 is a sensor chip made by Maxim.
+ * It reports up to two temperatures (its own plus up to
+ * one external one). Complete datasheet can be
+ * obtained from Maxim's website at:
+ * http://pdfserv.maxim-ic.com/en/ds/MAX1619.pdf
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+
+static unsigned short normal_i2c[] = { I2C_CLIENT_END };
+static unsigned short normal_i2c_range[] = { 0x18, 0x1a, 0x29, 0x2b,
+ 0x4c, 0x4e, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+static unsigned int normal_isa_range[] = { I2C_CLIENT_ISA_END };
+
+/*
+ * Insmod parameters
+ */
+
+SENSORS_INSMOD_1(max1619);
+
+/*
+ * The MAX1619 registers
+ */
+
+#define MAX1619_REG_R_MAN_ID 0xFE
+#define MAX1619_REG_R_CHIP_ID 0xFF
+#define MAX1619_REG_R_CONFIG 0x03
+#define MAX1619_REG_W_CONFIG 0x09
+#define MAX1619_REG_R_CONVRATE 0x04
+#define MAX1619_REG_W_CONVRATE 0x0A
+#define MAX1619_REG_R_STATUS 0x02
+#define MAX1619_REG_R_LOCAL_TEMP 0x00
+#define MAX1619_REG_R_REMOTE_TEMP 0x01
+#define MAX1619_REG_R_REMOTE_HIGH 0x07
+#define MAX1619_REG_W_REMOTE_HIGH 0x0D
+#define MAX1619_REG_R_REMOTE_LOW 0x08
+#define MAX1619_REG_W_REMOTE_LOW 0x0E
+#define MAX1619_REG_R_REMOTE_CRIT 0x10
+#define MAX1619_REG_W_REMOTE_CRIT 0x12
+#define MAX1619_REG_R_TCRIT_HYST 0x11
+#define MAX1619_REG_W_TCRIT_HYST 0x13
+
+/*
+ * Conversions and various macros
+ */
+
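+/* Temperatures are stored on the chip as 8-bit two's complement values
+ * in degrees Celsius; sysfs uses millidegrees, hence the factor of 1000. */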
+#define TEMP_FROM_REG(val)	(((val) & 0x80 ? (val) - 0x100 : (val)) * 1000)
+#define TEMP_TO_REG(val)	(((val) < 0 ? (val) + 0x100 * 1000 : (val)) / 1000)
+
+/*
+ * Functions declaration
+ */
+
+static int max1619_attach_adapter(struct i2c_adapter *adapter);
+static int max1619_detect(struct i2c_adapter *adapter, int address,
+ int kind);
+static void max1619_init_client(struct i2c_client *client);
+static int max1619_detach_client(struct i2c_client *client);
+static struct max1619_data *max1619_update_device(struct device *dev);
+
+/*
+ * Driver data (common to all clients)
+ */
+
+static struct i2c_driver max1619_driver = {
+ .owner = THIS_MODULE,
+ .name = "max1619",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = max1619_attach_adapter,
+ .detach_client = max1619_detach_client,
+};
+
+/*
+ * Client data (each client gets its own)
+ */
+
+struct max1619_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid; /* zero until following fields are valid */
+ unsigned long last_updated; /* in jiffies */
+
+ /* registers values */
+ u8 temp_input1; /* local */
+ u8 temp_input2, temp_low2, temp_high2; /* remote */
+ u8 temp_crit2;
+ u8 temp_hyst2;
+ u8 alarms;
+};
+
+/*
+ * Internal variables
+ */
+
+static int max1619_id = 0;
+
+/*
+ * Sysfs stuff
+ */
+
+#define show_temp(value) \
+static ssize_t show_##value(struct device *dev, char *buf) \
+{ \
+ struct max1619_data *data = max1619_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->value)); \
+}
+show_temp(temp_input1);
+show_temp(temp_input2);
+show_temp(temp_low2);
+show_temp(temp_high2);
+show_temp(temp_crit2);
+show_temp(temp_hyst2);
+
+#define set_temp2(value, reg) \
+static ssize_t set_##value(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct max1619_data *data = i2c_get_clientdata(client); \
+ data->value = TEMP_TO_REG(simple_strtol(buf, NULL, 10)); \
+ i2c_smbus_write_byte_data(client, reg, data->value); \
+ return count; \
+}
+
+set_temp2(temp_low2, MAX1619_REG_W_REMOTE_LOW);
+set_temp2(temp_high2, MAX1619_REG_W_REMOTE_HIGH);
+set_temp2(temp_crit2, MAX1619_REG_W_REMOTE_CRIT);
+set_temp2(temp_hyst2, MAX1619_REG_W_TCRIT_HYST);
+
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct max1619_data *data = max1619_update_device(dev);
+ return sprintf(buf, "%d\n", data->alarms);
+}
+
+static DEVICE_ATTR(temp1_input, S_IRUGO, show_temp_input1, NULL);
+static DEVICE_ATTR(temp2_input, S_IRUGO, show_temp_input2, NULL);
+static DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp_low2,
+ set_temp_low2);
+static DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp_high2,
+ set_temp_high2);
+static DEVICE_ATTR(temp2_crit, S_IWUSR | S_IRUGO, show_temp_crit2,
+ set_temp_crit2);
+static DEVICE_ATTR(temp2_crit_hyst, S_IWUSR | S_IRUGO, show_temp_hyst2,
+ set_temp_hyst2);
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+/*
+ * Real code
+ */
+
+static int max1619_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, max1619_detect);
+}
+
+/*
+ * The following function does more than just detection. If detection
+ * succeeds, it also registers the new chip.
+ */
+static int max1619_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct max1619_data *data;
+ int err = 0;
+ const char *name = "";
+	u8 reg_config = 0, reg_convrate = 0, reg_status = 0;
+ u8 man_id, chip_id;
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct max1619_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct max1619_data));
+
+ /* The common I2C client data is placed right before the
+ MAX1619-specific data. */
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &max1619_driver;
+ new_client->flags = 0;
+
+ /*
+ * Now we do the remaining detection. A negative kind means that
+ * the driver was loaded with no force parameter (default), so we
+ * must both detect and identify the chip. A zero kind means that
+ * the driver was loaded with the force parameter, the detection
+ * step shall be skipped. A positive kind means that the driver
+ * was loaded with the force parameter and a given kind of chip is
+ * requested, so both the detection and the identification steps
+ * are skipped.
+ */
+ if (kind < 0) { /* detection */
+ reg_config = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CONFIG);
+ reg_convrate = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CONVRATE);
+ reg_status = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_STATUS);
+		if ((reg_config & 0x03) != 0x00
+		 || reg_convrate > 0x07 || (reg_status & 0x61) != 0x00) {
+ dev_dbg(&adapter->dev,
+ "MAX1619 detection failed at 0x%02x.\n",
+ address);
+ goto exit_free;
+ }
+ }
+
+ if (kind <= 0) { /* identification */
+
+ man_id = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_MAN_ID);
+ chip_id = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CHIP_ID);
+
+		if ((man_id == 0x4D) && (chip_id == 0x04)) {
+ kind = max1619;
+ }
+ }
+
+ if (kind <= 0) { /* identification failed */
+ dev_info(&adapter->dev,
+ "Unsupported chip (man_id=0x%02X, "
+ "chip_id=0x%02X).\n", man_id, chip_id);
+ goto exit_free;
+ }
+
+
+	if (kind == max1619) {
+ name = "max1619";
+ }
+
+ /* We can fill in the remaining client fields */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+ new_client->id = max1619_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the MAX1619 chip */
+ max1619_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit_hyst);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
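+/* Start the conversions: program a 2 Hz conversion rate and, if the
+ * stop bit (0x40) of the configuration register is set, clear it so
+ * the chip runs. */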
+static void max1619_init_client(struct i2c_client *client)
+{
+ u8 config;
+
+ /*
+ * Start the conversions.
+ */
+ i2c_smbus_write_byte_data(client, MAX1619_REG_W_CONVRATE,
+ 5); /* 2 Hz */
+ config = i2c_smbus_read_byte_data(client, MAX1619_REG_R_CONFIG);
+ if (config & 0x40)
+ i2c_smbus_write_byte_data(client, MAX1619_REG_W_CONFIG,
+ config & 0xBF); /* run */
+}
+
+static int max1619_detach_client(struct i2c_client *client)
+{
+ int err;
+
+ if ((err = i2c_detach_client(client))) {
+ dev_err(&client->dev, "Client deregistration failed, "
+ "client not detached.\n");
+ return err;
+ }
+
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static struct max1619_data *max1619_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct max1619_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ * 2) ||
+ (jiffies < data->last_updated) ||
+ !data->valid) {
+
+ dev_dbg(&client->dev, "Updating max1619 data.\n");
+ data->temp_input1 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_LOCAL_TEMP);
+ data->temp_input2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_TEMP);
+ data->temp_high2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_HIGH);
+ data->temp_low2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_LOW);
+ data->temp_crit2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_CRIT);
+ data->temp_hyst2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_TCRIT_HYST);
+ data->alarms = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_STATUS);
+
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_max1619_init(void)
+{
+ return i2c_add_driver(&max1619_driver);
+}
+
+static void __exit sensors_max1619_exit(void)
+{
+ i2c_del_driver(&max1619_driver);
+}
+
+MODULE_AUTHOR("Alexey Fisher <fishor@mail.ru> and"
+ "Jean Delvare <khali@linux-fr.org>");
+MODULE_DESCRIPTION("MAX1619 sensor driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_max1619_init);
+module_exit(sensors_max1619_exit);
--- /dev/null
+/*
+ * linux/drivers/i2c/chips/rtc8564.c
+ *
+ * Copyright (C) 2002-2004 Stefan Eletzhofer
+ *
+ * based on linux/drivers/acorn/char/pcf8583.c
+ * Copyright (C) 2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Driver for system3's EPSON RTC 8564 chip
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/rtc.h> /* get the user-level API */
+#include <linux/init.h>
+
+#include "rtc8564.h"
+
+#ifdef DEBUG
+# define _DBG(x, fmt, args...) do { if (debug >= x) printk(KERN_DEBUG "%s: " fmt "\n", __FUNCTION__, ##args); } while (0)
+#else
+# define _DBG(x, fmt, args...) do { } while (0)
+#endif
+
+#define _DBGRTCTM(x, rtctm) do { if (debug >= x) printk("%s: secs=%d, mins=%d, hours=%d, mday=%d, " \
+	"mon=%d, year=%d, wday=%d VL=%d\n", __FUNCTION__, \
+	(rtctm).secs, (rtctm).mins, (rtctm).hours, (rtctm).mday, \
+	(rtctm).mon, (rtctm).year, (rtctm).wday, (rtctm).vl); } while (0)
+
+struct rtc8564_data {
+ struct i2c_client client;
+ u16 ctrl;
+};
+
+static inline u8 _rtc8564_ctrl1(struct i2c_client *client)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ return data->ctrl & 0xff;
+}
+static inline u8 _rtc8564_ctrl2(struct i2c_client *client)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ return (data->ctrl & 0xff00) >> 8;
+}
+
+#define CTRL1(c) _rtc8564_ctrl1(c)
+#define CTRL2(c) _rtc8564_ctrl2(c)
+
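+/* The RTC stores time values in BCD; convert to and from binary. */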
+#define BCD_TO_BIN(val) (((val)&15) + ((val)>>4)*10)
+#define BIN_TO_BCD(val) ((((val)/10)<<4) + (val)%10)
+
+static int debug = 0;
+MODULE_PARM(debug, "i");
+
+static struct i2c_driver rtc8564_driver;
+
+static unsigned short ignore[] = { I2C_CLIENT_END };
+static unsigned short normal_addr[] = { 0x51, I2C_CLIENT_END };
+
+static struct i2c_client_address_data addr_data = {
+ .normal_i2c = normal_addr,
+ .normal_i2c_range = ignore,
+ .probe = ignore,
+ .probe_range = ignore,
+ .ignore = ignore,
+ .ignore_range = ignore,
+ .force = ignore,
+};
+
+static int rtc8564_read_mem(struct i2c_client *client, struct mem *mem);
+static int rtc8564_write_mem(struct i2c_client *client, struct mem *mem);
+
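+/* Read len bytes starting at register adr, using a write of the
+ * register address followed by a read in one combined I2C transfer. */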
+static int rtc8564_read(struct i2c_client *client, unsigned char adr,
+ unsigned char *buf, unsigned char len)
+{
+ int ret = -EIO;
+ unsigned char addr[1] = { adr };
+ struct i2c_msg msgs[2] = {
+ {client->addr, 0, 1, addr},
+ {client->addr, I2C_M_RD, len, buf}
+ };
+
+ _DBG(1, "client=%p, adr=%d, buf=%p, len=%d", client, adr, buf, len);
+
+ if (!buf || !client) {
+ ret = -EINVAL;
+ goto done;
+ }
+
+ ret = i2c_transfer(client->adapter, msgs, 2);
+ if (ret == 2) {
+ ret = 0;
+ }
+
+done:
+ return ret;
+}
+
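+/* Write len bytes (at most 15) starting at register adr. The register
+ * address is prepended to the payload in a local buffer so the whole
+ * write goes out as a single I2C message. */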
+static int rtc8564_write(struct i2c_client *client, unsigned char adr,
+ unsigned char *data, unsigned char len)
+{
+ int ret = 0;
+ unsigned char _data[16];
+ struct i2c_msg wr;
+ int i;
+
+ if (!client || !data || len > 15) {
+ ret = -EINVAL;
+ goto done;
+ }
+
+ _DBG(1, "client=%p, adr=%d, buf=%p, len=%d", client, adr, data, len);
+
+ _data[0] = adr;
+ for (i = 0; i < len; i++) {
+ _data[i + 1] = data[i];
+ _DBG(5, "data[%d] = 0x%02x (%d)", i, data[i], data[i]);
+ }
+
+ wr.addr = client->addr;
+ wr.flags = 0;
+ wr.len = len + 1;
+ wr.buf = _data;
+
+ ret = i2c_transfer(client->adapter, &wr, 1);
+ if (ret == 1) {
+ ret = 0;
+ }
+
+done:
+ return ret;
+}
+
+static int rtc8564_attach(struct i2c_adapter *adap, int addr, int kind)
+{
+ int ret;
+ struct i2c_client *new_client;
+ struct rtc8564_data *d;
+ unsigned char data[10];
+ unsigned char ad[1] = { 0 };
+ struct i2c_msg ctrl_wr[1] = {
+ {addr, 0, 2, data}
+ };
+ struct i2c_msg ctrl_rd[2] = {
+ {addr, 0, 1, ad},
+ {addr, I2C_M_RD, 2, data}
+ };
+
+ d = kmalloc(sizeof(struct rtc8564_data), GFP_KERNEL);
+ if (!d) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ memset(d, 0, sizeof(struct rtc8564_data));
+ new_client = &d->client;
+
+ strlcpy(new_client->name, "RTC8564", I2C_NAME_SIZE);
+ i2c_set_clientdata(new_client, d);
+ new_client->id = rtc8564_driver.id;
+ new_client->flags = I2C_CLIENT_ALLOW_USE | I2C_DF_NOTIFY;
+ new_client->addr = addr;
+ new_client->adapter = adap;
+ new_client->driver = &rtc8564_driver;
+
+ _DBG(1, "client=%p", new_client);
+ _DBG(1, "client.id=%d", new_client->id);
+
+ /* init ctrl1 reg */
+ data[0] = 0;
+ data[1] = 0;
+ ret = i2c_transfer(new_client->adapter, ctrl_wr, 1);
+ if (ret != 1) {
+ printk(KERN_INFO "rtc8564: cant init ctrl1\n");
+ ret = -ENODEV;
+ goto done;
+ }
+
+ /* read back ctrl1 and ctrl2 */
+ ret = i2c_transfer(new_client->adapter, ctrl_rd, 2);
+ if (ret != 2) {
+ printk(KERN_INFO "rtc8564: cant read ctrl\n");
+ ret = -ENODEV;
+ goto done;
+ }
+
+ d->ctrl = data[0] | (data[1] << 8);
+
+ _DBG(1, "RTC8564_REG_CTRL1=%02x, RTC8564_REG_CTRL2=%02x",
+ data[0], data[1]);
+
+ ret = i2c_attach_client(new_client);
+done:
+ if (ret) {
+ kfree(d);
+ }
+ return ret;
+}
+
+static int rtc8564_probe(struct i2c_adapter *adap)
+{
+ return i2c_probe(adap, &addr_data, rtc8564_attach);
+}
+
+static int rtc8564_detach(struct i2c_client *client)
+{
+ i2c_detach_client(client);
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static int rtc8564_get_datetime(struct i2c_client *client, struct rtc_tm *dt)
+{
+ int ret = -EIO;
+ unsigned char buf[15];
+
+ _DBG(1, "client=%p, dt=%p", client, dt);
+
+ if (!dt || !client)
+ return -EINVAL;
+
+ memset(buf, 0, sizeof(buf));
+
+ ret = rtc8564_read(client, 0, buf, 15);
+ if (ret)
+ return ret;
+
+ /* century stored in minute alarm reg */
+ dt->year = BCD_TO_BIN(buf[RTC8564_REG_YEAR]);
+ dt->year += 100 * BCD_TO_BIN(buf[RTC8564_REG_AL_MIN] & 0x3f);
+ dt->mday = BCD_TO_BIN(buf[RTC8564_REG_DAY] & 0x3f);
+ dt->wday = BCD_TO_BIN(buf[RTC8564_REG_WDAY] & 7);
+ dt->mon = BCD_TO_BIN(buf[RTC8564_REG_MON_CENT] & 0x1f);
+
+ dt->secs = BCD_TO_BIN(buf[RTC8564_REG_SEC] & 0x7f);
+ dt->vl = (buf[RTC8564_REG_SEC] & 0x80) == 0x80;
+ dt->mins = BCD_TO_BIN(buf[RTC8564_REG_MIN] & 0x7f);
+ dt->hours = BCD_TO_BIN(buf[RTC8564_REG_HR] & 0x3f);
+
+ _DBGRTCTM(2, *dt);
+
+ return 0;
+}
+
+static int
+rtc8564_set_datetime(struct i2c_client *client, struct rtc_tm *dt, int datetoo)
+{
+ int ret, len = 5;
+ unsigned char buf[15];
+
+ _DBG(1, "client=%p, dt=%p", client, dt);
+
+ if (!dt || !client)
+ return -EINVAL;
+
+ _DBGRTCTM(2, *dt);
+
+ buf[RTC8564_REG_CTRL1] = CTRL1(client) | RTC8564_CTRL1_STOP;
+ buf[RTC8564_REG_CTRL2] = CTRL2(client);
+ buf[RTC8564_REG_SEC] = BIN_TO_BCD(dt->secs);
+ buf[RTC8564_REG_MIN] = BIN_TO_BCD(dt->mins);
+ buf[RTC8564_REG_HR] = BIN_TO_BCD(dt->hours);
+
+ if (datetoo) {
+ len += 5;
+ buf[RTC8564_REG_DAY] = BIN_TO_BCD(dt->mday);
+ buf[RTC8564_REG_WDAY] = BIN_TO_BCD(dt->wday);
+ buf[RTC8564_REG_MON_CENT] = BIN_TO_BCD(dt->mon) & 0x1f;
+ /* century stored in minute alarm reg */
+ buf[RTC8564_REG_YEAR] = BIN_TO_BCD(dt->year % 100);
+ buf[RTC8564_REG_AL_MIN] = BIN_TO_BCD(dt->year / 100);
+ }
+
+ ret = rtc8564_write(client, 0, buf, len);
+ if (ret) {
+ _DBG(1, "error writing data! %d", ret);
+ }
+
+ buf[RTC8564_REG_CTRL1] = CTRL1(client);
+ ret = rtc8564_write(client, 0, buf, 1);
+ if (ret) {
+ _DBG(1, "error writing data! %d", ret);
+ }
+
+ return ret;
+}
+
+static int rtc8564_get_ctrl(struct i2c_client *client, unsigned int *ctrl)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+
+ if (!ctrl || !client)
+ return -1;
+
+ *ctrl = data->ctrl;
+ return 0;
+}
+
+static int rtc8564_set_ctrl(struct i2c_client *client, unsigned int *ctrl)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ unsigned char buf[2];
+
+ if (!ctrl || !client)
+ return -1;
+
+ buf[0] = *ctrl & 0xff;
+ buf[1] = (*ctrl & 0xff00) >> 8;
+ data->ctrl = *ctrl;
+
+ return rtc8564_write(client, 0, buf, 2);
+}
+
+static int rtc8564_read_mem(struct i2c_client *client, struct mem *mem)
+{
+
+ if (!mem || !client)
+ return -EINVAL;
+
+ return rtc8564_read(client, mem->loc, mem->data, mem->nr);
+}
+
+static int rtc8564_write_mem(struct i2c_client *client, struct mem *mem)
+{
+
+ if (!mem || !client)
+ return -EINVAL;
+
+ return rtc8564_write(client, mem->loc, mem->data, mem->nr);
+}
+
+static int
+rtc8564_command(struct i2c_client *client, unsigned int cmd, void *arg)
+{
+
+ _DBG(1, "cmd=%d", cmd);
+
+ switch (cmd) {
+ case RTC_GETDATETIME:
+ return rtc8564_get_datetime(client, arg);
+
+ case RTC_SETTIME:
+ return rtc8564_set_datetime(client, arg, 0);
+
+ case RTC_SETDATETIME:
+ return rtc8564_set_datetime(client, arg, 1);
+
+ case RTC_GETCTRL:
+ return rtc8564_get_ctrl(client, arg);
+
+ case RTC_SETCTRL:
+ return rtc8564_set_ctrl(client, arg);
+
+ case MEM_READ:
+ return rtc8564_read_mem(client, arg);
+
+ case MEM_WRITE:
+ return rtc8564_write_mem(client, arg);
+
+ default:
+ return -EINVAL;
+ }
+}
+
+static struct i2c_driver rtc8564_driver = {
+ .owner = THIS_MODULE,
+ .name = "RTC8564",
+ .id = I2C_DRIVERID_RTC8564,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = rtc8564_probe,
+ .detach_client = rtc8564_detach,
+ .command = rtc8564_command
+};
+
+static __init int rtc8564_init(void)
+{
+ return i2c_add_driver(&rtc8564_driver);
+}
+
+static __exit void rtc8564_exit(void)
+{
+ i2c_del_driver(&rtc8564_driver);
+}
+
+MODULE_AUTHOR("Stefan Eletzhofer <Stefan.Eletzhofer@eletztrick.de>");
+MODULE_DESCRIPTION("EPSON RTC8564 Driver");
+MODULE_LICENSE("GPL");
+
+module_init(rtc8564_init);
+module_exit(rtc8564_exit);
--- /dev/null
+/*
+ * linux/drivers/i2c/chips/rtc8564.h
+ *
+ * Copyright (C) 2002-2004 Stefan Eletzhofer
+ *
+ * based on linux/drivers/acorn/char/pcf8583.h
+ * Copyright (C) 2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+struct rtc_tm {
+ unsigned char secs;
+ unsigned char mins;
+ unsigned char hours;
+ unsigned char mday;
+ unsigned char mon;
+ unsigned short year; /* xxxx 4 digits :) */
+ unsigned char wday;
+ unsigned char vl;
+};
+
+struct mem {
+ unsigned int loc;
+ unsigned int nr;
+ unsigned char *data;
+};
+
+#define RTC_GETDATETIME 0
+#define RTC_SETTIME 1
+#define RTC_SETDATETIME 2
+#define RTC_GETCTRL 3
+#define RTC_SETCTRL 4
+#define MEM_READ 5
+#define MEM_WRITE 6
+
+#define RTC8564_REG_CTRL1 0x0 /* T 0 S 0 | T 0 0 0 */
+#define RTC8564_REG_CTRL2 0x1 /* 0 0 0 TI/TP | AF TF AIE TIE */
+#define RTC8564_REG_SEC 0x2 /* VL 4 2 1 | 8 4 2 1 */
+#define RTC8564_REG_MIN 0x3 /* x 4 2 1 | 8 4 2 1 */
+#define RTC8564_REG_HR 0x4 /* x x 2 1 | 8 4 2 1 */
+#define RTC8564_REG_DAY 0x5 /* x x 2 1 | 8 4 2 1 */
+#define RTC8564_REG_WDAY 0x6 /* x x x x | x 4 2 1 */
+#define RTC8564_REG_MON_CENT 0x7 /* C x x 1 | 8 4 2 1 */
+#define RTC8564_REG_YEAR 0x8 /* 8 4 2 1 | 8 4 2 1 */
+#define RTC8564_REG_AL_MIN 0x9 /* AE 4 2 1 | 8 4 2 1 */
+#define RTC8564_REG_AL_HR 0xa /* AE 4 2 1 | 8 4 2 1 */
+#define RTC8564_REG_AL_DAY 0xb /* AE x 2 1 | 8 4 2 1 */
+#define RTC8564_REG_AL_WDAY 0xc /* AE x x x | x 4 2 1 */
+#define RTC8564_REG_CLKOUT 0xd /* FE x x x | x x FD1 FD0 */
+#define RTC8564_REG_TCTL 0xe /* TE x x x | x x TD1 TD0 */
+#define RTC8564_REG_TIMER 0xf /* 8 bit binary */
+
+/* Control reg */
+#define RTC8564_CTRL1_TEST1 (1<<3)
+#define RTC8564_CTRL1_STOP (1<<5)
+#define RTC8564_CTRL1_TEST2 (1<<7)
+
+#define RTC8564_CTRL2_TIE (1<<0)
+#define RTC8564_CTRL2_AIE (1<<1)
+#define RTC8564_CTRL2_TF (1<<2)
+#define RTC8564_CTRL2_AF (1<<3)
+#define RTC8564_CTRL2_TI_TP (1<<4)
+
+/* CLKOUT frequencies */
+#define RTC8564_FD_32768HZ (0x0)
+#define RTC8564_FD_1024HZ (0x1)
+#define RTC8564_FD_32 (0x2)
+#define RTC8564_FD_1HZ (0x3)
+
+/* Timer CTRL */
+#define RTC8564_TD_4096HZ (0x0)
+#define RTC8564_TD_64HZ (0x1)
+#define RTC8564_TD_1HZ (0x2)
+#define RTC8564_TD_1_60HZ (0x3)
+
+#define I2C_DRIVERID_RTC8564 0xf000
--- /dev/null
+/*
+ * drivers/ide/ide-h8300.c
+ * H8/300 generic IDE interface
+ */
+
+#include <linux/init.h>
+#include <linux/ide.h>
+#include <linux/config.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#define bswap(d) \
+({ \
+ u16 r; \
+ __asm__("mov.b %w1,r1h\n\t" \
+ "mov.b %x1,r1l\n\t" \
+ "mov.w r1,%0" \
+ :"=r"(r) \
+ :"r"(d) \
+ :"er1"); \
+ (r); \
+})
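+
+/*
+ * Illustrative example (not part of the driver): bswap() exchanges
+ * the two bytes of a 16-bit word through the er1 register pair, so
+ *
+ * bswap(0x1234) == 0x3412
+ *
+ * which converts between the big-endian H8/300 and the little-endian
+ * byte order of IDE data words.
+ */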
+
+static void mm_outw(u16 d, unsigned long a)
+{
+ __asm__("mov.b %w0,r2h\n\t"
+ "mov.b %x0,r2l\n\t"
+ "mov.w r2,@%1"
+ :
+ :"r"(d),"r"(a)
+ :"er2");
+}
+
+static u16 mm_inw(unsigned long a)
+{
+ register u16 r __asm__("er0");
+ __asm__("mov.w @%1,r2\n\t"
+ "mov.b r2l,%x0\n\t"
+ "mov.b r2h,%w0"
+ :"=r"(r)
+ :"r"(a)
+ :"er2");
+ return r;
+}
+
+static void mm_outsw(unsigned long addr, void *buf, u32 len)
+{
+ unsigned short *bp = (unsigned short *)buf;
+ for (; len > 0; len--, bp++)
+ *(volatile u16 *)addr = bswap(*bp);
+}
+
+static void mm_insw(unsigned long addr, void *buf, u32 len)
+{
+ unsigned short *bp = (unsigned short *)buf;
+ for (; len > 0; len--, bp++)
+ *bp = bswap(*(volatile u16 *)addr);
+}
+
+#define H8300_IDE_GAP (2)
+
+static inline void hw_setup(hw_regs_t *hw)
+{
+ int i;
+
+ memset(hw, 0, sizeof(hw_regs_t));
+ for (i = 0; i <= IDE_STATUS_OFFSET; i++)
+ hw->io_ports[i] = CONFIG_H8300_IDE_BASE + H8300_IDE_GAP*i;
+ hw->io_ports[IDE_CONTROL_OFFSET] = CONFIG_H8300_IDE_ALT;
+ hw->irq = EXT_IRQ0 + CONFIG_H8300_IDE_IRQ;
+ hw->dma = NO_DMA;
+ hw->chipset = ide_generic;
+}
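+
+/*
+ * Illustrative layout (the base address is whatever
+ * CONFIG_H8300_IDE_BASE is configured to; 0x200000 is just an
+ * example): with H8300_IDE_GAP == 2 each task file register lands
+ * on an even address,
+ *
+ * io_ports[IDE_DATA_OFFSET] = 0x200000
+ * io_ports[IDE_ERROR_OFFSET] = 0x200002
+ * ...
+ * io_ports[IDE_STATUS_OFFSET] = 0x20000e
+ *
+ * i.e. io_ports[i] = CONFIG_H8300_IDE_BASE + 2 * i.
+ */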
+
+static inline void hwif_setup(ide_hwif_t *hwif)
+{
+ default_hwif_iops(hwif);
+
+ hwif->mmio = 2;
+ hwif->OUTW = mm_outw;
+ hwif->OUTSW = mm_outsw;
+ hwif->INW = mm_inw;
+ hwif->INSW = mm_insw;
+ hwif->OUTL = NULL;
+ hwif->INL = NULL;
+ hwif->OUTSL = NULL;
+ hwif->INSL = NULL;
+}
+
+void __init h8300_ide_init(void)
+{
+ hw_regs_t hw;
+ ide_hwif_t *hwif;
+ int idx;
+
+ if (!request_region(CONFIG_H8300_IDE_BASE, H8300_IDE_GAP*8, "ide-h8300"))
+ goto out_busy;
+ if (!request_region(CONFIG_H8300_IDE_ALT, H8300_IDE_GAP, "ide-h8300")) {
+ release_region(CONFIG_H8300_IDE_BASE, H8300_IDE_GAP*8);
+ goto out_busy;
+ }
+
+ hw_setup(&hw);
+
+ /* register if */
+ idx = ide_register_hw(&hw, &hwif);
+ if (idx == -1) {
+ printk(KERN_ERR "ide-h8300: IDE I/F register failed\n");
+ return;
+ }
+
+ hwif_setup(hwif);
+ printk(KERN_INFO "ide%d: H8/300 generic IDE interface\n", idx);
+ return;
+
+out_busy:
+ printk(KERN_ERR "ide-h8300: IDE I/F resource already used.\n");
+}
--- /dev/null
+/*
+ * dm-snapshot.c
+ *
+ * Copyright (C) 2001-2002 Sistina Software (UK) Limited.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm.h"
+#include "dm-snap.h"
+#include "dm-io.h"
+#include "kcopyd.h"
+
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+
+/*-----------------------------------------------------------------
+ * Persistent snapshots: by persistent we mean that the snapshot
+ * will survive a reboot.
+ *---------------------------------------------------------------*/
+
+/*
+ * We need to store a record of which parts of the origin have
+ * been copied to the snapshot device. The snapshot code
+ * requires that we copy exception chunks to chunk aligned areas
+ * of the COW store. It makes sense therefore, to store the
+ * metadata in chunk size blocks.
+ *
+ * There is no backward or forward compatibility implemented,
+ * snapshots with different disk versions than the kernel will
+ * not be usable. It is expected that "lvcreate" will blank out
+ * the start of a fresh COW device before calling the snapshot
+ * constructor.
+ *
+ * The first chunk of the COW device just contains the header.
+ * After this there is a chunk filled with exception metadata,
+ * followed by as many exception chunks as can fit in the
+ * metadata areas.
+ *
+ * All on disk structures are in little-endian format. The end
+ * of the exceptions info is indicated by an exception with a
+ * new_chunk of 0, which is invalid since it would point to the
+ * header chunk.
+ */
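+
+/*
+ * A sketch of the resulting layout (illustrative only): with
+ * exceptions_per_area metadata entries per area, metadata area 'a'
+ * lives in chunk
+ *
+ * chunk = 1 + (exceptions_per_area + 1) * a;
+ *
+ * so the COW device looks like
+ *
+ * chunk 0: header
+ * chunk 1: metadata area 0
+ * chunks 2 to exceptions_per_area + 1: exception data
+ * chunk exceptions_per_area + 2: metadata area 1
+ * ...
+ *
+ * This matches the index arithmetic in area_io() and
+ * persistent_prepare() below.
+ */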
+
+/*
+ * Magic for persistent snapshots: "SnAp" - Feeble, isn't it.
+ */
+#define SNAP_MAGIC 0x70416e53
+
+/*
+ * The on-disk version of the metadata.
+ */
+#define SNAPSHOT_DISK_VERSION 1
+
+struct disk_header {
+ uint32_t magic;
+
+ /*
+ * Is this snapshot valid? There is no way of recovering
+ * an invalid snapshot.
+ */
+ uint32_t valid;
+
+ /*
+ * Simple, incrementing version; no backward
+ * compatibility.
+ */
+ uint32_t version;
+
+ /* In sectors */
+ uint32_t chunk_size;
+};
+
+struct disk_exception {
+ uint64_t old_chunk;
+ uint64_t new_chunk;
+};
+
+struct commit_callback {
+ void (*callback)(void *, int success);
+ void *context;
+};
+
+/*
+ * The top level structure for a persistent exception store.
+ */
+struct pstore {
+ struct dm_snapshot *snap; /* up pointer to my snapshot */
+ int version;
+ int valid;
+ uint32_t chunk_size;
+ uint32_t exceptions_per_area;
+
+ /*
+ * Now that we have an asynchronous kcopyd there is no
+ * need for large chunk sizes, so it won't hurt to have a
+ * whole chunk's worth of metadata in memory at once.
+ */
+ void *area;
+
+ /*
+ * Used to keep track of which metadata area the data in
+ * 'chunk' refers to.
+ */
+ uint32_t current_area;
+
+ /*
+ * The next free chunk for an exception.
+ */
+ uint32_t next_free;
+
+ /*
+ * The index of next free exception in the current
+ * metadata area.
+ */
+ uint32_t current_committed;
+
+ atomic_t pending_count;
+ uint32_t callback_count;
+ struct commit_callback *callbacks;
+};
+
+static inline unsigned int sectors_to_pages(unsigned int sectors)
+{
+ return sectors / (PAGE_SIZE >> 9);
+}
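+
+/*
+ * For example (illustrative): with 4K pages, PAGE_SIZE >> 9 == 8
+ * sectors per page, so sectors_to_pages(16) == 2. chunk_size is
+ * counted in 512-byte sectors throughout this file.
+ */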
+
+static int alloc_area(struct pstore *ps)
+{
+ int r = -ENOMEM;
+ size_t len;
+
+ len = ps->chunk_size << SECTOR_SHIFT;
+
+ /*
+ * Allocate the chunk_size block of memory that will hold
+ * a single metadata area.
+ */
+ ps->area = vmalloc(len);
+ if (!ps->area)
+ return r;
+
+ return 0;
+}
+
+static void free_area(struct pstore *ps)
+{
+ vfree(ps->area);
+}
+
+/*
+ * Read or write a chunk aligned and sized block of data from a device.
+ */
+static int chunk_io(struct pstore *ps, uint32_t chunk, int rw)
+{
+ struct io_region where;
+ unsigned long bits;
+
+ where.bdev = ps->snap->cow->bdev;
+ where.sector = ps->chunk_size * chunk;
+ where.count = ps->chunk_size;
+
+ return dm_io_sync_vm(1, &where, rw, ps->area, &bits);
+}
+
+/*
+ * Read or write a metadata area, remembering to skip the first
+ * chunk, which holds the header.
+ */
+static int area_io(struct pstore *ps, uint32_t area, int rw)
+{
+ int r;
+ uint32_t chunk;
+
+ /* convert a metadata area index to a chunk index */
+ chunk = 1 + ((ps->exceptions_per_area + 1) * area);
+
+ r = chunk_io(ps, chunk, rw);
+ if (r)
+ return r;
+
+ ps->current_area = area;
+ return 0;
+}
+
+static int zero_area(struct pstore *ps, uint32_t area)
+{
+ memset(ps->area, 0, ps->chunk_size << SECTOR_SHIFT);
+ return area_io(ps, area, WRITE);
+}
+
+static int read_header(struct pstore *ps, int *new_snapshot)
+{
+ int r;
+ struct disk_header *dh;
+
+ r = chunk_io(ps, 0, READ);
+ if (r)
+ return r;
+
+ dh = (struct disk_header *) ps->area;
+
+ if (le32_to_cpu(dh->magic) == 0) {
+ *new_snapshot = 1;
+
+ } else if (le32_to_cpu(dh->magic) == SNAP_MAGIC) {
+ *new_snapshot = 0;
+ ps->valid = le32_to_cpu(dh->valid);
+ ps->version = le32_to_cpu(dh->version);
+ ps->chunk_size = le32_to_cpu(dh->chunk_size);
+
+ } else {
+ DMWARN("Invalid/corrupt snapshot");
+ r = -ENXIO;
+ }
+
+ return r;
+}
+
+static int write_header(struct pstore *ps)
+{
+ struct disk_header *dh;
+
+ memset(ps->area, 0, ps->chunk_size << SECTOR_SHIFT);
+
+ dh = (struct disk_header *) ps->area;
+ dh->magic = cpu_to_le32(SNAP_MAGIC);
+ dh->valid = cpu_to_le32(ps->valid);
+ dh->version = cpu_to_le32(ps->version);
+ dh->chunk_size = cpu_to_le32(ps->chunk_size);
+
+ return chunk_io(ps, 0, WRITE);
+}
+
+/*
+ * Access functions for the disk exceptions; these do the endian conversions.
+ */
+static struct disk_exception *get_exception(struct pstore *ps, uint32_t index)
+{
+ if (index >= ps->exceptions_per_area)
+ return NULL;
+
+ return ((struct disk_exception *) ps->area) + index;
+}
+
+static int read_exception(struct pstore *ps,
+ uint32_t index, struct disk_exception *result)
+{
+ struct disk_exception *e;
+
+ e = get_exception(ps, index);
+ if (!e)
+ return -EINVAL;
+
+ /* copy it */
+ result->old_chunk = le64_to_cpu(e->old_chunk);
+ result->new_chunk = le64_to_cpu(e->new_chunk);
+
+ return 0;
+}
+
+static int write_exception(struct pstore *ps,
+ uint32_t index, struct disk_exception *de)
+{
+ struct disk_exception *e;
+
+ e = get_exception(ps, index);
+ if (!e)
+ return -EINVAL;
+
+ /* copy it */
+ e->old_chunk = cpu_to_le64(de->old_chunk);
+ e->new_chunk = cpu_to_le64(de->new_chunk);
+
+ return 0;
+}
+
+/*
+ * Registers the exceptions that are present in the current area.
+ * 'full' is filled in to indicate if the area has been
+ * filled.
+ */
+static int insert_exceptions(struct pstore *ps, int *full)
+{
+ int r;
+ unsigned int i;
+ struct disk_exception de;
+
+ /* presume the area is full */
+ *full = 1;
+
+ for (i = 0; i < ps->exceptions_per_area; i++) {
+ r = read_exception(ps, i, &de);
+
+ if (r)
+ return r;
+
+ /*
+ * If the new_chunk is pointing at the start of
+ * the COW device, where the first metadata area
+ * is, we know that we've hit the end of the
+ * exceptions. Therefore the area is not full.
+ */
+ if (de.new_chunk == 0LL) {
+ ps->current_committed = i;
+ *full = 0;
+ break;
+ }
+
+ /*
+ * Keep track of the start of the free chunks.
+ */
+ if (ps->next_free <= de.new_chunk)
+ ps->next_free = de.new_chunk + 1;
+
+ /*
+ * Otherwise we add the exception to the snapshot.
+ */
+ r = dm_add_exception(ps->snap, de.old_chunk, de.new_chunk);
+ if (r)
+ return r;
+ }
+
+ return 0;
+}
+
+static int read_exceptions(struct pstore *ps)
+{
+ uint32_t area;
+ int r, full = 1;
+
+ /*
+ * Keep reading chunks and inserting exceptions until
+ * we find a partially full area.
+ */
+ for (area = 0; full; area++) {
+ r = area_io(ps, area, READ);
+ if (r)
+ return r;
+
+ r = insert_exceptions(ps, &full);
+ if (r)
+ return r;
+ }
+
+ return 0;
+}
+
+static inline struct pstore *get_info(struct exception_store *store)
+{
+ return (struct pstore *) store->context;
+}
+
+static void persistent_fraction_full(struct exception_store *store,
+ sector_t *numerator, sector_t *denominator)
+{
+ *numerator = get_info(store)->next_free * store->snap->chunk_size;
+ *denominator = get_dev_size(store->snap->cow->bdev);
+}
+
+static void persistent_destroy(struct exception_store *store)
+{
+ struct pstore *ps = get_info(store);
+
+ dm_io_put(sectors_to_pages(ps->chunk_size));
+ vfree(ps->callbacks);
+ free_area(ps);
+ kfree(ps);
+}
+
+static int persistent_read_metadata(struct exception_store *store)
+{
+ int r, new_snapshot;
+ struct pstore *ps = get_info(store);
+
+ /*
+ * Read the snapshot header.
+ */
+ r = read_header(ps, &new_snapshot);
+ if (r)
+ return r;
+
+ /*
+ * Do we need to set up a new snapshot?
+ */
+ if (new_snapshot) {
+ r = write_header(ps);
+ if (r) {
+ DMWARN("write_header failed");
+ return r;
+ }
+
+ r = zero_area(ps, 0);
+ if (r) {
+ DMWARN("zero_area(0) failed");
+ return r;
+ }
+
+ } else {
+ /*
+ * Sanity checks.
+ */
+ if (!ps->valid) {
+ DMWARN("snapshot is marked invalid");
+ return -EINVAL;
+ }
+
+ if (ps->version != SNAPSHOT_DISK_VERSION) {
+ DMWARN("unable to handle snapshot disk version %d",
+ ps->version);
+ return -EINVAL;
+ }
+
+ /*
+ * Read the metadata.
+ */
+ r = read_exceptions(ps);
+ if (r)
+ return r;
+ }
+
+ return 0;
+}
+
+static int persistent_prepare(struct exception_store *store,
+ struct exception *e)
+{
+ struct pstore *ps = get_info(store);
+ uint32_t stride;
+ sector_t size = get_dev_size(store->snap->cow->bdev);
+
+ /* Is there enough room? */
+ if (size < ((ps->next_free + 1) * store->snap->chunk_size))
+ return -ENOSPC;
+
+ e->new_chunk = ps->next_free;
+
+ /*
+ * Move on to the next free pending exception, making sure
+ * to take into account the location of the metadata chunks.
+ */
+ stride = (ps->exceptions_per_area + 1);
+ if ((++ps->next_free % stride) == 1)
+ ps->next_free++;
+
+ atomic_inc(&ps->pending_count);
+ return 0;
+}
+
+static void persistent_commit(struct exception_store *store,
+ struct exception *e,
+ void (*callback) (void *, int success),
+ void *callback_context)
+{
+ int r;
+ unsigned int i;
+ struct pstore *ps = get_info(store);
+ struct disk_exception de;
+ struct commit_callback *cb;
+
+ de.old_chunk = e->old_chunk;
+ de.new_chunk = e->new_chunk;
+ write_exception(ps, ps->current_committed++, &de);
+
+ /*
+ * Add the callback to the back of the array. This code
+ * is the only place where the callback array is
+ * manipulated, and we know that it will never be called
+ * multiple times concurrently.
+ */
+ cb = ps->callbacks + ps->callback_count++;
+ cb->callback = callback;
+ cb->context = callback_context;
+
+ /*
+ * If there are no more exceptions in flight, or we have
+ * filled this metadata area we commit the exceptions to
+ * disk.
+ */
+ if (atomic_dec_and_test(&ps->pending_count) ||
+ (ps->current_committed == ps->exceptions_per_area)) {
+ r = area_io(ps, ps->current_area, WRITE);
+ if (r)
+ ps->valid = 0;
+
+ for (i = 0; i < ps->callback_count; i++) {
+ cb = ps->callbacks + i;
+ cb->callback(cb->context, r == 0 ? 1 : 0);
+ }
+
+ ps->callback_count = 0;
+ }
+
+ /*
+ * Have we completely filled the current area?
+ */
+ if (ps->current_committed == ps->exceptions_per_area) {
+ ps->current_committed = 0;
+ r = zero_area(ps, ps->current_area + 1);
+ if (r)
+ ps->valid = 0;
+ }
+}
+
+static void persistent_drop(struct exception_store *store)
+{
+ struct pstore *ps = get_info(store);
+
+ ps->valid = 0;
+ if (write_header(ps))
+ DMWARN("write header failed");
+}
+
+int dm_create_persistent(struct exception_store *store, uint32_t chunk_size)
+{
+ int r;
+ struct pstore *ps;
+
+ r = dm_io_get(sectors_to_pages(chunk_size));
+ if (r)
+ return r;
+
+ /* allocate the pstore */
+ ps = kmalloc(sizeof(*ps), GFP_KERNEL);
+ if (!ps) {
+ r = -ENOMEM;
+ goto bad;
+ }
+
+ ps->snap = store->snap;
+ ps->valid = 1;
+ ps->version = SNAPSHOT_DISK_VERSION;
+ ps->chunk_size = chunk_size;
+ ps->exceptions_per_area = (chunk_size << SECTOR_SHIFT) /
+ sizeof(struct disk_exception);
+ ps->next_free = 2; /* skipping the header and first area */
+ ps->current_committed = 0;
+
+ r = alloc_area(ps);
+ if (r)
+ goto bad;
+
+ /*
+ * Allocate space for all the callbacks.
+ */
+ ps->callback_count = 0;
+ atomic_set(&ps->pending_count, 0);
+ ps->callbacks = dm_vcalloc(ps->exceptions_per_area,
+ sizeof(*ps->callbacks));
+
+ if (!ps->callbacks) {
+ r = -ENOMEM;
+ goto bad;
+ }
+
+ store->destroy = persistent_destroy;
+ store->read_metadata = persistent_read_metadata;
+ store->prepare_exception = persistent_prepare;
+ store->commit_exception = persistent_commit;
+ store->drop_snapshot = persistent_drop;
+ store->fraction_full = persistent_fraction_full;
+ store->context = ps;
+
+ return 0;
+
+ bad:
+ dm_io_put(sectors_to_pages(chunk_size));
+ if (ps) {
+ if (ps->area)
+ free_area(ps);
+
+ kfree(ps);
+ }
+ return r;
+}
+
+/*-----------------------------------------------------------------
+ * Implementation of the store for non-persistent snapshots.
+ *---------------------------------------------------------------*/
+struct transient_c {
+ sector_t next_free;
+};
+
+static void transient_destroy(struct exception_store *store)
+{
+ kfree(store->context);
+}
+
+static int transient_read_metadata(struct exception_store *store)
+{
+ return 0;
+}
+
+static int transient_prepare(struct exception_store *store, struct exception *e)
+{
+ struct transient_c *tc = (struct transient_c *) store->context;
+ sector_t size = get_dev_size(store->snap->cow->bdev);
+
+ if (size < (tc->next_free + store->snap->chunk_size))
+ return -1;
+
+ e->new_chunk = sector_to_chunk(store->snap, tc->next_free);
+ tc->next_free += store->snap->chunk_size;
+
+ return 0;
+}
+
+static void transient_commit(struct exception_store *store,
+ struct exception *e,
+ void (*callback) (void *, int success),
+ void *callback_context)
+{
+ /* Just succeed */
+ callback(callback_context, 1);
+}
+
+static void transient_fraction_full(struct exception_store *store,
+ sector_t *numerator, sector_t *denominator)
+{
+ *numerator = ((struct transient_c *) store->context)->next_free;
+ *denominator = get_dev_size(store->snap->cow->bdev);
+}
+
+int dm_create_transient(struct exception_store *store,
+ struct dm_snapshot *s, int blocksize)
+{
+ struct transient_c *tc;
+
+ memset(store, 0, sizeof(*store));
+ store->destroy = transient_destroy;
+ store->read_metadata = transient_read_metadata;
+ store->prepare_exception = transient_prepare;
+ store->commit_exception = transient_commit;
+ store->fraction_full = transient_fraction_full;
+ store->snap = s;
+
+ tc = kmalloc(sizeof(struct transient_c), GFP_KERNEL);
+ if (!tc)
+ return -ENOMEM;
+
+ tc->next_free = 0;
+ store->context = tc;
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-io.h"
+
+#include <linux/bio.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+#define BIO_POOL_SIZE 256
+
+
+/*-----------------------------------------------------------------
+ * Bio set, move this to bio.c
+ *---------------------------------------------------------------*/
+#define BV_NAME_SIZE 16
+struct biovec_pool {
+ int nr_vecs;
+ char name[BV_NAME_SIZE];
+ kmem_cache_t *slab;
+ mempool_t *pool;
+ atomic_t allocated; /* FIXME: debug */
+};
+
+#define BIOVEC_NR_POOLS 6
+struct bio_set {
+ char name[BV_NAME_SIZE];
+ kmem_cache_t *bio_slab;
+ mempool_t *bio_pool;
+ struct biovec_pool pools[BIOVEC_NR_POOLS];
+};
+
+static void bio_set_exit(struct bio_set *bs)
+{
+ unsigned i;
+ struct biovec_pool *bp;
+
+ if (bs->bio_pool)
+ mempool_destroy(bs->bio_pool);
+
+ if (bs->bio_slab)
+ kmem_cache_destroy(bs->bio_slab);
+
+ for (i = 0; i < BIOVEC_NR_POOLS; i++) {
+ bp = bs->pools + i;
+ if (bp->pool)
+ mempool_destroy(bp->pool);
+
+ if (bp->slab)
+ kmem_cache_destroy(bp->slab);
+ }
+}
+
+static void mk_name(char *str, size_t len, const char *prefix, unsigned count)
+{
+ snprintf(str, len, "%s-%u", prefix, count);
+}
+
+static int bio_set_init(struct bio_set *bs, const char *slab_prefix,
+ unsigned pool_entries, unsigned scale)
+{
+ /* FIXME: this must match bvec_index(); why not go the
+ * whole hog and have a pool per power of 2? */
+ static unsigned _vec_lengths[BIOVEC_NR_POOLS] = {
+ 1, 4, 16, 64, 128, BIO_MAX_PAGES
+ };
+
+
+ unsigned i, size;
+ struct biovec_pool *bp;
+
+ /* zero the bs so we can tear down properly on error */
+ memset(bs, 0, sizeof(*bs));
+
+ /*
+ * Set up the bio pool.
+ */
+ snprintf(bs->name, sizeof(bs->name), "%s-bio", slab_prefix);
+
+ bs->bio_slab = kmem_cache_create(bs->name, sizeof(struct bio), 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (!bs->bio_slab) {
+ DMWARN("can't init bio slab");
+ goto bad;
+ }
+
+ bs->bio_pool = mempool_create(pool_entries, mempool_alloc_slab,
+ mempool_free_slab, bs->bio_slab);
+ if (!bs->bio_pool) {
+ DMWARN("can't init bio pool");
+ goto bad;
+ }
+
+ /*
+ * Set up the biovec pools.
+ */
+ for (i = 0; i < BIOVEC_NR_POOLS; i++) {
+ bp = bs->pools + i;
+ bp->nr_vecs = _vec_lengths[i];
+ atomic_set(&bp->allocated, 1); /* FIXME: debug */
+
+
+ size = bp->nr_vecs * sizeof(struct bio_vec);
+
+ mk_name(bp->name, sizeof(bp->name), slab_prefix, i);
+ bp->slab = kmem_cache_create(bp->name, size, 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (!bp->slab) {
+ DMWARN("can't init biovec slab cache");
+ goto bad;
+ }
+
+ if (i >= scale)
+ pool_entries >>= 1;
+
+ bp->pool = mempool_create(pool_entries, mempool_alloc_slab,
+ mempool_free_slab, bp->slab);
+ if (!bp->pool) {
+ DMWARN("can't init biovec mempool");
+ goto bad;
+ }
+ }
+
+ return 0;
+
+ bad:
+ bio_set_exit(bs);
+ return -ENOMEM;
+}
+
+/* FIXME: blech */
+static inline unsigned bvec_index(unsigned nr)
+{
+ switch (nr) {
+ case 1: return 0;
+ case 2 ... 4: return 1;
+ case 5 ... 16: return 2;
+ case 17 ... 64: return 3;
+ case 65 ... 128:return 4;
+ case 129 ... BIO_MAX_PAGES: return 5;
+ }
+
+ BUG();
+ return 0;
+}
+
+static inline void bs_bio_init(struct bio *bio)
+{
+ bio->bi_next = NULL;
+ bio->bi_flags = 1 << BIO_UPTODATE;
+ bio->bi_rw = 0;
+ bio->bi_vcnt = 0;
+ bio->bi_idx = 0;
+ bio->bi_phys_segments = 0;
+ bio->bi_hw_segments = 0;
+ bio->bi_size = 0;
+ bio->bi_max_vecs = 0;
+ bio->bi_end_io = NULL;
+ atomic_set(&bio->bi_cnt, 1);
+ bio->bi_private = NULL;
+}
+
+static unsigned _bio_count = 0;
+struct bio *bio_set_alloc(struct bio_set *bs, int gfp_mask, int nr_iovecs)
+{
+ struct biovec_pool *bp;
+ struct bio_vec *bv = NULL;
+ unsigned long idx;
+ struct bio *bio;
+
+ bio = mempool_alloc(bs->bio_pool, gfp_mask);
+ if (unlikely(!bio))
+ return NULL;
+
+ bio_init(bio);
+
+ if (likely(nr_iovecs)) {
+ idx = bvec_index(nr_iovecs);
+ bp = bs->pools + idx;
+ bv = mempool_alloc(bp->pool, gfp_mask);
+ if (!bv) {
+ mempool_free(bio, bs->bio_pool);
+ return NULL;
+ }
+
+ memset(bv, 0, bp->nr_vecs * sizeof(*bv));
+ bio->bi_flags |= idx << BIO_POOL_OFFSET;
+ bio->bi_max_vecs = bp->nr_vecs;
+ atomic_inc(&bp->allocated);
+ }
+
+ bio->bi_io_vec = bv;
+ return bio;
+}
+
+static void bio_set_free(struct bio_set *bs, struct bio *bio)
+{
+ struct biovec_pool *bp = bs->pools + BIO_POOL_IDX(bio);
+
+ if (atomic_dec_and_test(&bp->allocated))
+ BUG();
+
+ mempool_free(bio->bi_io_vec, bp->pool);
+ mempool_free(bio, bs->bio_pool);
+}
+
+/*-----------------------------------------------------------------
+ * dm-io proper
+ *---------------------------------------------------------------*/
+static struct bio_set _bios;
+
+/* FIXME: can we shrink this? */
+struct io {
+ unsigned long error;
+ atomic_t count;
+ struct task_struct *sleeper;
+ io_notify_fn callback;
+ void *context;
+};
+
+/*
+ * io contexts are only dynamically allocated for asynchronous
+ * io. Since async io is likely to be the majority of io, we'll
+ * have the same number of io contexts as buffer heads! (FIXME:
+ * must reduce this).
+ */
+static unsigned _num_ios;
+static mempool_t *_io_pool;
+
+static void *alloc_io(int gfp_mask, void *pool_data)
+{
+ return kmalloc(sizeof(struct io), gfp_mask);
+}
+
+static void free_io(void *element, void *pool_data)
+{
+ kfree(element);
+}
+
+static unsigned int pages_to_ios(unsigned int pages)
+{
+ return 4 * pages; /* too many? */
+}
+
+static int resize_pool(unsigned int new_ios)
+{
+ int r = 0;
+
+ if (_io_pool) {
+ if (new_ios == 0) {
+ /* free off the pool */
+ mempool_destroy(_io_pool);
+ _io_pool = NULL;
+ bio_set_exit(&_bios);
+
+ } else {
+ /* resize the pool */
+ r = mempool_resize(_io_pool, new_ios, GFP_KERNEL);
+ }
+
+ } else {
+ /* create new pool */
+ _io_pool = mempool_create(new_ios, alloc_io, free_io, NULL);
+ if (!_io_pool)
+ return -ENOMEM;
+
+ r = bio_set_init(&_bios, "dm-io", 512, 1);
+ if (r) {
+ mempool_destroy(_io_pool);
+ _io_pool = NULL;
+ }
+ }
+
+ if (!r)
+ _num_ios = new_ios;
+
+ return r;
+}
+
+int dm_io_get(unsigned int num_pages)
+{
+ return resize_pool(_num_ios + pages_to_ios(num_pages));
+}
+
+void dm_io_put(unsigned int num_pages)
+{
+ resize_pool(_num_ios - pages_to_ios(num_pages));
+}
+
+/*-----------------------------------------------------------------
+ * We need to keep track of which region a bio is doing io for.
+ * In order to save a memory allocation we store this in the last
+ * bvec, which we know is unused (blech).
+ *---------------------------------------------------------------*/
+static inline void bio_set_region(struct bio *bio, unsigned region)
+{
+ bio->bi_io_vec[bio->bi_max_vecs - 1].bv_len = region;
+}
+
+static inline unsigned bio_get_region(struct bio *bio)
+{
+ return bio->bi_io_vec[bio->bi_max_vecs - 1].bv_len;
+}
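+
+/*
+ * Illustrative round trip (not part of the interface):
+ *
+ * bio_set_region(bio, 3);
+ * BUG_ON(bio_get_region(bio) != 3);
+ *
+ * This is safe because do_region() below allocates one more bvec
+ * than bio_add_page() will ever fill, so the final bv_len is never
+ * used for real data.
+ */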
+
+/*-----------------------------------------------------------------
+ * We need an io object to keep track of the number of bios that
+ * have been dispatched for a particular io.
+ *---------------------------------------------------------------*/
+static void dec_count(struct io *io, unsigned int region, int error)
+{
+ if (error)
+ set_bit(region, &io->error);
+
+ if (atomic_dec_and_test(&io->count)) {
+ if (io->sleeper)
+ wake_up_process(io->sleeper);
+
+ else {
+ int r = io->error;
+ io_notify_fn fn = io->callback;
+ void *context = io->context;
+
+ mempool_free(io, _io_pool);
+ fn(r, context);
+ }
+ }
+}
+
+/* FIXME Move this to bio.h? */
+static void zero_fill_bio(struct bio *bio)
+{
+ unsigned long flags;
+ struct bio_vec *bv;
+ int i;
+
+ bio_for_each_segment(bv, bio, i) {
+ char *data = bvec_kmap_irq(bv, &flags);
+ memset(data, 0, bv->bv_len);
+ flush_dcache_page(bv->bv_page);
+ bvec_kunmap_irq(data, &flags);
+ }
+}
+
+static int endio(struct bio *bio, unsigned int done, int error)
+{
+ struct io *io = (struct io *) bio->bi_private;
+
+ /* keep going until we've finished */
+ if (bio->bi_size)
+ return 1;
+
+ if (error && bio_data_dir(bio) == READ)
+ zero_fill_bio(bio);
+
+ dec_count(io, bio_get_region(bio), error);
+ bio_put(bio);
+
+ return 0;
+}
+
+static void bio_dtr(struct bio *bio)
+{
+ _bio_count--;
+ bio_set_free(&_bios, bio);
+}
+
+/*-----------------------------------------------------------------
+ * These little objects provide an abstraction for getting a new
+ * destination page for io.
+ *---------------------------------------------------------------*/
+struct dpages {
+ void (*get_page)(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset);
+ void (*next_page)(struct dpages *dp);
+
+ unsigned context_u;
+ void *context_ptr;
+};
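+
+/*
+ * A consumer walks the pages roughly like this (sketch; see
+ * do_region() below for the real loop):
+ *
+ * while (there is io left to issue) {
+ * dp->get_page(dp, &page, &len, &offset);
+ * ... issue up to 'len' bytes of 'page' starting at 'offset' ...
+ * dp->next_page(dp);
+ * }
+ */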
+
+/*
+ * Functions for getting the pages from a list.
+ */
+static void list_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ unsigned o = dp->context_u;
+ struct page_list *pl = (struct page_list *) dp->context_ptr;
+
+ *p = pl->page;
+ *len = PAGE_SIZE - o;
+ *offset = o;
+}
+
+static void list_next_page(struct dpages *dp)
+{
+ struct page_list *pl = (struct page_list *) dp->context_ptr;
+ dp->context_ptr = pl->next;
+ dp->context_u = 0;
+}
+
+static void list_dp_init(struct dpages *dp, struct page_list *pl, unsigned offset)
+{
+ dp->get_page = list_get_page;
+ dp->next_page = list_next_page;
+ dp->context_u = offset;
+ dp->context_ptr = pl;
+}
+
+/*
+ * Functions for getting the pages from a bvec.
+ */
+static void bvec_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr;
+ *p = bvec->bv_page;
+ *len = bvec->bv_len;
+ *offset = bvec->bv_offset;
+}
+
+static void bvec_next_page(struct dpages *dp)
+{
+ struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr;
+ dp->context_ptr = bvec + 1;
+}
+
+static void bvec_dp_init(struct dpages *dp, struct bio_vec *bvec)
+{
+ dp->get_page = bvec_get_page;
+ dp->next_page = bvec_next_page;
+ dp->context_ptr = bvec;
+}
+
+static void vm_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ *p = vmalloc_to_page(dp->context_ptr);
+ *offset = dp->context_u;
+ *len = PAGE_SIZE - dp->context_u;
+}
+
+static void vm_next_page(struct dpages *dp)
+{
+ dp->context_ptr += PAGE_SIZE - dp->context_u;
+ dp->context_u = 0;
+}
+
+static void vm_dp_init(struct dpages *dp, void *data)
+{
+ dp->get_page = vm_get_page;
+ dp->next_page = vm_next_page;
+ dp->context_u = ((unsigned long) data) & (PAGE_SIZE - 1);
+ dp->context_ptr = data;
+}
+
+/*-----------------------------------------------------------------
+ * IO routines that accept a list of pages.
+ *---------------------------------------------------------------*/
+static void do_region(int rw, unsigned int region, struct io_region *where,
+ struct dpages *dp, struct io *io)
+{
+ struct bio *bio;
+ struct page *page;
+ unsigned long len;
+ unsigned offset;
+ unsigned num_bvecs;
+ sector_t remaining = where->count;
+
+ while (remaining) {
+ /*
+ * Allocate a suitably sized bio, we add an extra
+ * bvec for bio_get/set_region().
+ */
+ num_bvecs = (remaining / (PAGE_SIZE >> 9)) + 2;
+ _bio_count++;
+ bio = bio_set_alloc(&_bios, GFP_NOIO, num_bvecs);
+ bio->bi_sector = where->sector + (where->count - remaining);
+ bio->bi_bdev = where->bdev;
+ bio->bi_end_io = endio;
+ bio->bi_private = io;
+ bio->bi_destructor = bio_dtr;
+ bio_set_region(bio, region);
+
+ /*
+ * Try and add as many pages as possible.
+ */
+ while (remaining) {
+ dp->get_page(dp, &page, &len, &offset);
+ len = min(len, to_bytes(remaining));
+ if (!bio_add_page(bio, page, len, offset))
+ break;
+
+ offset = 0;
+ remaining -= to_sector(len);
+ dp->next_page(dp);
+ }
+
+ atomic_inc(&io->count);
+ submit_bio(rw, bio);
+ }
+}
+
+static void dispatch_io(int rw, unsigned int num_regions,
+ struct io_region *where, struct dpages *dp,
+ struct io *io, int sync)
+{
+ int i;
+ struct dpages old_pages = *dp;
+
+ if (sync)
+ rw |= (1 << BIO_RW_SYNC);
+
+ /*
+ * For multiple regions we need to be careful to rewind
+ * the dp object for each call to do_region.
+ */
+ for (i = 0; i < num_regions; i++) {
+ *dp = old_pages;
+ if (where[i].count)
+ do_region(rw, i, where + i, dp, io);
+ }
+
+ /*
+ * Drop the extra reference that we were holding to avoid
+ * the io being completed too early.
+ */
+ dec_count(io, 0, 0);
+}
+
+static int sync_io(unsigned int num_regions, struct io_region *where,
+ int rw, struct dpages *dp, unsigned long *error_bits)
+{
+ struct io io;
+
+ if (num_regions > 1 && rw != WRITE) {
+ WARN_ON(1);
+ return -EIO;
+ }
+
+ io.error = 0;
+ atomic_set(&io.count, 1); /* see dispatch_io() */
+ io.sleeper = current;
+
+ dispatch_io(rw, num_regions, where, dp, &io, 1);
+
+ while (1) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+
+ if (!atomic_read(&io.count) || signal_pending(current))
+ break;
+
+ io_schedule();
+ }
+ set_current_state(TASK_RUNNING);
+
+ if (atomic_read(&io.count))
+ return -EINTR;
+
+ *error_bits = io.error;
+ return io.error ? -EIO : 0;
+}
+
+static int async_io(unsigned int num_regions, struct io_region *where, int rw,
+ struct dpages *dp, io_notify_fn fn, void *context)
+{
+ struct io *io;
+
+ if (num_regions > 1 && rw != WRITE) {
+ WARN_ON(1);
+ fn(1, context);
+ return -EIO;
+ }
+
+ io = mempool_alloc(_io_pool, GFP_NOIO);
+ io->error = 0;
+ atomic_set(&io->count, 1); /* see dispatch_io() */
+ io->sleeper = NULL;
+ io->callback = fn;
+ io->context = context;
+
+ dispatch_io(rw, num_regions, where, dp, io, 0);
+ return 0;
+}
+
+int dm_io_sync(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ unsigned long *error_bits)
+{
+ struct dpages dp;
+ list_dp_init(&dp, pl, offset);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_sync_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, unsigned long *error_bits)
+{
+ struct dpages dp;
+ bvec_dp_init(&dp, bvec);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_sync_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, unsigned long *error_bits)
+{
+ struct dpages dp;
+ vm_dp_init(&dp, data);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ list_dp_init(&dp, pl, offset);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
+
+int dm_io_async_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ bvec_dp_init(&dp, bvec);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
+
+int dm_io_async_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ vm_dp_init(&dp, data);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
+
+EXPORT_SYMBOL(dm_io_get);
+EXPORT_SYMBOL(dm_io_put);
+EXPORT_SYMBOL(dm_io_sync);
+EXPORT_SYMBOL(dm_io_async);
+EXPORT_SYMBOL(dm_io_sync_bvec);
+EXPORT_SYMBOL(dm_io_async_bvec);
+EXPORT_SYMBOL(dm_io_sync_vm);
+EXPORT_SYMBOL(dm_io_async_vm);
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef _DM_IO_H
+#define _DM_IO_H
+
+#include "dm.h"
+
+/* FIXME make this configurable */
+#define DM_MAX_IO_REGIONS 8
+
+struct io_region {
+ struct block_device *bdev;
+ sector_t sector;
+ sector_t count;
+};
+
+struct page_list {
+ struct page_list *next;
+ struct page *page;
+};
+
+
+/*
+ * 'error' is a bitset, with each bit indicating whether an error
+ * occurred doing io to the corresponding region.
+ */
+typedef void (*io_notify_fn)(unsigned long error, void *context);
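+
+/*
+ * Illustrative callback (hypothetical, not part of this interface):
+ *
+ * static void my_notify(unsigned long error, void *context)
+ * {
+ * if (test_bit(2, &error))
+ * DMWARN("io to region 2 failed");
+ * }
+ */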
+
+
+/*
+ * Before anyone uses the IO interface they should call
+ * dm_io_get(), specifying roughly how many pages they are
+ * expecting to perform io on concurrently.
+ *
+ * This function may block.
+ */
+int dm_io_get(unsigned int num_pages);
+void dm_io_put(unsigned int num_pages);
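+
+/*
+ * Typical usage (sketch): a client expecting at most 64 pages of io
+ * in flight would call
+ *
+ * dm_io_get(64); at construction time
+ * dm_io_put(64); at destruction time
+ *
+ * as the snapshot exception store (dm_create_persistent) does with
+ * sectors_to_pages(chunk_size).
+ */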
+
+/*
+ * Synchronous IO.
+ *
+ * Please ensure that the rw flag in the following functions is
+ * either READ or WRITE, i.e. we don't take READA. Any
+ * regions with a zero count field will be ignored.
+ */
+int dm_io_sync(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ unsigned long *error_bits);
+
+int dm_io_sync_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, unsigned long *error_bits);
+
+int dm_io_sync_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, unsigned long *error_bits);
+
+/*
+ * Asynchronous IO.
+ *
+ * The 'where' array may be safely allocated on the stack since
+ * the function takes a copy.
+ */
+int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ io_notify_fn fn, void *context);
+
+int dm_io_async_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, io_notify_fn fn, void *context);
+
+int dm_io_async_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, io_notify_fn fn, void *context);
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the LGPL.
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "dm-log.h"
+#include "dm-io.h"
+
+static LIST_HEAD(_log_types);
+static spinlock_t _lock = SPIN_LOCK_UNLOCKED;
+
+int dm_register_dirty_log_type(struct dirty_log_type *type)
+{
+ if (!try_module_get(type->module))
+ return -EINVAL;
+
+ spin_lock(&_lock);
+ type->use_count = 0;
+ list_add(&type->list, &_log_types);
+ spin_unlock(&_lock);
+
+ return 0;
+}
+
+int dm_unregister_dirty_log_type(struct dirty_log_type *type)
+{
+ spin_lock(&_lock);
+
+ if (type->use_count)
+ DMWARN("Attempt to unregister a log type that is still in use");
+ else {
+ list_del(&type->list);
+ module_put(type->module);
+ }
+
+ spin_unlock(&_lock);
+
+ return 0;
+}
+
+static struct dirty_log_type *get_type(const char *type_name)
+{
+ struct dirty_log_type *type;
+
+ spin_lock(&_lock);
+ list_for_each_entry (type, &_log_types, list)
+ if (!strcmp(type_name, type->name)) {
+ type->use_count++;
+ spin_unlock(&_lock);
+ return type;
+ }
+
+ spin_unlock(&_lock);
+ return NULL;
+}
+
+static void put_type(struct dirty_log_type *type)
+{
+ spin_lock(&_lock);
+ type->use_count--;
+ spin_unlock(&_lock);
+}
+
+struct dirty_log *dm_create_dirty_log(const char *type_name, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct dirty_log_type *type;
+ struct dirty_log *log;
+
+ log = kmalloc(sizeof(*log), GFP_KERNEL);
+ if (!log)
+ return NULL;
+
+ type = get_type(type_name);
+ if (!type) {
+ kfree(log);
+ return NULL;
+ }
+
+ log->type = type;
+ if (type->ctr(log, ti, argc, argv)) {
+ kfree(log);
+ put_type(type);
+ return NULL;
+ }
+
+ return log;
+}
+
+void dm_destroy_dirty_log(struct dirty_log *log)
+{
+ log->type->dtr(log);
+ put_type(log->type);
+ kfree(log);
+}
+
+/*-----------------------------------------------------------------
+ * Persistent and core logs share a lot of their implementation.
+ * FIXME: need a reload method to be called from a resume
+ *---------------------------------------------------------------*/
+/*
+ * Magic for persistent mirrors: "MiRr"
+ */
+#define MIRROR_MAGIC 0x4D695272
+
+/*
+ * The on-disk version of the metadata.
+ */
+#define MIRROR_DISK_VERSION 1
+#define LOG_OFFSET 2
+
+struct log_header {
+ uint32_t magic;
+
+ /*
+ * Simple, incrementing version; no backward
+ * compatibility.
+ */
+ uint32_t version;
+ sector_t nr_regions;
+};
+
+struct log_c {
+ struct dm_target *ti;
+ int touched;
+ sector_t region_size;
+ unsigned int region_count;
+ region_t sync_count;
+
+ unsigned bitset_uint32_count;
+ uint32_t *clean_bits;
+ uint32_t *sync_bits;
+ uint32_t *recovering_bits; /* FIXME: this seems excessive */
+
+ int sync_search;
+
+ /*
+ * Disk log fields
+ */
+ struct dm_dev *log_dev;
+ struct log_header header;
+
+ struct io_region header_location;
+ struct log_header *disk_header;
+
+ struct io_region bits_location;
+ uint32_t *disk_bits;
+};
+
+/*
+ * The touched member needs to be updated every time we access
+ * one of the bitsets.
+ */
+static inline int log_test_bit(uint32_t *bs, unsigned bit)
+{
+ return test_bit(bit, (unsigned long *) bs) ? 1 : 0;
+}
+
+static inline void log_set_bit(struct log_c *l,
+ uint32_t *bs, unsigned bit)
+{
+ set_bit(bit, (unsigned long *) bs);
+ l->touched = 1;
+}
+
+static inline void log_clear_bit(struct log_c *l,
+ uint32_t *bs, unsigned bit)
+{
+ clear_bit(bit, (unsigned long *) bs);
+ l->touched = 1;
+}
+
+/*----------------------------------------------------------------
+ * Header IO
+ *--------------------------------------------------------------*/
+static void header_to_disk(struct log_header *core, struct log_header *disk)
+{
+ disk->magic = cpu_to_le32(core->magic);
+ disk->version = cpu_to_le32(core->version);
+ disk->nr_regions = cpu_to_le64(core->nr_regions);
+}
+
+static void header_from_disk(struct log_header *core, struct log_header *disk)
+{
+ core->magic = le32_to_cpu(disk->magic);
+ core->version = le32_to_cpu(disk->version);
+ core->nr_regions = le64_to_cpu(disk->nr_regions);
+}
+
+static int read_header(struct log_c *log)
+{
+ int r;
+ unsigned long ebits;
+
+ r = dm_io_sync_vm(1, &log->header_location, READ,
+ log->disk_header, &ebits);
+ if (r)
+ return r;
+
+ header_from_disk(&log->header, log->disk_header);
+
+ if (log->header.magic != MIRROR_MAGIC) {
+ log->header.magic = MIRROR_MAGIC;
+ log->header.version = MIRROR_DISK_VERSION;
+ log->header.nr_regions = 0;
+ }
+
+ if (log->header.version != MIRROR_DISK_VERSION) {
+ DMWARN("incompatible disk log version");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int write_header(struct log_c *log)
+{
+ unsigned long ebits;
+
+ header_to_disk(&log->header, log->disk_header);
+ return dm_io_sync_vm(1, &log->header_location, WRITE,
+ log->disk_header, &ebits);
+}
+
+/*----------------------------------------------------------------
+ * Bits IO
+ *--------------------------------------------------------------*/
+static inline void bits_to_core(uint32_t *core, uint32_t *disk, unsigned count)
+{
+ unsigned i;
+
+ for (i = 0; i < count; i++)
+ core[i] = le32_to_cpu(disk[i]);
+}
+
+static inline void bits_to_disk(uint32_t *core, uint32_t *disk, unsigned count)
+{
+ unsigned i;
+
+ /* copy across the clean/dirty bitset */
+ for (i = 0; i < count; i++)
+ disk[i] = cpu_to_le32(core[i]);
+}
+
+static int read_bits(struct log_c *log)
+{
+ int r;
+ unsigned long ebits;
+
+ r = dm_io_sync_vm(1, &log->bits_location, READ,
+ log->disk_bits, &ebits);
+ if (r)
+ return r;
+
+ bits_to_core(log->clean_bits, log->disk_bits,
+ log->bitset_uint32_count);
+ return 0;
+}
+
+static int write_bits(struct log_c *log)
+{
+ unsigned long ebits;
+ bits_to_disk(log->clean_bits, log->disk_bits,
+ log->bitset_uint32_count);
+ return dm_io_sync_vm(1, &log->bits_location, WRITE,
+ log->disk_bits, &ebits);
+}
+
+/*----------------------------------------------------------------
+ * constructor/destructor
+ *--------------------------------------------------------------*/
+#define BYTE_SHIFT 3
+static int core_ctr(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct log_c *lc;
+ sector_t region_size;
+ unsigned int region_count;
+ size_t bitset_size;
+
+ if (argc != 1) {
+ DMWARN("wrong number of arguments to log_c");
+ return -EINVAL;
+ }
+
+ if (sscanf(argv[0], SECTOR_FORMAT, ®ion_size) != 1) {
+ DMWARN("invalid region size string");
+ return -EINVAL;
+ }
+
+ region_count = dm_div_up(ti->len, region_size);
+
+ lc = kmalloc(sizeof(*lc), GFP_KERNEL);
+ if (!lc) {
+ DMWARN("couldn't allocate core log");
+ return -ENOMEM;
+ }
+
+ lc->ti = ti;
+ lc->touched = 0;
+ lc->region_size = region_size;
+ lc->region_count = region_count;
+
+ /*
+ * Work out how many words we need to hold the bitset.
+ */
+ bitset_size = dm_round_up(region_count,
+ sizeof(*lc->clean_bits) << BYTE_SHIFT);
+ bitset_size >>= BYTE_SHIFT;
+
+ lc->bitset_uint32_count = bitset_size / 4;
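+
+ /*
+ * For example (illustrative): region_count == 1000 rounds up
+ * to 1024 bits, giving bitset_size == 128 bytes and
+ * bitset_uint32_count == 32.
+ */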
+ lc->clean_bits = vmalloc(bitset_size);
+ if (!lc->clean_bits) {
+ DMWARN("couldn't allocate clean bitset");
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->clean_bits, -1, bitset_size);
+
+ lc->sync_bits = vmalloc(bitset_size);
+ if (!lc->sync_bits) {
+ DMWARN("couldn't allocate sync bitset");
+ vfree(lc->clean_bits);
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->sync_bits, 0, bitset_size);
+ lc->sync_count = 0;
+
+ lc->recovering_bits = vmalloc(bitset_size);
+ if (!lc->recovering_bits) {
+ DMWARN("couldn't allocate sync bitset");
+ vfree(lc->sync_bits);
+ vfree(lc->clean_bits);
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->recovering_bits, 0, bitset_size);
+ lc->sync_search = 0;
+ log->context = lc;
+ return 0;
+}
+
+static void core_dtr(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ vfree(lc->clean_bits);
+ vfree(lc->sync_bits);
+ vfree(lc->recovering_bits);
+ kfree(lc);
+}
+
+static int disk_ctr(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ int r;
+ size_t size;
+ struct log_c *lc;
+ struct dm_dev *dev;
+
+ if (argc != 2) {
+ DMWARN("wrong number of arguments to log_d");
+ return -EINVAL;
+ }
+
+ r = dm_get_device(ti, argv[0], 0, 0 /* FIXME */,
+ FMODE_READ | FMODE_WRITE, &dev);
+ if (r)
+ return r;
+
+ r = core_ctr(log, ti, argc - 1, argv + 1);
+ if (r) {
+ dm_put_device(ti, dev);
+ return r;
+ }
+
+ lc = (struct log_c *) log->context;
+ lc->log_dev = dev;
+
+ /* setup the disk header fields */
+ lc->header_location.bdev = lc->log_dev->bdev;
+ lc->header_location.sector = 0;
+ lc->header_location.count = 1;
+
+ /*
+ * We can't read less than this amount, even though we'll
+ * not be using most of this space.
+ */
+ lc->disk_header = vmalloc(1 << SECTOR_SHIFT);
+ if (!lc->disk_header)
+ goto bad;
+
+ /* setup the disk bitset fields */
+ lc->bits_location.bdev = lc->log_dev->bdev;
+ lc->bits_location.sector = LOG_OFFSET;
+
+ size = dm_round_up(lc->bitset_uint32_count * sizeof(uint32_t),
+ 1 << SECTOR_SHIFT);
+ lc->bits_location.count = size >> SECTOR_SHIFT;
+ lc->disk_bits = vmalloc(size);
+ if (!lc->disk_bits) {
+ vfree(lc->disk_header);
+ goto bad;
+ }
+ return 0;
+
+ bad:
+ dm_put_device(ti, lc->log_dev);
+ core_dtr(log);
+ return -ENOMEM;
+}
+
+static void disk_dtr(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ dm_put_device(lc->ti, lc->log_dev);
+ vfree(lc->disk_header);
+ vfree(lc->disk_bits);
+ core_dtr(log);
+}
+
+static int count_bits32(uint32_t *addr, unsigned size)
+{
+ int count = 0, i;
+
+ for (i = 0; i < size; i++) {
+ count += hweight32(*(addr+i));
+ }
+ return count;
+}
+
+static int disk_resume(struct dirty_log *log)
+{
+ int r;
+ unsigned i;
+ struct log_c *lc = (struct log_c *) log->context;
+ size_t size = lc->bitset_uint32_count * sizeof(uint32_t);
+
+ /* read the disk header */
+ r = read_header(lc);
+ if (r)
+ return r;
+
+ /* read the bits */
+ r = read_bits(lc);
+ if (r)
+ return r;
+
+ /* zero any new bits if the mirror has grown */
+ for (i = lc->header.nr_regions; i < lc->region_count; i++)
+ /* FIXME: amazingly inefficient */
+ log_clear_bit(lc, lc->clean_bits, i);
+
+ /* copy clean across to sync */
+ memcpy(lc->sync_bits, lc->clean_bits, size);
+ lc->sync_count = count_bits32(lc->clean_bits, lc->bitset_uint32_count);
+
+ /* write the bits */
+ r = write_bits(lc);
+ if (r)
+ return r;
+
+ /* set the correct number of regions in the header */
+ lc->header.nr_regions = lc->region_count;
+
+ /* write the new header */
+ return write_header(lc);
+}
+
+static sector_t core_get_region_size(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return lc->region_size;
+}
+
+static int core_is_clean(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return log_test_bit(lc->clean_bits, region);
+}
+
+static int core_in_sync(struct dirty_log *log, region_t region, int block)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return log_test_bit(lc->sync_bits, region);
+}
+
+static int core_flush(struct dirty_log *log)
+{
+ /* no op */
+ return 0;
+}
+
+static int disk_flush(struct dirty_log *log)
+{
+ int r;
+ struct log_c *lc = (struct log_c *) log->context;
+
+ /* only write if the log has changed */
+ if (!lc->touched)
+ return 0;
+
+ r = write_bits(lc);
+ if (!r)
+ lc->touched = 0;
+
+ return r;
+}
+
+static void core_mark_region(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ log_clear_bit(lc, lc->clean_bits, region);
+}
+
+static void core_clear_region(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ log_set_bit(lc, lc->clean_bits, region);
+}
+
+static int core_get_resync_work(struct dirty_log *log, region_t *region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ if (lc->sync_search >= lc->region_count)
+ return 0;
+
+ do {
+ *region = find_next_zero_bit((unsigned long *) lc->sync_bits,
+ lc->region_count,
+ lc->sync_search);
+ lc->sync_search = *region + 1;
+
+ if (*region == lc->region_count)
+ return 0;
+
+ } while (log_test_bit(lc->recovering_bits, *region));
+
+ log_set_bit(lc, lc->recovering_bits, *region);
+ return 1;
+}
+
+static void core_complete_resync_work(struct dirty_log *log, region_t region,
+ int success)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ log_clear_bit(lc, lc->recovering_bits, region);
+ if (success) {
+ log_set_bit(lc, lc->sync_bits, region);
+ lc->sync_count++;
+ }
+}
+
+static region_t core_get_sync_count(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ return lc->sync_count;
+}
+
+static struct dirty_log_type _core_type = {
+ .name = "core",
+ .module = THIS_MODULE,
+ .ctr = core_ctr,
+ .dtr = core_dtr,
+ .get_region_size = core_get_region_size,
+ .is_clean = core_is_clean,
+ .in_sync = core_in_sync,
+ .flush = core_flush,
+ .mark_region = core_mark_region,
+ .clear_region = core_clear_region,
+ .get_resync_work = core_get_resync_work,
+ .complete_resync_work = core_complete_resync_work,
+ .get_sync_count = core_get_sync_count
+};
+
+static struct dirty_log_type _disk_type = {
+ .name = "disk",
+ .module = THIS_MODULE,
+ .ctr = disk_ctr,
+ .dtr = disk_dtr,
+ .suspend = disk_flush,
+ .resume = disk_resume,
+ .get_region_size = core_get_region_size,
+ .is_clean = core_is_clean,
+ .in_sync = core_in_sync,
+ .flush = disk_flush,
+ .mark_region = core_mark_region,
+ .clear_region = core_clear_region,
+ .get_resync_work = core_get_resync_work,
+ .complete_resync_work = core_complete_resync_work,
+ .get_sync_count = core_get_sync_count
+};
+
+int __init dm_dirty_log_init(void)
+{
+ int r;
+
+ r = dm_register_dirty_log_type(&_core_type);
+ if (r)
+ DMWARN("couldn't register core log");
+
+ r = dm_register_dirty_log_type(&_disk_type);
+ if (r) {
+ DMWARN("couldn't register disk type");
+ dm_unregister_dirty_log_type(&_core_type);
+ }
+
+ return r;
+}
+
+void dm_dirty_log_exit(void)
+{
+ dm_unregister_dirty_log_type(&_disk_type);
+ dm_unregister_dirty_log_type(&_core_type);
+}
+
+EXPORT_SYMBOL(dm_register_dirty_log_type);
+EXPORT_SYMBOL(dm_unregister_dirty_log_type);
+EXPORT_SYMBOL(dm_create_dirty_log);
+EXPORT_SYMBOL(dm_destroy_dirty_log);
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the LGPL.
+ */
+
+#ifndef DM_DIRTY_LOG
+#define DM_DIRTY_LOG
+
+#include "dm.h"
+
+typedef sector_t region_t;
+
+struct dirty_log_type;
+
+struct dirty_log {
+ struct dirty_log_type *type;
+ void *context;
+};
+
+struct dirty_log_type {
+ struct list_head list;
+ const char *name;
+ struct module *module;
+ unsigned int use_count;
+
+ int (*ctr)(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv);
+ void (*dtr)(struct dirty_log *log);
+
+ /*
+ * There are times when we don't want the log to touch
+ * the disk.
+ */
+ int (*suspend)(struct dirty_log *log);
+ int (*resume)(struct dirty_log *log);
+
+ /*
+ * Retrieves the smallest size of region that the log can
+ * deal with.
+ */
+ sector_t (*get_region_size)(struct dirty_log *log);
+
+ /*
+ * A predicate to say whether a region is clean or not.
+ * May block.
+ */
+ int (*is_clean)(struct dirty_log *log, region_t region);
+
+ /*
+ * Returns: 0, 1, -EWOULDBLOCK, < 0
+ *
+ * A predicate function to check whether the area given by
+ * [sector, sector + len) is in sync.
+ *
+ * If -EWOULDBLOCK is returned the state of the region is
+ * unknown, typically this will result in a read being
+ * passed to a daemon to deal with, since a daemon is
+ * allowed to block.
+ */
+ int (*in_sync)(struct dirty_log *log, region_t region, int can_block);
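+
+ /*
+ * A caller that must not block might do (sketch):
+ *
+ * r = log->type->in_sync(log, region, 0);
+ * if (r == -EWOULDBLOCK)
+ * ... hand the region to a daemon that can call
+ * in_sync(log, region, 1) ...
+ */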
+
+ /*
+ * Flush the current log state (eg, to disk). This
+ * function may block.
+ */
+ int (*flush)(struct dirty_log *log);
+
+ /*
+ * Mark an area as clean or dirty. These functions may
+ * block, though for performance reasons blocking should
+ * be extremely rare (eg, allocating another chunk of
+ * memory for some reason).
+ */
+ void (*mark_region)(struct dirty_log *log, region_t region);
+ void (*clear_region)(struct dirty_log *log, region_t region);
+
+ /*
+ * Returns: <0 (error), 0 (no region), 1 (region)
+ *
+ * The mirrord will need to perform recovery on regions of
+ * the mirror that are in the NOSYNC state. This
+ * function asks the log to tell the caller about the
+ * next region that this machine should recover.
+ *
+ * Do not confuse this function with 'in_sync()'; one
+ * tells you whether an area is synchronised, the other
+ * assigns recovery work.
+ */
+ int (*get_resync_work)(struct dirty_log *log, region_t *region);
+
+ /*
+ * This notifies the log that the resync of an area has
+ * been completed. The log should then mark this region
+ * as CLEAN.
+ */
+ void (*complete_resync_work)(struct dirty_log *log,
+ region_t region, int success);
+
+ /*
+ * Returns the number of regions that are in sync.
+ */
+ region_t (*get_sync_count)(struct dirty_log *log);
+};
+
+int dm_register_dirty_log_type(struct dirty_log_type *type);
+int dm_unregister_dirty_log_type(struct dirty_log_type *type);
+
+/*
+ * Make sure you use these two functions, rather than calling
+ * type->constructor/destructor() directly.
+ */
+struct dirty_log *dm_create_dirty_log(const char *type_name, struct dm_target *ti,
+ unsigned int argc, char **argv);
+void dm_destroy_dirty_log(struct dirty_log *log);
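+
+/*
+ * A minimal usage sketch (hypothetical argv): create a core log
+ * with 1024-sector regions, then destroy it again.
+ *
+ * char *argv[] = { "1024" };
+ * struct dirty_log *log = dm_create_dirty_log("core", ti, 1, argv);
+ * if (log)
+ * dm_destroy_dirty_log(log);
+ */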
+
+/*
+ * init/exit functions.
+ */
+int dm_dirty_log_init(void);
+void dm_dirty_log_exit(void);
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software Limited.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm.h"
+#include "dm-bio-list.h"
+#include "dm-io.h"
+#include "dm-log.h"
+#include "kcopyd.h"
+
+#include <linux/ctype.h>
+#include <linux/init.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/pagemap.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/vmalloc.h>
+#include <linux/workqueue.h>
+
+static struct workqueue_struct *_kmirrord_wq;
+static struct work_struct _kmirrord_work;
+
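+/*
+ * Kick the shared kmirrord work item; do_work() below will walk
+ * every registered mirror set.
+ */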
+static inline void wake(void)
+{
+ queue_work(_kmirrord_wq, &_kmirrord_work);
+}
+
+/*-----------------------------------------------------------------
+ * Region hash
+ *
+ * The mirror splits itself up into discrete regions. Each
+ * region can be in one of three states: clean, dirty,
+ * nosync. There is no need to put clean regions in the hash.
+ *
+ * In addition to being present in the hash table a region _may_
+ * be present on one of three lists.
+ *
+ * clean_regions: Regions on this list have no io pending to
+ * them, they are in sync, we are no longer interested in them,
+ * they are dull. rh_update_states() will remove them from the
+ * hash table.
+ *
+ * quiesced_regions: These regions have been spun down, ready
+ * for recovery. rh_recovery_start() will remove regions from
+ * this list and hand them to kmirrord, which will schedule the
+ * recovery io with kcopyd.
+ *
+ * recovered_regions: Regions that kcopyd has successfully
+ * recovered. rh_update_states() will now schedule any delayed
+ * io, up the recovery_count, and remove the region from the
+ * hash.
+ *
+ * There are 2 locks:
+ * A rw spin lock 'hash_lock' protects just the hash table,
+ * this is never held in write mode from interrupt context,
+ * which I believe means that we only have to disable irqs when
+ * doing a write lock.
+ *
+ * An ordinary spin lock 'region_lock' that protects the three
+ * lists in the region_hash, with the 'state', 'list' and
+ * 'bhs_delayed' fields of the regions. This is used from irq
+ * context, so all other uses will have to suspend local irqs.
+ *---------------------------------------------------------------*/
+struct mirror_set;
+struct region_hash {
+ struct mirror_set *ms;
+ sector_t region_size;
+ unsigned region_shift;
+
+ /* holds persistent region state */
+ struct dirty_log *log;
+
+ /* hash table */
+ rwlock_t hash_lock;
+ mempool_t *region_pool;
+ unsigned int mask;
+ unsigned int nr_buckets;
+ struct list_head *buckets;
+
+ spinlock_t region_lock;
+ struct semaphore recovery_count;
+ struct list_head clean_regions;
+ struct list_head quiesced_regions;
+ struct list_head recovered_regions;
+};
+
+enum {
+ RH_CLEAN,
+ RH_DIRTY,
+ RH_NOSYNC,
+ RH_RECOVERING
+};
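+
+/*
+ * Transitions used below: CLEAN -> DIRTY on the first write to a
+ * region (rh_inc), DIRTY -> CLEAN when its last pending io
+ * completes (rh_dec), and -> RECOVERING when the log hands the
+ * region out as resync work (__rh_recovery_prepare).
+ */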
+
+struct region {
+ struct region_hash *rh; /* FIXME: can we get rid of this ? */
+ region_t key;
+ int state;
+
+ struct list_head hash_list;
+ struct list_head list;
+
+ atomic_t pending;
+ struct bio_list delayed_bios;
+};
+
+/*
+ * Conversion fns
+ */
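+/*
+ * region_size is a power of 2, so these reduce to cheap shifts
+ * (region_shift is set up in rh_init()).
+ */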
+static inline region_t bio_to_region(struct region_hash *rh, struct bio *bio)
+{
+ return bio->bi_sector >> rh->region_shift;
+}
+
+static inline sector_t region_to_sector(struct region_hash *rh, region_t region)
+{
+ return region << rh->region_shift;
+}
+
+/* FIXME move this */
+static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw);
+
+static void *region_alloc(int gfp_mask, void *pool_data)
+{
+ return kmalloc(sizeof(struct region), gfp_mask);
+}
+
+static void region_free(void *element, void *pool_data)
+{
+ kfree(element);
+}
+
+#define MIN_REGIONS 64
+#define MAX_RECOVERY 1
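+/*
+ * recovery_count is a counting semaphore that limits in-flight
+ * region recoveries to MAX_RECOVERY; rh_start_recovery() raises
+ * it and rh_stop_recovery() drains it.
+ */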
+static int rh_init(struct region_hash *rh, struct mirror_set *ms,
+ struct dirty_log *log, sector_t region_size,
+ region_t nr_regions)
+{
+ unsigned int nr_buckets, max_buckets;
+ size_t i;
+
+ /*
+ * Calculate a suitable number of buckets for our hash
+ * table.
+ */
+ max_buckets = nr_regions >> 6;
+ for (nr_buckets = 128u; nr_buckets < max_buckets; nr_buckets <<= 1)
+ ;
+ nr_buckets >>= 1;
+
+ rh->ms = ms;
+ rh->log = log;
+ rh->region_size = region_size;
+ rh->region_shift = ffs(region_size) - 1;
+ rwlock_init(&rh->hash_lock);
+ rh->mask = nr_buckets - 1;
+ rh->nr_buckets = nr_buckets;
+
+ rh->buckets = vmalloc(nr_buckets * sizeof(*rh->buckets));
+ if (!rh->buckets) {
+ DMERR("unable to allocate region hash memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_buckets; i++)
+ INIT_LIST_HEAD(rh->buckets + i);
+
+ spin_lock_init(&rh->region_lock);
+ sema_init(&rh->recovery_count, 0);
+ INIT_LIST_HEAD(&rh->clean_regions);
+ INIT_LIST_HEAD(&rh->quiesced_regions);
+ INIT_LIST_HEAD(&rh->recovered_regions);
+
+ rh->region_pool = mempool_create(MIN_REGIONS, region_alloc,
+ region_free, NULL);
+ if (!rh->region_pool) {
+ vfree(rh->buckets);
+ rh->buckets = NULL;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void rh_exit(struct region_hash *rh)
+{
+ unsigned int h;
+ struct region *reg, *nreg;
+
+ BUG_ON(!list_empty(&rh->quiesced_regions));
+ for (h = 0; h < rh->nr_buckets; h++) {
+ list_for_each_entry_safe(reg, nreg, rh->buckets + h, hash_list) {
+ BUG_ON(atomic_read(&reg->pending));
+ mempool_free(reg, rh->region_pool);
+ }
+ }
+
+ if (rh->log)
+ dm_destroy_dirty_log(rh->log);
+ if (rh->region_pool)
+ mempool_destroy(rh->region_pool);
+ vfree(rh->buckets);
+}
+
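+/*
+ * Multiplicative hashing: the large odd constant scrambles the
+ * region number and the shift discards the low-order bits before
+ * masking down to a bucket.
+ */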
+#define RH_HASH_MULT 2654435387U
+
+static inline unsigned int rh_hash(struct region_hash *rh, region_t region)
+{
+ return (unsigned int) ((region * RH_HASH_MULT) >> 12) & rh->mask;
+}
+
+static struct region *__rh_lookup(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ list_for_each_entry (reg, rh->buckets + rh_hash(rh, region), hash_list)
+ if (reg->key == region)
+ return reg;
+
+ return NULL;
+}
+
+static void __rh_insert(struct region_hash *rh, struct region *reg)
+{
+ unsigned int h = rh_hash(rh, reg->key);
+ list_add(&reg->hash_list, rh->buckets + h);
+}
+
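+/*
+ * Called with the read lock held. It must be dropped so we can
+ * sleep in mempool_alloc(), then the lock is retaken in write mode
+ * and the lookup repeated in case another thread inserted the
+ * region while we slept.
+ */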
+static struct region *__rh_alloc(struct region_hash *rh, region_t region)
+{
+ struct region *reg, *nreg;
+
+ read_unlock(&rh->hash_lock);
+ nreg = mempool_alloc(rh->region_pool, GFP_NOIO);
+ nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
+ RH_CLEAN : RH_NOSYNC;
+ nreg->rh = rh;
+ nreg->key = region;
+
+ INIT_LIST_HEAD(&nreg->list);
+
+ atomic_set(&nreg->pending, 0);
+ bio_list_init(&nreg->delayed_bios);
+ write_lock_irq(&rh->hash_lock);
+
+ reg = __rh_lookup(rh, region);
+ if (reg)
+ /* we lost the race */
+ mempool_free(nreg, rh->region_pool);
+
+ else {
+ __rh_insert(rh, nreg);
+ if (nreg->state == RH_CLEAN) {
+ spin_lock_irq(&rh->region_lock);
+ list_add(&nreg->list, &rh->clean_regions);
+ spin_unlock_irq(&rh->region_lock);
+ }
+ reg = nreg;
+ }
+ write_unlock_irq(&rh->hash_lock);
+ read_lock(&rh->hash_lock);
+
+ return reg;
+}
+
+static inline struct region *__rh_find(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ reg = __rh_lookup(rh, region);
+ if (!reg)
+ reg = __rh_alloc(rh, region);
+
+ return reg;
+}
+
+static int rh_state(struct region_hash *rh, region_t region, int may_block)
+{
+ int r;
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_lookup(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ if (reg)
+ return reg->state;
+
+ /*
+ * The region wasn't in the hash, so we fall back to the
+ * dirty log.
+ */
+ r = rh->log->type->in_sync(rh->log, region, may_block);
+
+ /*
+ * Any error from the dirty log (e.g. -EWOULDBLOCK) is
+ * treated as RH_NOSYNC.
+ */
+ return r == 1 ? RH_CLEAN : RH_NOSYNC;
+}
+
+static inline int rh_in_sync(struct region_hash *rh,
+ region_t region, int may_block)
+{
+ int state = rh_state(rh, region, may_block);
+ return state == RH_CLEAN || state == RH_DIRTY;
+}
+
+static void dispatch_bios(struct mirror_set *ms, struct bio_list *bio_list)
+{
+ struct bio *bio;
+
+ while ((bio = bio_list_pop(bio_list))) {
+ queue_bio(ms, bio, WRITE);
+ }
+}
+
+static void rh_update_states(struct region_hash *rh)
+{
+ struct region *reg, *next;
+
+ LIST_HEAD(clean);
+ LIST_HEAD(recovered);
+
+ /*
+ * Quickly grab the lists.
+ */
+ write_lock_irq(&rh->hash_lock);
+ spin_lock(&rh->region_lock);
+ if (!list_empty(&rh->clean_regions)) {
+ list_splice(&rh->clean_regions, &clean);
+ INIT_LIST_HEAD(&rh->clean_regions);
+
+ list_for_each_entry (reg, &clean, list) {
+ rh->log->type->clear_region(rh->log, reg->key);
+ list_del(&reg->hash_list);
+ }
+ }
+
+ if (!list_empty(&rh->recovered_regions)) {
+ list_splice(&rh->recovered_regions, &recovered);
+ INIT_LIST_HEAD(&rh->recovered_regions);
+
+ list_for_each_entry (reg, &recovered, list)
+ list_del(&reg->hash_list);
+ }
+ spin_unlock(&rh->region_lock);
+ write_unlock_irq(&rh->hash_lock);
+
+ /*
+ * All the regions on the recovered and clean lists have
+ * now been pulled out of the system, so no need to do
+ * any more locking.
+ */
+ list_for_each_entry_safe (reg, next, &recovered, list) {
+ rh->log->type->clear_region(rh->log, reg->key);
+ rh->log->type->complete_resync_work(rh->log, reg->key, 1);
+ dispatch_bios(rh->ms, &reg->delayed_bios);
+ up(&rh->recovery_count);
+ mempool_free(reg, rh->region_pool);
+ }
+
+ if (!list_empty(&recovered))
+ rh->log->type->flush(rh->log);
+
+ list_for_each_entry_safe (reg, next, &clean, list)
+ mempool_free(reg, rh->region_pool);
+}
+
+static void rh_inc(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, region);
+ if (reg->state == RH_CLEAN) {
+ rh->log->type->mark_region(rh->log, reg->key);
+
+ spin_lock_irq(&rh->region_lock);
+ reg->state = RH_DIRTY;
+ list_del_init(&reg->list); /* take off the clean list */
+ spin_unlock_irq(&rh->region_lock);
+ }
+
+ atomic_inc(&reg->pending);
+ read_unlock(&rh->hash_lock);
+}
+
+static void rh_inc_pending(struct region_hash *rh, struct bio_list *bios)
+{
+ struct bio *bio;
+
+ for (bio = bios->head; bio; bio = bio->bi_next)
+ rh_inc(rh, bio_to_region(rh, bio));
+}
+
+static void rh_dec(struct region_hash *rh, region_t region)
+{
+ unsigned long flags;
+ struct region *reg;
+ int should_wake = 0;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_lookup(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ if (atomic_dec_and_test(&reg->pending)) {
+ spin_lock_irqsave(&rh->region_lock, flags);
+ if (reg->state == RH_RECOVERING) {
+ list_add_tail(&reg->list, &rh->quiesced_regions);
+ } else {
+ reg->state = RH_CLEAN;
+ list_add(&reg->list, &rh->clean_regions);
+ }
+ }
+ spin_unlock_irqrestore(&rh->region_lock, flags);
+ should_wake = 1;
+ }
+
+ if (should_wake)
+ wake();
+}
+
+/*
+ * Starts quiescing a region in preparation for recovery.
+ */
+static int __rh_recovery_prepare(struct region_hash *rh)
+{
+ int r;
+ struct region *reg;
+ region_t region;
+
+ /*
+ * Ask the dirty log what's next.
+ */
+ r = rh->log->type->get_resync_work(rh->log, &region);
+ if (r <= 0)
+ return r;
+
+ /*
+ * Get this region, and start it quiescing by setting the
+ * recovering flag.
+ */
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ spin_lock_irq(&rh->region_lock);
+ reg->state = RH_RECOVERING;
+
+ /* Already quiesced ? */
+ if (atomic_read(&reg->pending))
+ list_del_init(&reg->list);
+
+ else {
+ list_del_init(&reg->list);
+ list_add(&reg->list, &rh->quiesced_regions);
+ }
+ spin_unlock_irq(&rh->region_lock);
+
+ return 1;
+}
+
+static void rh_recovery_prepare(struct region_hash *rh)
+{
+ while (!down_trylock(&rh->recovery_count))
+ if (__rh_recovery_prepare(rh) <= 0) {
+ up(&rh->recovery_count);
+ break;
+ }
+}
+
+/*
+ * Returns any quiesced regions.
+ */
+static struct region *rh_recovery_start(struct region_hash *rh)
+{
+ struct region *reg = NULL;
+
+ spin_lock_irq(&rh->region_lock);
+ if (!list_empty(&rh->quiesced_regions)) {
+ reg = list_entry(rh->quiesced_regions.next,
+ struct region, list);
+ list_del_init(&reg->list); /* remove from the quiesced list */
+ }
+ spin_unlock_irq(&rh->region_lock);
+
+ return reg;
+}
+
+/* FIXME: success ignored for now */
+static void rh_recovery_end(struct region *reg, int success)
+{
+ struct region_hash *rh = reg->rh;
+
+ spin_lock_irq(&rh->region_lock);
+ list_add(&reg->list, &reg->rh->recovered_regions);
+ spin_unlock_irq(&rh->region_lock);
+
+ wake();
+}
+
+static void rh_flush(struct region_hash *rh)
+{
+ rh->log->type->flush(rh->log);
+}
+
+static void rh_delay(struct region_hash *rh, struct bio *bio)
+{
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, bio_to_region(rh, bio));
+ bio_list_add(&reg->delayed_bios, bio);
+ read_unlock(&rh->hash_lock);
+}
+
+static void rh_stop_recovery(struct region_hash *rh)
+{
+ int i;
+
+ /* wait for any recovering regions */
+ for (i = 0; i < MAX_RECOVERY; i++)
+ down(&rh->recovery_count);
+}
+
+static void rh_start_recovery(struct region_hash *rh)
+{
+ int i;
+
+ for (i = 0; i < MAX_RECOVERY; i++)
+ up(&rh->recovery_count);
+
+ wake();
+}
+
+/*-----------------------------------------------------------------
+ * Mirror set structures.
+ *---------------------------------------------------------------*/
+struct mirror {
+ atomic_t error_count;
+ struct dm_dev *dev;
+ sector_t offset;
+};
+
+struct mirror_set {
+ struct dm_target *ti;
+ struct list_head list;
+ struct region_hash rh;
+ struct kcopyd_client *kcopyd_client;
+
+ spinlock_t lock; /* protects the next two lists */
+ struct bio_list reads;
+ struct bio_list writes;
+
+ /* recovery */
+ region_t nr_regions;
+ int in_sync;
+
+ unsigned int nr_mirrors;
+ struct mirror mirror[0];
+};
+
+/*
+ * Every mirror should look like this one.
+ */
+#define DEFAULT_MIRROR 0
+
+/*
+ * This is yucky. We squirrel the mirror_set struct away inside
+ * bi_next for write buffers. This is safe since the bio
+ * doesn't get submitted to the lower levels of the block layer.
+ */
+static struct mirror_set *bio_get_ms(struct bio *bio)
+{
+ return (struct mirror_set *) bio->bi_next;
+}
+
+static void bio_set_ms(struct bio *bio, struct mirror_set *ms)
+{
+ bio->bi_next = (struct bio *) ms;
+}
+
+/*-----------------------------------------------------------------
+ * Recovery.
+ *
+ * When a mirror is first activated we may find that some regions
+ * are in the no-sync state. We have to recover these by
+ * recopying from the default mirror to all the others.
+ *---------------------------------------------------------------*/
+static void recovery_complete(int read_err, unsigned int write_err,
+ void *context)
+{
+ struct region *reg = (struct region *) context;
+
+ /* FIXME: better error handling */
+ rh_recovery_end(reg, read_err || write_err);
+}
+
+static int recover(struct mirror_set *ms, struct region *reg)
+{
+ int r;
+ unsigned int i;
+ struct io_region from, to[ms->nr_mirrors - 1], *dest;
+ struct mirror *m;
+ unsigned long flags = 0;
+
+ /* fill in the source */
+ m = ms->mirror + DEFAULT_MIRROR;
+ from.bdev = m->dev->bdev;
+ from.sector = m->offset + region_to_sector(reg->rh, reg->key);
+ if (reg->key == (ms->nr_regions - 1)) {
+ /*
+ * The final region may be smaller than
+ * region_size.
+ */
+ from.count = ms->ti->len & (reg->rh->region_size - 1);
+ if (!from.count)
+ from.count = reg->rh->region_size;
+ } else
+ from.count = reg->rh->region_size;
+
+ /* fill in the destinations */
+ for (i = 0, dest = to; i < ms->nr_mirrors; i++) {
+ if (i == DEFAULT_MIRROR)
+ continue;
+
+ m = ms->mirror + i;
+ dest->bdev = m->dev->bdev;
+ dest->sector = m->offset + region_to_sector(reg->rh, reg->key);
+ dest->count = from.count;
+ dest++;
+ }
+
+ /* hand to kcopyd */
+ set_bit(KCOPYD_IGNORE_ERROR, &flags);
+ r = kcopyd_copy(ms->kcopyd_client, &from, ms->nr_mirrors - 1, to, flags,
+ recovery_complete, reg);
+
+ return r;
+}
+
+static void do_recovery(struct mirror_set *ms)
+{
+ int r;
+ struct region *reg;
+ struct dirty_log *log = ms->rh.log;
+
+ /*
+ * Start quiescing some regions.
+ */
+ rh_recovery_prepare(&ms->rh);
+
+ /*
+ * Copy any already quiesced regions.
+ */
+ while ((reg = rh_recovery_start(&ms->rh))) {
+ r = recover(ms, reg);
+ if (r)
+ rh_recovery_end(reg, 0);
+ }
+
+ /*
+ * Update the in sync flag.
+ */
+ if (!ms->in_sync &&
+ (log->type->get_sync_count(log) == ms->nr_regions)) {
+ /* the sync is complete */
+ dm_table_event(ms->ti->table);
+ ms->in_sync = 1;
+ }
+}
+
+/*-----------------------------------------------------------------
+ * Reads
+ *---------------------------------------------------------------*/
+static struct mirror *choose_mirror(struct mirror_set *ms, sector_t sector)
+{
+ /* FIXME: add read balancing */
+ return ms->mirror + DEFAULT_MIRROR;
+}
+
+/*
+ * remap a buffer to a particular mirror.
+ */
+static void map_bio(struct mirror_set *ms, struct mirror *m, struct bio *bio)
+{
+ bio->bi_bdev = m->dev->bdev;
+ bio->bi_sector = m->offset + (bio->bi_sector - ms->ti->begin);
+}
+
+static void do_reads(struct mirror_set *ms, struct bio_list *reads)
+{
+ region_t region;
+ struct bio *bio;
+ struct mirror *m;
+
+ while ((bio = bio_list_pop(reads))) {
+ region = bio_to_region(&ms->rh, bio);
+
+ /*
+ * We can only read balance if the region is in sync.
+ */
+ if (rh_in_sync(&ms->rh, region, 0))
+ m = choose_mirror(ms, bio->bi_sector);
+ else
+ m = ms->mirror + DEFAULT_MIRROR;
+
+ map_bio(ms, m, bio);
+ generic_make_request(bio);
+ }
+}
+
+/*-----------------------------------------------------------------
+ * Writes.
+ *
+ * We do different things with the write io depending on the
+ * state of the region that it's in:
+ *
+ * SYNC: increment pending, use kcopyd to write to *all* mirrors
+ * RECOVERING: delay the io until recovery completes
+ * NOSYNC: increment pending, just write to the default mirror
+ *---------------------------------------------------------------*/
+static void write_callback(unsigned long error, void *context)
+{
+ unsigned int i;
+ int uptodate = 1;
+ struct bio *bio = (struct bio *) context;
+ struct mirror_set *ms;
+
+ ms = bio_get_ms(bio);
+ bio_set_ms(bio, NULL);
+
+ /*
+ * NOTE: We don't decrement the pending count here,
+ * instead it is done by the targets endio function.
+ * This way we handle both writes to SYNC and NOSYNC
+ * regions with the same code.
+ */
+
+ if (error) {
+ /*
+ * only error the io if all mirrors failed.
+ * FIXME: bogus
+ */
+ uptodate = 0;
+ for (i = 0; i < ms->nr_mirrors; i++)
+ if (!test_bit(i, &error)) {
+ uptodate = 1;
+ break;
+ }
+ }
+ bio_endio(bio, bio->bi_size, uptodate ? 0 : -EIO);
+}
+
+static void do_write(struct mirror_set *ms, struct bio *bio)
+{
+ unsigned int i;
+ struct io_region io[ms->nr_mirrors];
+ struct mirror *m;
+
+ for (i = 0; i < ms->nr_mirrors; i++) {
+ m = ms->mirror + i;
+
+ io[i].bdev = m->dev->bdev;
+ io[i].sector = m->offset + (bio->bi_sector - ms->ti->begin);
+ io[i].count = bio->bi_size >> 9;
+ }
+
+ bio_set_ms(bio, ms);
+ dm_io_async_bvec(ms->nr_mirrors, io, WRITE,
+ bio->bi_io_vec + bio->bi_idx,
+ write_callback, bio);
+}
+
+static void do_writes(struct mirror_set *ms, struct bio_list *writes)
+{
+ int state;
+ struct bio *bio;
+ struct bio_list sync, nosync, recover, *this_list = NULL;
+
+ if (!writes->head)
+ return;
+
+ /*
+ * Classify each write.
+ */
+ bio_list_init(&sync);
+ bio_list_init(&nosync);
+ bio_list_init(&recover);
+
+ while ((bio = bio_list_pop(writes))) {
+ state = rh_state(&ms->rh, bio_to_region(&ms->rh, bio), 1);
+ switch (state) {
+ case RH_CLEAN:
+ case RH_DIRTY:
+ this_list = &sync;
+ break;
+
+ case RH_NOSYNC:
+ this_list = &nosync;
+ break;
+
+ case RH_RECOVERING:
+ this_list = &recover;
+ break;
+ }
+
+ bio_list_add(this_list, bio);
+ }
+
+ /*
+ * Increment the pending counts for any regions that will
+ * be written to (writes to recover regions are going to
+ * be delayed).
+ */
+ rh_inc_pending(&ms->rh, &sync);
+ rh_inc_pending(&ms->rh, &nosync);
+ rh_flush(&ms->rh);
+
+ /*
+ * Dispatch io.
+ */
+ while ((bio = bio_list_pop(&sync)))
+ do_write(ms, bio);
+
+ while ((bio = bio_list_pop(&recover)))
+ rh_delay(&ms->rh, bio);
+
+ while ((bio = bio_list_pop(&nosync))) {
+ map_bio(ms, ms->mirror + DEFAULT_MIRROR, bio);
+ generic_make_request(bio);
+ }
+}
+
+/*-----------------------------------------------------------------
+ * kmirrord
+ *---------------------------------------------------------------*/
+static LIST_HEAD(_mirror_sets);
+static DECLARE_RWSEM(_mirror_sets_lock);
+
+static void do_mirror(struct mirror_set *ms)
+{
+ struct bio_list reads, writes;
+
+ spin_lock(&ms->lock);
+ reads = ms->reads;
+ writes = ms->writes;
+ bio_list_init(&ms->reads);
+ bio_list_init(&ms->writes);
+ spin_unlock(&ms->lock);
+
+ rh_update_states(&ms->rh);
+ do_recovery(ms);
+ do_reads(ms, &reads);
+ do_writes(ms, &writes);
+}
+
+static void do_work(void *ignored)
+{
+ struct mirror_set *ms;
+
+ down_read(&_mirror_sets_lock);
+ list_for_each_entry (ms, &_mirror_sets, list)
+ do_mirror(ms);
+ up_read(&_mirror_sets_lock);
+}
+
+/*-----------------------------------------------------------------
+ * Target functions
+ *---------------------------------------------------------------*/
+static struct mirror_set *alloc_context(unsigned int nr_mirrors,
+ sector_t region_size,
+ struct dm_target *ti,
+ struct dirty_log *dl)
+{
+ size_t len;
+ struct mirror_set *ms = NULL;
+
+ if (array_too_big(sizeof(*ms), sizeof(ms->mirror[0]), nr_mirrors))
+ return NULL;
+
+ len = sizeof(*ms) + (sizeof(ms->mirror[0]) * nr_mirrors);
+
+ ms = kmalloc(len, GFP_KERNEL);
+ if (!ms) {
+ ti->error = "dm-mirror: Cannot allocate mirror context";
+ return NULL;
+ }
+
+ memset(ms, 0, len);
+ spin_lock_init(&ms->lock);
+
+ ms->ti = ti;
+ ms->nr_mirrors = nr_mirrors;
+ ms->nr_regions = dm_div_up(ti->len, region_size);
+ ms->in_sync = 0;
+
+ if (rh_init(&ms->rh, ms, dl, region_size, ms->nr_regions)) {
+ ti->error = "dm-mirror: Error creating dirty region hash";
+ kfree(ms);
+ return NULL;
+ }
+
+ return ms;
+}
+
+static void free_context(struct mirror_set *ms, struct dm_target *ti,
+ unsigned int m)
+{
+ while (m--)
+ dm_put_device(ti, ms->mirror[m].dev);
+
+ rh_exit(&ms->rh);
+ kfree(ms);
+}
+
+static inline int _check_region_size(struct dm_target *ti, sector_t size)
+{
+ return !(size % (PAGE_SIZE >> 9) || (size & (size - 1)) ||
+ size > ti->len);
+}
+
+static int get_mirror(struct mirror_set *ms, struct dm_target *ti,
+ unsigned int mirror, char **argv)
+{
+ sector_t offset;
+
+ if (sscanf(argv[1], SECTOR_FORMAT, &offset) != 1) {
+ ti->error = "dm-mirror: Invalid offset";
+ return -EINVAL;
+ }
+
+ if (dm_get_device(ti, argv[0], offset, ti->len,
+ dm_table_get_mode(ti->table),
+ &ms->mirror[mirror].dev)) {
+ ti->error = "dm-mirror: Device lookup failure";
+ return -ENXIO;
+ }
+
+ ms->mirror[mirror].offset = offset;
+
+ return 0;
+}
+
+static int add_mirror_set(struct mirror_set *ms)
+{
+ down_write(&_mirror_sets_lock);
+ list_add_tail(&ms->list, &_mirror_sets);
+ up_write(&_mirror_sets_lock);
+ wake();
+
+ return 0;
+}
+
+static void del_mirror_set(struct mirror_set *ms)
+{
+ down_write(&_mirror_sets_lock);
+ list_del(&ms->list);
+ up_write(&_mirror_sets_lock);
+}
+
+/*
+ * Create dirty log: log_type #log_params <log_params>
+ */
+static struct dirty_log *create_dirty_log(struct dm_target *ti,
+ unsigned int argc, char **argv,
+ unsigned int *args_used)
+{
+ unsigned int param_count;
+ struct dirty_log *dl;
+
+ if (argc < 2) {
+ ti->error = "dm-mirror: Insufficient mirror log arguments";
+ return NULL;
+ }
+
+ if (sscanf(argv[1], "%u", &param_count) != 1) {
+ ti->error = "dm-mirror: Invalid mirror log argument count";
+ return NULL;
+ }
+
+ *args_used = 2 + param_count;
+
+ if (argc < *args_used) {
+ ti->error = "dm-mirror: Insufficient mirror log arguments";
+ return NULL;
+ }
+
+ dl = dm_create_dirty_log(argv[0], ti, param_count, argv + 2);
+ if (!dl) {
+ ti->error = "dm-mirror: Error creating mirror dirty log";
+ return NULL;
+ }
+
+ if (!_check_region_size(ti, dl->type->get_region_size(dl))) {
+ ti->error = "dm-mirror: Invalid region size";
+ dm_destroy_dirty_log(dl);
+ return NULL;
+ }
+
+ return dl;
+}
+
+/*
+ * Construct a mirror mapping:
+ *
+ * log_type #log_params <log_params>
+ * #mirrors [mirror_path offset]{2,}
+ *
+ * For now, #log_params = 1, log_type = "core"
+ *
+ */
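+/*
+ * An illustrative table line (hypothetical devices and sizes):
+ *
+ * 0 2097152 mirror core 1 1024 2 /dev/sda1 0 /dev/sdb1 0
+ *
+ * i.e. a two-way mirror of 2097152 sectors using the core log
+ * with 1024-sector regions, starting at offset 0 on each device.
+ */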
+#define DM_IO_PAGES 64
+static int mirror_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ int r;
+ unsigned int nr_mirrors, m, args_used;
+ struct mirror_set *ms;
+ struct dirty_log *dl;
+
+ dl = create_dirty_log(ti, argc, argv, &args_used);
+ if (!dl)
+ return -EINVAL;
+
+ argv += args_used;
+ argc -= args_used;
+
+ if (!argc || sscanf(argv[0], "%u", &nr_mirrors) != 1 ||
+ nr_mirrors < 2) {
+ ti->error = "dm-mirror: Invalid number of mirrors";
+ dm_destroy_dirty_log(dl);
+ return -EINVAL;
+ }
+
+ argv++, argc--;
+
+ if (argc != nr_mirrors * 2) {
+ ti->error = "dm-mirror: Wrong number of mirror arguments";
+ dm_destroy_dirty_log(dl);
+ return -EINVAL;
+ }
+
+ ms = alloc_context(nr_mirrors, dl->type->get_region_size(dl), ti, dl);
+ if (!ms) {
+ dm_destroy_dirty_log(dl);
+ return -ENOMEM;
+ }
+
+ /* Get the mirror parameter sets */
+ for (m = 0; m < nr_mirrors; m++) {
+ r = get_mirror(ms, ti, m, argv);
+ if (r) {
+ free_context(ms, ti, m);
+ return r;
+ }
+ argv += 2;
+ argc -= 2;
+ }
+
+ ti->private = ms;
+
+ r = kcopyd_client_create(DM_IO_PAGES, &ms->kcopyd_client);
+ if (r) {
+ free_context(ms, ti, ms->nr_mirrors);
+ return r;
+ }
+
+ add_mirror_set(ms);
+ return 0;
+}
+
+static void mirror_dtr(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+
+ del_mirror_set(ms);
+ kcopyd_client_destroy(ms->kcopyd_client);
+ free_context(ms, ti, ms->nr_mirrors);
+}
+
+static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw)
+{
+ int should_wake = 0;
+ struct bio_list *bl;
+
+ bl = (rw == WRITE) ? &ms->writes : &ms->reads;
+ spin_lock(&ms->lock);
+ should_wake = !(bl->head);
+ bio_list_add(bl, bio);
+ spin_unlock(&ms->lock);
+
+ if (should_wake)
+ wake();
+}
+
+/*
+ * Mirror mapping function
+ */
+static int mirror_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ int r, rw = bio_rw(bio);
+ struct mirror *m;
+ struct mirror_set *ms = ti->private;
+
+ map_context->ll = bio->bi_sector >> ms->rh.region_shift;
+
+ if (rw == WRITE) {
+ queue_bio(ms, bio, rw);
+ return 0;
+ }
+
+ r = ms->rh.log->type->in_sync(ms->rh.log,
+ bio_to_region(&ms->rh, bio), 0);
+ if (r < 0 && r != -EWOULDBLOCK)
+ return r;
+
+ if (r == -EWOULDBLOCK) /* FIXME: ugly */
+ r = 0;
+
+ /*
+ * We don't want to fast track a recovery just for a read
+ * ahead. So we just let it silently fail.
+ * FIXME: get rid of this.
+ */
+ if (!r && rw == READA)
+ return -EIO;
+
+ if (!r) {
+ /* Pass this io over to the daemon */
+ queue_bio(ms, bio, rw);
+ return 0;
+ }
+
+ m = choose_mirror(ms, bio->bi_sector);
+ if (!m)
+ return -EIO;
+
+ map_bio(ms, m, bio);
+ return 1;
+}
+
+static int mirror_end_io(struct dm_target *ti, struct bio *bio,
+ int error, union map_info *map_context)
+{
+ int rw = bio_rw(bio);
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ region_t region = map_context->ll;
+
+ /*
+ * We need to dec pending if this was a write.
+ */
+ if (rw == WRITE)
+ rh_dec(&ms->rh, region);
+
+ return 0;
+}
+
+static void mirror_suspend(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ struct dirty_log *log = ms->rh.log;
+ rh_stop_recovery(&ms->rh);
+ if (log->type->suspend && log->type->suspend(log))
+ /* FIXME: need better error handling */
+ DMWARN("log suspend failed");
+}
+
+static void mirror_resume(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ struct dirty_log *log = ms->rh.log;
+ if (log->type->resume && log->type->resume(log))
+ /* FIXME: need better error handling */
+ DMWARN("log resume failed");
+ rh_start_recovery(&ms->rh);
+}
+
+static int mirror_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned int maxlen)
+{
+ char buffer[32];
+ unsigned int m, sz = 0;
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+
+#define EMIT(x...) sz += ((sz >= maxlen) ? \
+ 0 : scnprintf(result + sz, maxlen - sz, x))
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ EMIT("%d ", ms->nr_mirrors);
+
+ for (m = 0; m < ms->nr_mirrors; m++) {
+ format_dev_t(buffer, ms->mirror[m].dev->bdev->bd_dev);
+ EMIT("%s ", buffer);
+ }
+
+ EMIT(SECTOR_FORMAT "/" SECTOR_FORMAT,
+ ms->rh.log->type->get_sync_count(ms->rh.log),
+ ms->nr_regions);
+ break;
+
+ case STATUSTYPE_TABLE:
+ EMIT("%s 1 " SECTOR_FORMAT " %d ",
+ ms->rh.log->type->name, ms->rh.region_size,
+ ms->nr_mirrors);
+
+ for (m = 0; m < ms->nr_mirrors; m++) {
+ format_dev_t(buffer, ms->mirror[m].dev->bdev->bd_dev);
+ EMIT("%s " SECTOR_FORMAT " ",
+ buffer, ms->mirror[m].offset);
+ }
+ }
+
+ return 0;
+}
+
+static struct target_type mirror_target = {
+ .name = "mirror",
+ .version = {1, 0, 1},
+ .module = THIS_MODULE,
+ .ctr = mirror_ctr,
+ .dtr = mirror_dtr,
+ .map = mirror_map,
+ .end_io = mirror_end_io,
+ .suspend = mirror_suspend,
+ .resume = mirror_resume,
+ .status = mirror_status,
+};
+
+static int __init dm_mirror_init(void)
+{
+ int r;
+
+ r = dm_dirty_log_init();
+ if (r)
+ return r;
+
+ _kmirrord_wq = create_workqueue("kmirrord");
+ if (!_kmirrord_wq) {
+ DMERR("couldn't start kmirrord");
+ dm_dirty_log_exit();
+ return -ENOMEM;
+ }
+ INIT_WORK(&_kmirrord_work, do_work, NULL);
+
+ r = dm_register_target(&mirror_target);
+ if (r < 0) {
+ DMERR("%s: Failed to register mirror target",
+ mirror_target.name);
+ dm_dirty_log_exit();
+ destroy_workqueue(_kmirrord_wq);
+ }
+
+ return r;
+}
+
+static void __exit dm_mirror_exit(void)
+{
+ int r;
+
+ r = dm_unregister_target(&mirror_target);
+ if (r < 0)
+ DMERR("%s: unregister failed %d", mirror_target.name, r);
+
+ destroy_workqueue(_kmirrord_wq);
+ dm_dirty_log_exit();
+}
+
+/* Module hooks */
+module_init(dm_mirror_init);
+module_exit(dm_mirror_exit);
+
+MODULE_DESCRIPTION(DM_NAME " mirror target");
+MODULE_AUTHOR("Joe Thornber");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * dm-snapshot.c
+ *
+ * Copyright (C) 2001-2002 Sistina Software (UK) Limited.
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/blkdev.h>
+#include <linux/config.h>
+#include <linux/ctype.h>
+#include <linux/device-mapper.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/kdev_t.h>
+#include <linux/list.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#include "dm-snap.h"
+#include "dm-bio-list.h"
+#include "kcopyd.h"
+
+/*
+ * The percentage increment we will wake up users at
+ */
+#define WAKE_UP_PERCENT 5
+
+/*
+ * kcopyd priority of snapshot operations
+ */
+#define SNAPSHOT_COPY_PRIORITY 2
+
+/*
+ * Each snapshot reserves this many pages for io
+ */
+#define SNAPSHOT_PAGES 256
+
+struct pending_exception {
+ struct exception e;
+
+ /*
+ * Origin buffers waiting for this to complete are held
+ * in a bio list
+ */
+ struct bio_list origin_bios;
+ struct bio_list snapshot_bios;
+
+ /*
+ * Other pending_exceptions that are processing this
+ * chunk. When this list is empty, we know we can
+ * complete the origins.
+ */
+ struct list_head siblings;
+
+ /* Pointer back to snapshot context */
+ struct dm_snapshot *snap;
+
+ /*
+ * 1 indicates the exception has already been sent to
+ * kcopyd.
+ */
+ int started;
+};
+
+/*
+ * Hash table mapping origin volumes to lists of snapshots and
+ * a lock to protect it
+ */
+static kmem_cache_t *exception_cache;
+static kmem_cache_t *pending_cache;
+static mempool_t *pending_pool;
+
+/*
+ * One of these per registered origin, held in the snapshot_origins hash
+ */
+struct origin {
+ /* The origin device */
+ struct block_device *bdev;
+
+ struct list_head hash_list;
+
+ /* List of snapshots for this origin */
+ struct list_head snapshots;
+};
+
+/*
+ * Size of the hash table for origin volumes. If we make this
+ * the size of the minors list then it should be nearly perfect
+ */
+#define ORIGIN_HASH_SIZE 256
+#define ORIGIN_MASK 0xFF
+static struct list_head *_origins;
+static struct rw_semaphore _origins_lock;
+
+static int init_origin_hash(void)
+{
+ int i;
+
+ _origins = kmalloc(ORIGIN_HASH_SIZE * sizeof(struct list_head),
+ GFP_KERNEL);
+ if (!_origins) {
+ DMERR("Device mapper: Snapshot: unable to allocate memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < ORIGIN_HASH_SIZE; i++)
+ INIT_LIST_HEAD(_origins + i);
+ init_rwsem(&_origins_lock);
+
+ return 0;
+}
+
+static void exit_origin_hash(void)
+{
+ kfree(_origins);
+}
+
+static inline unsigned int origin_hash(struct block_device *bdev)
+{
+ return bdev->bd_dev & ORIGIN_MASK;
+}
+
+static struct origin *__lookup_origin(struct block_device *origin)
+{
+ struct list_head *ol;
+ struct origin *o;
+
+ ol = &_origins[origin_hash(origin)];
+ list_for_each_entry (o, ol, hash_list)
+ if (bdev_equal(o->bdev, origin))
+ return o;
+
+ return NULL;
+}
+
+static void __insert_origin(struct origin *o)
+{
+ struct list_head *sl = &_origins[origin_hash(o->bdev)];
+ list_add_tail(&o->hash_list, sl);
+}
+
+/*
+ * Make a note of the snapshot and its origin so we can look it
+ * up when the origin has a write on it.
+ */
+static int register_snapshot(struct dm_snapshot *snap)
+{
+ struct origin *o;
+ struct block_device *bdev = snap->origin->bdev;
+
+ down_write(&_origins_lock);
+ o = __lookup_origin(bdev);
+
+ if (!o) {
+ /* New origin */
+ o = kmalloc(sizeof(*o), GFP_KERNEL);
+ if (!o) {
+ up_write(&_origins_lock);
+ return -ENOMEM;
+ }
+
+ /* Initialise the struct */
+ INIT_LIST_HEAD(&o->snapshots);
+ o->bdev = bdev;
+
+ __insert_origin(o);
+ }
+
+ list_add_tail(&snap->list, &o->snapshots);
+
+ up_write(&_origins_lock);
+ return 0;
+}
+
+static void unregister_snapshot(struct dm_snapshot *s)
+{
+ struct origin *o;
+
+ down_write(&_origins_lock);
+ o = __lookup_origin(s->origin->bdev);
+
+ list_del(&s->list);
+ if (list_empty(&o->snapshots)) {
+ list_del(&o->hash_list);
+ kfree(o);
+ }
+
+ up_write(&_origins_lock);
+}
+
+/*
+ * Implementation of the exception hash tables.
+ */
+static int init_exception_table(struct exception_table *et, uint32_t size)
+{
+ unsigned int i;
+
+ et->hash_mask = size - 1;
+ et->table = dm_vcalloc(size, sizeof(struct list_head));
+ if (!et->table)
+ return -ENOMEM;
+
+ for (i = 0; i < size; i++)
+ INIT_LIST_HEAD(et->table + i);
+
+ return 0;
+}
+
+static void exit_exception_table(struct exception_table *et, kmem_cache_t *mem)
+{
+ struct list_head *slot;
+ struct exception *ex, *next;
+ int i, size;
+
+ size = et->hash_mask + 1;
+ for (i = 0; i < size; i++) {
+ slot = et->table + i;
+
+ list_for_each_entry_safe (ex, next, slot, hash_list)
+ kmem_cache_free(mem, ex);
+ }
+
+ vfree(et->table);
+}
+
+static inline uint32_t exception_hash(struct exception_table *et, chunk_t chunk)
+{
+ return chunk & et->hash_mask;
+}
+
+static void insert_exception(struct exception_table *eh, struct exception *e)
+{
+ struct list_head *l = &eh->table[exception_hash(eh, e->old_chunk)];
+ list_add(&e->hash_list, l);
+}
+
+static inline void remove_exception(struct exception *e)
+{
+ list_del(&e->hash_list);
+}
+
+/*
+ * Return the exception data for a sector, or NULL if not
+ * remapped.
+ */
+static struct exception *lookup_exception(struct exception_table *et,
+ chunk_t chunk)
+{
+ struct list_head *slot;
+ struct exception *e;
+
+ slot = &et->table[exception_hash(et, chunk)];
+ list_for_each_entry (e, slot, hash_list)
+ if (e->old_chunk == chunk)
+ return e;
+
+ return NULL;
+}
+
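+/*
+ * Try a normal GFP_NOIO allocation first and fall back to
+ * GFP_ATOMIC, giving the io path a last chance before we have to
+ * fail the allocation.
+ */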
+static inline struct exception *alloc_exception(void)
+{
+ struct exception *e;
+
+ e = kmem_cache_alloc(exception_cache, GFP_NOIO);
+ if (!e)
+ e = kmem_cache_alloc(exception_cache, GFP_ATOMIC);
+
+ return e;
+}
+
+static inline void free_exception(struct exception *e)
+{
+ kmem_cache_free(exception_cache, e);
+}
+
+static inline struct pending_exception *alloc_pending_exception(void)
+{
+ return mempool_alloc(pending_pool, GFP_NOIO);
+}
+
+static inline void free_pending_exception(struct pending_exception *pe)
+{
+ mempool_free(pe, pending_pool);
+}
+
+int dm_add_exception(struct dm_snapshot *s, chunk_t old, chunk_t new)
+{
+ struct exception *e;
+
+ e = alloc_exception();
+ if (!e)
+ return -ENOMEM;
+
+ e->old_chunk = old;
+ e->new_chunk = new;
+ insert_exception(&s->complete, e);
+ return 0;
+}
+
+/*
+ * Hard coded magic.
+ */
+static int calc_max_buckets(void)
+{
+ /* use a fixed size of 2MB */
+ unsigned long mem = 2 * 1024 * 1024;
+ mem /= sizeof(struct list_head);
+
+ return mem;
+}
+
+/*
+ * Rounds a number down to a power of 2.
+ */
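+/* e.g. round_down(1000) == 512; powers of 2 are returned unchanged. */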
+static inline uint32_t round_down(uint32_t n)
+{
+ while (n & (n - 1))
+ n &= (n - 1);
+ return n;
+}
+
+/*
+ * Allocate room for a suitable hash table.
+ */
+static int init_hash_tables(struct dm_snapshot *s)
+{
+ sector_t hash_size, cow_dev_size, origin_dev_size, max_buckets;
+
+ /*
+ * Calculate based on the size of the original volume or
+ * the COW volume...
+ */
+ cow_dev_size = get_dev_size(s->cow->bdev);
+ origin_dev_size = get_dev_size(s->origin->bdev);
+ max_buckets = calc_max_buckets();
+
+ hash_size = min(origin_dev_size, cow_dev_size) >> s->chunk_shift;
+ hash_size = min(hash_size, max_buckets);
+
+ /* Round it down to a power of 2 */
+ hash_size = round_down(hash_size);
+ if (init_exception_table(&s->complete, hash_size))
+ return -ENOMEM;
+
+ /*
+ * Allocate hash table for in-flight exceptions
+ * Make this smaller than the real hash table
+ */
+ hash_size >>= 3;
+ if (hash_size < 64)
+ hash_size = 64;
+
+ if (init_exception_table(&s->pending, hash_size)) {
+ exit_exception_table(&s->complete, exception_cache);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/*
+ * Round a number up to the nearest 'size' boundary. size must
+ * be a power of 2.
+ */
+static inline ulong round_up(ulong n, ulong size)
+{
+ size--;
+ return (n + size) & ~size;
+}
+
+/*
+ * Construct a snapshot mapping: <origin_dev> <COW-dev> <p/n> <chunk-size>
+ */
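+/*
+ * An illustrative table line (hypothetical devices and sizes):
+ *
+ * 0 262144 snapshot /dev/vg0/lvol0 /dev/vg0/cow P 16
+ *
+ * i.e. a persistent snapshot of lvol0 using 16-sector (8K) chunks.
+ */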
+static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ struct dm_snapshot *s;
+ unsigned long chunk_size;
+ int r = -EINVAL;
+ char persistent;
+ char *origin_path;
+ char *cow_path;
+ char *value;
+ int blocksize;
+
+ if (argc < 4) {
+ ti->error = "dm-snapshot: requires exactly 4 arguments";
+ r = -EINVAL;
+ goto bad1;
+ }
+
+ origin_path = argv[0];
+ cow_path = argv[1];
+ persistent = toupper(*argv[2]);
+
+ if (persistent != 'P' && persistent != 'N') {
+ ti->error = "Persistent flag is not P or N";
+ r = -EINVAL;
+ goto bad1;
+ }
+
+ chunk_size = simple_strtoul(argv[3], &value, 10);
+ if (chunk_size == 0 || value == NULL) {
+ ti->error = "Invalid chunk size";
+ r = -EINVAL;
+ goto bad1;
+ }
+
+ s = kmalloc(sizeof(*s), GFP_KERNEL);
+ if (s == NULL) {
+ ti->error = "Cannot allocate snapshot context private "
+ "structure";
+ r = -ENOMEM;
+ goto bad1;
+ }
+
+ r = dm_get_device(ti, origin_path, 0, ti->len, FMODE_READ, &s->origin);
+ if (r) {
+ ti->error = "Cannot get origin device";
+ goto bad2;
+ }
+
+ r = dm_get_device(ti, cow_path, 0, 0,
+ FMODE_READ | FMODE_WRITE, &s->cow);
+ if (r) {
+ dm_put_device(ti, s->origin);
+ ti->error = "Cannot get COW device";
+ goto bad2;
+ }
+
+ /*
+ * Chunk size must be a multiple of the page size. Silently
+ * round up if it's not.
+ */
+ chunk_size = round_up(chunk_size, PAGE_SIZE >> 9);
+
+ /* Validate the chunk size against the device block size */
+ blocksize = s->cow->bdev->bd_disk->queue->hardsect_size;
+ if (chunk_size % (blocksize >> 9)) {
+ ti->error = "Chunk size is not a multiple of device blocksize";
+ r = -EINVAL;
+ goto bad3;
+ }
+
+ /* Check chunk_size is a power of 2 */
+ if (chunk_size & (chunk_size - 1)) {
+ ti->error = "Chunk size is not a power of 2";
+ r = -EINVAL;
+ goto bad3;
+ }
+
+ s->chunk_size = chunk_size;
+ s->chunk_mask = chunk_size - 1;
+ s->type = persistent;
+ s->chunk_shift = ffs(chunk_size) - 1;
+
+ s->valid = 1;
+ s->have_metadata = 0;
+ s->last_percent = 0;
+ init_rwsem(&s->lock);
+ s->table = ti->table;
+
+ /* Allocate hash table for COW data */
+ if (init_hash_tables(s)) {
+ ti->error = "Unable to allocate hash table space";
+ r = -ENOMEM;
+ goto bad3;
+ }
+
+ /*
+ * Check the persistent flag - done here because we need the iobuf
+ * to check the LV header
+ */
+ s->store.snap = s;
+
+ if (persistent == 'P')
+ r = dm_create_persistent(&s->store, chunk_size);
+ else
+ r = dm_create_transient(&s->store, s, blocksize);
+
+ if (r) {
+ ti->error = "Couldn't create exception store";
+ r = -EINVAL;
+ goto bad4;
+ }
+
+ r = kcopyd_client_create(SNAPSHOT_PAGES, &s->kcopyd_client);
+ if (r) {
+ ti->error = "Could not create kcopyd client";
+ goto bad5;
+ }
+
+ /* Add snapshot to the list of snapshots for this origin */
+ if (register_snapshot(s)) {
+ r = -EINVAL;
+ ti->error = "Cannot register snapshot origin";
+ goto bad6;
+ }
+
+ ti->private = s;
+ ti->split_io = chunk_size;
+
+ return 0;
+
+ bad6:
+ kcopyd_client_destroy(s->kcopyd_client);
+
+ bad5:
+ s->store.destroy(&s->store);
+
+ bad4:
+ exit_exception_table(&s->pending, pending_cache);
+ exit_exception_table(&s->complete, exception_cache);
+
+ bad3:
+ dm_put_device(ti, s->cow);
+ dm_put_device(ti, s->origin);
+
+ bad2:
+ kfree(s);
+
+ bad1:
+ return r;
+}
+
+static void snapshot_dtr(struct dm_target *ti)
+{
+ struct dm_snapshot *s = (struct dm_snapshot *) ti->private;
+
+ unregister_snapshot(s);
+
+ exit_exception_table(&s->pending, pending_cache);
+ exit_exception_table(&s->complete, exception_cache);
+
+ /* Deallocate memory used */
+ s->store.destroy(&s->store);
+
+ dm_put_device(ti, s->origin);
+ dm_put_device(ti, s->cow);
+ kcopyd_client_destroy(s->kcopyd_client);
+ kfree(s);
+}
+
+/*
+ * Flush a list of buffers.
+ */
+static void flush_bios(struct bio *bio)
+{
+ struct bio *n;
+
+ while (bio) {
+ n = bio->bi_next;
+ bio->bi_next = NULL;
+ generic_make_request(bio);
+ bio = n;
+ }
+}
+
+/*
+ * Error a list of buffers.
+ */
+static void error_bios(struct bio *bio)
+{
+ struct bio *n;
+
+ while (bio) {
+ n = bio->bi_next;
+ bio->bi_next = NULL;
+ bio_io_error(bio, bio->bi_size);
+ bio = n;
+ }
+}
+
+static struct bio *__flush_bios(struct pending_exception *pe)
+{
+ struct pending_exception *sibling;
+
+ if (list_empty(&pe->siblings))
+ return bio_list_get(&pe->origin_bios);
+
+ sibling = list_entry(pe->siblings.next,
+ struct pending_exception, siblings);
+
+ list_del(&pe->siblings);
+
+ /* This is fine as long as kcopyd is single-threaded. If kcopyd
+ * becomes multi-threaded, we'll need some locking here.
+ */
+ bio_list_merge(&sibling->origin_bios, &pe->origin_bios);
+
+ return NULL;
+}
+
+static void pending_complete(struct pending_exception *pe, int success)
+{
+ struct exception *e;
+ struct dm_snapshot *s = pe->snap;
+ struct bio *flush = NULL;
+
+ if (success) {
+ e = alloc_exception();
+ if (!e) {
+ DMWARN("Unable to allocate exception.");
+ down_write(&s->lock);
+ s->store.drop_snapshot(&s->store);
+ s->valid = 0;
+ flush = __flush_bios(pe);
+ up_write(&s->lock);
+
+ error_bios(bio_list_get(&pe->snapshot_bios));
+ goto out;
+ }
+ *e = pe->e;
+
+ /*
+ * Add a proper exception, and remove the
+ * in-flight exception from the list.
+ */
+ down_write(&s->lock);
+ insert_exception(&s->complete, e);
+ remove_exception(&pe->e);
+ flush = __flush_bios(pe);
+
+ /* Submit any pending write bios */
+ up_write(&s->lock);
+
+ flush_bios(bio_list_get(&pe->snapshot_bios));
+ } else {
+ /* Read/write error - snapshot is unusable */
+ down_write(&s->lock);
+ if (s->valid)
+ DMERR("Error reading/writing snapshot");
+ s->store.drop_snapshot(&s->store);
+ s->valid = 0;
+ remove_exception(&pe->e);
+ flush = __flush_bios(pe);
+ up_write(&s->lock);
+
+ error_bios(bio_list_get(&pe->snapshot_bios));
+
+ dm_table_event(s->table);
+ }
+
+ out:
+ free_pending_exception(pe);
+
+ if (flush)
+ flush_bios(flush);
+}
+
+static void commit_callback(void *context, int success)
+{
+ struct pending_exception *pe = (struct pending_exception *) context;
+ pending_complete(pe, success);
+}
+
+/*
+ * Called when the copy I/O has finished. kcopyd actually runs
+ * this code so don't block.
+ */
+static void copy_callback(int read_err, unsigned int write_err, void *context)
+{
+ struct pending_exception *pe = (struct pending_exception *) context;
+ struct dm_snapshot *s = pe->snap;
+
+ if (read_err || write_err)
+ pending_complete(pe, 0);
+
+ else
+ /* Update the metadata if we are persistent */
+ s->store.commit_exception(&s->store, &pe->e, commit_callback,
+ pe);
+}
+
+/*
+ * Dispatches the copy operation to kcopyd.
+ */
+static inline void start_copy(struct pending_exception *pe)
+{
+ struct dm_snapshot *s = pe->snap;
+ struct io_region src, dest;
+ struct block_device *bdev = s->origin->bdev;
+ sector_t dev_size;
+
+ dev_size = get_dev_size(bdev);
+
+ src.bdev = bdev;
+ src.sector = chunk_to_sector(s, pe->e.old_chunk);
+ src.count = min(s->chunk_size, dev_size - src.sector);
+
+ dest.bdev = s->cow->bdev;
+ dest.sector = chunk_to_sector(s, pe->e.new_chunk);
+ dest.count = src.count;
+
+ /* Hand over to kcopyd */
+ kcopyd_copy(s->kcopyd_client,
+ &src, 1, &dest, 0, copy_callback, pe);
+}
+
+/*
+ * Looks to see if this snapshot already has a pending exception
+ * for this chunk; if not, it allocates a new one and inserts
+ * it into the pending table.
+ *
+ * NOTE: a write lock must be held on snap->lock before calling
+ * this.
+ */
+static struct pending_exception *
+__find_pending_exception(struct dm_snapshot *s, struct bio *bio)
+{
+ struct exception *e;
+ struct pending_exception *pe;
+ chunk_t chunk = sector_to_chunk(s, bio->bi_sector);
+
+ /*
+ * Is there a pending exception for this already ?
+ */
+ e = lookup_exception(&s->pending, chunk);
+ if (e) {
+ /* cast the exception to a pending exception */
+ pe = container_of(e, struct pending_exception, e);
+
+ } else {
+ /*
+ * Create a new pending exception, we don't want
+ * to hold the lock while we do this.
+ */
+ up_write(&s->lock);
+ pe = alloc_pending_exception();
+ down_write(&s->lock);
+
+ e = lookup_exception(&s->pending, chunk);
+ if (e) {
+ free_pending_exception(pe);
+ pe = container_of(e, struct pending_exception, e);
+ } else {
+ pe->e.old_chunk = chunk;
+ bio_list_init(&pe->origin_bios);
+ bio_list_init(&pe->snapshot_bios);
+ INIT_LIST_HEAD(&pe->siblings);
+ pe->snap = s;
+ pe->started = 0;
+
+ if (s->store.prepare_exception(&s->store, &pe->e)) {
+ free_pending_exception(pe);
+ s->valid = 0;
+ return NULL;
+ }
+
+ insert_exception(&s->pending, &pe->e);
+ }
+ }
+
+ return pe;
+}
+
+static inline void remap_exception(struct dm_snapshot *s, struct exception *e,
+ struct bio *bio)
+{
+ bio->bi_bdev = s->cow->bdev;
+ bio->bi_sector = chunk_to_sector(s, e->new_chunk) +
+ (bio->bi_sector & s->chunk_mask);
+}
+
+static int snapshot_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ struct exception *e;
+ struct dm_snapshot *s = (struct dm_snapshot *) ti->private;
+ int r = 1;
+ chunk_t chunk;
+ struct pending_exception *pe;
+
+ chunk = sector_to_chunk(s, bio->bi_sector);
+
+ /* Full snapshots are not usable */
+ if (!s->valid)
+ return -EIO;
+
+ /*
+ * Write to snapshot - higher level takes care of RW/RO
+ * flags so we should only get this if we are
+ * writeable.
+ */
+ if (bio_rw(bio) == WRITE) {
+
+ /* FIXME: should only take write lock if we need
+ * to copy an exception */
+ down_write(&s->lock);
+
+ /* If the block is already remapped - use that, else remap it */
+ e = lookup_exception(&s->complete, chunk);
+ if (e) {
+ remap_exception(s, e, bio);
+ up_write(&s->lock);
+
+ } else {
+ pe = __find_pending_exception(s, bio);
+
+ if (!pe) {
+ if (s->store.drop_snapshot)
+ s->store.drop_snapshot(&s->store);
+ s->valid = 0;
+ r = -EIO;
+ up_write(&s->lock);
+ } else {
+ remap_exception(s, &pe->e, bio);
+ bio_list_add(&pe->snapshot_bios, bio);
+
+ if (!pe->started) {
+ /* this is protected by snap->lock */
+ pe->started = 1;
+ up_write(&s->lock);
+ start_copy(pe);
+ } else
+ up_write(&s->lock);
+ r = 0;
+ }
+ }
+
+ } else {
+ /*
+ * FIXME: this read path scares me because we
+ * always use the origin when we have a pending
+ * exception. However I can't think of a
+ * situation where this is wrong - ejt.
+ */
+
+ /* Do reads */
+ down_read(&s->lock);
+
+ /* See if it has been remapped */
+ e = lookup_exception(&s->complete, chunk);
+ if (e)
+ remap_exception(s, e, bio);
+ else
+ bio->bi_bdev = s->origin->bdev;
+
+ up_read(&s->lock);
+ }
+
+ return r;
+}
+
+static void snapshot_resume(struct dm_target *ti)
+{
+ struct dm_snapshot *s = (struct dm_snapshot *) ti->private;
+
+ if (s->have_metadata)
+ return;
+
+ if (s->store.read_metadata(&s->store)) {
+ down_write(&s->lock);
+ s->valid = 0;
+ up_write(&s->lock);
+ }
+
+ s->have_metadata = 1;
+}
+
+static int snapshot_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned int maxlen)
+{
+ struct dm_snapshot *snap = (struct dm_snapshot *) ti->private;
+ char cow[32];
+ char org[32];
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ if (!snap->valid)
+ snprintf(result, maxlen, "Invalid");
+ else {
+ if (snap->store.fraction_full) {
+ sector_t numerator, denominator;
+ snap->store.fraction_full(&snap->store,
+ &numerator,
+ &denominator);
+ snprintf(result, maxlen,
+ SECTOR_FORMAT "/" SECTOR_FORMAT,
+ numerator, denominator);
+ }
+ else
+ snprintf(result, maxlen, "Unknown");
+ }
+ break;
+
+ case STATUSTYPE_TABLE:
+ /*
+ * format_dev_t() fills the buffer it is given, so we
+ * make private copies for the two devices before
+ * printing them.
+ */
+ format_dev_t(cow, snap->cow->bdev->bd_dev);
+ format_dev_t(org, snap->origin->bdev->bd_dev);
+ snprintf(result, maxlen, "%s %s %c " SECTOR_FORMAT, org, cow,
+ snap->type, snap->chunk_size);
+ break;
+ }
+
+ return 0;
+}
+
+/*-----------------------------------------------------------------
+ * Origin methods
+ *---------------------------------------------------------------*/
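+/*
+ * Splice two circular lists into a single ring; used below to
+ * chain the 'siblings' lists of pending exceptions across
+ * snapshots.
+ */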
+static void list_merge(struct list_head *l1, struct list_head *l2)
+{
+ struct list_head *l1_n, *l2_p;
+
+ l1_n = l1->next;
+ l2_p = l2->prev;
+
+ l1->next = l2;
+ l2->prev = l1;
+
+ l2_p->next = l1_n;
+ l1_n->prev = l2_p;
+}
+
+static int __origin_write(struct list_head *snapshots, struct bio *bio)
+{
+ int r = 1, first = 1;
+ struct dm_snapshot *snap;
+ struct exception *e;
+ struct pending_exception *pe, *last = NULL;
+ chunk_t chunk;
+
+ /* Do all the snapshots on this origin */
+ list_for_each_entry (snap, snapshots, list) {
+
+ /* Only deal with valid snapshots */
+ if (!snap->valid)
+ continue;
+
+ down_write(&snap->lock);
+
+ /*
+ * Remember, different snapshots can have
+ * different chunk sizes.
+ */
+ chunk = sector_to_chunk(snap, bio->bi_sector);
+
+ /*
+ * Check exception table to see if block
+ * is already remapped in this snapshot
+ * and trigger an exception if not.
+ */
+ e = lookup_exception(&snap->complete, chunk);
+ if (!e) {
+ pe = __find_pending_exception(snap, bio);
+ if (!pe) {
+ snap->store.drop_snapshot(&snap->store);
+ snap->valid = 0;
+
+ } else {
+ if (last)
+ list_merge(&pe->siblings,
+ &last->siblings);
+
+ last = pe;
+ r = 0;
+ }
+ }
+
+ up_write(&snap->lock);
+ }
+
+ /*
+ * Now that we have a complete pe list we can start the copying.
+ */
+ if (last) {
+ pe = last;
+ do {
+ down_write(&pe->snap->lock);
+ if (first)
+ bio_list_add(&pe->origin_bios, bio);
+ if (!pe->started) {
+ pe->started = 1;
+ up_write(&pe->snap->lock);
+ start_copy(pe);
+ } else
+ up_write(&pe->snap->lock);
+ first = 0;
+ pe = list_entry(pe->siblings.next,
+ struct pending_exception, siblings);
+
+ } while (pe != last);
+ }
+
+ return r;
+}
+
+/*
+ * Called on a write from the origin driver.
+ */
+static int do_origin(struct dm_dev *origin, struct bio *bio)
+{
+ struct origin *o;
+ int r = 1;
+
+ down_read(&_origins_lock);
+ o = __lookup_origin(origin->bdev);
+ if (o)
+ r = __origin_write(&o->snapshots, bio);
+ up_read(&_origins_lock);
+
+ return r;
+}
+
+/*
+ * Origin: maps a linear range of a device, with hooks for snapshotting.
+ */
+
+/*
+ * Construct an origin mapping: <dev_path>
+ * The context for an origin is merely a 'struct dm_dev *'
+ * pointing to the real device.
+ */
+static int origin_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ int r;
+ struct dm_dev *dev;
+
+ if (argc != 1) {
+ ti->error = "dm-origin: incorrect number of arguments";
+ return -EINVAL;
+ }
+
+ r = dm_get_device(ti, argv[0], 0, ti->len,
+ dm_table_get_mode(ti->table), &dev);
+ if (r) {
+ ti->error = "Cannot get target device";
+ return r;
+ }
+
+ ti->private = dev;
+ return 0;
+}
+
+static void origin_dtr(struct dm_target *ti)
+{
+ struct dm_dev *dev = (struct dm_dev *) ti->private;
+ dm_put_device(ti, dev);
+}
+
+static int origin_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ struct dm_dev *dev = (struct dm_dev *) ti->private;
+ bio->bi_bdev = dev->bdev;
+
+ /* Only tell snapshots if this is a write */
+ return (bio_rw(bio) == WRITE) ? do_origin(dev, bio) : 1;
+}
+
+#define min_not_zero(l, r) ((l) == 0 ? (r) : ((r) == 0 ? (l) : min(l, r)))
+
+/*
+ * Set the target "split_io" field to the minimum of all the snapshots'
+ * chunk sizes.
+ */
+static void origin_resume(struct dm_target *ti)
+{
+ struct dm_dev *dev = (struct dm_dev *) ti->private;
+ struct dm_snapshot *snap;
+ struct origin *o;
+ chunk_t chunk_size = 0;
+
+ down_read(&_origins_lock);
+ o = __lookup_origin(dev->bdev);
+ if (o)
+ list_for_each_entry (snap, &o->snapshots, list)
+ chunk_size = min_not_zero(chunk_size, snap->chunk_size);
+ up_read(&_origins_lock);
+
+ ti->split_io = chunk_size;
+}
+
+static int origin_status(struct dm_target *ti, status_type_t type, char *result,
+ unsigned int maxlen)
+{
+ struct dm_dev *dev = (struct dm_dev *) ti->private;
+ char buffer[32];
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ result[0] = '\0';
+ break;
+
+ case STATUSTYPE_TABLE:
+ format_dev_t(buffer, dev->bdev->bd_dev);
+ snprintf(result, maxlen, "%s", buffer);
+ break;
+ }
+
+ return 0;
+}
+
+static struct target_type origin_target = {
+ .name = "snapshot-origin",
+ .version = {1, 0, 1},
+ .module = THIS_MODULE,
+ .ctr = origin_ctr,
+ .dtr = origin_dtr,
+ .map = origin_map,
+ .resume = origin_resume,
+ .status = origin_status,
+};
+
+static struct target_type snapshot_target = {
+ .name = "snapshot",
+ .version = {1, 0, 1},
+ .module = THIS_MODULE,
+ .ctr = snapshot_ctr,
+ .dtr = snapshot_dtr,
+ .map = snapshot_map,
+ .resume = snapshot_resume,
+ .status = snapshot_status,
+};
+
+static int __init dm_snapshot_init(void)
+{
+ int r;
+
+ r = dm_register_target(&snapshot_target);
+ if (r) {
+ DMERR("snapshot target register failed %d", r);
+ return r;
+ }
+
+ r = dm_register_target(&origin_target);
+ if (r < 0) {
+ DMERR("Device mapper: Origin: register failed %d", r);
+ goto bad1;
+ }
+
+ r = init_origin_hash();
+ if (r) {
+ DMERR("init_origin_hash failed.");
+ goto bad2;
+ }
+
+ exception_cache = kmem_cache_create("dm-snapshot-ex",
+ sizeof(struct exception),
+ __alignof__(struct exception),
+ 0, NULL, NULL);
+ if (!exception_cache) {
+ DMERR("Couldn't create exception cache.");
+ r = -ENOMEM;
+ goto bad3;
+ }
+
+ pending_cache =
+ kmem_cache_create("dm-snapshot-in",
+ sizeof(struct pending_exception),
+ __alignof__(struct pending_exception),
+ 0, NULL, NULL);
+ if (!pending_cache) {
+ DMERR("Couldn't create pending cache.");
+ r = -ENOMEM;
+ goto bad4;
+ }
+
+ pending_pool = mempool_create(128, mempool_alloc_slab,
+ mempool_free_slab, pending_cache);
+ if (!pending_pool) {
+ DMERR("Couldn't create pending pool.");
+ r = -ENOMEM;
+ goto bad5;
+ }
+
+ return 0;
+
+ bad5:
+ kmem_cache_destroy(pending_cache);
+ bad4:
+ kmem_cache_destroy(exception_cache);
+ bad3:
+ exit_origin_hash();
+ bad2:
+ dm_unregister_target(&origin_target);
+ bad1:
+ dm_unregister_target(&snapshot_target);
+ return r;
+}
+
+static void __exit dm_snapshot_exit(void)
+{
+ int r;
+
+ r = dm_unregister_target(&snapshot_target);
+ if (r)
+ DMERR("snapshot unregister failed %d", r);
+
+ r = dm_unregister_target(&origin_target);
+ if (r)
+ DMERR("origin unregister failed %d", r);
+
+ exit_origin_hash();
+ mempool_destroy(pending_pool);
+ kmem_cache_destroy(pending_cache);
+ kmem_cache_destroy(exception_cache);
+}
+
+/* Module hooks */
+module_init(dm_snapshot_init);
+module_exit(dm_snapshot_exit);
+
+MODULE_DESCRIPTION(DM_NAME " snapshot target");
+MODULE_AUTHOR("Joe Thornber");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * dm-snapshot.c
+ *
+ * Copyright (C) 2001-2002 Sistina Software (UK) Limited.
+ *
+ * This file is released under the GPL.
+ */
+
+#ifndef DM_SNAPSHOT_H
+#define DM_SNAPSHOT_H
+
+#include "dm.h"
+#include <linux/blkdev.h>
+
+struct exception_table {
+ uint32_t hash_mask;
+ struct list_head *table;
+};
+
+/*
+ * The snapshot code deals with largish chunks of the disk at a
+ * time. Typically 64k - 256k.
+ */
+/* FIXME: can we get away with limiting these to a uint32_t ? */
+typedef sector_t chunk_t;
+
+/*
+ * An exception is used where an old chunk of data has been
+ * replaced by a new one.
+ */
+struct exception {
+ struct list_head hash_list;
+
+ chunk_t old_chunk;
+ chunk_t new_chunk;
+};
+
+/*
+ * Abstraction to handle the metadata and on-disk layout of
+ * exception stores (the COW device).
+ */
+struct exception_store {
+
+ /*
+ * Destroys this object when you've finished with it.
+ */
+ void (*destroy) (struct exception_store *store);
+
+ /*
+ * The target shouldn't read the COW device until this is
+ * called.
+ */
+ int (*read_metadata) (struct exception_store *store);
+
+ /*
+ * Find somewhere to store the next exception.
+ */
+ int (*prepare_exception) (struct exception_store *store,
+ struct exception *e);
+
+ /*
+ * Update the metadata with this exception.
+ */
+ void (*commit_exception) (struct exception_store *store,
+ struct exception *e,
+ void (*callback) (void *, int success),
+ void *callback_context);
+
+ /*
+ * The snapshot is invalid, note this in the metadata.
+ */
+ void (*drop_snapshot) (struct exception_store *store);
+
+ /*
+ * Return how full the snapshot is.
+ */
+ void (*fraction_full) (struct exception_store *store,
+ sector_t *numerator,
+ sector_t *denominator);
+
+ struct dm_snapshot *snap;
+ void *context;
+};
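+
+/*
+ * Typical call sequence for a store, as an illustrative sketch (the
+ * real callers live elsewhere in the snapshot code):
+ *
+ *    store->read_metadata(store);        - load existing exceptions
+ *    store->prepare_exception(store, e); - store picks e->new_chunk
+ *    store->commit_exception(store, e, cb, context);
+ *      - cb(context, success) fires once the exception metadata
+ *        is safely on disk
+ */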
+
+struct dm_snapshot {
+ struct rw_semaphore lock;
+ struct dm_table *table;
+
+ struct dm_dev *origin;
+ struct dm_dev *cow;
+
+ /* List of snapshots per Origin */
+ struct list_head list;
+
+ /* Size of data blocks saved - must be a power of 2 */
+ chunk_t chunk_size;
+ chunk_t chunk_mask;
+ chunk_t chunk_shift;
+
+ /* You can't use a snapshot if this is 0 (e.g. if full) */
+ int valid;
+ int have_metadata;
+
+ /* Used for display of table */
+ char type;
+
+ /* The last percentage we notified */
+ int last_percent;
+
+ struct exception_table pending;
+ struct exception_table complete;
+
+ /* The on disk metadata handler */
+ struct exception_store store;
+
+ struct kcopyd_client *kcopyd_client;
+};
+
+/*
+ * Used by the exception stores to load exceptions when
+ * initialising.
+ */
+int dm_add_exception(struct dm_snapshot *s, chunk_t old, chunk_t new);
+
+/*
+ * Constructor and destructor for the default persistent
+ * store.
+ */
+int dm_create_persistent(struct exception_store *store, uint32_t chunk_size);
+
+int dm_create_transient(struct exception_store *store,
+ struct dm_snapshot *s, int blocksize);
+
+/*
+ * Return the number of sectors in the device.
+ */
+static inline sector_t get_dev_size(struct block_device *bdev)
+{
+ return bdev->bd_inode->i_size >> SECTOR_SHIFT;
+}
+
+static inline chunk_t sector_to_chunk(struct dm_snapshot *s, sector_t sector)
+{
+ return (sector & ~s->chunk_mask) >> s->chunk_shift;
+}
+
+static inline sector_t chunk_to_sector(struct dm_snapshot *s, chunk_t chunk)
+{
+ return chunk << s->chunk_shift;
+}
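+
+/*
+ * Worked example (illustrative): with 32-sector (16KiB) chunks,
+ * chunk_mask = 31 and chunk_shift = 5, so sector 100 belongs to
+ * chunk 3 (sectors 96-127) and chunk_to_sector(s, 3) returns 96,
+ * the chunk's first sector.
+ */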
+
+static inline int bdev_equal(struct block_device *lhs, struct block_device *rhs)
+{
+ /*
+ * There is only ever one instance of a particular block
+ * device so we can compare pointers safely.
+ */
+ return lhs == rhs;
+}
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2003 Christophe Saout <christophe@saout.de>
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm.h"
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/bio.h>
+
+/*
+ * Construct a dummy mapping that only returns zeros
+ */
+static int zero_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ if (argc != 0) {
+ ti->error = "dm-zero: No arguments required";
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/*
+ * Fills the bio pages with zeros
+ */
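+/*
+ * bvec_kmap_irq() maps each page with interrupts disabled, so highmem
+ * pages are handled safely; flush_dcache_page() keeps any user-space
+ * mappings coherent on virtually-indexed caches.
+ */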
+static void zero_fill_bio(struct bio *bio)
+{
+ unsigned long flags;
+ struct bio_vec *bv;
+ int i;
+
+ bio_for_each_segment(bv, bio, i) {
+ char *data = bvec_kmap_irq(bv, &flags);
+ memset(data, 0, bv->bv_len);
+ flush_dcache_page(bv->bv_page);
+ bvec_kunmap_irq(data, &flags);
+ }
+}
+
+/*
+ * Return zeros only on reads
+ */
+static int zero_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ switch (bio_rw(bio)) {
+ case READ:
+ zero_fill_bio(bio);
+ break;
+ case READA:
+ /* readahead of null bytes only wastes buffer cache */
+ return -EIO;
+ case WRITE:
+ /* writes get silently dropped */
+ break;
+ }
+
+ bio_endio(bio, bio->bi_size, 0);
+
+ /* accepted bio, don't make new request */
+ return 0;
+}
+
+static struct target_type zero_target = {
+ .name = "zero",
+ .version = {1, 0, 0},
+ .module = THIS_MODULE,
+ .ctr = zero_ctr,
+ .map = zero_map,
+};
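+
+/*
+ * Illustrative usage from userspace: the table format is
+ * "<start> <len> zero" with no target arguments (see zero_ctr()),
+ * so a 1GiB device (2097152 512-byte sectors) of zeros would be:
+ *
+ *    echo "0 2097152 zero" | dmsetup create zero0
+ */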
+
+int __init dm_zero_init(void)
+{
+ int r = dm_register_target(&zero_target);
+
+ if (r < 0)
+ DMERR("zero: register failed %d", r);
+
+ return r;
+}
+
+void __exit dm_zero_exit(void)
+{
+ int r = dm_unregister_target(&zero_target);
+
+ if (r < 0)
+ DMERR("zero: unregister failed %d", r);
+}
+
+module_init(dm_zero_init)
+module_exit(dm_zero_exit)
+
+MODULE_AUTHOR("Christophe Saout <christophe@saout.de>");
+MODULE_DESCRIPTION(DM_NAME " dummy target returning zeros");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Copyright (C) 2002 Sistina Software (UK) Limited.
+ *
+ * This file is released under the GPL.
+ *
+ * Kcopyd provides a simple interface for copying an area of one
+ * block-device to one or more other block-devices, with an asynchronous
+ * completion notification.
+ */
+
+#include <asm/atomic.h>
+
+#include <linux/blkdev.h>
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/pagemap.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/workqueue.h>
+
+#include "kcopyd.h"
+
+/* FIXME: this is only needed for the DMERR macros */
+#include "dm.h"
+
+static struct workqueue_struct *_kcopyd_wq;
+static struct work_struct _kcopyd_work;
+
+static inline void wake(void)
+{
+ queue_work(_kcopyd_wq, &_kcopyd_work);
+}
+
+/*-----------------------------------------------------------------
+ * Each kcopyd client has its own little pool of preallocated
+ * pages for kcopyd io.
+ *---------------------------------------------------------------*/
+struct kcopyd_client {
+ struct list_head list;
+
+ spinlock_t lock;
+ struct page_list *pages;
+ unsigned int nr_pages;
+ unsigned int nr_free_pages;
+};
+
+static struct page_list *alloc_pl(void)
+{
+ struct page_list *pl;
+
+ pl = kmalloc(sizeof(*pl), GFP_KERNEL);
+ if (!pl)
+ return NULL;
+
+ pl->page = alloc_page(GFP_KERNEL);
+ if (!pl->page) {
+ kfree(pl);
+ return NULL;
+ }
+
+ return pl;
+}
+
+static void free_pl(struct page_list *pl)
+{
+ __free_page(pl->page);
+ kfree(pl);
+}
+
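+/*
+ * Grab nr pages from the client's preallocated free list in one go,
+ * without blocking; -ENOMEM here just means "retry later" (see
+ * run_pages_job()). The singly-linked list is split after the nr-th
+ * entry.
+ */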
+static int kcopyd_get_pages(struct kcopyd_client *kc,
+ unsigned int nr, struct page_list **pages)
+{
+ struct page_list *pl;
+
+ spin_lock(&kc->lock);
+ if (kc->nr_free_pages < nr) {
+ spin_unlock(&kc->lock);
+ return -ENOMEM;
+ }
+
+ kc->nr_free_pages -= nr;
+ for (*pages = pl = kc->pages; --nr; pl = pl->next)
+ ;
+
+ kc->pages = pl->next;
+ pl->next = NULL;
+
+ spin_unlock(&kc->lock);
+
+ return 0;
+}
+
+static void kcopyd_put_pages(struct kcopyd_client *kc, struct page_list *pl)
+{
+ struct page_list *cursor;
+
+ spin_lock(&kc->lock);
+ for (cursor = pl; cursor->next; cursor = cursor->next)
+ kc->nr_free_pages++;
+
+ kc->nr_free_pages++;
+ cursor->next = kc->pages;
+ kc->pages = pl;
+ spin_unlock(&kc->lock);
+}
+
+/*
+ * These three functions resize the page pool.
+ */
+static void drop_pages(struct page_list *pl)
+{
+ struct page_list *next;
+
+ while (pl) {
+ next = pl->next;
+ free_pl(pl);
+ pl = next;
+ }
+}
+
+static int client_alloc_pages(struct kcopyd_client *kc, unsigned int nr)
+{
+ unsigned int i;
+ struct page_list *pl = NULL, *next;
+
+ for (i = 0; i < nr; i++) {
+ next = alloc_pl();
+ if (!next) {
+ if (pl)
+ drop_pages(pl);
+ return -ENOMEM;
+ }
+ next->next = pl;
+ pl = next;
+ }
+
+ kcopyd_put_pages(kc, pl);
+ kc->nr_pages += nr;
+ return 0;
+}
+
+static void client_free_pages(struct kcopyd_client *kc)
+{
+ BUG_ON(kc->nr_free_pages != kc->nr_pages);
+ drop_pages(kc->pages);
+ kc->pages = NULL;
+ kc->nr_free_pages = kc->nr_pages = 0;
+}
+
+/*-----------------------------------------------------------------
+ * kcopyd_jobs need to be allocated by the *clients* of kcopyd,
+ * for this reason we use a mempool to prevent the client from
+ * ever having to do io (which could cause a deadlock).
+ *---------------------------------------------------------------*/
+struct kcopyd_job {
+ struct kcopyd_client *kc;
+ struct list_head list;
+ unsigned long flags;
+
+ /*
+ * Error state of the job.
+ */
+ int read_err;
+ unsigned int write_err;
+
+ /*
+ * Either READ or WRITE
+ */
+ int rw;
+ struct io_region source;
+
+ /*
+ * The destinations for the transfer.
+ */
+ unsigned int num_dests;
+ struct io_region dests[KCOPYD_MAX_REGIONS];
+
+ sector_t offset;
+ unsigned int nr_pages;
+ struct page_list *pages;
+
+ /*
+ * Set this to ensure you are notified when the job has
+ * completed. 'context' is for callback to use.
+ */
+ kcopyd_notify_fn fn;
+ void *context;
+
+ /*
+ * These fields are only used if the job has been split
+ * into more manageable parts.
+ */
+ struct semaphore lock;
+ atomic_t sub_jobs;
+ sector_t progress;
+};
+
+/* FIXME: this should scale with the number of pages */
+#define MIN_JOBS 512
+
+static kmem_cache_t *_job_cache;
+static mempool_t *_job_pool;
+
+/*
+ * We maintain three lists of jobs:
+ *
+ * i) jobs waiting for pages
+ * ii) jobs that have pages, and are waiting for the io to be issued.
+ * iii) jobs that have completed.
+ *
+ * All three of these are protected by _job_lock.
+ */
+static spinlock_t _job_lock = SPIN_LOCK_UNLOCKED;
+
+static LIST_HEAD(_complete_jobs);
+static LIST_HEAD(_io_jobs);
+static LIST_HEAD(_pages_jobs);
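+
+/*
+ * Job lifecycle sketch: kcopyd_copy() allocates a job and pushes it
+ * onto _pages_jobs; run_pages_job() moves it to _io_jobs once pages
+ * are reserved; complete_io() turns the READ into a WRITE and
+ * requeues it, then finally pushes it onto _complete_jobs, where
+ * run_complete_job() returns the pages and calls the notify fn.
+ */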
+
+static int jobs_init(void)
+{
+ _job_cache = kmem_cache_create("kcopyd-jobs",
+ sizeof(struct kcopyd_job),
+ __alignof__(struct kcopyd_job),
+ 0, NULL, NULL);
+ if (!_job_cache)
+ return -ENOMEM;
+
+ _job_pool = mempool_create(MIN_JOBS, mempool_alloc_slab,
+ mempool_free_slab, _job_cache);
+ if (!_job_pool) {
+ kmem_cache_destroy(_job_cache);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void jobs_exit(void)
+{
+ BUG_ON(!list_empty(&_complete_jobs));
+ BUG_ON(!list_empty(&_io_jobs));
+ BUG_ON(!list_empty(&_pages_jobs));
+
+ mempool_destroy(_job_pool);
+ kmem_cache_destroy(_job_cache);
+ _job_pool = NULL;
+ _job_cache = NULL;
+}
+
+/*
+ * Functions to push and pop a job onto the head of a given job
+ * list.
+ */
+static inline struct kcopyd_job *pop(struct list_head *jobs)
+{
+ struct kcopyd_job *job = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&_job_lock, flags);
+
+ if (!list_empty(jobs)) {
+ job = list_entry(jobs->next, struct kcopyd_job, list);
+ list_del(&job->list);
+ }
+ spin_unlock_irqrestore(&_job_lock, flags);
+
+ return job;
+}
+
+static inline void push(struct list_head *jobs, struct kcopyd_job *job)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&_job_lock, flags);
+ list_add_tail(&job->list, jobs);
+ spin_unlock_irqrestore(&_job_lock, flags);
+}
+
+/*
+ * These three functions process 1 item from the corresponding
+ * job list.
+ *
+ * They return:
+ * < 0: error
+ * 0: success
+ * > 0: can't process yet.
+ */
+static int run_complete_job(struct kcopyd_job *job)
+{
+ void *context = job->context;
+ int read_err = job->read_err;
+ unsigned int write_err = job->write_err;
+ kcopyd_notify_fn fn = job->fn;
+
+ kcopyd_put_pages(job->kc, job->pages);
+ mempool_free(job, _job_pool);
+ fn(read_err, write_err, context);
+ return 0;
+}
+
+static void complete_io(unsigned long error, void *context)
+{
+ struct kcopyd_job *job = (struct kcopyd_job *) context;
+
+ if (error) {
+ if (job->rw == WRITE)
+ job->write_err |= error;
+ else
+ job->read_err = 1;
+
+ if (!test_bit(KCOPYD_IGNORE_ERROR, &job->flags)) {
+ push(&_complete_jobs, job);
+ wake();
+ return;
+ }
+ }
+
+ if (job->rw == WRITE)
+ push(&_complete_jobs, job);
+
+ else {
+ job->rw = WRITE;
+ push(&_io_jobs, job);
+ }
+
+ wake();
+}
+
+/*
+ * Issue the io for a job: a single read from the source, or
+ * writes to all of the destinations.
+ */
+static int run_io_job(struct kcopyd_job *job)
+{
+ int r;
+
+ if (job->rw == READ)
+ r = dm_io_async(1, &job->source, job->rw,
+ job->pages,
+ job->offset, complete_io, job);
+
+ else
+ r = dm_io_async(job->num_dests, job->dests, job->rw,
+ job->pages,
+ job->offset, complete_io, job);
+
+ return r;
+}
+
+static int run_pages_job(struct kcopyd_job *job)
+{
+ int r;
+
+ job->nr_pages = dm_div_up(job->dests[0].count + job->offset,
+ PAGE_SIZE >> 9);
+ r = kcopyd_get_pages(job->kc, job->nr_pages, &job->pages);
+ if (!r) {
+ /* this job is ready for io */
+ push(&_io_jobs, job);
+ return 0;
+ }
+
+ if (r == -ENOMEM)
+ /* can't complete now */
+ return 1;
+
+ return r;
+}
+
+/*
+ * Run through a list for as long as possible. Returns the count
+ * of successful jobs.
+ */
+static int process_jobs(struct list_head *jobs, int (*fn) (struct kcopyd_job *))
+{
+ struct kcopyd_job *job;
+ int r, count = 0;
+
+ while ((job = pop(jobs))) {
+
+ r = fn(job);
+
+ if (r < 0) {
+ /* error this rogue job */
+ if (job->rw == WRITE)
+ job->write_err = (unsigned int) -1;
+ else
+ job->read_err = 1;
+ push(&_complete_jobs, job);
+ break;
+ }
+
+ if (r > 0) {
+ /*
+ * We couldn't service this job ATM, so
+ * push this job back onto the list.
+ */
+ push(jobs, job);
+ break;
+ }
+
+ count++;
+ }
+
+ return count;
+}
+
+/*
+ * kcopyd does this every time it's woken up.
+ */
+static void do_work(void *ignored)
+{
+ /*
+ * The order that these are called is *very* important.
+ * complete jobs can free some pages for pages jobs.
+ * Pages jobs when successful will jump onto the io jobs
+ * list. io jobs call wake when they complete and it all
+ * starts again.
+ */
+ process_jobs(&_complete_jobs, run_complete_job);
+ process_jobs(&_pages_jobs, run_pages_job);
+ process_jobs(&_io_jobs, run_io_job);
+}
+
+/*
+ * If we are copying a small region we just dispatch a single job
+ * to do the copy, otherwise the io has to be split up into many
+ * jobs.
+ */
+static void dispatch_job(struct kcopyd_job *job)
+{
+ push(&_pages_jobs, job);
+ wake();
+}
+
+#define SUB_JOB_SIZE 128
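+/* 128 sectors * 512 bytes = 64KiB copied per sub-job dispatch */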
+static void segment_complete(int read_err,
+ unsigned int write_err, void *context)
+{
+ /* FIXME: tidy this function */
+ sector_t progress = 0;
+ sector_t count = 0;
+ struct kcopyd_job *job = (struct kcopyd_job *) context;
+
+ down(&job->lock);
+
+ /* update the error */
+ if (read_err)
+ job->read_err = 1;
+
+ if (write_err)
+ job->write_err |= write_err;
+
+ /*
+ * Only dispatch more work if there hasn't been an error.
+ */
+ if ((!job->read_err && !job->write_err) ||
+ test_bit(KCOPYD_IGNORE_ERROR, &job->flags)) {
+ /* get the next chunk of work */
+ progress = job->progress;
+ count = job->source.count - progress;
+ if (count) {
+ if (count > SUB_JOB_SIZE)
+ count = SUB_JOB_SIZE;
+
+ job->progress += count;
+ }
+ }
+ up(&job->lock);
+
+ if (count) {
+ int i;
+ struct kcopyd_job *sub_job = mempool_alloc(_job_pool, GFP_NOIO);
+
+ *sub_job = *job;
+ sub_job->source.sector += progress;
+ sub_job->source.count = count;
+
+ for (i = 0; i < job->num_dests; i++) {
+ sub_job->dests[i].sector += progress;
+ sub_job->dests[i].count = count;
+ }
+
+ sub_job->fn = segment_complete;
+ sub_job->context = job;
+ dispatch_job(sub_job);
+
+ } else if (atomic_dec_and_test(&job->sub_jobs)) {
+
+ /*
+ * To avoid a race we must keep the job around
+ * until after the notify function has completed.
+ * Otherwise the client may try and stop the job
+ * after we've completed.
+ */
+ job->fn(read_err, write_err, job->context);
+ mempool_free(job, _job_pool);
+ }
+}
+
+/*
+ * Split the job into SPLIT_COUNT sub-jobs, each copying up to
+ * SUB_JOB_SIZE sectors at a time; segment_complete() dispatches the
+ * next chunk as each sub-job finishes.
+ */
+#define SPLIT_COUNT 8
+static void split_job(struct kcopyd_job *job)
+{
+ int i;
+
+ atomic_set(&job->sub_jobs, SPLIT_COUNT);
+ for (i = 0; i < SPLIT_COUNT; i++)
+ segment_complete(0, 0u, job);
+}
+
+int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from,
+ unsigned int num_dests, struct io_region *dests,
+ unsigned int flags, kcopyd_notify_fn fn, void *context)
+{
+ struct kcopyd_job *job;
+
+ /*
+ * Allocate a new job.
+ */
+ job = mempool_alloc(_job_pool, GFP_NOIO);
+
+ /*
+ * set up for the read.
+ */
+ job->kc = kc;
+ job->flags = flags;
+ job->read_err = 0;
+ job->write_err = 0;
+ job->rw = READ;
+
+ job->source = *from;
+
+ job->num_dests = num_dests;
+ memcpy(&job->dests, dests, sizeof(*dests) * num_dests);
+
+ job->offset = 0;
+ job->nr_pages = 0;
+ job->pages = NULL;
+
+ job->fn = fn;
+ job->context = context;
+
+ if (job->source.count < SUB_JOB_SIZE)
+ dispatch_job(job);
+
+ else {
+ init_MUTEX(&job->lock);
+ job->progress = 0;
+ split_job(job);
+ }
+
+ return 0;
+}
+
+/*
+ * Cancels a kcopyd job, e.g. when someone is deactivating a
+ * mirror.
+ */
+int kcopyd_cancel(struct kcopyd_job *job, int block)
+{
+ /* FIXME: finish */
+ return -1;
+}
+
+/*-----------------------------------------------------------------
+ * Unit setup
+ *---------------------------------------------------------------*/
+static DECLARE_MUTEX(_client_lock);
+static LIST_HEAD(_clients);
+
+static int client_add(struct kcopyd_client *kc)
+{
+ down(&_client_lock);
+ list_add(&kc->list, &_clients);
+ up(&_client_lock);
+ return 0;
+}
+
+static void client_del(struct kcopyd_client *kc)
+{
+ down(&_client_lock);
+ list_del(&kc->list);
+ up(&_client_lock);
+}
+
+static DECLARE_MUTEX(kcopyd_init_lock);
+static int kcopyd_clients = 0;
+
+static int kcopyd_init(void)
+{
+ int r;
+
+ down(&kcopyd_init_lock);
+
+ if (kcopyd_clients) {
+ /* Already initialized. */
+ kcopyd_clients++;
+ up(&kcopyd_init_lock);
+ return 0;
+ }
+
+ r = jobs_init();
+ if (r) {
+ up(&kcopyd_init_lock);
+ return r;
+ }
+
+ _kcopyd_wq = create_singlethread_workqueue("kcopyd");
+ if (!_kcopyd_wq) {
+ jobs_exit();
+ up(&kcopyd_init_lock);
+ return -ENOMEM;
+ }
+
+ kcopyd_clients++;
+ INIT_WORK(&_kcopyd_work, do_work, NULL);
+ up(&kcopyd_init_lock);
+ return 0;
+}
+
+static void kcopyd_exit(void)
+{
+ down(&kcopyd_init_lock);
+ kcopyd_clients--;
+ if (!kcopyd_clients) {
+ jobs_exit();
+ destroy_workqueue(_kcopyd_wq);
+ _kcopyd_wq = NULL;
+ }
+ up(&kcopyd_init_lock);
+}
+
+int kcopyd_client_create(unsigned int nr_pages, struct kcopyd_client **result)
+{
+ int r = 0;
+ struct kcopyd_client *kc;
+
+ r = kcopyd_init();
+ if (r)
+ return r;
+
+ kc = kmalloc(sizeof(*kc), GFP_KERNEL);
+ if (!kc) {
+ kcopyd_exit();
+ return -ENOMEM;
+ }
+
+ kc->lock = SPIN_LOCK_UNLOCKED;
+ kc->pages = NULL;
+ kc->nr_pages = kc->nr_free_pages = 0;
+ r = client_alloc_pages(kc, nr_pages);
+ if (r) {
+ kfree(kc);
+ kcopyd_exit();
+ return r;
+ }
+
+ r = dm_io_get(nr_pages);
+ if (r) {
+ client_free_pages(kc);
+ kfree(kc);
+ kcopyd_exit();
+ return r;
+ }
+
+ r = client_add(kc);
+ if (r) {
+ dm_io_put(nr_pages);
+ client_free_pages(kc);
+ kfree(kc);
+ kcopyd_exit();
+ return r;
+ }
+
+ *result = kc;
+ return 0;
+}
+
+void kcopyd_client_destroy(struct kcopyd_client *kc)
+{
+ dm_io_put(kc->nr_pages);
+ client_free_pages(kc);
+ client_del(kc);
+ kfree(kc);
+ kcopyd_exit();
+}
+
+EXPORT_SYMBOL(kcopyd_client_create);
+EXPORT_SYMBOL(kcopyd_client_destroy);
+EXPORT_SYMBOL(kcopyd_copy);
+EXPORT_SYMBOL(kcopyd_cancel);
--- /dev/null
+/*
+ * Copyright (C) 2001 Sistina Software
+ *
+ * This file is released under the GPL.
+ *
+ * Kcopyd provides a simple interface for copying an area of one
+ * block-device to one or more other block-devices, with an asynchronous
+ * completion notification.
+ */
+
+#ifndef DM_KCOPYD_H
+#define DM_KCOPYD_H
+
+#include "dm-io.h"
+
+/* FIXME: make this configurable */
+#define KCOPYD_MAX_REGIONS 8
+
+#define KCOPYD_IGNORE_ERROR 1
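+/*
+ * Job flag: record read/write errors but keep copying instead of
+ * completing the job early (see complete_io() in kcopyd.c).
+ */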
+
+/*
+ * To use kcopyd you must first create a kcopyd client object.
+ */
+struct kcopyd_client;
+int kcopyd_client_create(unsigned int num_pages, struct kcopyd_client **result);
+void kcopyd_client_destroy(struct kcopyd_client *kc);
+
+/*
+ * Submit a copy job to kcopyd. This is built on top of the
+ * client interface above.
+ *
+ * read_err is a boolean,
+ * write_err is a bitset, with 1 bit for each destination region
+ */
+typedef void (*kcopyd_notify_fn)(int read_err,
+ unsigned int write_err, void *context);
+
+int kcopyd_copy(struct kcopyd_client *kc, struct io_region *from,
+ unsigned int num_dests, struct io_region *dests,
+ unsigned int flags, kcopyd_notify_fn fn, void *context);
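+
+/*
+ * Illustrative usage sketch (bdev pointers, sizes and the callback
+ * name are hypothetical):
+ *
+ *    struct kcopyd_client *kc;
+ *    struct io_region from, to;
+ *
+ *    kcopyd_client_create(32, &kc);
+ *    from.bdev = src;  from.sector = 0;  from.count = 1024;
+ *    to.bdev = dst;    to.sector = 0;    to.count = 1024;
+ *    kcopyd_copy(kc, &from, 1, &to, 0, copy_done, NULL);
+ *      - copy_done(read_err, write_err, context) runs on completion
+ */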
+
+#endif
--- /dev/null
+ovcamchip-objs := ovcamchip_core.o ov6x20.o ov6x30.o ov7x10.o ov7x20.o \
+ ov76be.o
+
+obj-$(CONFIG_VIDEO_OVCAMCHIP) += ovcamchip.o
--- /dev/null
+/* OmniVision OV6620/OV6120 Camera Chip Support Code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+/* Registers */
+#define REG_GAIN 0x00 /* gain [5:0] */
+#define REG_BLUE 0x01 /* blue gain */
+#define REG_RED 0x02 /* red gain */
+#define REG_SAT 0x03 /* saturation */
+#define REG_CNT 0x05 /* Y contrast */
+#define REG_BRT 0x06 /* Y brightness */
+#define REG_WB_BLUE 0x0C /* WB blue ratio [5:0] */
+#define REG_WB_RED 0x0D /* WB red ratio [5:0] */
+#define REG_EXP 0x10 /* exposure */
+
+/* Window parameters */
+#define HWSBASE 0x38
+#define HWEBASE 0x3A
+#define VWSBASE 0x05
+#define VWEBASE 0x06
+
+struct ov6x20 {
+ int auto_brt;
+ int auto_exp;
+ int backlight;
+ int bandfilt;
+ int mirror;
+};
+
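+/*
+ * Register tables: ov_write_regvals() programs each { reg, value }
+ * pair in order, stopping at the { 0xff, 0xff } end marker.
+ */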
+/* Initial values for use with OV511/OV511+ cameras */
+static struct ovcamchip_regvals regvals_init_6x20_511[] = {
+ { 0x12, 0x80 }, /* reset */
+ { 0x11, 0x01 },
+ { 0x03, 0x60 },
+ { 0x05, 0x7f }, /* For when autoadjust is off */
+ { 0x07, 0xa8 },
+ { 0x0c, 0x24 },
+ { 0x0d, 0x24 },
+ { 0x0f, 0x15 }, /* COMS */
+ { 0x10, 0x75 }, /* AEC Exposure time */
+ { 0x12, 0x24 }, /* Enable AGC and AWB */
+ { 0x14, 0x04 },
+ { 0x16, 0x03 },
+ { 0x26, 0xb2 }, /* BLC enable */
+ /* 0x28: 0x05 Selects RGB format if RGB on */
+ { 0x28, 0x05 },
+ { 0x2a, 0x04 }, /* Disable framerate adjust */
+ { 0x2d, 0x99 },
+ { 0x33, 0xa0 }, /* Color Processing Parameter */
+ { 0x34, 0xd2 }, /* Max A/D range */
+ { 0x38, 0x8b },
+ { 0x39, 0x40 },
+
+ { 0x3c, 0x39 }, /* Enable AEC mode changing */
+ { 0x3c, 0x3c }, /* Change AEC mode */
+ { 0x3c, 0x24 }, /* Disable AEC mode changing */
+
+ { 0x3d, 0x80 },
+ /* These next two registers (0x4a, 0x4b) are undocumented. They
+ * control the color balance */
+ { 0x4a, 0x80 },
+ { 0x4b, 0x80 },
+ { 0x4d, 0xd2 }, /* This reduces noise a bit */
+ { 0x4e, 0xc1 },
+ { 0x4f, 0x04 },
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* Initial values for use with OV518 cameras */
+static struct ovcamchip_regvals regvals_init_6x20_518[] = {
+ { 0x12, 0x80 }, /* Do a reset */
+ { 0x03, 0xc0 }, /* Saturation */
+ { 0x05, 0x8a }, /* Contrast */
+ { 0x0c, 0x24 }, /* AWB blue */
+ { 0x0d, 0x24 }, /* AWB red */
+ { 0x0e, 0x8d }, /* Additional 2x gain */
+ { 0x0f, 0x25 }, /* Black expanding level = 1.3V */
+ { 0x11, 0x01 }, /* Clock div. */
+ { 0x12, 0x24 }, /* Enable AGC and AWB */
+ { 0x13, 0x01 }, /* (default) */
+ { 0x14, 0x80 }, /* Set reserved bit 7 */
+ { 0x15, 0x01 }, /* (default) */
+ { 0x16, 0x03 }, /* (default) */
+ { 0x17, 0x38 }, /* (default) */
+ { 0x18, 0xea }, /* (default) */
+ { 0x19, 0x04 },
+ { 0x1a, 0x93 },
+ { 0x1b, 0x00 }, /* (default) */
+ { 0x1e, 0xc4 }, /* (default) */
+ { 0x1f, 0x04 }, /* (default) */
+ { 0x20, 0x20 }, /* Enable 1st stage aperture correction */
+ { 0x21, 0x10 }, /* Y offset */
+ { 0x22, 0x88 }, /* U offset */
+ { 0x23, 0xc0 }, /* Set XTAL power level */
+ { 0x24, 0x53 }, /* AEC bright ratio */
+ { 0x25, 0x7a }, /* AEC black ratio */
+ { 0x26, 0xb2 }, /* BLC enable */
+ { 0x27, 0xa2 }, /* Full output range */
+ { 0x28, 0x01 }, /* (default) */
+ { 0x29, 0x00 }, /* (default) */
+ { 0x2a, 0x84 }, /* (default) */
+ { 0x2b, 0xa8 }, /* Set custom frame rate */
+ { 0x2c, 0xa0 }, /* (reserved) */
+ { 0x2d, 0x95 }, /* Enable banding filter */
+ { 0x2e, 0x88 }, /* V offset */
+ { 0x33, 0x22 }, /* Luminance gamma on */
+ { 0x34, 0xc7 }, /* A/D bias */
+ { 0x36, 0x12 }, /* (reserved) */
+ { 0x37, 0x63 }, /* (reserved) */
+ { 0x38, 0x8b }, /* Quick AEC/AEB */
+ { 0x39, 0x00 }, /* (default) */
+ { 0x3a, 0x0f }, /* (default) */
+ { 0x3b, 0x3c }, /* (default) */
+ { 0x3c, 0x5c }, /* AEC controls */
+ { 0x3d, 0x80 }, /* Drop 1 (bad) frame when AEC change */
+ { 0x3e, 0x80 }, /* (default) */
+ { 0x3f, 0x02 }, /* (default) */
+ { 0x40, 0x10 }, /* (reserved) */
+ { 0x41, 0x10 }, /* (reserved) */
+ { 0x42, 0x00 }, /* (reserved) */
+ { 0x43, 0x7f }, /* (reserved) */
+ { 0x44, 0x80 }, /* (reserved) */
+ { 0x45, 0x1c }, /* (reserved) */
+ { 0x46, 0x1c }, /* (reserved) */
+ { 0x47, 0x80 }, /* (reserved) */
+ { 0x48, 0x5f }, /* (reserved) */
+ { 0x49, 0x00 }, /* (reserved) */
+ { 0x4a, 0x00 }, /* Color balance (undocumented) */
+ { 0x4b, 0x80 }, /* Color balance (undocumented) */
+ { 0x4c, 0x58 }, /* (reserved) */
+ { 0x4d, 0xd2 }, /* U *= .938, V *= .838 */
+ { 0x4e, 0xa0 }, /* (default) */
+ { 0x4f, 0x04 }, /* UV 3-point average */
+ { 0x50, 0xff }, /* (reserved) */
+ { 0x51, 0x58 }, /* (reserved) */
+ { 0x52, 0xc0 }, /* (reserved) */
+ { 0x53, 0x42 }, /* (reserved) */
+ { 0x27, 0xa6 }, /* Enable manual offset adj. (reg 21 & 22) */
+ { 0x12, 0x20 },
+ { 0x12, 0x24 },
+
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* This initializes the OV6x20 camera chip and relevant variables. */
+static int ov6x20_init(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x20 *s;
+ int rc;
+
+ DDEBUG(4, &c->dev, "entered");
+
+ switch (c->adapter->id) {
+ case I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV511:
+ rc = ov_write_regvals(c, regvals_init_6x20_511);
+ break;
+ case I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518:
+ rc = ov_write_regvals(c, regvals_init_6x20_518);
+ break;
+ default:
+ dev_err(&c->dev, "ov6x20: Unsupported adapter\n");
+ rc = -ENODEV;
+ }
+
+ if (rc < 0)
+ return rc;
+
+ ov->spriv = s = kmalloc(sizeof *s, GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ memset(s, 0, sizeof *s);
+
+ s->auto_brt = 1;
+ s->auto_exp = 1;
+
+ return rc;
+}
+
+static int ov6x20_free(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ kfree(ov->spriv);
+ return 0;
+}
+
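+/*
+ * Control values follow the V4L 16-bit convention (0-65535); the top
+ * byte (v >> 8) is what lands in the chip's 8-bit registers.
+ * ov_write_mask(c, reg, value, mask) is a read-modify-write that
+ * updates only the bits set in mask.
+ */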
+static int ov6x20_set_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x20 *s = ov->spriv;
+ int rc;
+ int v = ctl->value;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_write(c, REG_CNT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_write(c, REG_BRT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_write(c, REG_SAT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_write(c, REG_RED, 0xFF - (v >> 8));
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, REG_BLUE, v >> 8);
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_write(c, REG_EXP, v);
+ break;
+ case OVCAMCHIP_CID_FREQ:
+ {
+ int sixty = (v == 60);
+
+ rc = ov_write(c, 0x2b, sixty?0xa8:0x28);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, 0x2a, sixty?0x84:0xa4);
+ break;
+ }
+ case OVCAMCHIP_CID_BANDFILT:
+ rc = ov_write_mask(c, 0x2d, v?0x04:0x00, 0x04);
+ s->bandfilt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ rc = ov_write_mask(c, 0x2d, v?0x10:0x00, 0x10);
+ s->auto_brt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ rc = ov_write_mask(c, 0x13, v?0x01:0x00, 0x01);
+ s->auto_exp = v;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ {
+ rc = ov_write_mask(c, 0x4e, v?0xe0:0xc0, 0xe0);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x29, v?0x08:0x00, 0x08);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x0e, v?0x80:0x00, 0x80);
+ s->backlight = v;
+ break;
+ }
+ case OVCAMCHIP_CID_MIRROR:
+ rc = ov_write_mask(c, 0x12, v?0x40:0x00, 0x40);
+ s->mirror = v;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+out:
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, v, rc);
+ return rc;
+}
+
+static int ov6x20_get_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x20 *s = ov->spriv;
+ int rc = 0;
+ unsigned char val = 0;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_read(c, REG_CNT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_read(c, REG_BRT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_read(c, REG_SAT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_read(c, REG_BLUE, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_read(c, REG_EXP, &val);
+ ctl->value = val;
+ break;
+ case OVCAMCHIP_CID_BANDFILT:
+ ctl->value = s->bandfilt;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ ctl->value = s->auto_brt;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ ctl->value = s->auto_exp;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ ctl->value = s->backlight;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ ctl->value = s->mirror;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, ctl->value, rc);
+ return rc;
+}
+
+static int ov6x20_mode_init(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ /******** QCIF-specific regs ********/
+
+ ov_write(c, 0x14, win->quarter?0x24:0x04);
+
+ /******** Palette-specific regs ********/
+
+ /* OV518 needs 8 bit multiplexed in color mode, and 16 bit in B&W */
+ if (c->adapter->id == (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518)) {
+ if (win->format == VIDEO_PALETTE_GREY)
+ ov_write_mask(c, 0x13, 0x00, 0x20);
+ else
+ ov_write_mask(c, 0x13, 0x20, 0x20);
+ } else {
+ if (win->format == VIDEO_PALETTE_GREY)
+ ov_write_mask(c, 0x13, 0x20, 0x20);
+ else
+ ov_write_mask(c, 0x13, 0x00, 0x20);
+ }
+
+ /******** Clock programming ********/
+
+ /* The OV6620 needs special handling. This prevents the
+ * severe banding that normally occurs */
+
+ /* Clock down */
+ ov_write(c, 0x2a, 0x04);
+
+ ov_write(c, 0x11, win->clockdiv);
+
+ ov_write(c, 0x2a, 0x84);
+ /* This next setting is critical. It seems to improve
+ * the gain or the contrast. The "reserved" bits seem
+ * to have some effect in this case. */
+ ov_write(c, 0x2d, 0x85); /* FIXME: This messes up banding filter */
+
+ return 0;
+}
+
+static int ov6x20_set_window(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int ret, hwscale, vwscale;
+
+ ret = ov6x20_mode_init(c, win);
+ if (ret < 0)
+ return ret;
+
+ if (win->quarter) {
+ hwscale = 0;
+ vwscale = 0;
+ } else {
+ hwscale = 1;
+ vwscale = 1; /* The datasheet says 0; it's wrong */
+ }
+
+ ov_write(c, 0x17, HWSBASE + (win->x >> hwscale));
+ ov_write(c, 0x18, HWEBASE + ((win->x + win->width) >> hwscale));
+ ov_write(c, 0x19, VWSBASE + (win->y >> vwscale));
+ ov_write(c, 0x1a, VWEBASE + ((win->y + win->height) >> vwscale));
+
+ return 0;
+}
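+
+/*
+ * Worked example (full-size CIF, x = 0, width = 352): hwscale = 1,
+ * so reg 0x17 = HWSBASE = 0x38 and reg 0x18 = 0x3a + (352 >> 1) =
+ * 0xea, matching the { 0x18, 0xea } defaults in the init tables
+ * above.
+ */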
+
+static int ov6x20_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case OVCAMCHIP_CMD_S_CTRL:
+ return ov6x20_set_control(c, arg);
+ case OVCAMCHIP_CMD_G_CTRL:
+ return ov6x20_get_control(c, arg);
+ case OVCAMCHIP_CMD_S_MODE:
+ return ov6x20_set_window(c, arg);
+ default:
+ DDEBUG(2, &c->dev, "command not supported: %d", cmd);
+ return -ENOIOCTLCMD;
+ }
+}
+
+struct ovcamchip_ops ov6x20_ops = {
+ .init = ov6x20_init,
+ .free = ov6x20_free,
+ .command = ov6x20_command,
+};
--- /dev/null
+/* OmniVision OV6630/OV6130 Camera Chip Support Code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+/* Registers */
+#define REG_GAIN 0x00 /* gain [5:0] */
+#define REG_BLUE 0x01 /* blue gain */
+#define REG_RED 0x02 /* red gain */
+#define REG_SAT 0x03 /* saturation [7:3] */
+#define REG_CNT 0x05 /* Y contrast [3:0] */
+#define REG_BRT 0x06 /* Y brightness */
+#define REG_SHARP 0x07 /* sharpness */
+#define REG_WB_BLUE 0x0C /* WB blue ratio [5:0] */
+#define REG_WB_RED 0x0D /* WB red ratio [5:0] */
+#define REG_EXP 0x10 /* exposure */
+
+/* Window parameters */
+#define HWSBASE 0x38
+#define HWEBASE 0x3A
+#define VWSBASE 0x05
+#define VWEBASE 0x06
+
+struct ov6x30 {
+ int auto_brt;
+ int auto_exp;
+ int backlight;
+ int bandfilt;
+ int mirror;
+};
+
+static struct ovcamchip_regvals regvals_init_6x30[] = {
+ { 0x12, 0x80 }, /* reset */
+ { 0x00, 0x1f }, /* Gain */
+ { 0x01, 0x99 }, /* Blue gain */
+ { 0x02, 0x7c }, /* Red gain */
+ { 0x03, 0xc0 }, /* Saturation */
+ { 0x05, 0x0a }, /* Contrast */
+ { 0x06, 0x95 }, /* Brightness */
+ { 0x07, 0x2d }, /* Sharpness */
+ { 0x0c, 0x20 },
+ { 0x0d, 0x20 },
+ { 0x0e, 0x20 },
+ { 0x0f, 0x05 },
+ { 0x10, 0x9a }, /* "exposure check" */
+ { 0x11, 0x00 }, /* Pixel clock = fastest */
+ { 0x12, 0x24 }, /* Enable AGC and AWB */
+ { 0x13, 0x21 },
+ { 0x14, 0x80 },
+ { 0x15, 0x01 },
+ { 0x16, 0x03 },
+ { 0x17, 0x38 },
+ { 0x18, 0xea },
+ { 0x19, 0x04 },
+ { 0x1a, 0x93 },
+ { 0x1b, 0x00 },
+ { 0x1e, 0xc4 },
+ { 0x1f, 0x04 },
+ { 0x20, 0x20 },
+ { 0x21, 0x10 },
+ { 0x22, 0x88 },
+ { 0x23, 0xc0 }, /* Crystal circuit power level */
+ { 0x25, 0x9a }, /* Increase AEC black pixel ratio */
+ { 0x26, 0xb2 }, /* BLC enable */
+ { 0x27, 0xa2 },
+ { 0x28, 0x00 },
+ { 0x29, 0x00 },
+ { 0x2a, 0x84 }, /* (keep) */
+ { 0x2b, 0xa8 }, /* (keep) */
+ { 0x2c, 0xa0 },
+ { 0x2d, 0x95 }, /* Enable auto-brightness */
+ { 0x2e, 0x88 },
+ { 0x33, 0x26 },
+ { 0x34, 0x03 },
+ { 0x36, 0x8f },
+ { 0x37, 0x80 },
+ { 0x38, 0x83 },
+ { 0x39, 0x80 },
+ { 0x3a, 0x0f },
+ { 0x3b, 0x3c },
+ { 0x3c, 0x1a },
+ { 0x3d, 0x80 },
+ { 0x3e, 0x80 },
+ { 0x3f, 0x0e },
+ { 0x40, 0x00 }, /* White bal */
+ { 0x41, 0x00 }, /* White bal */
+ { 0x42, 0x80 },
+ { 0x43, 0x3f }, /* White bal */
+ { 0x44, 0x80 },
+ { 0x45, 0x20 },
+ { 0x46, 0x20 },
+ { 0x47, 0x80 },
+ { 0x48, 0x7f },
+ { 0x49, 0x00 },
+ { 0x4a, 0x00 },
+ { 0x4b, 0x80 },
+ { 0x4c, 0xd0 },
+ { 0x4d, 0x10 }, /* U = 0.563u, V = 0.714v */
+ { 0x4e, 0x40 },
+ { 0x4f, 0x07 }, /* UV average mode, color killer: strongest */
+ { 0x50, 0xff },
+ { 0x54, 0x23 }, /* Max AGC gain: 18dB */
+ { 0x55, 0xff },
+ { 0x56, 0x12 },
+ { 0x57, 0x81 }, /* (default) */
+ { 0x58, 0x75 },
+ { 0x59, 0x01 }, /* AGC dark current compensation: +1 */
+ { 0x5a, 0x2c },
+ { 0x5b, 0x0f }, /* AWB chrominance levels */
+ { 0x5c, 0x10 },
+ { 0x3d, 0x80 },
+ { 0x27, 0xa6 },
+ /* Toggle AWB off and on */
+ { 0x12, 0x20 },
+ { 0x12, 0x24 },
+
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* This initializes the OV6x30 camera chip and relevant variables. */
+static int ov6x30_init(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x30 *s;
+ int rc;
+
+ DDEBUG(4, &c->dev, "entered");
+
+ rc = ov_write_regvals(c, regvals_init_6x30);
+ if (rc < 0)
+ return rc;
+
+ ov->spriv = s = kmalloc(sizeof *s, GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ memset(s, 0, sizeof *s);
+
+ s->auto_brt = 1;
+ s->auto_exp = 1;
+
+ return rc;
+}
+
+static int ov6x30_free(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ kfree(ov->spriv);
+ return 0;
+}
+
+static int ov6x30_set_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x30 *s = ov->spriv;
+ int rc;
+ int v = ctl->value;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_write_mask(c, REG_CNT, v >> 12, 0x0f);
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_write(c, REG_BRT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_write(c, REG_SAT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_write(c, REG_RED, 0xFF - (v >> 8));
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, REG_BLUE, v >> 8);
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_write(c, REG_EXP, v);
+ break;
+ case OVCAMCHIP_CID_FREQ:
+ {
+ int sixty = (v == 60);
+
+ rc = ov_write(c, 0x2b, sixty?0xa8:0x28);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, 0x2a, sixty?0x84:0xa4);
+ break;
+ }
+ case OVCAMCHIP_CID_BANDFILT:
+ rc = ov_write_mask(c, 0x2d, v?0x04:0x00, 0x04);
+ s->bandfilt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ rc = ov_write_mask(c, 0x2d, v?0x10:0x00, 0x10);
+ s->auto_brt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ rc = ov_write_mask(c, 0x28, v?0x00:0x10, 0x10);
+ s->auto_exp = v;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ {
+ rc = ov_write_mask(c, 0x4e, v?0x80:0x60, 0xe0);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x29, v?0x08:0x00, 0x08);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x28, v?0x02:0x00, 0x02);
+ s->backlight = v;
+ break;
+ }
+ case OVCAMCHIP_CID_MIRROR:
+ rc = ov_write_mask(c, 0x12, v?0x40:0x00, 0x40);
+ s->mirror = v;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+out:
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, v, rc);
+ return rc;
+}
+
+static int ov6x30_get_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov6x30 *s = ov->spriv;
+ int rc = 0;
+ unsigned char val = 0;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_read(c, REG_CNT, &val);
+ ctl->value = (val & 0x0f) << 12;
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_read(c, REG_BRT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_read(c, REG_SAT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_read(c, REG_BLUE, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_read(c, REG_EXP, &val);
+ ctl->value = val;
+ break;
+ case OVCAMCHIP_CID_BANDFILT:
+ ctl->value = s->bandfilt;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ ctl->value = s->auto_brt;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ ctl->value = s->auto_exp;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ ctl->value = s->backlight;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ ctl->value = s->mirror;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, ctl->value, rc);
+ return rc;
+}
+
+static int ov6x30_mode_init(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ /******** QCIF-specific regs ********/
+
+ ov_write_mask(c, 0x14, win->quarter?0x20:0x00, 0x20);
+
+ /******** Palette-specific regs ********/
+
+ if (win->format == VIDEO_PALETTE_GREY) {
+ if (c->adapter->id == (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518)) {
+ /* Do nothing - we're already in 8-bit mode */
+ } else {
+ ov_write_mask(c, 0x13, 0x20, 0x20);
+ }
+ } else {
+ /* The OV518 needs special treatment. Although both the OV518
+ * and the OV6630 support a 16-bit video bus, only the 8 bit Y
+ * bus is actually used. The UV bus is tied to ground.
+ * Therefore, the OV6630 needs to be in 8-bit multiplexed
+ * output mode */
+
+ if (c->adapter->id == (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518)) {
+ /* Do nothing - we want to stay in 8-bit mode */
+ /* Warning: Messing with reg 0x13 breaks OV518 color */
+ } else {
+ ov_write_mask(c, 0x13, 0x00, 0x20);
+ }
+ }
+
+ /******** Clock programming ********/
+
+ ov_write(c, 0x11, win->clockdiv);
+
+ return 0;
+}
+
+static int ov6x30_set_window(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int ret, hwscale, vwscale;
+
+ ret = ov6x30_mode_init(c, win);
+ if (ret < 0)
+ return ret;
+
+ if (win->quarter) {
+ hwscale = 0;
+ vwscale = 0;
+ } else {
+ hwscale = 1;
+ vwscale = 1; /* The datasheet says 0; it's wrong */
+ }
+
+ ov_write(c, 0x17, HWSBASE + (win->x >> hwscale));
+ ov_write(c, 0x18, HWEBASE + ((win->x + win->width) >> hwscale));
+ ov_write(c, 0x19, VWSBASE + (win->y >> vwscale));
+ ov_write(c, 0x1a, VWEBASE + ((win->y + win->height) >> vwscale));
+
+ return 0;
+}
+
+static int ov6x30_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case OVCAMCHIP_CMD_S_CTRL:
+ return ov6x30_set_control(c, arg);
+ case OVCAMCHIP_CMD_G_CTRL:
+ return ov6x30_get_control(c, arg);
+ case OVCAMCHIP_CMD_S_MODE:
+ return ov6x30_set_window(c, arg);
+ default:
+ DDEBUG(2, &c->dev, "command not supported: %d", cmd);
+ return -ENOIOCTLCMD;
+ }
+}
+
+struct ovcamchip_ops ov6x30_ops = {
+ .init = ov6x30_init,
+ .free = ov6x30_free,
+ .command = ov6x30_command,
+};
--- /dev/null
+/* OmniVision OV76BE Camera Chip Support Code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+/* OV7610 registers: Since the OV76BE is undocumented, we'll settle for these
+ * for now. */
+#define REG_GAIN 0x00 /* gain [5:0] */
+#define REG_BLUE 0x01 /* blue channel balance */
+#define REG_RED 0x02 /* red channel balance */
+#define REG_SAT 0x03 /* saturation */
+#define REG_CNT 0x05 /* Y contrast */
+#define REG_BRT 0x06 /* Y brightness */
+#define REG_BLUE_BIAS 0x0C /* blue channel bias [5:0] */
+#define REG_RED_BIAS 0x0D /* red channel bias [5:0] */
+#define REG_GAMMA_COEFF 0x0E /* gamma settings */
+#define REG_WB_RANGE 0x0F /* AEC/ALC/S-AWB settings */
+#define REG_EXP 0x10 /* manual exposure setting */
+#define REG_CLOCK 0x11 /* polarity/clock prescaler */
+#define REG_FIELD_DIVIDE 0x16 /* field interval/mode settings */
+#define REG_HWIN_START 0x17 /* horizontal window start */
+#define REG_HWIN_END 0x18 /* horizontal window end */
+#define REG_VWIN_START 0x19 /* vertical window start */
+#define REG_VWIN_END 0x1A /* vertical window end */
+#define REG_PIXEL_SHIFT 0x1B /* pixel shift */
+#define REG_YOFFSET 0x21 /* Y channel offset */
+#define REG_UOFFSET 0x22 /* U channel offset */
+#define REG_ECW 0x24 /* exposure white level for AEC */
+#define REG_ECB 0x25 /* exposure black level for AEC */
+#define REG_FRAMERATE_H 0x2A /* frame rate MSB + misc */
+#define REG_FRAMERATE_L 0x2B /* frame rate LSB */
+#define REG_ALC 0x2C /* Auto Level Control settings */
+#define REG_VOFFSET 0x2E /* V channel offset adjustment */
+#define REG_ARRAY_BIAS 0x2F /* array bias -- don't change */
+#define REG_YGAMMA 0x33 /* misc gamma settings [7:6] */
+#define REG_BIAS_ADJUST 0x34 /* misc bias settings */
+
+/* Window parameters */
+#define HWSBASE 0x38
+#define HWEBASE 0x3a
+#define VWSBASE 0x05
+#define VWEBASE 0x05
+
+struct ov76be {
+ int auto_brt;
+ int auto_exp;
+ int bandfilt;
+ int mirror;
+};
+
+/* NOTE: These are the same as the 7x10 settings, but should eventually be
+ * optimized for the OV76BE */
+static struct ovcamchip_regvals regvals_init_76be[] = {
+ { 0x10, 0xff },
+ { 0x16, 0x03 },
+ { 0x28, 0x24 },
+ { 0x2b, 0xac },
+ { 0x12, 0x00 },
+ { 0x38, 0x81 },
+ { 0x28, 0x24 }, /* 0c */
+ { 0x0f, 0x85 }, /* lg's setting */
+ { 0x15, 0x01 },
+ { 0x20, 0x1c },
+ { 0x23, 0x2a },
+ { 0x24, 0x10 },
+ { 0x25, 0x8a },
+ { 0x26, 0xa2 },
+ { 0x27, 0xc2 },
+ { 0x2a, 0x04 },
+ { 0x2c, 0xfe },
+ { 0x2d, 0x93 },
+ { 0x30, 0x71 },
+ { 0x31, 0x60 },
+ { 0x32, 0x26 },
+ { 0x33, 0x20 },
+ { 0x34, 0x48 },
+ { 0x12, 0x24 },
+ { 0x11, 0x01 },
+ { 0x0c, 0x24 },
+ { 0x0d, 0x24 },
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* This initializes the OV76be camera chip and relevant variables. */
+static int ov76be_init(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov76be *s;
+ int rc;
+
+ DDEBUG(4, &c->dev, "entered");
+
+ rc = ov_write_regvals(c, regvals_init_76be);
+ if (rc < 0)
+ return rc;
+
+ ov->spriv = s = kmalloc(sizeof *s, GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ memset(s, 0, sizeof *s);
+
+ s->auto_brt = 1;
+ s->auto_exp = 1;
+
+ return rc;
+}
+
+static int ov76be_free(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ kfree(ov->spriv);
+ return 0;
+}
+
+static int ov76be_set_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov76be *s = ov->spriv;
+ int rc;
+ int v = ctl->value;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_write(c, REG_BRT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_write(c, REG_SAT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_write(c, REG_EXP, v);
+ break;
+ case OVCAMCHIP_CID_FREQ:
+ {
+ int sixty = (v == 60);
+
+ rc = ov_write_mask(c, 0x2a, sixty?0x00:0x80, 0x80);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, 0x2b, sixty?0x00:0xac);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x76, 0x01, 0x01);
+ break;
+ }
+ case OVCAMCHIP_CID_BANDFILT:
+ rc = ov_write_mask(c, 0x2d, v?0x04:0x00, 0x04);
+ s->bandfilt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ rc = ov_write_mask(c, 0x2d, v?0x10:0x00, 0x10);
+ s->auto_brt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ rc = ov_write_mask(c, 0x13, v?0x01:0x00, 0x01);
+ s->auto_exp = v;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ rc = ov_write_mask(c, 0x12, v?0x40:0x00, 0x40);
+ s->mirror = v;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+out:
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, v, rc);
+ return rc;
+}
+
+static int ov76be_get_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov76be *s = ov->spriv;
+ int rc = 0;
+ unsigned char val = 0;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_read(c, REG_BRT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_read(c, REG_SAT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_read(c, REG_EXP, &val);
+ ctl->value = val;
+ break;
+ case OVCAMCHIP_CID_BANDFILT:
+ ctl->value = s->bandfilt;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ ctl->value = s->auto_brt;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ ctl->value = s->auto_exp;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ ctl->value = s->mirror;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, ctl->value, rc);
+ return rc;
+}
+
+static int ov76be_mode_init(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int qvga = win->quarter;
+
+ /******** QVGA-specific regs ********/
+
+ ov_write(c, 0x14, qvga?0xa4:0x84);
+
+ /******** Palette-specific regs ********/
+
+ if (win->format == VIDEO_PALETTE_GREY) {
+ ov_write_mask(c, 0x0e, 0x40, 0x40);
+ ov_write_mask(c, 0x13, 0x20, 0x20);
+ } else {
+ ov_write_mask(c, 0x0e, 0x00, 0x40);
+ ov_write_mask(c, 0x13, 0x00, 0x20);
+ }
+
+ /******** Clock programming ********/
+
+ ov_write(c, 0x11, win->clockdiv);
+
+ /******** Resolution-specific ********/
+
+ if (win->width == 640 && win->height == 480)
+ ov_write(c, 0x35, 0x9e);
+ else
+ ov_write(c, 0x35, 0x1e);
+
+ return 0;
+}
+
+static int ov76be_set_window(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int ret, hwscale, vwscale;
+
+ ret = ov76be_mode_init(c, win);
+ if (ret < 0)
+ return ret;
+
+ if (win->quarter) {
+ hwscale = 1;
+ vwscale = 0;
+ } else {
+ hwscale = 2;
+ vwscale = 1;
+ }
+
+ ov_write(c, 0x17, HWSBASE + (win->x >> hwscale));
+ ov_write(c, 0x18, HWEBASE + ((win->x + win->width) >> hwscale));
+ ov_write(c, 0x19, VWSBASE + (win->y >> vwscale));
+ ov_write(c, 0x1a, VWEBASE + ((win->y + win->height) >> vwscale));
+
+ return 0;
+}
+
+static int ov76be_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case OVCAMCHIP_CMD_S_CTRL:
+ return ov76be_set_control(c, arg);
+ case OVCAMCHIP_CMD_G_CTRL:
+ return ov76be_get_control(c, arg);
+ case OVCAMCHIP_CMD_S_MODE:
+ return ov76be_set_window(c, arg);
+ default:
+ DDEBUG(2, &c->dev, "command not supported: %d", cmd);
+ return -ENOIOCTLCMD;
+ }
+}
+
+struct ovcamchip_ops ov76be_ops = {
+ .init = ov76be_init,
+ .free = ov76be_free,
+ .command = ov76be_command,
+};
--- /dev/null
+/* OmniVision OV7610/OV7110 Camera Chip Support Code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * Color fixes by Orion Sky Lawlor <olawlor@acm.org> (2/26/2000)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+/* Registers */
+#define REG_GAIN 0x00 /* gain [5:0] */
+#define REG_BLUE 0x01 /* blue channel balance */
+#define REG_RED 0x02 /* red channel balance */
+#define REG_SAT 0x03 /* saturation */
+#define REG_CNT 0x05 /* Y contrast */
+#define REG_BRT 0x06 /* Y brightness */
+#define REG_BLUE_BIAS 0x0C /* blue channel bias [5:0] */
+#define REG_RED_BIAS 0x0D /* red channel bias [5:0] */
+#define REG_GAMMA_COEFF 0x0E /* gamma settings */
+#define REG_WB_RANGE 0x0F /* AEC/ALC/S-AWB settings */
+#define REG_EXP 0x10 /* manual exposure setting */
+#define REG_CLOCK 0x11 /* polarity/clock prescaler */
+#define REG_FIELD_DIVIDE 0x16 /* field interval/mode settings */
+#define REG_HWIN_START 0x17 /* horizontal window start */
+#define REG_HWIN_END 0x18 /* horizontal window end */
+#define REG_VWIN_START 0x19 /* vertical window start */
+#define REG_VWIN_END 0x1A /* vertical window end */
+#define REG_PIXEL_SHIFT 0x1B /* pixel shift */
+#define REG_YOFFSET 0x21 /* Y channel offset */
+#define REG_UOFFSET 0x22 /* U channel offset */
+#define REG_ECW 0x24 /* exposure white level for AEC */
+#define REG_ECB 0x25 /* exposure black level for AEC */
+#define REG_FRAMERATE_H 0x2A /* frame rate MSB + misc */
+#define REG_FRAMERATE_L 0x2B /* frame rate LSB */
+#define REG_ALC 0x2C /* Auto Level Control settings */
+#define REG_VOFFSET 0x2E /* V channel offset adjustment */
+#define REG_ARRAY_BIAS 0x2F /* array bias -- don't change */
+#define REG_YGAMMA 0x33 /* misc gamma settings [7:6] */
+#define REG_BIAS_ADJUST 0x34 /* misc bias settings */
+
+/* Window parameters */
+#define HWSBASE 0x38
+#define HWEBASE 0x3a
+#define VWSBASE 0x05
+#define VWEBASE 0x05
+
+struct ov7x10 {
+ int auto_brt;
+ int auto_exp;
+ int bandfilt;
+ int mirror;
+};
+
+/* Lawrence Glaister <lg@jfm.bc.ca> reports:
+ *
+ * Register 0x0f in the 7610 has the following effects:
+ *
+ * 0x85 (AEC method 1): Best overall, good contrast range
+ * 0x45 (AEC method 2): Very overexposed
+ * 0xa5 (spec sheet default): Ok, but the black level is
+ * shifted resulting in loss of contrast
+ * 0x05 (old driver setting): very overexposed, too much
+ * contrast
+ */
+static struct ovcamchip_regvals regvals_init_7x10[] = {
+ { 0x10, 0xff },
+ { 0x16, 0x03 },
+ { 0x28, 0x24 },
+ { 0x2b, 0xac },
+ { 0x12, 0x00 },
+ { 0x38, 0x81 },
+ { 0x28, 0x24 }, /* 0c */
+ { 0x0f, 0x85 }, /* lg's setting */
+ { 0x15, 0x01 },
+ { 0x20, 0x1c },
+ { 0x23, 0x2a },
+ { 0x24, 0x10 },
+ { 0x25, 0x8a },
+ { 0x26, 0xa2 },
+ { 0x27, 0xc2 },
+ { 0x2a, 0x04 },
+ { 0x2c, 0xfe },
+ { 0x2d, 0x93 },
+ { 0x30, 0x71 },
+ { 0x31, 0x60 },
+ { 0x32, 0x26 },
+ { 0x33, 0x20 },
+ { 0x34, 0x48 },
+ { 0x12, 0x24 },
+ { 0x11, 0x01 },
+ { 0x0c, 0x24 },
+ { 0x0d, 0x24 },
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* This initializes the OV7x10 camera chip and relevant variables. */
+static int ov7x10_init(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x10 *s;
+ int rc;
+
+ DDEBUG(4, &c->dev, "entered");
+
+ rc = ov_write_regvals(c, regvals_init_7x10);
+ if (rc < 0)
+ return rc;
+
+ ov->spriv = s = kmalloc(sizeof *s, GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ memset(s, 0, sizeof *s);
+
+ s->auto_brt = 1;
+ s->auto_exp = 1;
+
+ return rc;
+}
+
+static int ov7x10_free(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ kfree(ov->spriv);
+ return 0;
+}
+
+static int ov7x10_set_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x10 *s = ov->spriv;
+ int rc;
+ int v = ctl->value;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_write(c, REG_CNT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_write(c, REG_BRT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_write(c, REG_SAT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_write(c, REG_RED, 0xFF - (v >> 8));
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, REG_BLUE, v >> 8);
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_write(c, REG_EXP, v);
+ break;
+ case OVCAMCHIP_CID_FREQ:
+ {
+ int sixty = (v == 60);
+
+ rc = ov_write_mask(c, 0x2a, sixty?0x00:0x80, 0x80);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, 0x2b, sixty?0x00:0xac);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x13, 0x10, 0x10);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x13, 0x00, 0x10);
+ break;
+ }
+ case OVCAMCHIP_CID_BANDFILT:
+ rc = ov_write_mask(c, 0x2d, v?0x04:0x00, 0x04);
+ s->bandfilt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ rc = ov_write_mask(c, 0x2d, v?0x10:0x00, 0x10);
+ s->auto_brt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ rc = ov_write_mask(c, 0x29, v?0x00:0x80, 0x80);
+ s->auto_exp = v;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ rc = ov_write_mask(c, 0x12, v?0x40:0x00, 0x40);
+ s->mirror = v;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+out:
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, v, rc);
+ return rc;
+}
+
+static int ov7x10_get_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x10 *s = ov->spriv;
+ int rc = 0;
+ unsigned char val = 0;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_read(c, REG_CNT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_read(c, REG_BRT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_read(c, REG_SAT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_HUE:
+ rc = ov_read(c, REG_BLUE, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_read(c, REG_EXP, &val);
+ ctl->value = val;
+ break;
+ case OVCAMCHIP_CID_BANDFILT:
+ ctl->value = s->bandfilt;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ ctl->value = s->auto_brt;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ ctl->value = s->auto_exp;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ ctl->value = s->mirror;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, ctl->value, rc);
+ return rc;
+}
+
+static int ov7x10_mode_init(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int qvga = win->quarter;
+
+ /******** QVGA-specific regs ********/
+
+ ov_write(c, 0x14, qvga?0x24:0x04);
+
+ /******** Palette-specific regs ********/
+
+ if (win->format == VIDEO_PALETTE_GREY) {
+ ov_write_mask(c, 0x0e, 0x40, 0x40);
+ ov_write_mask(c, 0x13, 0x20, 0x20);
+ } else {
+ ov_write_mask(c, 0x0e, 0x00, 0x40);
+ ov_write_mask(c, 0x13, 0x00, 0x20);
+ }
+
+ /******** Clock programming ********/
+
+ ov_write(c, 0x11, win->clockdiv);
+
+ /******** Resolution-specific ********/
+
+ if (win->width == 640 && win->height == 480)
+ ov_write(c, 0x35, 0x9e);
+ else
+ ov_write(c, 0x35, 0x1e);
+
+ return 0;
+}
+
+static int ov7x10_set_window(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int ret, hwscale, vwscale;
+
+ ret = ov7x10_mode_init(c, win);
+ if (ret < 0)
+ return ret;
+
+ if (win->quarter) {
+ hwscale = 1;
+ vwscale = 0;
+ } else {
+ hwscale = 2;
+ vwscale = 1;
+ }
+
+ ov_write(c, 0x17, HWSBASE + (win->x >> hwscale));
+ ov_write(c, 0x18, HWEBASE + ((win->x + win->width) >> hwscale));
+ ov_write(c, 0x19, VWSBASE + (win->y >> vwscale));
+ ov_write(c, 0x1a, VWEBASE + ((win->y + win->height) >> vwscale));
+
+ return 0;
+}
+
+static int ov7x10_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case OVCAMCHIP_CMD_S_CTRL:
+ return ov7x10_set_control(c, arg);
+ case OVCAMCHIP_CMD_G_CTRL:
+ return ov7x10_get_control(c, arg);
+ case OVCAMCHIP_CMD_S_MODE:
+ return ov7x10_set_window(c, arg);
+ default:
+ DDEBUG(2, &c->dev, "command not supported: %d", cmd);
+ return -ENOIOCTLCMD;
+ }
+}
+
+struct ovcamchip_ops ov7x10_ops = {
+ .init = ov7x10_init,
+ .free = ov7x10_free,
+ .command = ov7x10_command,
+};
--- /dev/null
+/* OmniVision OV7620/OV7120 Camera Chip Support Code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * OV7620 fixes by Charl P. Botha <cpbotha@ieee.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+/* Registers */
+#define REG_GAIN 0x00 /* gain [5:0] */
+#define REG_BLUE 0x01 /* blue gain */
+#define REG_RED 0x02 /* red gain */
+#define REG_SAT 0x03 /* saturation */
+#define REG_BRT 0x06 /* Y brightness */
+#define REG_SHARP 0x07 /* analog sharpness */
+#define REG_BLUE_BIAS 0x0C /* WB blue ratio [5:0] */
+#define REG_RED_BIAS 0x0D /* WB red ratio [5:0] */
+#define REG_EXP 0x10 /* exposure */
+
+/* Default control settings. Values are in terms of V4L2 controls. */
+#define OV7120_DFL_BRIGHT 0x60
+#define OV7620_DFL_BRIGHT 0x60
+#define OV7120_DFL_SAT 0xb0
+#define OV7620_DFL_SAT 0xc0
+#define DFL_AUTO_EXP 1
+#define DFL_AUTO_GAIN 1
+#define OV7120_DFL_GAIN 0x00
+#define OV7620_DFL_GAIN 0x00
+/* NOTE: Since autoexposure is the default, these aren't programmed into the
+ * OV7x20 chip. They are just here because V4L2 expects a default */
+#define OV7120_DFL_EXP 0x7f
+#define OV7620_DFL_EXP 0x7f
+
+/* Window parameters */
+#define HWSBASE 0x2F /* From 7620.SET (spec is wrong) */
+#define HWEBASE 0x2F
+#define VWSBASE 0x05
+#define VWEBASE 0x05
+
+struct ov7x20 {
+ int auto_brt;
+ int auto_exp;
+ int auto_gain;
+ int backlight;
+ int bandfilt;
+ int mirror;
+};
+
+/* Contrast look-up table */
+static unsigned char ctab[] = {
+ 0x01, 0x05, 0x09, 0x11, 0x15, 0x35, 0x37, 0x57,
+ 0x5b, 0xa5, 0xa7, 0xc7, 0xc9, 0xcf, 0xef, 0xff
+};
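+
+/* The 16-bit contrast control value indexes this table by its top four bits
+ * (ctab[v >> 12] in ov7x20_set_v4l1_control below); ov7x20_lut_find()
+ * performs the reverse mapping for ov7x20_get_v4l1_control. */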
+
+/* Settings for (Black & White) OV7120 camera chip */
+static struct ovcamchip_regvals regvals_init_7120[] = {
+ { 0x12, 0x80 }, /* reset */
+ { 0x13, 0x00 }, /* Autoadjust off */
+ { 0x12, 0x20 }, /* Disable AWB */
+ { 0x13, DFL_AUTO_GAIN?0x01:0x00 }, /* Autoadjust on (if desired) */
+ { 0x00, OV7120_DFL_GAIN },
+ { 0x01, 0x80 },
+ { 0x02, 0x80 },
+ { 0x03, OV7120_DFL_SAT },
+ { 0x06, OV7120_DFL_BRIGHT },
+ { 0x07, 0x00 },
+ { 0x0c, 0x20 },
+ { 0x0d, 0x20 },
+ { 0x11, 0x01 },
+ { 0x14, 0x84 },
+ { 0x15, 0x01 },
+ { 0x16, 0x03 },
+ { 0x17, 0x2f },
+ { 0x18, 0xcf },
+ { 0x19, 0x06 },
+ { 0x1a, 0xf5 },
+ { 0x1b, 0x00 },
+ { 0x20, 0x08 },
+ { 0x21, 0x80 },
+ { 0x22, 0x80 },
+ { 0x23, 0x00 },
+ { 0x26, 0xa0 },
+ { 0x27, 0xfa },
+ { 0x28, 0x20 }, /* DON'T set bit 6. It is for the OV7620 only */
+ { 0x29, DFL_AUTO_EXP?0x00:0x80 },
+ { 0x2a, 0x10 },
+ { 0x2b, 0x00 },
+ { 0x2c, 0x88 },
+ { 0x2d, 0x95 },
+ { 0x2e, 0x80 },
+ { 0x2f, 0x44 },
+ { 0x60, 0x20 },
+ { 0x61, 0x02 },
+ { 0x62, 0x5f },
+ { 0x63, 0xd5 },
+ { 0x64, 0x57 },
+ { 0x65, 0x83 }, /* OV says "don't change this value" */
+ { 0x66, 0x55 },
+ { 0x67, 0x92 },
+ { 0x68, 0xcf },
+ { 0x69, 0x76 },
+ { 0x6a, 0x22 },
+ { 0x6b, 0xe2 },
+ { 0x6c, 0x40 },
+ { 0x6d, 0x48 },
+ { 0x6e, 0x80 },
+ { 0x6f, 0x0d },
+ { 0x70, 0x89 },
+ { 0x71, 0x00 },
+ { 0x72, 0x14 },
+ { 0x73, 0x54 },
+ { 0x74, 0xa0 },
+ { 0x75, 0x8e },
+ { 0x76, 0x00 },
+ { 0x77, 0xff },
+ { 0x78, 0x80 },
+ { 0x79, 0x80 },
+ { 0x7a, 0x80 },
+ { 0x7b, 0xe6 },
+ { 0x7c, 0x00 },
+ { 0x24, 0x3a },
+ { 0x25, 0x60 },
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* Settings for (color) OV7620 camera chip */
+static struct ovcamchip_regvals regvals_init_7620[] = {
+ { 0x12, 0x80 }, /* reset */
+ { 0x00, OV7620_DFL_GAIN },
+ { 0x01, 0x80 },
+ { 0x02, 0x80 },
+ { 0x03, OV7620_DFL_SAT },
+ { 0x06, OV7620_DFL_BRIGHT },
+ { 0x07, 0x00 },
+ { 0x0c, 0x24 },
+ { 0x0d, 0x24 },
+ { 0x11, 0x01 },
+ { 0x12, 0x24 },
+ { 0x13, DFL_AUTO_GAIN?0x01:0x00 },
+ { 0x14, 0x84 },
+ { 0x15, 0x01 },
+ { 0x16, 0x03 },
+ { 0x17, 0x2f },
+ { 0x18, 0xcf },
+ { 0x19, 0x06 },
+ { 0x1a, 0xf5 },
+ { 0x1b, 0x00 },
+ { 0x20, 0x18 },
+ { 0x21, 0x80 },
+ { 0x22, 0x80 },
+ { 0x23, 0x00 },
+ { 0x26, 0xa2 },
+ { 0x27, 0xea },
+ { 0x28, 0x20 },
+ { 0x29, DFL_AUTO_EXP?0x00:0x80 },
+ { 0x2a, 0x10 },
+ { 0x2b, 0x00 },
+ { 0x2c, 0x88 },
+ { 0x2d, 0x91 },
+ { 0x2e, 0x80 },
+ { 0x2f, 0x44 },
+ { 0x60, 0x27 },
+ { 0x61, 0x02 },
+ { 0x62, 0x5f },
+ { 0x63, 0xd5 },
+ { 0x64, 0x57 },
+ { 0x65, 0x83 },
+ { 0x66, 0x55 },
+ { 0x67, 0x92 },
+ { 0x68, 0xcf },
+ { 0x69, 0x76 },
+ { 0x6a, 0x22 },
+ { 0x6b, 0x00 },
+ { 0x6c, 0x02 },
+ { 0x6d, 0x44 },
+ { 0x6e, 0x80 },
+ { 0x6f, 0x1d },
+ { 0x70, 0x8b },
+ { 0x71, 0x00 },
+ { 0x72, 0x14 },
+ { 0x73, 0x54 },
+ { 0x74, 0x00 },
+ { 0x75, 0x8e },
+ { 0x76, 0x00 },
+ { 0x77, 0xff },
+ { 0x78, 0x80 },
+ { 0x79, 0x80 },
+ { 0x7a, 0x80 },
+ { 0x7b, 0xe2 },
+ { 0x7c, 0x00 },
+ { 0xff, 0xff }, /* END MARKER */
+};
+
+/* Returns index into the specified look-up table, with 'n' elements, for which
+ * the value is greater than or equal to "val". If a match isn't found, (n-1)
+ * is returned. The entries in the table must be in ascending order. */
+static inline int ov7x20_lut_find(unsigned char lut[], int n, unsigned char val)
+{
+ int i = 0;
+
+	while (i < n - 1 && lut[i] < val)
+		i++;
+
+ return i;
+}
+
+/* This initializes the OV7x20 camera chip and relevant variables. */
+static int ov7x20_init(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x20 *s;
+ int rc;
+
+ DDEBUG(4, &c->dev, "entered");
+
+ if (ov->mono)
+ rc = ov_write_regvals(c, regvals_init_7120);
+ else
+ rc = ov_write_regvals(c, regvals_init_7620);
+
+ if (rc < 0)
+ return rc;
+
+ ov->spriv = s = kmalloc(sizeof *s, GFP_KERNEL);
+ if (!s)
+ return -ENOMEM;
+ memset(s, 0, sizeof *s);
+
+ s->auto_brt = 1;
+ s->auto_exp = DFL_AUTO_EXP;
+ s->auto_gain = DFL_AUTO_GAIN;
+
+ return 0;
+}
+
+static int ov7x20_free(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ kfree(ov->spriv);
+ return 0;
+}
+
+static int ov7x20_set_v4l1_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x20 *s = ov->spriv;
+ int rc;
+ int v = ctl->value;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ {
+ /* Use Y gamma control instead. Bit 0 enables it. */
+ rc = ov_write(c, 0x64, ctab[v >> 12]);
+ break;
+ }
+ case OVCAMCHIP_CID_BRIGHT:
+ /* 7620 doesn't like manual changes when in auto mode */
+ if (!s->auto_brt)
+ rc = ov_write(c, REG_BRT, v >> 8);
+ else
+ rc = 0;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_write(c, REG_SAT, v >> 8);
+ break;
+ case OVCAMCHIP_CID_EXP:
+ if (!s->auto_exp)
+ rc = ov_write(c, REG_EXP, v);
+ else
+ rc = -EBUSY;
+ break;
+ case OVCAMCHIP_CID_FREQ:
+ {
+ int sixty = (v == 60);
+
+ rc = ov_write_mask(c, 0x2a, sixty?0x00:0x80, 0x80);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write(c, 0x2b, sixty?0x00:0xac);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x76, 0x01, 0x01);
+ break;
+ }
+ case OVCAMCHIP_CID_BANDFILT:
+ rc = ov_write_mask(c, 0x2d, v?0x04:0x00, 0x04);
+ s->bandfilt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ rc = ov_write_mask(c, 0x2d, v?0x10:0x00, 0x10);
+ s->auto_brt = v;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ rc = ov_write_mask(c, 0x13, v?0x01:0x00, 0x01);
+ s->auto_exp = v;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ {
+ rc = ov_write_mask(c, 0x68, v?0xe0:0xc0, 0xe0);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x29, v?0x08:0x00, 0x08);
+ if (rc < 0)
+ goto out;
+
+ rc = ov_write_mask(c, 0x28, v?0x02:0x00, 0x02);
+ s->backlight = v;
+ break;
+ }
+ case OVCAMCHIP_CID_MIRROR:
+ rc = ov_write_mask(c, 0x12, v?0x40:0x00, 0x40);
+ s->mirror = v;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+out:
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, v, rc);
+ return rc;
+}
+
+static int ov7x20_get_v4l1_control(struct i2c_client *c,
+ struct ovcamchip_control *ctl)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ struct ov7x20 *s = ov->spriv;
+ int rc = 0;
+ unsigned char val = 0;
+
+ switch (ctl->id) {
+ case OVCAMCHIP_CID_CONT:
+ rc = ov_read(c, 0x64, &val);
+ ctl->value = ov7x20_lut_find(ctab, 16, val) << 12;
+ break;
+ case OVCAMCHIP_CID_BRIGHT:
+ rc = ov_read(c, REG_BRT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_SAT:
+ rc = ov_read(c, REG_SAT, &val);
+ ctl->value = val << 8;
+ break;
+ case OVCAMCHIP_CID_EXP:
+ rc = ov_read(c, REG_EXP, &val);
+ ctl->value = val;
+ break;
+ case OVCAMCHIP_CID_BANDFILT:
+ ctl->value = s->bandfilt;
+ break;
+ case OVCAMCHIP_CID_AUTOBRIGHT:
+ ctl->value = s->auto_brt;
+ break;
+ case OVCAMCHIP_CID_AUTOEXP:
+ ctl->value = s->auto_exp;
+ break;
+ case OVCAMCHIP_CID_BACKLIGHT:
+ ctl->value = s->backlight;
+ break;
+ case OVCAMCHIP_CID_MIRROR:
+ ctl->value = s->mirror;
+ break;
+ default:
+ DDEBUG(2, &c->dev, "control not supported: %d", ctl->id);
+ return -EPERM;
+ }
+
+ DDEBUG(3, &c->dev, "id=%d, arg=%d, rc=%d", ctl->id, ctl->value, rc);
+ return rc;
+}
+
+static int ov7x20_mode_init(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int qvga = win->quarter;
+
+ /******** QVGA-specific regs ********/
+ ov_write_mask(c, 0x14, qvga?0x20:0x00, 0x20);
+ ov_write_mask(c, 0x28, qvga?0x00:0x20, 0x20);
+ ov_write(c, 0x24, qvga?0x20:0x3a);
+ ov_write(c, 0x25, qvga?0x30:0x60);
+ ov_write_mask(c, 0x2d, qvga?0x40:0x00, 0x40);
+ if (!ov->mono)
+ ov_write_mask(c, 0x67, qvga?0xf0:0x90, 0xf0);
+ ov_write_mask(c, 0x74, qvga?0x20:0x00, 0x20);
+
+ /******** Clock programming ********/
+
+ ov_write(c, 0x11, win->clockdiv);
+
+ return 0;
+}
+
+static int ov7x20_set_window(struct i2c_client *c, struct ovcamchip_window *win)
+{
+ int ret, hwscale, vwscale;
+
+ ret = ov7x20_mode_init(c, win);
+ if (ret < 0)
+ return ret;
+
+ if (win->quarter) {
+ hwscale = 1;
+ vwscale = 0;
+ } else {
+ hwscale = 2;
+ vwscale = 1;
+ }
+
+ ov_write(c, 0x17, HWSBASE + (win->x >> hwscale));
+ ov_write(c, 0x18, HWEBASE + ((win->x + win->width) >> hwscale));
+ ov_write(c, 0x19, VWSBASE + (win->y >> vwscale));
+ ov_write(c, 0x1a, VWEBASE + ((win->y + win->height) >> vwscale));
+
+ return 0;
+}
+
+static int ov7x20_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ switch (cmd) {
+ case OVCAMCHIP_CMD_S_CTRL:
+ return ov7x20_set_v4l1_control(c, arg);
+ case OVCAMCHIP_CMD_G_CTRL:
+ return ov7x20_get_v4l1_control(c, arg);
+ case OVCAMCHIP_CMD_S_MODE:
+ return ov7x20_set_window(c, arg);
+ default:
+ DDEBUG(2, &c->dev, "command not supported: %d", cmd);
+ return -ENOIOCTLCMD;
+ }
+}
+
+struct ovcamchip_ops ov7x20_ops = {
+ .init = ov7x20_init,
+ .free = ov7x20_free,
+ .command = ov7x20_command,
+};
--- /dev/null
+/* Shared Code for OmniVision Camera Chip Drivers
+ *
+ * Copyright (c) 2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include "ovcamchip_priv.h"
+
+#define DRIVER_VERSION "v2.27 for Linux 2.6"
+#define DRIVER_AUTHOR "Mark McClelland <mark@alpha.dyndns.org>"
+#define DRIVER_DESC "OV camera chip I2C driver"
+
+#define PINFO(fmt, args...) printk(KERN_INFO "ovcamchip: " fmt "\n" , ## args)
+#define PERROR(fmt, args...) printk(KERN_ERR "ovcamchip: " fmt "\n" , ## args)
+
+#ifdef DEBUG
+int ovcamchip_debug = 0;
+static int debug;
+module_param(debug, int, 0);
+MODULE_PARM_DESC(debug,
+ "Debug level: 0=none, 1=inits, 2=warning, 3=config, 4=functions, 5=all");
+#endif
+
+/* By default, let bridge driver tell us if chip is monochrome. mono=0
+ * will ignore that and always treat chips as color. mono=1 will force
+ * monochrome mode for all chips. */
+static int mono = -1;
+module_param(mono, int, 0);
+MODULE_PARM_DESC(mono,
+ "1=chips are monochrome (OVx1xx), 0=force color, -1=autodetect (default)");
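+
+/* A hypothetical invocation forcing monochrome handling with verbose
+ * debugging (module and parameter names as defined in this file):
+ *
+ *	modprobe ovcamchip mono=1 debug=3
+ */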
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+
+/* Registers common to all chips, that are needed for detection */
+#define GENERIC_REG_ID_HIGH 0x1C /* manufacturer ID MSB */
+#define GENERIC_REG_ID_LOW 0x1D /* manufacturer ID LSB */
+#define GENERIC_REG_COM_I 0x29 /* misc ID bits */
+
+extern struct ovcamchip_ops ov6x20_ops;
+extern struct ovcamchip_ops ov6x30_ops;
+extern struct ovcamchip_ops ov7x10_ops;
+extern struct ovcamchip_ops ov7x20_ops;
+extern struct ovcamchip_ops ov76be_ops;
+
+static char *chip_names[NUM_CC_TYPES] = {
+ [CC_UNKNOWN] = "Unknown chip",
+ [CC_OV76BE] = "OV76BE",
+ [CC_OV7610] = "OV7610",
+ [CC_OV7620] = "OV7620",
+ [CC_OV7620AE] = "OV7620AE",
+ [CC_OV6620] = "OV6620",
+ [CC_OV6630] = "OV6630",
+ [CC_OV6630AE] = "OV6630AE",
+ [CC_OV6630AF] = "OV6630AF",
+};
+
+/* Forward declarations */
+static struct i2c_driver driver;
+static struct i2c_client client_template;
+
+/* ----------------------------------------------------------------------- */
+
+int ov_write_regvals(struct i2c_client *c, struct ovcamchip_regvals *rvals)
+{
+ int rc;
+
+ while (rvals->reg != 0xff) {
+ rc = ov_write(c, rvals->reg, rvals->val);
+ if (rc < 0)
+ return rc;
+ rvals++;
+ }
+
+ return 0;
+}
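+
+/* Usage sketch (hypothetical table): the walk stops at the first entry whose
+ * reg field is 0xff, so every table must end with an { 0xff, 0xff }
+ * terminator, as the per-chip init tables do:
+ *
+ *	static struct ovcamchip_regvals example_regvals[] = {
+ *		{ 0x12, 0x80 },
+ *		{ 0xff, 0xff },
+ *	};
+ *	rc = ov_write_regvals(c, example_regvals);
+ */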
+
+/* Writes bits at positions specified by mask to an I2C reg. Bits that are in
+ * the same position as 1's in "mask" are cleared and set to "value". Bits
+ * that are in the same position as 0's in "mask" are preserved, regardless
+ * of their respective state in "value".
+ */
+int ov_write_mask(struct i2c_client *c,
+ unsigned char reg,
+ unsigned char value,
+ unsigned char mask)
+{
+ int rc;
+ unsigned char oldval, newval;
+
+ if (mask == 0xff) {
+ newval = value;
+ } else {
+ rc = ov_read(c, reg, &oldval);
+ if (rc < 0)
+ return rc;
+
+ oldval &= (~mask); /* Clear the masked bits */
+ value &= mask; /* Enforce mask on value */
+ newval = oldval | value; /* Set the desired bits */
+ }
+
+ return ov_write(c, reg, newval);
+}
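+
+/* For illustration: this is how the chip drivers flip a single control bit.
+ * Setting bit 6 of register 0x12 (the mirror bit used by the OV7xx0 code)
+ * while preserving the other bits:
+ *
+ *	rc = ov_write_mask(c, 0x12, 0x40, 0x40);
+ */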
+
+/* ----------------------------------------------------------------------- */
+
+/* Reset the chip and ensure that I2C is synchronized. Returns <0 if failure.
+ */
+static int init_camchip(struct i2c_client *c)
+{
+ int i, success;
+ unsigned char high, low;
+
+ /* Reset the chip */
+ ov_write(c, 0x12, 0x80);
+
+ /* Wait for it to initialize */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(1 + 150 * HZ / 1000);
+
+ for (i = 0, success = 0; i < I2C_DETECT_RETRIES && !success; i++) {
+ if (ov_read(c, GENERIC_REG_ID_HIGH, &high) >= 0) {
+ if (ov_read(c, GENERIC_REG_ID_LOW, &low) >= 0) {
+ if (high == 0x7F && low == 0xA2) {
+ success = 1;
+ continue;
+ }
+ }
+ }
+
+ /* Reset the chip */
+ ov_write(c, 0x12, 0x80);
+
+ /* Wait for it to initialize */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(1 + 150 * HZ / 1000);
+
+ /* Dummy read to sync I2C */
+ ov_read(c, 0x00, &low);
+ }
+
+ if (!success)
+ return -EIO;
+
+ PDEBUG(1, "I2C synced in %d attempt(s)", i);
+
+ return 0;
+}
+
+/* This detects the OV7610, OV7620, or OV76BE chip. */
+static int ov7xx0_detect(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+ unsigned char val;
+
+ PDEBUG(4, "");
+
+ /* Detect chip (sub)type */
+ rc = ov_read(c, GENERIC_REG_COM_I, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov7xx0 type");
+ return rc;
+ }
+
+ if ((val & 3) == 3) {
+ PINFO("Camera chip is an OV7610");
+ ov->subtype = CC_OV7610;
+ } else if ((val & 3) == 1) {
+ rc = ov_read(c, 0x15, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov7xx0 type");
+ return rc;
+ }
+
+ if (val & 1) {
+ PINFO("Camera chip is an OV7620AE");
+ /* OV7620 is a close enough match for now. There are
+ * some definite differences though, so this should be
+ * fixed */
+ ov->subtype = CC_OV7620;
+ } else {
+ PINFO("Camera chip is an OV76BE");
+ ov->subtype = CC_OV76BE;
+ }
+ } else if ((val & 3) == 0) {
+ PINFO("Camera chip is an OV7620");
+ ov->subtype = CC_OV7620;
+ } else {
+ PERROR("Unknown camera chip version: %d", val & 3);
+ return -ENOSYS;
+ }
+
+ if (ov->subtype == CC_OV76BE)
+ ov->sops = &ov76be_ops;
+ else if (ov->subtype == CC_OV7620)
+ ov->sops = &ov7x20_ops;
+ else
+ ov->sops = &ov7x10_ops;
+
+ return 0;
+}
+
+/* This detects the OV6620, OV6630, OV6630AE, or OV6630AF chip. */
+static int ov6xx0_detect(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+ unsigned char val;
+
+ PDEBUG(4, "");
+
+ /* Detect chip (sub)type */
+ rc = ov_read(c, GENERIC_REG_COM_I, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov6xx0 type");
+		return rc;
+ }
+
+ if ((val & 3) == 0) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630");
+ } else if ((val & 3) == 1) {
+ ov->subtype = CC_OV6620;
+ PINFO("Camera chip is an OV6620");
+ } else if ((val & 3) == 2) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630AE");
+ } else if ((val & 3) == 3) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630AF");
+ }
+
+ if (ov->subtype == CC_OV6620)
+ ov->sops = &ov6x20_ops;
+ else
+ ov->sops = &ov6x30_ops;
+
+ return 0;
+}
+
+static int ovcamchip_detect(struct i2c_client *c)
+{
+ /* Ideally we would just try a single register write and see if it NAKs.
+ * That isn't possible since the OV518 can't report I2C transaction
+ * failures. So, we have to try to initialize the chip (i.e. reset it
+ * and check the ID registers) to detect its presence. */
+
+ /* Test for 7xx0 */
+	PDEBUG(3, "Testing for OV7xx0");
+ c->addr = OV7xx0_SID;
+ if (init_camchip(c) < 0) {
+ /* Test for 6xx0 */
+		PDEBUG(3, "Testing for OV6xx0");
+ c->addr = OV6xx0_SID;
+ if (init_camchip(c) < 0) {
+ return -ENODEV;
+ } else {
+ if (ov6xx0_detect(c) < 0) {
+ PERROR("Failed to init OV6xx0");
+ return -EIO;
+ }
+ }
+ } else {
+ if (ov7xx0_detect(c) < 0) {
+ PERROR("Failed to init OV7xx0");
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
+
+/* ----------------------------------------------------------------------- */
+
+static int ovcamchip_attach(struct i2c_adapter *adap)
+{
+ int rc = 0;
+ struct ovcamchip *ov;
+ struct i2c_client *c;
+
+ /* I2C is not a PnP bus, so we can never be certain that we're talking
+ * to the right chip. To prevent damage to EEPROMS and such, only
+ * attach to adapters that are known to contain OV camera chips. */
+
+ switch (adap->id) {
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV511):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OVFX2):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_W9968CF):
+ PDEBUG(1, "Adapter ID 0x%06x accepted", adap->id);
+ break;
+ default:
+ PDEBUG(1, "Adapter ID 0x%06x rejected", adap->id);
+ return -ENODEV;
+ }
+
+ c = kmalloc(sizeof *c, GFP_KERNEL);
+ if (!c) {
+ rc = -ENOMEM;
+ goto no_client;
+ }
+ memcpy(c, &client_template, sizeof *c);
+ c->adapter = adap;
+ strcpy(i2c_clientname(c), "OV????");
+
+ ov = kmalloc(sizeof *ov, GFP_KERNEL);
+ if (!ov) {
+ rc = -ENOMEM;
+ goto no_ov;
+ }
+ memset(ov, 0, sizeof *ov);
+ i2c_set_clientdata(c, ov);
+
+ rc = ovcamchip_detect(c);
+ if (rc < 0)
+ goto error;
+
+ strcpy(i2c_clientname(c), chip_names[ov->subtype]);
+
+ PDEBUG(1, "Camera chip detection complete");
+
+ i2c_attach_client(c);
+
+ return rc;
+error:
+ kfree(ov);
+no_ov:
+ kfree(c);
+no_client:
+ PDEBUG(1, "returning %d", rc);
+ return rc;
+}
+
+static int ovcamchip_detach(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+
+ rc = ov->sops->free(c);
+ if (rc < 0)
+ return rc;
+
+ i2c_detach_client(c);
+
+ kfree(ov);
+ kfree(c);
+ return 0;
+}
+
+static int ovcamchip_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ if (!ov->initialized &&
+ cmd != OVCAMCHIP_CMD_Q_SUBTYPE &&
+ cmd != OVCAMCHIP_CMD_INITIALIZE) {
+ dev_err(&c->dev, "ERROR: Camera chip not initialized yet!\n");
+ return -EPERM;
+ }
+
+ switch (cmd) {
+ case OVCAMCHIP_CMD_Q_SUBTYPE:
+ {
+ *(int *)arg = ov->subtype;
+ return 0;
+ }
+ case OVCAMCHIP_CMD_INITIALIZE:
+ {
+ int rc;
+
+ if (mono == -1)
+ ov->mono = *(int *)arg;
+ else
+ ov->mono = mono;
+
+ if (ov->mono) {
+ if (ov->subtype != CC_OV7620)
+ dev_warn(&c->dev, "Warning: Monochrome not "
+ "implemented for this chip\n");
+ else
+ dev_info(&c->dev, "Initializing chip as "
+ "monochrome\n");
+ }
+
+ rc = ov->sops->init(c);
+ if (rc < 0)
+ return rc;
+
+ ov->initialized = 1;
+ return 0;
+ }
+ default:
+ return ov->sops->command(c, cmd, arg);
+ }
+}
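+
+/* Sketch of the expected call sequence from a bridge driver (hypothetical;
+ * real callers go through the attached client's driver->command hook). The
+ * chip rejects everything except OVCAMCHIP_CMD_Q_SUBTYPE until it has been
+ * initialized:
+ *
+ *	int mono = 0, subtype;
+ *
+ *	ovcamchip_command(c, OVCAMCHIP_CMD_Q_SUBTYPE, &subtype);
+ *	ovcamchip_command(c, OVCAMCHIP_CMD_INITIALIZE, &mono);
+ */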
+
+/* ----------------------------------------------------------------------- */
+
+static struct i2c_driver driver = {
+ .owner = THIS_MODULE,
+ .name = "ovcamchip",
+ .id = I2C_DRIVERID_OVCAMCHIP,
+ .class = I2C_CLASS_CAM_DIGITAL,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = ovcamchip_attach,
+ .detach_client = ovcamchip_detach,
+ .command = ovcamchip_command,
+};
+
+static struct i2c_client client_template = {
+ I2C_DEVNAME("(unset)"),
+ .id = -1,
+ .driver = &driver,
+};
+
+static int __init ovcamchip_init(void)
+{
+#ifdef DEBUG
+ ovcamchip_debug = debug;
+#endif
+
+ PINFO(DRIVER_VERSION " : " DRIVER_DESC);
+ return i2c_add_driver(&driver);
+}
+
+static void __exit ovcamchip_exit(void)
+{
+ i2c_del_driver(&driver);
+}
+
+module_init(ovcamchip_init);
+module_exit(ovcamchip_exit);
--- /dev/null
+/* OmniVision* camera chip driver private definitions for core code and
+ * chip-specific code
+ *
+ * Copyright (c) 1999-2004 Mark McClelland
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ *
+ * * OmniVision is a trademark of OmniVision Technologies, Inc. This driver
+ * is not sponsored or developed by them.
+ */
+
+#ifndef __LINUX_OVCAMCHIP_PRIV_H
+#define __LINUX_OVCAMCHIP_PRIV_H
+
+#include <media/ovcamchip.h>
+
+#ifdef DEBUG
+extern int ovcamchip_debug;
+#endif
+
+#define PDEBUG(level, fmt, args...) \
+	do { \
+		if (ovcamchip_debug >= (level)) \
+			pr_debug("[%s:%d] " fmt "\n", \
+				 __FUNCTION__, __LINE__ , ## args); \
+	} while (0)
+
+#define DDEBUG(level, dev, fmt, args...) \
+	do { \
+		if (ovcamchip_debug >= (level)) \
+			dev_dbg(dev, "[%s:%d] " fmt "\n", \
+				__FUNCTION__, __LINE__ , ## args); \
+	} while (0)
+
+/* Number of times to retry chip detection. Increase this if you are getting
+ * "Failed to init camera chip" */
+#define I2C_DETECT_RETRIES 10
+
+struct ovcamchip_regvals {
+ unsigned char reg;
+ unsigned char val;
+};
+
+struct ovcamchip_ops {
+ int (*init)(struct i2c_client *);
+ int (*free)(struct i2c_client *);
+ int (*command)(struct i2c_client *, unsigned int, void *);
+};
+
+struct ovcamchip {
+ struct ovcamchip_ops *sops;
+ void *spriv; /* Private data for OV7x10.c etc... */
+ int subtype; /* = SEN_OV7610 etc... */
+ int mono; /* Monochrome chip? (invalid until init) */
+ int initialized; /* OVCAMCHIP_CMD_INITIALIZE was successful */
+};
+
+/* --------------------------------- */
+/* I2C I/O */
+/* --------------------------------- */
+
+static inline int ov_read(struct i2c_client *c, unsigned char reg,
+ unsigned char *value)
+{
+ int rc;
+
+ rc = i2c_smbus_read_byte_data(c, reg);
+ *value = (unsigned char) rc;
+ return rc;
+}
+
+static inline int ov_write(struct i2c_client *c, unsigned char reg,
+ unsigned char value )
+{
+ return i2c_smbus_write_byte_data(c, reg, value);
+}
+
+/* --------------------------------- */
+/* FUNCTION PROTOTYPES */
+/* --------------------------------- */
+
+/* Functions in ovcamchip_core.c */
+
+extern int ov_write_regvals(struct i2c_client *c,
+ struct ovcamchip_regvals *rvals);
+
+extern int ov_write_mask(struct i2c_client *c, unsigned char reg,
+ unsigned char value, unsigned char mask);
+
+#endif
--- /dev/null
+/*
+ * Common Flash Interface support:
+ * Generic utility functions not dependent on command set
+ *
+ * Copyright (C) 2002 Red Hat
+ * Copyright (C) 2003 STMicroelectronics Limited
+ *
+ * This code is covered by the GPL.
+ *
+ * $Id: cfi_util.c,v 1.4 2004/07/14 08:38:44 dwmw2 Exp $
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <asm/io.h>
+#include <asm/byteorder.h>
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/cfi.h>
+#include <linux/mtd/compatmac.h>
+
+struct cfi_extquery *
+cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* name)
+{
+ struct cfi_private *cfi = map->fldrv_priv;
+ __u32 base = 0; // cfi->chips[0].start;
+ int ofs_factor = cfi->interleave * cfi->device_type;
+ int i;
+ struct cfi_extquery *extp = NULL;
+
+	if (!adr)
+		goto out;
+
+	printk(" %s Extended Query Table at 0x%4.4X\n", name, adr);
+
+ /* Switch it into Query Mode */
+ cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
+
+ extp = kmalloc(size, GFP_KERNEL);
+ if (!extp) {
+ printk(KERN_ERR "Failed to allocate memory\n");
+ goto out;
+ }
+
+ /* Read in the Extended Query Table */
+ for (i=0; i<size; i++) {
+ ((unsigned char *)extp)[i] =
+ cfi_read_query(map, base+((adr+i)*ofs_factor));
+ }
+
+ if (extp->MajorVersion != '1' ||
+ (extp->MinorVersion < '0' || extp->MinorVersion > '3')) {
+ printk(KERN_WARNING " Unknown %s Extended Query "
+ "version %c.%c.\n", name, extp->MajorVersion,
+ extp->MinorVersion);
+ kfree(extp);
+ extp = NULL;
+ goto out;
+ }
+
+out:
+ /* Make sure it's in read mode */
+ cfi_send_gen_cmd(0xf0, 0, base, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0xff, 0, base, map, cfi, cfi->device_type, NULL);
+
+ return extp;
+}
+
+EXPORT_SYMBOL(cfi_read_pri);
+
+void cfi_fixup(struct map_info *map, struct cfi_fixup* fixups)
+{
+ struct cfi_private *cfi = map->fldrv_priv;
+ struct cfi_fixup *f;
+
+ for (f=fixups; f->fixup; f++) {
+ if (((f->mfr == CFI_MFR_ANY) || (f->mfr == cfi->mfr)) &&
+ ((f->id == CFI_ID_ANY) || (f->id == cfi->id))) {
+ f->fixup(map, f->param);
+ }
+ }
+}
+
+EXPORT_SYMBOL(cfi_fixup);
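+
+/* Usage sketch (hypothetical fixup table): entries are scanned until one with
+ * a NULL fixup hook is reached, and every entry matching the probed
+ * manufacturer/device id has its hook called; CFI_MFR_ANY and CFI_ID_ANY act
+ * as wildcards:
+ *
+ *	static void fixup_example(struct map_info *map, void *param)
+ *	{
+ *		...
+ *	}
+ *
+ *	static struct cfi_fixup fixup_table[] = {
+ *		{ CFI_MFR_ANY, CFI_ID_ANY, fixup_example, NULL },
+ *		{ 0, 0, NULL, NULL }
+ *	};
+ *
+ *	cfi_fixup(map, fixup_table);
+ */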
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/**
+ *
+ * $Id: phram.c,v 1.1 2003/08/21 17:52:30 joern Exp $
+ *
+ * Copyright (c) Jochen Schaeuble <psionic@psionic.de>
+ * 07/2003 rewritten by Joern Engel <joern@wh.fh-wedel.de>
+ *
+ * DISCLAIMER: This driver makes use of Rusty's excellent module code,
+ * so it will not work for 2.4 without changes and it won't work for 2.4
+ * as a module without major changes. Oh well!
+ *
+ * Usage:
+ *
+ * one command line parameter per device, each in the form:
+ * phram=<name>,<start>,<len>
+ * <name> may be up to 63 characters.
+ * <start> and <len> can be octal, decimal or hexadecimal. If followed
+ * by "k", "M" or "G", the numbers will be interpreted as kilo, mega or
+ * gigabytes.
+ *
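+ * A hypothetical example: phram=test,0x8000000,2M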
+ */
+
+#include <asm/io.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mtd/mtd.h>
+
+#define ERROR(fmt, args...) printk(KERN_ERR "phram: " fmt , ## args)
+
+struct phram_mtd_list {
+ struct list_head list;
+ struct mtd_info *mtdinfo;
+};
+
+static LIST_HEAD(phram_list);
+
+
+
+int phram_erase(struct mtd_info *mtd, struct erase_info *instr)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (instr->addr + instr->len > mtd->size)
+ return -EINVAL;
+
+ memset(start + instr->addr, 0xff, instr->len);
+
+ /* This'll catch a few races. Free the thing before returning :)
+ * I don't feel at all ashamed. This kind of thing is possible anyway
+ * with flash, but unlikely.
+ */
+
+ instr->state = MTD_ERASE_DONE;
+
+ if (instr->callback)
+ (*(instr->callback))(instr);
+ else
+ kfree(instr);
+
+ return 0;
+}
+
+int phram_point(struct mtd_info *mtd, loff_t from, size_t len,
+ size_t *retlen, u_char **mtdbuf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (from + len > mtd->size)
+ return -EINVAL;
+
+ *mtdbuf = start + from;
+ *retlen = len;
+ return 0;
+}
+
+void phram_unpoint(struct mtd_info *mtd, u_char *addr, loff_t from, size_t len)
+{
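+	/* Nothing to undo: phram_point hands out direct pointers into RAM */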
+}
+
+int phram_read(struct mtd_info *mtd, loff_t from, size_t len,
+ size_t *retlen, u_char *buf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (from + len > mtd->size)
+ return -EINVAL;
+
+ memcpy(buf, start + from, len);
+
+ *retlen = len;
+ return 0;
+}
+
+int phram_write(struct mtd_info *mtd, loff_t to, size_t len,
+ size_t *retlen, const u_char *buf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (to + len > mtd->size)
+ return -EINVAL;
+
+ memcpy(start + to, buf, len);
+
+ *retlen = len;
+ return 0;
+}
+
+
+
+static void unregister_devices(void)
+{
+ struct phram_mtd_list *this;
+
+ list_for_each_entry(this, &phram_list, list) {
+ del_mtd_device(this->mtdinfo);
+ iounmap(this->mtdinfo->priv);
+ kfree(this->mtdinfo);
+ kfree(this);
+ }
+}
+
+static int register_device(char *name, unsigned long start, unsigned long len)
+{
+ struct phram_mtd_list *new;
+ int ret = -ENOMEM;
+
+ new = kmalloc(sizeof(*new), GFP_KERNEL);
+ if (!new)
+ goto out0;
+
+ new->mtdinfo = kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
+ if (!new->mtdinfo)
+ goto out1;
+
+ memset(new->mtdinfo, 0, sizeof(struct mtd_info));
+
+ ret = -EIO;
+ new->mtdinfo->priv = ioremap(start, len);
+ if (!new->mtdinfo->priv) {
+ ERROR("ioremap failed\n");
+ goto out2;
+ }
+
+
+ new->mtdinfo->name = name;
+ new->mtdinfo->size = len;
+ new->mtdinfo->flags = MTD_CAP_RAM | MTD_ERASEABLE | MTD_VOLATILE;
+ new->mtdinfo->erase = phram_erase;
+ new->mtdinfo->point = phram_point;
+ new->mtdinfo->unpoint = phram_unpoint;
+ new->mtdinfo->read = phram_read;
+ new->mtdinfo->write = phram_write;
+ new->mtdinfo->owner = THIS_MODULE;
+ new->mtdinfo->type = MTD_RAM;
+ new->mtdinfo->erasesize = 0x0;
+
+ ret = -EAGAIN;
+ if (add_mtd_device(new->mtdinfo)) {
+ ERROR("Failed to register new device\n");
+ goto out3;
+ }
+
+ list_add_tail(&new->list, &phram_list);
+ return 0;
+
+out3:
+ iounmap(new->mtdinfo->priv);
+out2:
+ kfree(new->mtdinfo);
+out1:
+ kfree(new);
+out0:
+ return ret;
+}
+
+static unsigned long ustrtoul(const char *cp, char **endp, unsigned int base)
+{
+	unsigned long result = simple_strtoul(cp, endp, base);
+
+	switch (**endp) {
+	case 'G':
+		result *= 1024;
+		/* fall through */
+	case 'M':
+		result *= 1024;
+		/* fall through */
+	case 'k':
+		result *= 1024;
+		(*endp)++;	/* consume the size suffix */
+	}
+	return result;
+}
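+
+/* For example, ustrtoul("4M", &endp, 0) yields 4194304 with *endp left at the
+ * terminating NUL, so parse_num32() below accepts suffixed sizes. */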
+
+static int parse_num32(uint32_t *num32, const char *token)
+{
+ char *endp;
+ unsigned long n;
+
+ n = ustrtoul(token, &endp, 0);
+ if (*endp)
+ return -EINVAL;
+
+ *num32 = n;
+ return 0;
+}
+
+static int parse_name(char **pname, const char *token)
+{
+ size_t len;
+ char *name;
+
+ len = strlen(token) + 1;
+ if (len > 64)
+ return -ENOSPC;
+
+ name = kmalloc(len, GFP_KERNEL);
+ if (!name)
+ return -ENOMEM;
+
+ strcpy(name, token);
+
+ *pname = name;
+ return 0;
+}
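+
+/* Note: len includes the terminating NUL, so names are limited to 63
+ * characters, matching the usage comment at the top of this file. */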
+
+#define parse_err(fmt, args...) do { \
+ ERROR(fmt , ## args); \
+ return 0; \
+} while (0)
+
+static int phram_setup(const char *val, struct kernel_param *kp)
+{
+ char buf[64+12+12], *str = buf;
+ char *token[3];
+ char *name;
+ uint32_t start;
+ uint32_t len;
+ int i, ret;
+
+	if (strnlen(val, sizeof(buf)) >= sizeof(buf))
+ parse_err("parameter too long\n");
+
+ strcpy(str, val);
+
+ for (i=0; i<3; i++)
+ token[i] = strsep(&str, ",");
+
+ if (str)
+ parse_err("too many arguments\n");
+
+ if (!token[2])
+ parse_err("not enough arguments\n");
+
+ ret = parse_name(&name, token[0]);
+ if (ret == -ENOMEM)
+ parse_err("out of memory\n");
+ if (ret == -ENOSPC)
+ parse_err("name too long\n");
+ if (ret)
+ return 0;
+
+ ret = parse_num32(&start, token[1]);
+ if (ret)
+ parse_err("illegal start address\n");
+
+ ret = parse_num32(&len, token[2]);
+ if (ret)
+ parse_err("illegal device length\n");
+
+ register_device(name, start, len);
+
+ return 0;
+}
+
+module_param_call(phram, phram_setup, NULL, NULL, 000);
+MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>\"");
+
+/*
+ * Just for compatibility with slram, this is horrible and should go someday.
+ */
+static int __init slram_setup(const char *val, struct kernel_param *kp)
+{
+ char buf[256], *str = buf;
+
+ if (!val || !val[0])
+ parse_err("no arguments to \"slram=\"\n");
+
+	if (strnlen(val, sizeof(buf)) >= sizeof(buf))
+ parse_err("parameter too long\n");
+
+ strcpy(str, val);
+
+ while (str) {
+ char *token[3];
+ char *name;
+ uint32_t start;
+ uint32_t len;
+ int i, ret;
+
+ for (i=0; i<3; i++) {
+ token[i] = strsep(&str, ",");
+ if (token[i])
+ continue;
+ parse_err("wrong number of arguments to \"slram=\"\n");
+ }
+
+ /* name */
+ ret = parse_name(&name, token[0]);
+ if (ret == -ENOMEM)
+			parse_err("out of memory\n");
+ if (ret == -ENOSPC)
+			parse_err("name too long\n");
+ if (ret)
+ return 1;
+
+ /* start */
+ ret = parse_num32(&start, token[1]);
+ if (ret)
+ parse_err("illegal start address\n");
+
+ /* len */
+ if (token[2][0] == '+')
+ ret = parse_num32(&len, token[2] + 1);
+ else
+ ret = parse_num32(&len, token[2]);
+
+ if (ret)
+ parse_err("illegal device length\n");
+
+ if (token[2][0] != '+') {
+ if (len < start)
+ parse_err("end < start\n");
+ len -= start;
+ }
+
+ register_device(name, start, len);
+ }
+ return 1;
+}
+
+module_param_call(slram, slram_setup, NULL, NULL, 000);
+MODULE_PARM_DESC(slram, "List of memory regions to map. \"slram=<name>,<start>,<length/end>\"");
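+
+/* For illustration (hypothetical regions), the third field is either an
+ * absolute end address or, when prefixed with '+', a length:
+ *
+ *	slram=map1,0x2000000,+0x100000
+ *	slram=map2,0x2000000,0x2100000
+ */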
+
+
+static int __init init_phram(void)
+{
+	printk(KERN_INFO "phram loaded\n");
+	return 0;
+}
+
+static void __exit cleanup_phram(void)
+{
+ unregister_devices();
+}
+
+module_init(init_phram);
+module_exit(cleanup_phram);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Jörn Engel <joern@wh.fh-wedel.de>");
+MODULE_DESCRIPTION("MTD driver for physical RAM");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Db1550 board
+ *
+ * $Id: db1550-flash.c,v 1.3 2004/07/14 17:45:40 dwmw2 Exp $
+ *
+ * (C) 2004 Embedded Edge, LLC, based on db1550-flash.c:
+ * (C) 2003 Pete Popov <pete_popov@yahoo.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+#include <asm/au1000.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+
+
+static struct map_info db1550_map = {
+ .name = "Db1550 flash",
+};
+
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * Support only 64MB NOR Flash parts
+ */
+
+#if defined(CONFIG_MTD_DB1550_BOOT) && defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_BOTH_BANKS
+#elif defined(CONFIG_MTD_DB1550_BOOT) && !defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_BOOT_ONLY
+#elif !defined(CONFIG_MTD_DB1550_BOOT) && defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_USER_ONLY
+#endif
+
+#ifdef DB1550_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x1FC00000 - 0x18000000),
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000 - 0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1550_BOOT_ONLY)
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = 0x03c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1550_USER_ONLY)
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x4000000 - 0x200000), /* reserve 2MB for raw kernel */
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_DB1550 define combo error /* should never happen */
+#endif
+
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+static struct mtd_info *mymtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+static int setup_flash_params(void)
+{
+#if defined(DB1550_BOTH_BANKS)
+ window_addr = 0x18000000;
+ window_size = 0x8000000;
+#elif defined(DB1550_BOOT_ONLY)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+#else /* USER ONLY */
+ window_addr = 0x1E000000;
+ window_size = 0x4000000;
+#endif
+ return 0;
+}
+
+int __init db1550_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ /* Default flash bankwidth */
+ db1550_map.bankwidth = flash_bankwidth;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = db1550_partitions;
+ nb_parts = NB_OF(db1550_partitions);
+ db1550_map.size = window_size;
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+	printk(KERN_NOTICE "Db1550 flash: probing %d-bit flash bus\n",
+			db1550_map.bankwidth*8);
+	db1550_map.virt =
+			(unsigned long)ioremap(window_addr, window_size);
+	if (!db1550_map.virt) {
+		printk(KERN_ERR "Db1550 flash: ioremap failed\n");
+		return -EIO;
+	}
+	mymtd = do_map_probe("cfi_probe", &db1550_map);
+	if (!mymtd) {
+		iounmap((void *)db1550_map.virt);
+		return -ENXIO;
+	}
+ mymtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(mymtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit db1550_mtd_cleanup(void)
+{
+ if (mymtd) {
+ del_mtd_partitions(mymtd);
+ map_destroy(mymtd);
+ }
+}
+
+module_init(db1550_mtd_init);
+module_exit(db1550_mtd_cleanup);
+
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Db1550 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Db1xxx boards
+ *
+ * $Id: db1x00-flash.c,v 1.3 2004/07/14 17:45:40 dwmw2 Exp $
+ *
+ * (C) 2003 Pete Popov <ppopov@pacbell.net>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+#include <asm/au1000.h>
+#include <asm/db1x00.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+static unsigned long flash_size;
+
+static BCSR * const bcsr = (BCSR *)0xAE000000;
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * The Db1x boards support different flash densities. We setup
+ * the mtd_partition structures below for default of 64Mbit
+ * flash densities, and override the partitions sizes, if
+ * necessary, after we check the board status register.
+ */
+
+#ifdef DB1X00_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x1c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1X00_BOOT_ONLY)
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x00c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1X00_USER_ONLY)
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x0e00000,
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_DB1X00 define combo error /* should never happen */
+#endif
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+#define NAME "Db1x00 Linux Flash"
+
+static struct map_info db1xxx_mtd_map = {
+ .name = NAME,
+};
+
+static struct mtd_partition *parsed_parts;
+static struct mtd_info *db1xxx_mtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+static int setup_flash_params(void)
+{
+ switch ((bcsr->status >> 14) & 0x3) {
+ case 0: /* 64Mbit devices */
+ flash_size = 0x800000; /* 8MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ window_addr = 0x1E000000;
+ window_size = 0x2000000;
+#elif defined(DB1X00_BOOT_ONLY)
+ window_addr = 0x1F000000;
+ window_size = 0x1000000;
+#else /* USER ONLY */
+ window_addr = 0x1E000000;
+ window_size = 0x1000000;
+#endif
+ break;
+ case 1:
+ /* 128 Mbit devices */
+ flash_size = 0x1000000; /* 16MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+ /* USERFS from 0x1C00 0000 to 0x1FC0 0000 */
+ db1x00_partitions[0].size = 0x3C00000;
+#elif defined(DB1X00_BOOT_ONLY)
+ window_addr = 0x1E000000;
+ window_size = 0x2000000;
+ /* USERFS from 0x1E00 0000 to 0x1FC0 0000 */
+ db1x00_partitions[0].size = 0x1C00000;
+#else /* USER ONLY */
+ window_addr = 0x1C000000;
+ window_size = 0x2000000;
+ /* USERFS from 0x1C00 0000 to 0x1DE00000 */
+ db1x00_partitions[0].size = 0x1DE0000;
+#endif
+ break;
+ case 2:
+ /* 256 Mbit devices */
+ flash_size = 0x4000000; /* 64MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ return 1;
+#elif defined(DB1X00_BOOT_ONLY)
+ /* Boot ROM flash bank only; no user bank */
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+ /* USERFS from 0x1C00 0000 to 0x1FC00000 */
+ db1x00_partitions[0].size = 0x3C00000;
+#else /* USER ONLY */
+ return 1;
+#endif
+ break;
+ default:
+ return 1;
+ }
+	db1xxx_mtd_map.size = window_size;
+	db1xxx_mtd_map.bankwidth = flash_bankwidth;
+	db1xxx_mtd_map.phys = window_addr;
+ return 0;
+}
+
+int __init db1x00_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = db1x00_partitions;
+ nb_parts = NB_OF(db1x00_partitions);
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "Db1xxx flash: probing %d-bit flash bus\n",
+ db1xxx_mtd_map.bankwidth*8);
+	db1xxx_mtd_map.virt = (unsigned long)ioremap(window_addr, window_size);
+	if (!db1xxx_mtd_map.virt) {
+		printk(KERN_ERR "Db1xxx flash: ioremap failed\n");
+		return -EIO;
+	}
+	db1xxx_mtd = do_map_probe("cfi_probe", &db1xxx_mtd_map);
+	if (!db1xxx_mtd) {
+		iounmap((void *)db1xxx_mtd_map.virt);
+		return -ENXIO;
+	}
+ db1xxx_mtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(db1xxx_mtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit db1x00_mtd_cleanup(void)
+{
+ if (db1xxx_mtd) {
+ del_mtd_partitions(db1xxx_mtd);
+ map_destroy(db1xxx_mtd);
+ if (parsed_parts)
+ kfree(parsed_parts);
+ }
+}
+
+module_init(db1x00_mtd_init);
+module_exit(db1x00_mtd_cleanup);
+
+MODULE_AUTHOR("Pete Popov");
+MODULE_DESCRIPTION("Db1x00 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+
+/*
+ * drivers/mtd/maps/svme182.c
+ *
+ * Flash map driver for the Dy4 SVME182 board
+ *
+ * $Id: dmv182.c,v 1.3 2004/07/14 17:45:40 dwmw2 Exp $
+ *
+ * Copyright 2003-2004, TimeSys Corporation
+ *
+ * Based on the SVME181 flash map, by Tom Nelson, Dot4, Inc. for TimeSys Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/errno.h>
+
+/*
+ * This driver currently handles only the 16MiB user flash bank 1 on the
+ * board. It does not provide access to bank 0 (contains the Dy4 FFW), bank 2
+ * (VxWorks boot), or the optional 48MiB expansion flash.
+ *
+ * scott.wood@timesys.com: On the newer boards with 128MiB flash, it
+ * now supports the first 96MiB (the boot flash bank containing FFW
+ * is excluded). The VxWorks loader is in partition 1.
+ */
+
+#define FLASH_BASE_ADDR 0xf0000000
+#define FLASH_BANK_SIZE (128*1024*1024)
+
+MODULE_AUTHOR("Scott Wood, TimeSys Corporation <scott.wood@timesys.com>");
+MODULE_DESCRIPTION("User-programmable flash device on the Dy4 SVME182 board");
+MODULE_LICENSE("GPL");
+
+static struct map_info svme182_map = {
+ .name = "Dy4 SVME182",
+ .bankwidth = 32,
+ .size = 128 * 1024 * 1024
+};
+
+#define BOOTIMAGE_PART_SIZE ((6*1024*1024)-RESERVED_PART_SIZE)
+
+// Allow 6MiB for the kernel
+#define NEW_BOOTIMAGE_PART_SIZE (6 * 1024 * 1024)
+// Allow 1MiB for the bootloader
+#define NEW_BOOTLOADER_PART_SIZE (1024 * 1024)
+// Use the remaining 9MiB at the end of flash for the RFS
+#define NEW_RFS_PART_SIZE (0x01000000 - NEW_BOOTLOADER_PART_SIZE - \
+ NEW_BOOTIMAGE_PART_SIZE)
+
+static struct mtd_partition svme182_partitions[] = {
+	/* The Lower PABS is only 128KiB, but the partition code doesn't
+	 * like partitions that don't end on the largest erase block
+	 * size of the device, even if all of the erase blocks in the
+	 * partition are small ones. The hardware should prevent
+	 * writes to the actual PABS areas.
+	 */
+	{
+		.name =		"Lower PABS and CPU 0 bootloader or kernel",
+		.size =		6*1024*1024,
+		.offset =	0,
+	},
+	{
+		.name =		"Root Filesystem",
+		.size =		10*1024*1024,
+		.offset =	MTDPART_OFS_NXTBLK
+	},
+	{
+		.name =		"CPU1 Bootloader",
+		.size =		1024*1024,
+		.offset =	MTDPART_OFS_NXTBLK,
+	},
+	{
+		.name =		"Extra",
+		.size =		110*1024*1024,
+		.offset =	MTDPART_OFS_NXTBLK
+	},
+	{
+		.name =		"Foundation Firmware and Upper PABS",
+		.size =		1024*1024,
+		.offset =	MTDPART_OFS_NXTBLK,
+		.mask_flags =	MTD_WRITEABLE	/* read-only */
+	}
+};
+
+static struct mtd_info *this_mtd;
+
+static int __init init_svme182(void)
+{
+ struct mtd_partition *partitions;
+ int num_parts = sizeof(svme182_partitions) / sizeof(struct mtd_partition);
+
+ partitions = svme182_partitions;
+
+ svme182_map.virt =
+ (unsigned long)ioremap(FLASH_BASE_ADDR, svme182_map.size);
+
+ if (svme182_map.virt == 0) {
+		printk(KERN_ERR "Failed to ioremap FLASH memory area.\n");
+ return -EIO;
+ }
+
+ simple_map_init(&svme182_map);
+
+ this_mtd = do_map_probe("cfi_probe", &svme182_map);
+ if (!this_mtd)
+ {
+ iounmap((void *)svme182_map.virt);
+ return -ENXIO;
+ }
+
+ printk(KERN_NOTICE "SVME182 flash device: %dMiB at 0x%08x\n",
+ this_mtd->size >> 20, FLASH_BASE_ADDR);
+
+ this_mtd->owner = THIS_MODULE;
+ add_mtd_partitions(this_mtd, partitions, num_parts);
+
+ return 0;
+}
+
+static void __exit cleanup_svme182(void)
+{
+ if (this_mtd)
+ {
+ del_mtd_partitions(this_mtd);
+ map_destroy(this_mtd);
+ }
+
+ if (svme182_map.virt)
+ {
+ iounmap((void *)svme182_map.virt);
+ svme182_map.virt = 0;
+ }
+
+ return;
+}
+
+module_init(init_svme182);
+module_exit(cleanup_svme182);
--- /dev/null
+/*
+ * ichxrom.c
+ *
+ * Normal mappings of chips in physical memory
+ * $Id: ichxrom.c,v 1.7 2004/07/14 18:14:09 eric Exp $
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/config.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/mtd/cfi.h>
+
+#define xstr(s) str(s)
+#define str(s) #s
+#define MOD_NAME xstr(KBUILD_BASENAME)
+
+#define MTD_DEV_NAME_LENGTH 16
+
+#define RESERVE_MEM_REGION 0
+
+
+#define MANUFACTURER_INTEL 0x0089
+#define I82802AB 0x00ad
+#define I82802AC 0x00ac
+
+#define ICHX_FWH_REGION_START 0xFF000000UL
+#define ICHX_FWH_REGION_SIZE 0x01000000UL
+#define BIOS_CNTL 0x4e
+#define FWH_DEC_EN1 0xE3
+#define FWH_DEC_EN2 0xF0
+#define FWH_SEL1 0xE8
+#define FWH_SEL2 0xEE
+
+struct ichxrom_map_info {
+ struct map_info map;
+ struct mtd_info *mtd;
+ unsigned long window_addr;
+ struct pci_dev *pdev;
+ struct resource window_rsrc;
+ struct resource rom_rsrc;
+ char mtd_name[MTD_DEV_NAME_LENGTH];
+};
+
+static inline unsigned long addr(struct map_info *map, unsigned long ofs)
+{
+ unsigned long offset;
+ offset = ((8*1024*1024) - map->size) + ofs;
+ if (offset >= (4*1024*1024)) {
+ offset += 0x400000;
+ }
+ return map->map_priv_1 + 0x400000 + offset;
+}
+
+static inline unsigned long dbg_addr(struct map_info *map, unsigned long addr)
+{
+ return addr - map->map_priv_1 + ICHX_FWH_REGION_START;
+}
+
+static map_word ichxrom_read(struct map_info *map, unsigned long ofs)
+{
+ map_word val;
+ int i;
+ switch(map->bankwidth) {
+ case 1: val.x[0] = __raw_readb(addr(map, ofs)); break;
+ case 2: val.x[0] = __raw_readw(addr(map, ofs)); break;
+ case 4: val.x[0] = __raw_readl(addr(map, ofs)); break;
+#if BITS_PER_LONG >= 64
+ case 8: val.x[0] = __raw_readq(addr(map, ofs)); break;
+#endif
+ default: val.x[0] = 0; break;
+ }
+ for(i = 1; i < map_words(map); i++) {
+ val.x[i] = 0;
+ }
+ return val;
+}
+
+static void ichxrom_copy_from(struct map_info *map, void *to, unsigned long from, ssize_t len)
+{
+ memcpy_fromio(to, addr(map, from), len);
+}
+
+static void ichxrom_write(struct map_info *map, map_word d, unsigned long ofs)
+{
+ switch(map->bankwidth) {
+ case 1: __raw_writeb(d.x[0], addr(map,ofs)); break;
+ case 2: __raw_writew(d.x[0], addr(map,ofs)); break;
+ case 4: __raw_writel(d.x[0], addr(map,ofs)); break;
+#if BITS_PER_LONG >= 64
+ case 8: __raw_writeq(d.x[0], addr(map,ofs)); break;
+#endif
+ }
+ mb();
+}
+
+static void ichxrom_copy_to(struct map_info *map, unsigned long to, const void *from, ssize_t len)
+{
+ memcpy_toio(addr(map, to), from, len);
+}
+
+static struct ichxrom_map_info ichxrom_map = {
+ .map = {
+ .name = MOD_NAME,
+ .phys = NO_XIP,
+ .size = 0,
+ .bankwidth = 1,
+ .read = ichxrom_read,
+ .copy_from = ichxrom_copy_from,
+ .write = ichxrom_write,
+ .copy_to = ichxrom_copy_to,
+ /* Firmware hubs only use vpp when being programmed
+ * in a factory setting. So in-place programming
+ * needs to use a different method.
+ */
+ },
+ /* remaining fields of structure are initialized to 0 */
+};
+
+enum fwh_lock_state {
+ FWH_DENY_WRITE = 1,
+ FWH_IMMUTABLE = 2,
+ FWH_DENY_READ = 4,
+};
+
+static void ichxrom_cleanup(struct ichxrom_map_info *info)
+{
+ u16 word;
+
+ /* Disable writes through the rom window */
+ pci_read_config_word(info->pdev, BIOS_CNTL, &word);
+ pci_write_config_word(info->pdev, BIOS_CNTL, word & ~1);
+
+ if (info->mtd) {
+ del_mtd_device(info->mtd);
+ map_destroy(info->mtd);
+ info->mtd = NULL;
+ info->map.virt = 0;
+ }
+ if (info->rom_rsrc.parent)
+ release_resource(&info->rom_rsrc);
+ if (info->window_rsrc.parent)
+ release_resource(&info->window_rsrc);
+
+ if (info->window_addr) {
+ iounmap((void *)(info->window_addr));
+ info->window_addr = 0;
+ }
+}
+
+
+static int ichxrom_set_lock_state(struct mtd_info *mtd, loff_t ofs, size_t len,
+ enum fwh_lock_state state)
+{
+ struct map_info *map = mtd->priv;
+ unsigned long start = ofs;
+ unsigned long end = start + len -1;
+
+ /* FIXME do I need to guard against concurrency here? */
+ /* round down to 64K boundaries */
+ start = start & ~0xFFFF;
+ end = end & ~0xFFFF;
+ while (start <= end) {
+ unsigned long ctrl_addr;
+ ctrl_addr = addr(map, start) - 0x400000 + 2;
+ writeb(state, ctrl_addr);
+ start = start + 0x10000;
+ }
+ return 0;
+}
+
+static int ichxrom_lock(struct mtd_info *mtd, loff_t ofs, size_t len)
+{
+ return ichxrom_set_lock_state(mtd, ofs, len, FWH_DENY_WRITE);
+}
+
+static int ichxrom_unlock(struct mtd_info *mtd, loff_t ofs, size_t len)
+{
+ return ichxrom_set_lock_state(mtd, ofs, len, 0);
+}
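+
+/* For illustration, once these hooks are installed (see ichxrom_init_one
+ * below), a hypothetical caller write-protects the first 64K block through
+ * the standard MTD interface:
+ *
+ *	mtd->lock(mtd, 0, 0x10000);
+ */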
+
+static int __devinit ichxrom_init_one (struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ u16 word;
+ struct ichxrom_map_info *info = &ichxrom_map;
+ unsigned long map_size;
+ static char *probes[] = { "cfi_probe", "jedec_probe" };
+ struct cfi_private *cfi;
+
+ /* For now I just handle the ichx and I assume there
+ * are not a lot of resources up at the top of the address
+ * space. It is possible to handle other devices in the
+	 * top 16MB but it is very painful. Also, since
+	 * you can only really attach a FWH to an ICHX, there
+	 * are a number of simplifications you can make.
+	 *
+	 * Also, you can page firmware hubs if an 8MB window isn't enough,
+	 * but we don't currently handle that case either.
+ */
+
+ info->pdev = pdev;
+
+ /*
+ * Try to reserve the window mem region. If this fails then
+	 * it is likely due to the window being "reserved" by the BIOS.
+ */
+ info->window_rsrc.name = MOD_NAME;
+ info->window_rsrc.start = ICHX_FWH_REGION_START;
+ info->window_rsrc.end = ICHX_FWH_REGION_START + ICHX_FWH_REGION_SIZE - 1;
+ info->window_rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ if (request_resource(&iomem_resource, &info->window_rsrc)) {
+ info->window_rsrc.parent = NULL;
+ printk(KERN_ERR MOD_NAME
+ " %s(): Unable to register resource"
+ " 0x%.08lx-0x%.08lx - kernel bug?\n",
+ __func__,
+ info->window_rsrc.start, info->window_rsrc.end);
+ }
+
+ /* Enable writes through the rom window */
+ pci_read_config_word(pdev, BIOS_CNTL, &word);
+ if (!(word & 1) && (word & (1<<1))) {
+ /* The BIOS will generate an error if I enable
+ * this device, so don't even try.
+ */
+ printk(KERN_ERR MOD_NAME ": firmware access control, I can't enable writes\n");
+ goto failed;
+ }
+ pci_write_config_word(pdev, BIOS_CNTL, word | 1);
+
+
+ /* Map the firmware hub into my address space. */
+ /* Does this use too much virtual address space? */
+ info->window_addr = (unsigned long)ioremap(
+ ICHX_FWH_REGION_START, ICHX_FWH_REGION_SIZE);
+ if (!info->window_addr) {
+ printk(KERN_ERR "Failed to ioremap\n");
+ goto failed;
+ }
+
+ /* For now assume the firmware has setup all relevant firmware
+ * windows. We don't have enough information to handle this case
+ * intelligently.
+ */
+
+ /* FIXME select the firmware hub and enable a window to it. */
+
+ info->mtd = NULL;
+ info->map.map_priv_1 = info->window_addr;
+
+ /* Loop through the possible bankwidths */
+ for(ichxrom_map.map.bankwidth = 4; ichxrom_map.map.bankwidth; ichxrom_map.map.bankwidth >>= 1) {
+ map_size = ICHX_FWH_REGION_SIZE;
+ while(!info->mtd && (map_size > 0)) {
+ int i;
+ info->map.size = map_size;
+ for(i = 0; i < sizeof(probes)/sizeof(char *); i++) {
+ info->mtd = do_map_probe(probes[i], &ichxrom_map.map);
+ if (info->mtd)
+ break;
+ }
+ map_size -= 512*1024;
+ }
+ if (info->mtd)
+ break;
+ }
+ if (!info->mtd) {
+ goto failed;
+ }
+ cfi = ichxrom_map.map.fldrv_priv;
+ if ((cfi->mfr == MANUFACTURER_INTEL) && (
+ (cfi->id == I82802AB) ||
+ (cfi->id == I82802AC)))
+ {
+ /* If it is a firmware hub put in the special lock
+ * and unlock routines.
+ */
+ info->mtd->lock = ichxrom_lock;
+ info->mtd->unlock = ichxrom_unlock;
+ }
+ if (info->mtd->size > info->map.size) {
+ printk(KERN_WARNING MOD_NAME " rom(%u) larger than window(%lu). fixing...\n",
+ info->mtd->size, info->map.size);
+ info->mtd->size = info->map.size;
+ }
+
+ info->mtd->owner = THIS_MODULE;
+ add_mtd_device(info->mtd);
+
+ if (info->window_rsrc.parent) {
+ /*
+ * Registering the MTD device in iomem may not be possible
+ * if there is a BIOS "reserved" and BUSY range. If this
+ * fails then continue anyway.
+ */
+ snprintf(info->mtd_name, MTD_DEV_NAME_LENGTH,
+ "mtd%d", info->mtd->index);
+
+ info->rom_rsrc.name = info->mtd_name;
+ info->rom_rsrc.start = ICHX_FWH_REGION_START
+ + ICHX_FWH_REGION_SIZE - map_size;
+ info->rom_rsrc.end = ICHX_FWH_REGION_START
+ + ICHX_FWH_REGION_SIZE - 1;
+ info->rom_rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ if (request_resource(&info->window_rsrc, &info->rom_rsrc)) {
+ printk(KERN_ERR MOD_NAME
+ ": cannot reserve MTD resource\n");
+ info->rom_rsrc.parent = NULL;
+ }
+ }
+
+ return 0;
+
+ failed:
+ ichxrom_cleanup(info);
+ return -ENODEV;
+}
+
+
+static void __devexit ichxrom_remove_one (struct pci_dev *pdev)
+{
+ struct ichxrom_map_info *info = &ichxrom_map;
+ u16 word;
+
+ del_mtd_device(info->mtd);
+ map_destroy(info->mtd);
+ info->mtd = NULL;
+ info->map.map_priv_1 = 0;
+
+ iounmap((void *)(info->window_addr));
+ info->window_addr = 0;
+
+ /* Disable writes through the rom window */
+ pci_read_config_word(pdev, BIOS_CNTL, &word);
+ pci_write_config_word(pdev, BIOS_CNTL, word & ~1);
+
+#if RESERVE_MEM_REGION
+ release_mem_region(ICHX_FWH_REGION_START, ICHX_FWH_REGION_SIZE);
+#endif
+}
+
+static struct pci_device_id ichxrom_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { 0, },
+};
+
+MODULE_DEVICE_TABLE(pci, ichxrom_pci_tbl);
+
+#if 0
+static struct pci_driver ichxrom_driver = {
+ .name = MOD_NAME,
+ .id_table = ichxrom_pci_tbl,
+ .probe = ichxrom_init_one,
+ .remove = ichxrom_remove_one,
+};
+#endif
+
+static struct pci_dev *mydev;
+int __init init_ichxrom(void)
+{
+ struct pci_dev *pdev;
+ struct pci_device_id *id;
+
+ pdev = NULL;
+ for (id = ichxrom_pci_tbl; id->vendor; id++) {
+ pdev = pci_find_device(id->vendor, id->device, NULL);
+ if (pdev) {
+ break;
+ }
+ }
+ if (pdev) {
+ mydev = pdev;
+ return ichxrom_init_one(pdev, &ichxrom_pci_tbl[0]);
+ }
+ return -ENXIO;
+#if 0
+ return pci_module_init(&ichxrom_driver);
+#endif
+}
+
+static void __exit cleanup_ichxrom(void)
+{
+ ichxrom_remove_one(mydev);
+}
+
+module_init(init_ichxrom);
+module_exit(cleanup_ichxrom);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Eric Biederman <ebiederman@lnxi.com>");
+MODULE_DESCRIPTION("MTD map driver for BIOS chips on the ICHX southbridge");
--- /dev/null
+/*======================================================================
+
+ drivers/mtd/maps/armflash.c: ARM Flash Layout/Partitioning
+
+ Copyright (C) 2000 ARM Limited
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ This is access code for flashes using ARM's flash partitioning
+ standards.
+
+ $Id: integrator-flash-v24.c,v 1.13 2004/07/12 21:59:44 dwmw2 Exp $
+
+======================================================================*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/system.h>
+
+// board specific stuff - sorry, it should be in arch/arm/mach-*.
+#ifdef CONFIG_ARCH_INTEGRATOR
+
+#define FLASH_BASE INTEGRATOR_FLASH_BASE
+#define FLASH_SIZE INTEGRATOR_FLASH_SIZE
+
+#define FLASH_PART_SIZE 0x400000
+
+#define SC_CTRLC (IO_ADDRESS(INTEGRATOR_SC_BASE) + INTEGRATOR_SC_CTRLC_OFFSET)
+#define SC_CTRLS (IO_ADDRESS(INTEGRATOR_SC_BASE) + INTEGRATOR_SC_CTRLS_OFFSET)
+#define EBI_CSR1 (IO_ADDRESS(INTEGRATOR_EBI_BASE) + INTEGRATOR_EBI_CSR1_OFFSET)
+#define EBI_LOCK (IO_ADDRESS(INTEGRATOR_EBI_BASE) + INTEGRATOR_EBI_LOCK_OFFSET)
+
+/*
+ * Initialise the flash access systems:
+ * - Disable VPP
+ * - Assert WP
+ * - Set write enable bit in EBI reg
+ */
+static void armflash_flash_init(void)
+{
+ unsigned int tmp;
+
+ __raw_writel(INTEGRATOR_SC_CTRL_nFLVPPEN | INTEGRATOR_SC_CTRL_nFLWP, SC_CTRLC);
+
+ tmp = __raw_readl(EBI_CSR1) | INTEGRATOR_EBI_WRITE_ENABLE;
+ __raw_writel(tmp, EBI_CSR1);
+
+ if (!(__raw_readl(EBI_CSR1) & INTEGRATOR_EBI_WRITE_ENABLE)) {
+ __raw_writel(0xa05f, EBI_LOCK);
+ __raw_writel(tmp, EBI_CSR1);
+ __raw_writel(0, EBI_LOCK);
+ }
+}
+
+/*
+ * Shutdown the flash access systems:
+ * - Disable VPP
+ * - Assert WP
+ * - Clear write enable bit in EBI reg
+ */
+static void armflash_flash_exit(void)
+{
+ unsigned int tmp;
+
+ __raw_writel(INTEGRATOR_SC_CTRL_nFLVPPEN | INTEGRATOR_SC_CTRL_nFLWP, SC_CTRLC);
+
+ /*
+ * Clear the write enable bit in system controller EBI register.
+ */
+ tmp = __raw_readl(EBI_CSR1) & ~INTEGRATOR_EBI_WRITE_ENABLE;
+ __raw_writel(tmp, EBI_CSR1);
+
+ if (__raw_readl(EBI_CSR1) & INTEGRATOR_EBI_WRITE_ENABLE) {
+ __raw_writel(0xa05f, EBI_LOCK);
+ __raw_writel(tmp, EBI_CSR1);
+ __raw_writel(0, EBI_LOCK);
+ }
+}
+
+static void armflash_flash_wp(int on)
+{
+ unsigned int reg;
+
+ if (on)
+ reg = SC_CTRLC;
+ else
+ reg = SC_CTRLS;
+
+ __raw_writel(INTEGRATOR_SC_CTRL_nFLWP, reg);
+}
+
+static void armflash_set_vpp(struct map_info *map, int on)
+{
+ unsigned int reg;
+
+ if (on)
+ reg = SC_CTRLS;
+ else
+ reg = SC_CTRLC;
+
+ __raw_writel(INTEGRATOR_SC_CTRL_nFLVPPEN, reg);
+}
+#endif
+
+#ifdef CONFIG_ARCH_P720T
+
+#define FLASH_BASE (0x04000000)
+#define FLASH_SIZE (64*1024*1024)
+
+#define FLASH_PART_SIZE (4*1024*1024)
+#define FLASH_BLOCK_SIZE (128*1024)
+
+static void armflash_flash_init(void)
+{
+}
+
+static void armflash_flash_exit(void)
+{
+}
+
+static void armflash_flash_wp(int on)
+{
+}
+
+static void armflash_set_vpp(struct map_info *map, int on)
+{
+}
+#endif
+
+
+static struct map_info armflash_map =
+{
+ .name = "AFS",
+ .set_vpp = armflash_set_vpp,
+ .phys = FLASH_BASE,
+};
+
+static struct mtd_info *mtd;
+static struct mtd_partition *parts;
+static const char *probes[] = { "RedBoot", "afs", NULL };
+
+static int __init armflash_cfi_init(void *base, u_int size)
+{
+ int ret;
+
+ armflash_flash_init();
+ armflash_flash_wp(1);
+
+ /*
+ * look for CFI based flash parts fitted to this board
+ */
+ armflash_map.size = size;
+ armflash_map.bankwidth = 4;
+ armflash_map.virt = (unsigned long) base;
+
+ simple_map_init(&armflash_map);
+
+ /*
+ * Also, the CFI layer automatically works out what size
+ * of chips we have, and does the necessary identification
+ * for us automatically.
+ */
+ mtd = do_map_probe("cfi_probe", &armflash_map);
+ if (!mtd)
+ return -ENXIO;
+
+ mtd->owner = THIS_MODULE;
+
+ ret = parse_mtd_partitions(mtd, probes, &parts, 0);
+ if (ret > 0) {
+ ret = add_mtd_partitions(mtd, parts, ret);
+ if (ret)
+ printk(KERN_ERR "mtd partition registration "
+ "failed: %d\n", ret);
+ }
+
+ /*
+ * If we got an error, free all resources.
+ */
+ if (ret < 0) {
+ del_mtd_partitions(mtd);
+ map_destroy(mtd);
+ }
+
+ return ret;
+}
+
+static void armflash_cfi_exit(void)
+{
+ if (mtd) {
+ del_mtd_partitions(mtd);
+ map_destroy(mtd);
+ }
+ if (parts)
+ kfree(parts);
+}
+
+static int __init armflash_init(void)
+{
+ int err = -EBUSY;
+ void *base;
+
+ if (request_mem_region(FLASH_BASE, FLASH_SIZE, "flash") == NULL)
+ goto out;
+
+ base = ioremap(FLASH_BASE, FLASH_SIZE);
+ err = -ENOMEM;
+ if (base == NULL)
+ goto release;
+
+ err = armflash_cfi_init(base, FLASH_SIZE);
+ if (err) {
+ iounmap(base);
+release:
+ release_mem_region(FLASH_BASE, FLASH_SIZE);
+ }
+out:
+ return err;
+}
+
+static void __exit armflash_exit(void)
+{
+ armflash_cfi_exit();
+ iounmap((void *)armflash_map.virt);
+ release_mem_region(FLASH_BASE, FLASH_SIZE);
+ armflash_flash_exit();
+}
+
+module_init(armflash_init);
+module_exit(armflash_exit);
+
+MODULE_AUTHOR("ARM Ltd");
+MODULE_DESCRIPTION("ARM Integrator CFI map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * $Id: ixp4xx.c,v 1.1 2004/05/13 22:21:26 dsaxena Exp $
+ *
+ * drivers/mtd/maps/ixp4xx.c
+ *
+ * MTD Map file for IXP4XX based systems. Please do not make per-board
+ * changes in here. If your board needs special setup, do it in your
+ * platform level code in arch/arm/mach-ixp4xx/board-setup.c
+ *
+ * Original Author: Intel Corporation
+ * Maintainer: Deepak Saxena <dsaxena@mvista.com>
+ *
+ * Copyright (C) 2002 Intel Corporation
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <asm/io.h>
+#include <asm/mach-types.h>
+#include <asm/mach/flash.h>
+
+#include <linux/reboot.h>
+
+#ifndef __ARMEB__
+#define BYTE0(h) ((h) & 0xFF)
+#define BYTE1(h) (((h) >> 8) & 0xFF)
+#else
+#define BYTE0(h) (((h) >> 8) & 0xFF)
+#define BYTE1(h) ((h) & 0xFF)
+#endif
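+
+/*
+ * Worked example of the macros above: if a halfword read returns
+ * 0x1234, ixp4xx_copy_from() below stores 0x34 then 0x12 on a
+ * little-endian kernel, and 0x12 then 0x34 under __ARMEB__; either
+ * way the destination buffer ends up in flash byte order.
+ */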
+
+static __u16
+ixp4xx_read16(struct map_info *map, unsigned long ofs)
+{
+ return *(__u16 *) (map->map_priv_1 + ofs);
+}
+
+/*
+ * The IXP4xx expansion bus only allows 16-bit wide accesses
+ * when attached to a 16-bit wide device (such as the 28F128J3A),
+ * so we can't just memcpy_fromio().
+ */
+static void
+ixp4xx_copy_from(struct map_info *map, void *to,
+ unsigned long from, ssize_t len)
+{
+ int i;
+ u8 *dest = (u8 *) to;
+ u16 *src = (u16 *) (map->map_priv_1 + from);
+ u16 data;
+
+ for (i = 0; i < (len / 2); i++) {
+ data = src[i];
+ dest[i * 2] = BYTE0(data);
+ dest[i * 2 + 1] = BYTE1(data);
+ }
+
+ if (len & 1)
+ dest[len - 1] = BYTE0(src[i]);
+}
+
+static void
+ixp4xx_write16(struct map_info *map, __u16 d, unsigned long adr)
+{
+ *(__u16 *) (map->map_priv_1 + adr) = d;
+}
+
+struct ixp4xx_flash_info {
+ struct mtd_info *mtd;
+ struct map_info map;
+ struct mtd_partition *partitions;
+ struct resource *res;
+};
+
+static const char *probes[] = { "RedBoot", "cmdlinepart", NULL };
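+
+/*
+ * With "cmdlinepart" listed above, a partition table may also be
+ * passed on the kernel command line. A hedged example -- the mtd-id
+ * must match info->map.name, which is set to the platform device's
+ * bus_id in the probe routine below (e.g. "IXP4XX-Flash.0"; check
+ * the boot log for the exact name):
+ *
+ *	mtdparts=IXP4XX-Flash.0:256k(RedBoot)ro,1m(kernel),-(root)
+ */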
+
+static int
+ixp4xx_flash_remove(struct device *_dev)
+{
+ struct platform_device *dev = to_platform_device(_dev);
+ struct flash_platform_data *plat = dev->dev.platform_data;
+ struct ixp4xx_flash_info *info = dev_get_drvdata(&dev->dev);
+
+ dev_set_drvdata(&dev->dev, NULL);
+
+ if(!info)
+ return 0;
+
+ /*
+ * Put the flash back into read-array mode (0xff is the Intel
+ * CFI "Read Array" command). This is required for a soft
+ * reboot to work: the boot loader must see array data, not
+ * status, when it next reads from the flash.
+ */
+ ixp4xx_write16(&info->map, 0xff, 0x55 * 0x2);
+
+ if (info->mtd) {
+ del_mtd_partitions(info->mtd);
+ map_destroy(info->mtd);
+ }
+ if (info->map.map_priv_1)
+ iounmap((void *) info->map.map_priv_1);
+
+ if (info->partitions)
+ kfree(info->partitions);
+
+ if (info->res) {
+ release_resource(info->res);
+ kfree(info->res);
+ }
+
+ if (plat->exit)
+ plat->exit();
+
+ /* Disable flash write */
+ *IXP4XX_EXP_CS0 &= ~IXP4XX_FLASH_WRITABLE;
+
+ return 0;
+}
+
+static int ixp4xx_flash_probe(struct device *_dev)
+{
+ struct platform_device *dev = to_platform_device(_dev);
+ struct flash_platform_data *plat = dev->dev.platform_data;
+ struct ixp4xx_flash_info *info;
+ int err = -1;
+
+ if (!plat)
+ return -ENODEV;
+
+ if (plat->init) {
+ err = plat->init();
+ if (err)
+ return err;
+ }
+
+ info = kmalloc(sizeof(struct ixp4xx_flash_info), GFP_KERNEL);
+ if(!info) {
+ err = -ENOMEM;
+ goto Error;
+ }
+ memzero(info, sizeof(struct ixp4xx_flash_info));
+
+ dev_set_drvdata(&dev->dev, info);
+
+ /*
+ * Enable flash write
+ * TODO: Move this out to board specific code
+ */
+ *IXP4XX_EXP_CS0 |= IXP4XX_FLASH_WRITABLE;
+
+ /*
+ * Tell the MTD layer we're not 1:1 mapped so that it does
+ * not attempt to do a direct access on us.
+ */
+ info->map.phys = NO_XIP;
+ info->map.size = dev->resource->end - dev->resource->start + 1;
+
+ /*
+ * We only support 16-bit accesses for now. If and when
+ * any board uses 8-bit access, we'll fix up the driver to
+ * handle that.
+ */
+ info->map.buswidth = 2;
+ info->map.name = dev->dev.bus_id;
+ info->map.read16 = ixp4xx_read16;
+ info->map.write16 = ixp4xx_write16;
+ info->map.copy_from = ixp4xx_copy_from;
+
+ info->res = request_mem_region(dev->resource->start,
+ dev->resource->end - dev->resource->start + 1,
+ "IXP4XXFlash");
+ if (!info->res) {
+ printk(KERN_ERR "IXP4XXFlash: Could not reserve memory region\n");
+ err = -ENOMEM;
+ goto Error;
+ }
+
+ info->map.map_priv_1 =
+ (unsigned long) ioremap(dev->resource->start,
+ dev->resource->end - dev->resource->start + 1);
+ if (!info->map.map_priv_1) {
+ printk(KERN_ERR "IXP4XXFlash: Failed to ioremap region\n");
+ err = -EIO;
+ goto Error;
+ }
+
+ info->mtd = do_map_probe(plat->map_name, &info->map);
+ if (!info->mtd) {
+ printk(KERN_ERR "IXP4XXFlash: map_probe failed\n");
+ err = -ENXIO;
+ goto Error;
+ }
+ info->mtd->owner = THIS_MODULE;
+
+ err = parse_mtd_partitions(info->mtd, probes, &info->partitions, 0);
+ if (err > 0) {
+ err = add_mtd_partitions(info->mtd, info->partitions, err);
+ if (err)
+ printk(KERN_ERR "Could not register partitions\n");
+ }
+
+ if (err)
+ goto Error;
+
+ return 0;
+
+Error:
+ ixp4xx_flash_remove(_dev);
+ return err;
+}
+
+static struct device_driver ixp4xx_flash_driver = {
+ .name = "IXP4XX-Flash",
+ .bus = &platform_bus_type,
+ .probe = ixp4xx_flash_probe,
+ .remove = ixp4xx_flash_remove,
+};
+
+static int __init ixp4xx_flash_init(void)
+{
+ return driver_register(&ixp4xx_flash_driver);
+}
+
+static void __exit ixp4xx_flash_exit(void)
+{
+ driver_unregister(&ixp4xx_flash_driver);
+}
+
+
+module_init(ixp4xx_flash_init);
+module_exit(ixp4xx_flash_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("MTD map driver for Intel IXP4xx systems")
+MODULE_AUTHOR("Deepak Saxena");
+
--- /dev/null
+/*
+ * Flash on MPC-1211
+ *
+ * $Id: mpc1211.c,v 1.3 2004/07/14 17:45:40 dwmw2 Exp $
+ *
+ * (C) 2002 Interface, Saito.K & Jeanne
+ *
+ * GPL'd
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/config.h>
+
+static struct mtd_info *flash_mtd;
+static struct mtd_partition *parsed_parts;
+
+struct map_info mpc1211_flash_map = {
+ .name = "MPC-1211 FLASH",
+ .size = 0x80000,
+ .bankwidth = 1,
+};
+
+static struct mtd_partition mpc1211_partitions[] = {
+ {
+ .name = "IPL & ETH-BOOT",
+ .offset = 0x00000000,
+ .size = 0x10000,
+ },
+ {
+ .name = "Flash FS",
+ .offset = 0x00010000,
+ .size = MTDPART_SIZ_FULL,
+ }
+};
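+
+/*
+ * With the static table above and this 512 KiB device, registration
+ * is expected to look roughly like this in /proc/mtd (illustrative;
+ * the erasesize depends on the part actually probed):
+ *
+ *	mtd0: 00010000 xxxxxxxx "IPL & ETH-BOOT"
+ *	mtd1: 00070000 xxxxxxxx "Flash FS"
+ */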
+
+static int __init init_mpc1211_maps(void)
+{
+ int nr_parts;
+
+ mpc1211_flash_map.phys = 0;
+ mpc1211_flash_map.virt = P2SEGADDR(0);
+
+ simple_map_init(&mpc1211_flash_map);
+
+ printk(KERN_NOTICE "Probing for flash chips at 0x00000000:\n");
+ flash_mtd = do_map_probe("jedec_probe", &mpc1211_flash_map);
+ if (!flash_mtd) {
+ printk(KERN_NOTICE "Flash chips not detected at either possible location.\n");
+ return -ENXIO;
+ }
+ printk(KERN_NOTICE "MPC-1211: Flash at 0x%08lx\n", mpc1211_flash_map.virt & 0x1fffffff);
+ flash_mtd->owner = THIS_MODULE;
+
+ parsed_parts = mpc1211_partitions;
+ nr_parts = ARRAY_SIZE(mpc1211_partitions);
+
+ add_mtd_partitions(flash_mtd, parsed_parts, nr_parts);
+ return 0;
+}
+
+static void __exit cleanup_mpc1211_maps(void)
+{
+ if (parsed_parts)
+ del_mtd_partitions(flash_mtd);
+ else
+ del_mtd_device(flash_mtd);
+ map_destroy(flash_mtd);
+}
+
+module_init(init_mpc1211_maps);
+module_exit(cleanup_mpc1211_maps);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Saito.K & Jeanne <ksaito@interface.co.jp>");
+MODULE_DESCRIPTION("MTD map driver for MPC-1211 boards. Interface");
--- /dev/null
+/*
+ * NOR Flash memory access on TI Toto board
+ *
+ * jzhang@ti.com (C) 2003 Texas Instruments.
+ *
+ * (C) 2002 MontVista Software, Inc.
+ *
+ * $Id: omap-toto-flash.c,v 1.2 2004/07/12 21:59:44 dwmw2 Exp $
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/errno.h>
+#include <linux/init.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+
+
+#ifndef CONFIG_ARCH_OMAP
+#error This is for OMAP architecture only
+#endif
+
+//these lines need be moved to a hardware header file
+#define OMAP_TOTO_FLASH_BASE 0xd8000000
+#define OMAP_TOTO_FLASH_SIZE 0x80000
+
+static struct map_info omap_toto_map_flash = {
+ .name = "OMAP Toto flash",
+ .bankwidth = 2,
+ .virt = OMAP_TOTO_FLASH_BASE,
+};
+
+
+static struct mtd_partition toto_flash_partitions[] = {
+ {
+ .name = "BootLoader",
+ .size = 0x00040000, /* hopefully u-boot will stay within 128k + 128k */
+ .offset = 0,
+ .mask_flags = MTD_WRITEABLE, /* force read-only */
+ }, {
+ .name = "ReservedSpace",
+ .size = 0x00030000,
+ .offset = MTDPART_OFS_APPEND,
+ //mask_flags: MTD_WRITEABLE, /* force read-only */
+ }, {
+ .name = "EnvArea", /* bottom 64KiB for env vars */
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+
+static struct mtd_partition *parsed_parts;
+
+static struct mtd_info *flash_mtd;
+
+static int __init init_flash (void)
+{
+
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+ int parsed_nr_parts = 0;
+ const char *part_type;
+
+ /*
+ * Static partition definition selection
+ */
+ part_type = "static";
+
+ parts = toto_flash_partitions;
+ nb_parts = ARRAY_SIZE(toto_flash_partitions);
+ omap_toto_map_flash.size = OMAP_TOTO_FLASH_SIZE;
+ omap_toto_map_flash.phys = virt_to_phys(OMAP_TOTO_FLASH_BASE);
+
+ simple_map_init(&omap_toto_map_flash);
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "OMAP toto flash: probing %d-bit flash bus\n",
+ omap_toto_map_flash.bankwidth*8);
+ flash_mtd = do_map_probe("jedec_probe", &omap_toto_map_flash);
+ if (!flash_mtd)
+ return -ENXIO;
+
+ if (parsed_nr_parts > 0) {
+ parts = parsed_parts;
+ nb_parts = parsed_nr_parts;
+ }
+
+ if (nb_parts == 0) {
+ printk(KERN_NOTICE "OMAP toto flash: no partition info available,"
+ "registering whole flash at once\n");
+ if (add_mtd_device(flash_mtd)){
+ return -ENXIO;
+ }
+ } else {
+ printk(KERN_NOTICE "Using %s partition definition\n",
+ part_type);
+ return add_mtd_partitions(flash_mtd, parts, nb_parts);
+ }
+ return 0;
+}
+
+int __init omap_toto_mtd_init(void)
+{
+ int status;
+
+ if ((status = init_flash()) != 0) {
+ printk(KERN_ERR "OMAP Toto Flash: unable to init map for toto flash\n");
+ }
+ return status;
+}
+
+static void __exit omap_toto_mtd_cleanup(void)
+{
+ if (flash_mtd) {
+ del_mtd_partitions(flash_mtd);
+ map_destroy(flash_mtd);
+ if (parsed_parts)
+ kfree(parsed_parts);
+ }
+}
+
+module_init(omap_toto_mtd_init);
+module_exit(omap_toto_mtd_cleanup);
+
+MODULE_AUTHOR("Jian Zhang");
+MODULE_DESCRIPTION("OMAP Toto board map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Pb1550 board
+ *
+ * $Id: pb1550-flash.c,v 1.4 2004/07/14 17:45:40 dwmw2 Exp $
+ *
+ * (C) 2004 Embedded Edge, LLC, based on pb1550-flash.c:
+ * (C) 2003 Pete Popov <ppopov@pacbell.net>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+#include <asm/au1000.h>
+#include <asm/pb1550.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+
+
+static struct map_info pb1550_map = {
+ .name = "Pb1550 flash",
+};
+
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * Support only 64MB NOR Flash parts
+ */
+
+#ifdef PB1550_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x1FC00000 - 0x18000000),
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000 - 0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(PB1550_BOOT_ONLY)
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = 0x03c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(PB1550_USER_ONLY)
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x4000000 - 0x200000), /* reserve 2MB for raw kernel */
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_PB1550 define combo error /* should never happen */
+#endif
+
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+static struct mtd_info *mymtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+int setup_flash_params(void)
+{
+ u16 boot_swapboot;
+ boot_swapboot = (au_readl(MEM_STSTAT) & (0x7<<1)) |
+ ((bcsr->status >> 6) & 0x1);
+ printk("Pb1550 MTD: boot:swap %d\n", boot_swapboot);
+
+ switch (boot_swapboot) {
+ case 0: /* 512Mbit devices, both enabled */
+ case 1:
+ case 8:
+ case 9:
+#if defined(PB1550_BOTH_BANKS)
+ window_addr = 0x18000000;
+ window_size = 0x8000000;
+#elif defined(PB1550_BOOT_ONLY)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+#else /* USER ONLY */
+ window_addr = 0x1E000000;
+ window_size = 0x4000000;
+#endif
+ break;
+ case 0xC:
+ case 0xD:
+ case 0xE:
+ case 0xF:
+ /* 64 MB Boot NOR Flash is disabled */
+ /* and the start address is moved to 0x0C000000 */
+ window_addr = 0x0C000000;
+ window_size = 0x4000000;
+ break;
+ default:
+ printk("Pb1550 MTD: unsupported boot:swap setting\n");
+ return 1;
+ }
+ return 0;
+}
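+
+/*
+ * Summary of the decode above (derived from the code, not a
+ * datasheet): boot_swapboot packs MEM_STSTAT bits [3:1] with one
+ * CPLD status bit in bit [0]. Values 0/1/8/9 select the normal
+ * 512 Mbit layout (window chosen by the PB1550_* config options);
+ * values 0xC-0xF mean the boot flash is disabled and the window
+ * drops to 0x0C000000; anything else is rejected.
+ */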
+
+int __init pb1550_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ /* Default flash bankwidth */
+ pb1550_map.bankwidth = flash_bankwidth;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = pb1550_partitions;
+ nb_parts = NB_OF(pb1550_partitions);
+ pb1550_map.size = window_size;
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "Pb1550 flash: probing %d-bit flash bus\n",
+ pb1550_map.bankwidth*8);
+ pb1550_map.virt =
+ (unsigned long)ioremap(window_addr, window_size);
+ mymtd = do_map_probe("cfi_probe", &pb1550_map);
+ if (!mymtd) return -ENXIO;
+ mymtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(mymtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit pb1550_mtd_cleanup(void)
+{
+ if (mymtd) {
+ del_mtd_partitions(mymtd);
+ map_destroy(mymtd);
+ }
+}
+
+module_init(pb1550_mtd_init);
+module_exit(pb1550_mtd_cleanup);
+
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Pb1550 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Handle mapping of the flash memory access routines on the SBC8240 board.
+ *
+ * Carolyn Smith, Tektronix, Inc.
+ *
+ * This code is GPLed
+ *
+ * $Id: sbc8240.c,v 1.4 2004/07/12 22:38:29 dwmw2 Exp $
+ *
+ */
+
+/*
+ * The SBC8240 has 2 flash banks.
+ * Bank 0 is a 512 KiB AMD AM29F040B; 8 x 64 KiB sectors.
+ * It contains the U-Boot code (7 sectors) and the environment (1 sector).
+ * Bank 1 is 4 x 1 MiB AMD AM29LV800BT; 15 x 64 KiB sectors, 1 x 32 KiB sector,
+ * 2 x 8 KiB sectors, 1 x 16 KiB sector.
+ * Both parts are JEDEC compatible.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/cfi.h>
+
+#ifdef CONFIG_MTD_PARTITIONS
+#include <linux/mtd/partitions.h>
+#endif
+
+#define DEBUG
+
+#ifdef DEBUG
+# define debugk(fmt,args...) printk(fmt ,##args)
+#else
+# define debugk(fmt,args...)
+#endif
+
+
+#define WINDOW_ADDR0 0xFFF00000 /* 512 KiB */
+#define WINDOW_SIZE0 0x00080000
+#define BUSWIDTH0 1
+
+#define WINDOW_ADDR1 0xFF000000 /* 4 MiB */
+#define WINDOW_SIZE1 0x00400000
+#define BUSWIDTH1 8
+
+#define MSG_PREFIX "sbc8240:" /* prefix for our printk()'s */
+#define MTDID "sbc8240-%d" /* for mtdparts= partitioning */
+
+
+static struct map_info sbc8240_map[2] = {
+ {
+ .name = "sbc8240 Flash Bank #0",
+ .size = WINDOW_SIZE0,
+ .bankwidth = BUSWIDTH0,
+ },
+ {
+ .name = "sbc8240 Flash Bank #1",
+ .size = WINDOW_SIZE1,
+ .bankwidth = BUSWIDTH1,
+ }
+};
+
+#define NUM_FLASH_BANKS (sizeof(sbc8240_map) / sizeof(struct map_info))
+
+/*
+ * The following defines the partition layout of SBC8240 boards.
+ *
+ * See include/linux/mtd/partitions.h for definition of the
+ * mtd_partition structure.
+ *
+ * The *_max_flash_size is the maximum possible mapped flash size
+ * which is not necessarily the actual flash size. It must correspond
+ * to the value specified in the mapping definition defined by the
+ * "struct map_desc *_io_desc" for the corresponding machine.
+ */
+
+#ifdef CONFIG_MTD_PARTITIONS
+
+static struct mtd_partition sbc8240_uboot_partitions [] = {
+ /* Bank 0 */
+ {
+ .name = "U-boot", /* U-Boot Firmware */
+ .offset = 0,
+ .size = 0x00070000, /* 7 x 64 KiB sectors */
+ .mask_flags = MTD_WRITEABLE, /* force read-only */
+ },
+ {
+ .name = "environment", /* U-Boot environment */
+ .offset = 0x00070000,
+ .size = 0x00010000, /* 1 x 64 KiB sector */
+ },
+};
+
+static struct mtd_partition sbc8240_fs_partitions [] = {
+ {
+ .name = "jffs", /* JFFS filesystem */
+ .offset = 0,
+ .size = 0x003C0000, /* 4 * 15 * 64KiB */
+ },
+ {
+ .name = "tmp32",
+ .offset = 0x003C0000,
+ .size = 0x00020000, /* 4 * 32KiB */
+ },
+ {
+ .name = "tmp8a",
+ .offset = 0x003E0000,
+ .size = 0x00008000, /* 4 * 8KiB */
+ },
+ {
+ .name = "tmp8b",
+ .offset = 0x003E8000,
+ .size = 0x00008000, /* 4 * 8KiB */
+ },
+ {
+ .name = "tmp16",
+ .offset = 0x003F0000,
+ .size = 0x00010000, /* 4 * 16KiB */
+ }
+};
+
+#define NB_OF(x) (sizeof (x) / sizeof (x[0]))
+
+/* trivial struct to describe partition information */
+struct mtd_part_def
+{
+ int nums;
+ unsigned char *type;
+ struct mtd_partition* mtd_part;
+};
+
+static struct mtd_info *sbc8240_mtd[NUM_FLASH_BANKS];
+static struct mtd_part_def sbc8240_part_banks[NUM_FLASH_BANKS];
+
+
+#endif /* CONFIG_MTD_PARTITIONS */
+
+
+int __init init_sbc8240_mtd (void)
+{
+ static struct _cjs {
+ u_long addr;
+ u_long size;
+ } pt[NUM_FLASH_BANKS] = {
+ {
+ .addr = WINDOW_ADDR0,
+ .size = WINDOW_SIZE0
+ },
+ {
+ .addr = WINDOW_ADDR1,
+ .size = WINDOW_SIZE1
+ },
+ };
+
+ int devicesfound = 0;
+ int i;
+
+ for (i = 0; i < NUM_FLASH_BANKS; i++) {
+ printk (KERN_NOTICE MSG_PREFIX
+ "Probing 0x%08lx at 0x%08lx\n", pt[i].size, pt[i].addr);
+
+ sbc8240_map[i].map_priv_1 =
+ (unsigned long) ioremap (pt[i].addr, pt[i].size);
+ if (!sbc8240_map[i].map_priv_1) {
+ printk (MSG_PREFIX "failed to ioremap\n");
+ return -EIO;
+ }
+ simple_map_init(&sbc8240_map[i]);
+
+ sbc8240_mtd[i] = do_map_probe("jedec_probe", &sbc8240_map[i]);
+
+ if (sbc8240_mtd[i]) {
+ sbc8240_mtd[i]->owner = THIS_MODULE;
+ devicesfound++;
+ }
+ }
+
+ if (!devicesfound) {
+ printk(KERN_NOTICE MSG_PREFIX
+ "No suppported flash chips found!\n");
+ return -ENXIO;
+ }
+
+#ifdef CONFIG_MTD_PARTITIONS
+ sbc8240_part_banks[0].mtd_part = sbc8240_uboot_partitions;
+ sbc8240_part_banks[0].type = "static image";
+ sbc8240_part_banks[0].nums = NB_OF(sbc8240_uboot_partitions);
+ sbc8240_part_banks[1].mtd_part = sbc8240_fs_partitions;
+ sbc8240_part_banks[1].type = "static file system";
+ sbc8240_part_banks[1].nums = NB_OF(sbc8240_fs_partitions);
+
+ for (i = 0; i < NUM_FLASH_BANKS; i++) {
+
+ if (!sbc8240_mtd[i]) continue;
+ if (sbc8240_part_banks[i].nums == 0) {
+ printk (KERN_NOTICE MSG_PREFIX
+ "No partition info available, registering whole device\n");
+ add_mtd_device(sbc8240_mtd[i]);
+ } else {
+ printk (KERN_NOTICE MSG_PREFIX
+ "Using %s partition definition\n", sbc8240_part_banks[i].mtd_part->name);
+ add_mtd_partitions (sbc8240_mtd[i],
+ sbc8240_part_banks[i].mtd_part,
+ sbc8240_part_banks[i].nums);
+ }
+ }
+#else
+ printk(KERN_NOTICE MSG_PREFIX
+ "Registering %d flash banks at once\n", devicesfound);
+
+ for (i = 0; i < devicesfound; i++) {
+ add_mtd_device(sbc8240_mtd[i]);
+ }
+#endif /* CONFIG_MTD_PARTITIONS */
+
+ return devicesfound == 0 ? -ENXIO : 0;
+}
+
+static void __exit cleanup_sbc8240_mtd (void)
+{
+ int i;
+
+ for (i = 0; i < NUM_FLASH_BANKS; i++) {
+ if (sbc8240_mtd[i]) {
+ del_mtd_device (sbc8240_mtd[i]);
+ map_destroy (sbc8240_mtd[i]);
+ }
+ if (sbc8240_map[i].map_priv_1) {
+ iounmap ((void *) sbc8240_map[i].map_priv_1);
+ sbc8240_map[i].map_priv_1 = 0;
+ }
+ }
+}
+
+module_init (init_sbc8240_mtd);
+module_exit (cleanup_sbc8240_mtd);
+
+MODULE_LICENSE ("GPL");
+MODULE_AUTHOR ("Carolyn Smith <carolyn.smith@tektronix.com>");
+MODULE_DESCRIPTION ("MTD map driver for SBC8240 boards");
+
--- /dev/null
+/*
+ * $Id: wr_sbc82xx_flash.c,v 1.1 2004/06/07 10:21:32 dwmw2 Exp $
+ *
+ * Map for flash chips on Wind River PowerQUICC II SBC82xx board.
+ *
+ * Copyright (C) 2004 Red Hat, Inc.
+ *
+ * Author: David Woodhouse <dwmw2@infradead.org>
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/config.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/immap_8260.h>
+
+static struct mtd_info *sbcmtd[3];
+static struct mtd_partition *sbcmtd_parts[3];
+
+struct map_info sbc82xx_flash_map[3] = {
+ {.name = "Boot flash"},
+ {.name = "Alternate boot flash"},
+ {.name = "User flash"}
+};
+
+static struct mtd_partition smallflash_parts[] = {
+ {
+ .name = "space",
+ .size = 0x100000,
+ .offset = 0,
+ }, {
+ .name = "bootloader",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+
+static struct mtd_partition bigflash_parts[] = {
+ {
+ .name = "bootloader",
+ .size = 0x80000,
+ .offset = 0,
+ }, {
+ .name = "file system",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+
+static const char *part_probes[] __initdata = {"cmdlinepart", "RedBoot", NULL};
+
+int __init init_sbc82xx_flash(void)
+{
+ volatile memctl8260_t *mc = &immr->im_memctl;
+ int bigflash;
+ int i;
+
+ /* First, register the boot flash, whichever we're booting from */
+ if ((mc->memc_br0 & 0x00001800) == 0x00001800) {
+ bigflash = 0;
+ } else if ((mc->memc_br0 & 0x00001800) == 0x00000800) {
+ bigflash = 1;
+ } else {
+ printk(KERN_WARNING "Bus Controller register BR0 is %08x. Cannot determine flash configuration\n", mc->memc_br0);
+ return 1;
+ }
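+
+ /* Assumption based on the MPC8260 BRx port-size field (the bits
+ * under mask 0x1800 tested above): 0b11 means a 32-bit port,
+ * i.e. the boot chip select carries the big flash, while 0b01
+ * (8-bit) means it carries the small one.
+ */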
+
+ /* Set parameters for the big flash chip (CS6 or CS0) */
+ sbc82xx_flash_map[bigflash].buswidth = 4;
+ sbc82xx_flash_map[bigflash].size = 0x4000000;
+
+ /* Set parameters for the small flash chip (CS0 or CS6) */
+ sbc82xx_flash_map[!bigflash].buswidth = 1;
+ sbc82xx_flash_map[!bigflash].size = 0x200000;
+
+ /* Set parameters for the user flash chip (CS1) */
+ sbc82xx_flash_map[2].buswidth = 4;
+ sbc82xx_flash_map[2].size = 0x4000000;
+
+ sbc82xx_flash_map[0].phys = mc->memc_br0 & 0xffff8000;
+ sbc82xx_flash_map[1].phys = mc->memc_br6 & 0xffff8000;
+ sbc82xx_flash_map[2].phys = mc->memc_br1 & 0xffff8000;
+
+ for (i=0; i<3; i++) {
+ int8_t flashcs[3] = { 0, 6, 1 };
+ int nr_parts;
+
+ printk(KERN_NOTICE "PowerQUICC II %s (%ld MiB on CS%d",
+ sbc82xx_flash_map[i].name, sbc82xx_flash_map[i].size >> 20, flashcs[i]);
+ if (!sbc82xx_flash_map[i].phys) {
+ /* We know it can't be at zero. */
+ printk("): disabled by bootloader.\n");
+ continue;
+ }
+ printk(" at %08lx)\n", sbc82xx_flash_map[i].phys);
+
+ sbc82xx_flash_map[i].virt = (unsigned long)ioremap(sbc82xx_flash_map[i].phys, sbc82xx_flash_map[i].size);
+
+ if (!sbc82xx_flash_map[i].virt) {
+ printk("Failed to ioremap\n");
+ continue;
+ }
+
+ simple_map_init(&sbc82xx_flash_map[i]);
+
+ sbcmtd[i] = do_map_probe("cfi_probe", &sbc82xx_flash_map[i]);
+
+ if (!sbcmtd[i])
+ continue;
+
+ sbcmtd[i]->owner = THIS_MODULE;
+
+ nr_parts = parse_mtd_partitions(sbcmtd[i], part_probes,
+ &sbcmtd_parts[i], 0);
+ if (nr_parts > 0) {
+ add_mtd_partitions (sbcmtd[i], sbcmtd_parts[i], nr_parts);
+ continue;
+ }
+
+ /* No partitioning detected. Use default */
+ if (i == 2) {
+ add_mtd_device(sbcmtd[i]);
+ } else if (i == bigflash) {
+ add_mtd_partitions (sbcmtd[i], bigflash_parts, ARRAY_SIZE(bigflash_parts));
+ } else {
+ add_mtd_partitions (sbcmtd[i], smallflash_parts, ARRAY_SIZE(smallflash_parts));
+ }
+ }
+ return 0;
+}
+
+static void __exit cleanup_sbc82xx_flash(void)
+{
+ int i;
+
+ for (i=0; i<3; i++) {
+ if (!sbcmtd[i])
+ continue;
+
+ if (i<2 || sbcmtd_parts[i])
+ del_mtd_partitions(sbcmtd[i]);
+ else
+ del_mtd_device(sbcmtd[i]);
+
+ kfree(sbcmtd_parts[i]);
+ map_destroy(sbcmtd[i]);
+
+ iounmap((void *)sbc82xx_flash_map[i].virt);
+ sbc82xx_flash_map[i].virt = 0;
+ }
+}
+
+module_init(init_sbc82xx_flash);
+module_exit(cleanup_sbc82xx_flash);
+
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>");
+MODULE_DESCRIPTION("Flash map driver for WindRiver PowerQUICC II");
--- /dev/null
+/*
+ * drivers/mtd/nand/au1550nd.c
+ *
+ * Copyright (C) 2004 Embedded Edge, LLC
+ *
+ * $Id: au1550nd.c,v 1.5 2004/05/17 07:19:35 ppopov Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <asm/au1000.h>
+#ifdef CONFIG_MIPS_PB1550
+#include <asm/pb1550.h>
+#endif
+#ifdef CONFIG_MIPS_DB1550
+#include <asm/db1x00.h>
+#endif
+
+
+/*
+ * MTD structure for NAND controller
+ */
+static struct mtd_info *au1550_mtd = NULL;
+static volatile u32 p_nand;
+static int nand_width = 1; /* default, only x8 supported for now */
+
+/* Internal buffers. Page buffer and oob buffer for one block*/
+static u_char data_buf[512 + 16];
+static u_char oob_buf[16 * 32];
+
+/*
+ * Define partitions for flash device
+ */
+static const struct mtd_partition partition_info[] = {
+#ifdef CONFIG_MIPS_PB1550
+#define NUM_PARTITIONS 2
+ {
+ .name = "Pb1550 NAND FS 0",
+ .offset = 0,
+ .size = 8*1024*1024
+ },
+ {
+ .name = "Pb1550 NAND FS 1",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL
+ }
+#endif
+#ifdef CONFIG_MIPS_DB1550
+#define NUM_PARTITIONS 2
+ {
+ .name = "Db1550 NAND FS 0",
+ .offset = 0,
+ .size = 8*1024*1024
+ },
+ {
+ .name = "Db1550 NAND FS 1",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL
+ }
+#endif
+};
+
+static inline void write_cmd_reg(u8 cmd)
+{
+ if (nand_width)
+ *((volatile u8 *)(p_nand + MEM_STNAND_CMD)) = cmd;
+ else
+ *((volatile u16 *)(p_nand + MEM_STNAND_CMD)) = cmd;
+ au_sync();
+}
+
+static inline void write_addr_reg(u8 addr)
+{
+ if (nand_width)
+ *((volatile u8 *)(p_nand + MEM_STNAND_ADDR)) = addr;
+ else
+ *((volatile u16 *)(p_nand + MEM_STNAND_ADDR)) = addr;
+ au_sync();
+}
+
+static inline void write_data_reg(u8 data)
+{
+ if (nand_width)
+ *((volatile u8 *)(p_nand + MEM_STNAND_DATA)) = data;
+ else
+ *((volatile u16 *)(p_nand + MEM_STNAND_DATA)) = data;
+ au_sync();
+}
+
+static inline u32 read_data_reg(void)
+{
+ u32 data;
+ if (nand_width) {
+ data = *((volatile u8 *)(p_nand + MEM_STNAND_DATA));
+ au_sync();
+ }
+ else {
+ data = *((volatile u16 *)(p_nand + MEM_STNAND_DATA));
+ au_sync();
+ }
+ return data;
+}
+
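+/*
+ * The NAND core requires a hwcontrol hook for toggling CLE/ALE/CE,
+ * but on the Au1550 the static bus controller drives those lines via
+ * the dedicated command/address registers used by write_cmd_reg() and
+ * write_addr_reg() above, so there is nothing to do here.
+ */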
+void au1550_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+}
+
+int au1550_device_ready(struct mtd_info *mtd)
+{
+ int ready;
+ ready = (au_readl(MEM_STSTAT) & 0x1) ? 1 : 0;
+ return ready;
+}
+
+static u_char au1550_nand_read_byte(struct mtd_info *mtd)
+{
+ u_char ret;
+ ret = read_data_reg();
+ return ret;
+}
+
+static void au1550_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ write_data_reg((u8)byte);
+}
+
+static void
+au1550_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+
+ for (i=0; i<len; i++)
+ write_data_reg(buf[i]);
+}
+
+static void
+au1550_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+
+ for (i=0; i<len; i++)
+ buf[i] = (u_char)read_data_reg();
+}
+
+static int
+au1550_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != (u_char)read_data_reg())
+ return -EFAULT;
+
+ return 0;
+}
+
+static void au1550_nand_select_chip(struct mtd_info *mtd, int chip)
+{
+ switch(chip) {
+ case -1:
+ /* deassert chip enable */
+ au_writel(au_readl(MEM_STNDCTL) & ~0x20 , MEM_STNDCTL);
+ break;
+ case 0:
+ /* assert (force assert) chip enable */
+ au_writel(au_readl(MEM_STNDCTL) | 0x20 , MEM_STNDCTL);
+ break;
+
+ default:
+ BUG();
+ }
+}
+
+static void au1550_nand_command (struct mtd_info *mtd, unsigned command,
+ int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ write_cmd_reg(readcmd);
+ }
+ write_cmd_reg(command);
+
+ if (column != -1 || page_addr != -1) {
+
+ /* Serially input address */
+ if (column != -1)
+ write_addr_reg(column);
+ if (page_addr != -1) {
+ write_addr_reg((unsigned char) (page_addr & 0xff));
+ write_addr_reg(((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (mtd->size & 0x0c000000)
+ write_addr_reg((unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ }
+
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ break;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ write_cmd_reg(NAND_CMD_STATUS);
+ while (!(read_data_reg() & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ udelay (this->chip_delay);
+ }
+
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
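+
+/*
+ * Illustrative trace (not extra driver code): a READ0 of page P at
+ * column 0 on a 512-byte-page device comes out of the function above
+ * as
+ *
+ *	write_cmd_reg(NAND_CMD_READ0);
+ *	write_addr_reg(0x00);			column
+ *	write_addr_reg(P & 0xff);		row low
+ *	write_addr_reg((P >> 8) & 0xff);	row high
+ *	(a third row byte is emitted for larger devices)
+ *	... then poll dev_ready() before the data is read out ...
+ */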
+
+
+/*
+ * Main initialization routine
+ */
+int __init au1550_init (void)
+{
+ struct nand_chip *this;
+ u16 boot_swapboot = 0; /* default value */
+ u32 mem_time;
+
+ /* Allocate memory for MTD device structure and private data */
+ au1550_mtd = kmalloc (sizeof(struct mtd_info) +
+ sizeof (struct nand_chip), GFP_KERNEL);
+ if (!au1550_mtd) {
+ printk ("Unable to allocate NAND MTD dev structure.\n");
+ return -ENOMEM;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&au1550_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) au1550_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ au1550_mtd->priv = this;
+
+ /* disable interrupts */
+ au_writel(au_readl(MEM_STNDCTL) & ~(1<<8), MEM_STNDCTL);
+
+ /* disable NAND boot */
+ au_writel(au_readl(MEM_STNDCTL) & ~(1<<0), MEM_STNDCTL);
+
+#ifdef CONFIG_MIPS_PB1550
+ /* set gpio206 high */
+ au_writel(au_readl(GPIO2_DIR) & ~(1<<6), GPIO2_DIR);
+
+ boot_swapboot = (au_readl(MEM_STSTAT) & (0x7<<1)) |
+ ((bcsr->status >> 6) & 0x1);
+ switch (boot_swapboot) {
+ case 0:
+ case 2:
+ case 8:
+ case 0xC:
+ case 0xD:
+ /* x16 NAND Flash */
+ nand_width = 0;
+ printk("Pb1550 NAND: 16-bit NAND not supported by MTD\n");
+ break;
+ case 1:
+ case 9:
+ case 3:
+ case 0xE:
+ case 0xF:
+ /* x8 NAND Flash */
+ nand_width = 1;
+ break;
+ default:
+ printk("Pb1550 NAND: bad boot:swap\n");
+ kfree(au1550_mtd);
+ return 1;
+ }
+
+ /* Configure RCE1 - should be done by YAMON */
+ au_writel(0x5 | (nand_width << 22), MEM_STCFG1);
+ au_writel(NAND_TIMING, MEM_STTIME1);
+ mem_time = au_readl(MEM_STTIME1);
+ au_sync();
+
+ /* setup and enable chip select */
+ /* we really need to decode offsets only up till 0x20 */
+ au_writel((1<<28) | (NAND_PHYS_ADDR>>4) |
+ (((NAND_PHYS_ADDR + 0x1000)-1) & (0x3fff<<18)>>18),
+ MEM_STADDR1);
+ au_sync();
+#endif
+
+#ifdef CONFIG_MIPS_DB1550
+ /* Configure RCE1 - should be done by YAMON */
+ au_writel(0x00400005, MEM_STCFG1);
+ au_writel(0x00007774, MEM_STTIME1);
+ au_writel(0x12000FFF, MEM_STADDR1);
+#endif
+
+ p_nand = (u32)ioremap(NAND_PHYS_ADDR, 0x1000);
+
+ /* Set address of hardware control function */
+ this->hwcontrol = au1550_hwcontrol;
+ this->dev_ready = au1550_device_ready;
+ /* 30 us command delay time */
+ this->chip_delay = 30;
+
+ this->cmdfunc = au1550_nand_command;
+ this->select_chip = au1550_nand_select_chip;
+ this->write_byte = au1550_nand_write_byte;
+ this->read_byte = au1550_nand_read_byte;
+ this->write_buf = au1550_nand_write_buf;
+ this->read_buf = au1550_nand_read_buf;
+ this->verify_buf = au1550_nand_verify_buf;
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Set internal data buffer */
+ this->data_buf = data_buf;
+ this->oob_buf = oob_buf;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (au1550_mtd, 1)) {
+ kfree (au1550_mtd);
+ return -ENXIO;
+ }
+
+ /* Register the partitions */
+ add_mtd_partitions(au1550_mtd, partition_info, NUM_PARTITIONS);
+
+ return 0;
+}
+
+module_init(au1550_init);
+
+/*
+ * Clean up routine
+ */
+#ifdef MODULE
+static void __exit au1550_cleanup (void)
+{
+ struct nand_chip *this = (struct nand_chip *) &au1550_mtd[1];
+
+ iounmap ((void *)p_nand);
+
+ /* Unregister partitions */
+ del_mtd_partitions(au1550_mtd);
+
+ /* Unregister the device */
+ del_mtd_device (au1550_mtd);
+
+ /* Free the MTD device structure */
+ kfree (au1550_mtd);
+}
+module_exit(au1550_cleanup);
+#endif
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Board-specific glue layer for NAND flash on Pb1550 board");
--- /dev/null
+/*
+ * drivers/mtd/nand/diskonchip.c
+ *
+ * (C) 2003 Red Hat, Inc.
+ *
+ * Author: David Woodhouse <dwmw2@infradead.org>
+ *
+ * Interface to generic NAND code for M-Systems DiskOnChip devices
+ *
+ * $Id: diskonchip.c,v 1.23 2004/07/13 00:14:35 dbrown Exp $
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <asm/io.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/doc2000.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/mtd/partitions.h>
+#include <linux/mtd/inftl.h>
+
+/* Where to look for the devices? */
+#ifndef CONFIG_MTD_DOCPROBE_ADDRESS
+#define CONFIG_MTD_DOCPROBE_ADDRESS 0
+#endif
+
+static unsigned long __initdata doc_locations[] = {
+#if defined (__alpha__) || defined(__i386__) || defined(__x86_64__)
+#ifdef CONFIG_MTD_DOCPROBE_HIGH
+ 0xfffc8000, 0xfffca000, 0xfffcc000, 0xfffce000,
+ 0xfffd0000, 0xfffd2000, 0xfffd4000, 0xfffd6000,
+ 0xfffd8000, 0xfffda000, 0xfffdc000, 0xfffde000,
+ 0xfffe0000, 0xfffe2000, 0xfffe4000, 0xfffe6000,
+ 0xfffe8000, 0xfffea000, 0xfffec000, 0xfffee000,
+#else /* CONFIG_MTD_DOCPROBE_HIGH */
+ 0xc8000, 0xca000, 0xcc000, 0xce000,
+ 0xd0000, 0xd2000, 0xd4000, 0xd6000,
+ 0xd8000, 0xda000, 0xdc000, 0xde000,
+ 0xe0000, 0xe2000, 0xe4000, 0xe6000,
+ 0xe8000, 0xea000, 0xec000, 0xee000,
+#endif /* CONFIG_MTD_DOCPROBE_HIGH */
+#elif defined(__PPC__)
+ 0xe4000000,
+#elif defined(CONFIG_MOMENCO_OCELOT)
+ 0x2f000000,
+ 0xff000000,
+#elif defined(CONFIG_MOMENCO_OCELOT_G) || defined (CONFIG_MOMENCO_OCELOT_C)
+ 0xff000000,
+#else
+#warning Unknown architecture for DiskOnChip. No default probe locations defined
+#endif
+ 0xffffffff };
+
+static struct mtd_info *doclist = NULL;
+
+struct doc_priv {
+ unsigned long virtadr;
+ unsigned long physadr;
+ u_char ChipID;
+ u_char CDSNControl;
+ int chips_per_floor; /* The number of chips detected on each floor */
+ int curfloor;
+ int curchip;
+ int mh0_page;
+ int mh1_page;
+ struct mtd_info *nextdoc;
+};
+
+/* Max number of eraseblocks to scan (from start of device) for the (I)NFTL
+ MediaHeader. The spec says to just keep going, I think, but that's just
+ silly. */
+#define MAX_MEDIAHEADER_SCAN 8
+
+/* This is the syndrome computed by the HW ecc generator upon reading an empty
+ page, one with all 0xff for data and stored ecc code. */
+static u_char empty_read_syndrome[6] = { 0x26, 0xff, 0x6d, 0x47, 0x73, 0x7a };
+/* This is the ecc value computed by the HW ecc generator upon writing an empty
+ page, one with all 0xff for data. */
+static u_char empty_write_ecc[6] = { 0x4b, 0x00, 0xe2, 0x0e, 0x93, 0xf7 };
+
+#define INFTL_BBT_RESERVED_BLOCKS 4
+
+#define DoC_is_Millennium(doc) ((doc)->ChipID == DOC_ChipID_DocMil)
+#define DoC_is_2000(doc) ((doc)->ChipID == DOC_ChipID_Doc2k)
+
+static void doc200x_hwcontrol(struct mtd_info *mtd, int cmd);
+static void doc200x_select_chip(struct mtd_info *mtd, int chip);
+
+static int debug=0;
+MODULE_PARM(debug, "i");
+
+static int try_dword=1;
+MODULE_PARM(try_dword, "i");
+
+static int no_ecc_failures=0;
+MODULE_PARM(no_ecc_failures, "i");
+
+static int no_autopart=0;
+MODULE_PARM(no_autopart, "i");
+
+#ifdef MTD_NAND_DISKONCHIP_BBTWRITE
+static int inftl_bbt_write=1;
+#else
+static int inftl_bbt_write=0;
+#endif
+MODULE_PARM(inftl_bbt_write, "i");
+
+static unsigned long doc_config_location = CONFIG_MTD_DOCPROBE_ADDRESS;
+MODULE_PARM(doc_config_location, "l");
+MODULE_PARM_DESC(doc_config_location, "Physical memory address at which to probe for DiskOnChip");
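+
+/* Example (the address is hypothetical -- pick one your chipset
+ * actually maps): per the parameter description above, probing can be
+ * restricted to a single known window instead of the whole
+ * doc_locations[] table:
+ *
+ *	modprobe diskonchip doc_config_location=0xd8000
+ */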
+
+static void DoC_Delay(struct doc_priv *doc, unsigned short cycles)
+{
+ volatile char dummy;
+ int i;
+
+ for (i = 0; i < cycles; i++) {
+ if (DoC_is_Millennium(doc))
+ dummy = ReadDOC(doc->virtadr, NOP);
+ else
+ dummy = ReadDOC(doc->virtadr, DOCStatus);
+ }
+
+}
+/* DOC_WaitReady: Wait for RDY line to be asserted by the flash chip */
+static int _DoC_WaitReady(struct doc_priv *doc)
+{
+ unsigned long docptr = doc->virtadr;
+ unsigned long timeo = jiffies + (HZ * 10);
+
+ if(debug) printk("_DoC_WaitReady...\n");
+ /* Out-of-line routine to wait for chip response */
+ while (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
+ if (time_after(jiffies, timeo)) {
+ printk("_DoC_WaitReady timed out.\n");
+ return -EIO;
+ }
+ udelay(1);
+ cond_resched();
+ }
+
+ return 0;
+}
+
+static inline int DoC_WaitReady(struct doc_priv *doc)
+{
+ unsigned long docptr = doc->virtadr;
+ int ret = 0;
+
+ DoC_Delay(doc, 4);
+
+ if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B))
+ /* Call the out-of-line routine to wait */
+ ret = _DoC_WaitReady(doc);
+
+ DoC_Delay(doc, 2);
+ if(debug) printk("DoC_WaitReady OK\n");
+ return ret;
+}
+
+static void doc2000_write_byte(struct mtd_info *mtd, u_char datum)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ if(debug)printk("write_byte %02x\n", datum);
+ WriteDOC(datum, docptr, CDSNSlowIO);
+ WriteDOC(datum, docptr, 2k_CDSN_IO);
+}
+
+static u_char doc2000_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ u_char ret;
+
+ ReadDOC(docptr, CDSNSlowIO);
+ DoC_Delay(doc, 2);
+ ret = ReadDOC(docptr, 2k_CDSN_IO);
+ if (debug) printk("read_byte returns %02x\n", ret);
+ return ret;
+}
+
+static void doc2000_writebuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+ if (debug)printk("writebuf of %d bytes: ", len);
+ for (i=0; i < len; i++) {
+ WriteDOC_(buf[i], docptr, DoC_2k_CDSN_IO + i);
+ if (debug && i < 16)
+ printk("%02x ", buf[i]);
+ }
+ if (debug) printk("\n");
+}
+
+static void doc2000_readbuf(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ if (debug)printk("readbuf of %d bytes: ", len);
+
+ for (i=0; i < len; i++) {
+ buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i);
+ }
+}
+
+static void doc2000_readbuf_dword(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ if (debug) printk("readbuf_dword of %d bytes: ", len);
+
+ if (unlikely((((unsigned long)buf)|len) & 3)) {
+ for (i=0; i < len; i++) {
+ *(uint8_t *)(&buf[i]) = ReadDOC(docptr, 2k_CDSN_IO + i);
+ }
+ } else {
+ for (i=0; i < len; i+=4) {
+ *(uint32_t*)(&buf[i]) = readl(docptr + DoC_2k_CDSN_IO + i);
+ }
+ }
+}
+
+static int doc2000_verifybuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ for (i=0; i < len; i++)
+ if (buf[i] != ReadDOC(docptr, 2k_CDSN_IO))
+ return -EFAULT;
+ return 0;
+}
+
+static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ uint16_t ret;
+
+ doc200x_select_chip(mtd, nr);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_READID);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRCLE);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETALE);
+ this->write_byte(mtd, 0);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRALE);
+
+ ret = this->read_byte(mtd) << 8;
+ ret |= this->read_byte(mtd);
+
+ if (doc->ChipID == DOC_ChipID_Doc2k && try_dword && !nr) {
+ /* First chip probe. See if we get same results by 32-bit access */
+ union {
+ uint32_t dword;
+ uint8_t byte[4];
+ } ident;
+ unsigned long docptr = doc->virtadr;
+
+ doc200x_hwcontrol(mtd, NAND_CTL_SETCLE);
+ doc2000_write_byte(mtd, NAND_CMD_READID);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRCLE);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETALE);
+ doc2000_write_byte(mtd, 0);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRALE);
+
+ ident.dword = readl(docptr + DoC_2k_CDSN_IO);
+ if (((ident.byte[0] << 8) | ident.byte[1]) == ret) {
+ printk(KERN_INFO "DiskOnChip 2000 responds to DWORD access\n");
+ this->read_buf = &doc2000_readbuf_dword;
+ }
+ }
+
+ return ret;
+}
+
+static void __init doc2000_count_chips(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ uint16_t mfrid;
+ int i;
+
+ /* Max 4 chips per floor on DiskOnChip 2000 */
+ doc->chips_per_floor = 4;
+
+ /* Find out what the first chip is */
+ mfrid = doc200x_ident_chip(mtd, 0);
+
+ /* Find how many chips in each floor. */
+ for (i = 1; i < 4; i++) {
+ if (doc200x_ident_chip(mtd, i) != mfrid)
+ break;
+ }
+ doc->chips_per_floor = i;
+ printk(KERN_DEBUG "Detected %d chips per floor.\n", i);
+}
+
+static int doc200x_wait(struct mtd_info *mtd, struct nand_chip *this, int state)
+{
+ struct doc_priv *doc = (void *)this->priv;
+
+ int status;
+
+ DoC_WaitReady(doc);
+ this->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
+ DoC_WaitReady(doc);
+ status = (int)this->read_byte(mtd);
+
+ return status;
+}
+
+static void doc2001_write_byte(struct mtd_info *mtd, u_char datum)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ WriteDOC(datum, docptr, CDSNSlowIO);
+ WriteDOC(datum, docptr, Mil_CDSN_IO);
+ WriteDOC(datum, docptr, WritePipeTerm);
+}
+
+static u_char doc2001_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ //ReadDOC(docptr, CDSNSlowIO);
+ /* 11.4.5 -- delay twice to allow extended length cycle */
+ DoC_Delay(doc, 2);
+ ReadDOC(docptr, ReadPipeInit);
+ //return ReadDOC(docptr, Mil_CDSN_IO);
+ return ReadDOC(docptr, LastDataRead);
+}
+
+static void doc2001_writebuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ for (i=0; i < len; i++)
+ WriteDOC_(buf[i], docptr, DoC_Mil_CDSN_IO + i);
+ /* Terminate write pipeline */
+ WriteDOC(0x00, docptr, WritePipeTerm);
+}
+
+static void doc2001_readbuf(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ /* Start read pipeline */
+ ReadDOC(docptr, ReadPipeInit);
+
+ for (i=0; i < len-1; i++)
+ buf[i] = ReadDOC(docptr, Mil_CDSN_IO);
+
+ /* Terminate read pipeline */
+ buf[i] = ReadDOC(docptr, LastDataRead);
+}
+
+static int doc2001_verifybuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+
+ /* Start read pipeline */
+ ReadDOC(docptr, ReadPipeInit);
+
+ for (i=0; i < len-1; i++)
+ if (buf[i] != ReadDOC(docptr, Mil_CDSN_IO)) {
+ /* Flush the read pipeline before bailing out */
+ ReadDOC(docptr, LastDataRead);
+ return -EFAULT;
+ }
+ if (buf[i] != ReadDOC(docptr, LastDataRead))
+ return -EFAULT;
+ return 0;
+}
+
+static void doc200x_select_chip(struct mtd_info *mtd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int floor = 0;
+
+ /* 11.4.4 -- deassert CE before changing chip */
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRNCE);
+
+ if(debug)printk("select chip (%d)\n", chip);
+
+ if (chip == -1)
+ return;
+
+ floor = chip / doc->chips_per_floor;
+ chip -= (floor * doc->chips_per_floor);
+
+ WriteDOC(floor, docptr, FloorSelect);
+ WriteDOC(chip, docptr, CDSNDeviceSelect);
+
+ doc200x_hwcontrol(mtd, NAND_CTL_SETNCE);
+
+ doc->curchip = chip;
+ doc->curfloor = floor;
+}
+
+static void doc200x_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ switch(cmd) {
+ case NAND_CTL_SETNCE:
+ doc->CDSNControl |= CDSN_CTRL_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ doc->CDSNControl &= ~CDSN_CTRL_CE;
+ break;
+ case NAND_CTL_SETCLE:
+ doc->CDSNControl |= CDSN_CTRL_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ doc->CDSNControl &= ~CDSN_CTRL_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ doc->CDSNControl |= CDSN_CTRL_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ doc->CDSNControl &= ~CDSN_CTRL_ALE;
+ break;
+ case NAND_CTL_SETWP:
+ doc->CDSNControl |= CDSN_CTRL_WP;
+ break;
+ case NAND_CTL_CLRWP:
+ doc->CDSNControl &= ~CDSN_CTRL_WP;
+ break;
+ }
+ if (debug)printk("hwcontrol(%d): %02x\n", cmd, doc->CDSNControl);
+ WriteDOC(doc->CDSNControl, docptr, CDSNControl);
+ /* 11.4.3 -- 4 NOPs after CSDNControl write */
+ DoC_Delay(doc, 4);
+}
+
+static int doc200x_dev_ready(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ /* 11.4.2 -- must NOP four times before checking FR/B# */
+ DoC_Delay(doc, 4);
+ if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
+ if (debug)
+ printk("not ready\n");
+ return 0;
+ }
+ /* 11.4.2 -- Must NOP twice if it's ready */
+ DoC_Delay(doc, 2);
+ if (debug) printk("was ready\n");
+ return 1;
+}
+
+static int doc200x_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip)
+{
+ /* This is our last resort if we couldn't find or create a BBT. Just
+ pretend all blocks are good. */
+ return 0;
+}
+
+static void doc200x_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+
+ /* Prime the ECC engine */
+ switch(mode) {
+ case NAND_ECC_READ:
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN, docptr, ECCConf);
+ break;
+ case NAND_ECC_WRITE:
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
+ break;
+ }
+}
+
+/* This code is only called on write */
+static int doc200x_calculate_ecc(struct mtd_info *mtd, const u_char *dat,
+ unsigned char *ecc_code)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ int i;
+ int emptymatch = 1;
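+ /* emptymatch stays set only while every syndrome byte read back
+ matches the ECC of an all-0xff page (see the disabled check below) */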
+
+ /* flush the pipeline */
+ if (DoC_is_2000(doc)) {
+ WriteDOC(doc->CDSNControl & ~CDSN_CTRL_FLASH_IO, docptr, CDSNControl);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(doc->CDSNControl, docptr, CDSNControl);
+ } else {
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ }
+
+ for (i = 0; i < 6; i++) {
+ ecc_code[i] = ReadDOC_(docptr, DoC_ECCSyndrome0 + i);
+ if (ecc_code[i] != empty_write_ecc[i])
+ emptymatch = 0;
+ }
+ WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
+#if 0
+ /* If emptymatch=1, we might have an all-0xff data buffer. Check. */
+ if (emptymatch) {
+ /* Note: this somewhat expensive test should not be triggered
+ often. It could be optimized away by examining the data in
+ the writebuf routine, and remembering the result. */
+ for (i = 0; i < 512; i++) {
+ if (dat[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, we do have an all-0xff data buffer.
+ Return all-0xff ecc value instead of the computed one, so
+ it'll look just like a freshly-erased page. */
+ if (emptymatch) memset(ecc_code, 0xff, 6);
+#endif
+ return 0;
+}
+
+static int doc200x_correct_data(struct mtd_info *mtd, u_char *dat, u_char *read_ecc, u_char *calc_ecc)
+{
+ int i, ret = 0;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned long docptr = doc->virtadr;
+ volatile u_char dummy;
+ int emptymatch = 1;
+
+ /* flush the pipeline */
+ if (DoC_is_2000(doc)) {
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ } else {
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
+ }
+
+ /* Error occurred? */
+ if (dummy & 0x80) {
+ for (i = 0; i < 6; i++) {
+ calc_ecc[i] = ReadDOC_(docptr, DoC_ECCSyndrome0 + i);
+ if (calc_ecc[i] != empty_read_syndrome[i])
+ emptymatch = 0;
+ }
+ /* If emptymatch=1, the read syndrome is consistent with an
+ all-0xff data and stored ecc block. Check the stored ecc. */
+ if (emptymatch) {
+ for (i = 0; i < 6; i++) {
+ if (read_ecc[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, check the data block. */
+ if (emptymatch) {
+ /* Note: this somewhat expensive test should not be triggered
+ often. It could be optimized away by examining the data in
+ the readbuf routine, and remembering the result. */
+ for (i = 0; i < 512; i++) {
+ if (dat[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, this is almost certainly a freshly-
+ erased block, in which case the ECC will not come out right.
+ We'll suppress the error and tell the caller everything's
+ OK. Because it is. */
+ if (!emptymatch) ret = doc_decode_ecc (dat, calc_ecc);
+ if (ret > 0)
+ printk(KERN_ERR "doc200x_correct_data corrected %d errors\n", ret);
+ }
+ WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
+ if (no_ecc_failures && (ret == -1)) {
+ printk(KERN_ERR "suppressing ECC failure\n");
+ ret = 0;
+ }
+ return ret;
+}
+
+//u_char mydatabuf[528];
+
+static struct nand_oobinfo doc200x_oobinfo = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 6,
+ .eccpos = {0, 1, 2, 3, 4, 5},
+ .oobfree = { {8, 8} }
+};
+
+/* Find the (I)NFTL Media Header, and optionally also the mirror media header.
+ On successful return, buf will contain a copy of the media header for
+ further processing. id is the string to scan for, and will presumably be
+ either "ANAND" or "BNAND". If findmirror=1, also look for the mirror media
+ header. The page #s of the found media headers are placed in mh0_page and
+ mh1_page in the DOC private structure. */
+static int __init find_media_headers(struct mtd_info *mtd, u_char *buf,
+ const char *id, int findmirror)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ int offs, end = (MAX_MEDIAHEADER_SCAN << this->phys_erase_shift);
+ int ret, retlen;
+
+ end = min(end, mtd->size); // paranoia
+ for (offs = 0; offs < end; offs += mtd->erasesize) {
+ ret = mtd->read(mtd, offs, mtd->oobblock, &retlen, buf);
+ if (retlen != mtd->oobblock) continue;
+ if (ret) {
+ printk(KERN_WARNING "ECC error scanning DOC at 0x%x\n",
+ offs);
+ }
+ if (memcmp(buf, id, 6)) continue;
+ printk(KERN_INFO "Found DiskOnChip %s Media Header at 0x%x\n", id, offs);
+ if (doc->mh0_page == -1) {
+ doc->mh0_page = offs >> this->page_shift;
+ if (!findmirror) return 1;
+ continue;
+ }
+ doc->mh1_page = offs >> this->page_shift;
+ return 2;
+ }
+ if (doc->mh0_page == -1) {
+ printk(KERN_WARNING "DiskOnChip %s Media Header not found.\n", id);
+ return 0;
+ }
+ /* Only one mediaheader was found. We want buf to contain a
+ mediaheader on return, so we'll have to re-read the one we found. */
+ offs = doc->mh0_page << this->page_shift;
+ ret = mtd->read(mtd, offs, mtd->oobblock, &retlen, buf);
+ if (retlen != mtd->oobblock) {
+ /* Insanity. Give up. */
+ printk(KERN_ERR "Read DiskOnChip Media Header once, but can't reread it???\n");
+ return 0;
+ }
+ return 1;
+}
+
+static inline int __init nftl_partscan(struct mtd_info *mtd,
+ struct mtd_partition *parts)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ u_char *buf = this->data_buf;
+ struct NFTLMediaHeader *mh = (struct NFTLMediaHeader *) buf;
+ const int psize = 1 << this->page_shift;
+ int blocks, maxblocks;
+ int offs, numheaders;
+
+ if (!(numheaders=find_media_headers(mtd, buf, "ANAND", 1))) return 0;
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " DataOrgID = %s\n"
+ " NumEraseUnits = %d\n"
+ " FirstPhysicalEUN = %d\n"
+ " FormattedSize = %d\n"
+ " UnitSizeFactor = %d\n",
+ mh->DataOrgID, mh->NumEraseUnits,
+ mh->FirstPhysicalEUN, mh->FormattedSize,
+ mh->UnitSizeFactor);
+//#endif
+
+ blocks = mtd->size >> this->phys_erase_shift;
+ maxblocks = min(32768, mtd->erasesize - psize);
+
+ if (mh->UnitSizeFactor == 0x00) {
+ /* Auto-determine UnitSizeFactor. The constraints are:
+ - There can be at most 32768 virtual blocks.
+ - There can be at most (virtual block size - page size)
+ virtual blocks (because MediaHeader+BBT must fit in 1).
+ Each UnitSizeFactor decrement below doubles the virtual
+ block size until both constraints are satisfied.
+ */
+ mh->UnitSizeFactor = 0xff;
+ while (blocks > maxblocks) {
+ blocks >>= 1;
+ maxblocks = min(32768, (maxblocks << 1) + psize);
+ mh->UnitSizeFactor--;
+ }
+ printk(KERN_WARNING "UnitSizeFactor=0x00 detected. Correct value is assumed to be 0x%02x.\n", mh->UnitSizeFactor);
+ }
+
+ /* NOTE: The lines below modify internal variables of the NAND and MTD
+ layers; variables which have already been configured by nand_scan.
+ Unfortunately, we didn't know before this point what these values
+ should be. Thus, this code is somewhat dependent on the exact
+ implementation of the NAND layer. */
+ if (mh->UnitSizeFactor != 0xff) {
+ this->bbt_erase_shift += (0xff - mh->UnitSizeFactor);
+ mtd->erasesize <<= (0xff - mh->UnitSizeFactor);
+ printk(KERN_INFO "Setting virtual erase size to %d\n", mtd->erasesize);
+ blocks = mtd->size >> this->bbt_erase_shift;
+ maxblocks = min(32768, mtd->erasesize - psize);
+ }
+
+ if (blocks > maxblocks) {
+ printk(KERN_ERR "UnitSizeFactor of 0x%02x is inconsistent with device size. Aborting.\n", mh->UnitSizeFactor);
+ return 0;
+ }
+
+ /* Skip past the media headers. */
+ offs = max(doc->mh0_page, doc->mh1_page);
+ offs <<= this->page_shift;
+ offs += mtd->erasesize;
+
+ //parts[0].name = " DiskOnChip Boot / Media Header partition";
+ //parts[0].offset = 0;
+ //parts[0].size = offs;
+
+ parts[0].name = " DiskOnChip BDTL partition";
+ parts[0].offset = offs;
+ parts[0].size = (mh->NumEraseUnits - numheaders) << this->bbt_erase_shift;
+
+ offs += parts[0].size;
+ if (offs < mtd->size) {
+ parts[1].name = " DiskOnChip Remainder partition";
+ parts[1].offset = offs;
+ parts[1].size = mtd->size - offs;
+ return 2;
+ }
+ return 1;
+}
+
+/* This is a stripped-down copy of the code in inftlmount.c */
+static inline int __init inftl_partscan(struct mtd_info *mtd,
+ struct mtd_partition *parts)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ u_char *buf = this->data_buf;
+ struct INFTLMediaHeader *mh = (struct INFTLMediaHeader *) buf;
+ struct INFTLPartition *ip;
+ int numparts = 0;
+ int blocks;
+ int vshift, lastvunit = 0;
+ int i;
+ int end = mtd->size;
+
+ if (inftl_bbt_write)
+ end -= (INFTL_BBT_RESERVED_BLOCKS << this->phys_erase_shift);
+
+ if (!find_media_headers(mtd, buf, "BNAND", 0)) return 0;
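+ /* The INFTL mirror media header lives 4096 bytes after the primary one */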
+ doc->mh1_page = doc->mh0_page + (4096 >> this->page_shift);
+
+ mh->NoOfBootImageBlocks = le32_to_cpu(mh->NoOfBootImageBlocks);
+ mh->NoOfBinaryPartitions = le32_to_cpu(mh->NoOfBinaryPartitions);
+ mh->NoOfBDTLPartitions = le32_to_cpu(mh->NoOfBDTLPartitions);
+ mh->BlockMultiplierBits = le32_to_cpu(mh->BlockMultiplierBits);
+ mh->FormatFlags = le32_to_cpu(mh->FormatFlags);
+ mh->PercentUsed = le32_to_cpu(mh->PercentUsed);
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " bootRecordID = %s\n"
+ " NoOfBootImageBlocks = %d\n"
+ " NoOfBinaryPartitions = %d\n"
+ " NoOfBDTLPartitions = %d\n"
+ " BlockMultiplerBits = %d\n"
+ " FormatFlgs = %d\n"
+ " OsakVersion = 0x%x\n"
+ " PercentUsed = %d\n",
+ mh->bootRecordID, mh->NoOfBootImageBlocks,
+ mh->NoOfBinaryPartitions,
+ mh->NoOfBDTLPartitions,
+ mh->BlockMultiplierBits, mh->FormatFlags,
+ mh->OsakVersion, mh->PercentUsed);
+//#endif
+
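+ /* vshift is log2 of the virtual erase unit size: the physical
+ erase block size scaled up by the block multiplier */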
+ vshift = this->phys_erase_shift + mh->BlockMultiplierBits;
+
+ blocks = mtd->size >> vshift;
+ if (blocks > 32768) {
+ printk(KERN_ERR "BlockMultiplierBits=%d is inconsistent with device size. Aborting.\n", mh->BlockMultiplierBits);
+ return 0;
+ }
+
+ blocks = doc->chips_per_floor << (this->chip_shift - this->phys_erase_shift);
+ if (inftl_bbt_write && (blocks > mtd->erasesize)) {
+ printk(KERN_ERR "Writeable BBTs spanning more than one erase block are not yet supported. FIX ME!\n");
+ return 0;
+ }
+
+ /* Scan the partitions */
+ for (i = 0; (i < 4); i++) {
+ ip = &(mh->Partitions[i]);
+ ip->virtualUnits = le32_to_cpu(ip->virtualUnits);
+ ip->firstUnit = le32_to_cpu(ip->firstUnit);
+ ip->lastUnit = le32_to_cpu(ip->lastUnit);
+ ip->flags = le32_to_cpu(ip->flags);
+ ip->spareUnits = le32_to_cpu(ip->spareUnits);
+ ip->Reserved0 = le32_to_cpu(ip->Reserved0);
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " PARTITION[%d] ->\n"
+ " virtualUnits = %d\n"
+ " firstUnit = %d\n"
+ " lastUnit = %d\n"
+ " flags = 0x%x\n"
+ " spareUnits = %d\n",
+ i, ip->virtualUnits, ip->firstUnit,
+ ip->lastUnit, ip->flags,
+ ip->spareUnits);
+//#endif
+
+/*
+ if ((i == 0) && (ip->firstUnit > 0)) {
+ parts[0].name = " DiskOnChip IPL / Media Header partition";
+ parts[0].offset = 0;
+ parts[0].size = mtd->erasesize * ip->firstUnit;
+ numparts = 1;
+ }
+*/
+
+ if (ip->flags & INFTL_BINARY)
+ parts[numparts].name = " DiskOnChip BDK partition";
+ else
+ parts[numparts].name = " DiskOnChip BDTL partition";
+ parts[numparts].offset = ip->firstUnit << vshift;
+ parts[numparts].size = (1 + ip->lastUnit - ip->firstUnit) << vshift;
+ numparts++;
+ if (ip->lastUnit > lastvunit) lastvunit = ip->lastUnit;
+ if (ip->flags & INFTL_LAST) break;
+ }
+ lastvunit++;
+ if ((lastvunit << vshift) < end) {
+ parts[numparts].name = " DiskOnChip Remainder partition";
+ parts[numparts].offset = lastvunit << vshift;
+ parts[numparts].size = end - parts[numparts].offset;
+ numparts++;
+ }
+ return numparts;
+}
+
+static int __init nftl_scan_bbt(struct mtd_info *mtd)
+{
+ int ret, numparts;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ struct mtd_partition parts[2];
+
+ memset((char *) parts, 0, sizeof(parts));
+ /* On NFTL, we have to find the media headers before we can read the
+ BBTs, since they're stored in the media header eraseblocks. */
+ numparts = nftl_partscan(mtd, parts);
+ if (!numparts) return -EIO;
+ this->bbt_td->options = NAND_BBT_ABSPAGE | NAND_BBT_8BIT |
+ NAND_BBT_SAVECONTENT | NAND_BBT_WRITE |
+ NAND_BBT_VERSION;
+ this->bbt_td->veroffs = 7;
+ this->bbt_td->pages[0] = doc->mh0_page + 1;
+ if (doc->mh1_page != -1) {
+ this->bbt_md->options = NAND_BBT_ABSPAGE | NAND_BBT_8BIT |
+ NAND_BBT_SAVECONTENT | NAND_BBT_WRITE |
+ NAND_BBT_VERSION;
+ this->bbt_md->veroffs = 7;
+ this->bbt_md->pages[0] = doc->mh1_page + 1;
+ } else {
+ this->bbt_md = NULL;
+ }
+
+ /* It's safe to set bd=NULL below because NAND_BBT_CREATE is not set.
+ At least as nand_bbt.c is currently written. */
+ if ((ret = nand_scan_bbt(mtd, NULL)))
+ return ret;
+ add_mtd_device(mtd);
+#if defined(CONFIG_MTD_PARTITIONS) || defined(CONFIG_MTD_PARTITIONS_MODULE)
+ if (!no_autopart) add_mtd_partitions(mtd, parts, numparts);
+#endif
+ return 0;
+}
+
+static int __init inftl_scan_bbt(struct mtd_info *mtd)
+{
+ int ret, numparts;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ struct mtd_partition parts[5];
+
+ if (this->numchips > doc->chips_per_floor) {
+ printk(KERN_ERR "Multi-floor INFTL devices not yet supported.\n");
+ return -EIO;
+ }
+
+ if (mtd->size == (8<<20)) {
+#if 0
+/* This doesn't seem to work for me. I get ECC errors on every page. */
+ /* The Millennium 8MiB is actually an NFTL device! */
+ mtd->name = "DiskOnChip Millennium 8MiB (NFTL)";
+ return nftl_scan_bbt(mtd);
+#endif
+ printk(KERN_ERR "DiskOnChip Millennium 8MiB is not supported.\n");
+ return -EIO;
+ }
+
+ this->bbt_td->options = NAND_BBT_LASTBLOCK | NAND_BBT_8BIT |
+ NAND_BBT_VERSION;
+ if (inftl_bbt_write)
+ this->bbt_td->options |= NAND_BBT_WRITE;
+ this->bbt_td->offs = 8;
+ this->bbt_td->len = 8;
+ this->bbt_td->veroffs = 7;
+ this->bbt_td->maxblocks = INFTL_BBT_RESERVED_BLOCKS;
+ this->bbt_td->reserved_block_code = 0x01;
+ this->bbt_td->pattern = "MSYS_BBT";
+
+ this->bbt_md->options = NAND_BBT_LASTBLOCK | NAND_BBT_8BIT |
+ NAND_BBT_VERSION;
+ if (inftl_bbt_write)
+ this->bbt_md->options |= NAND_BBT_WRITE;
+ this->bbt_md->offs = 8;
+ this->bbt_md->len = 8;
+ this->bbt_md->veroffs = 7;
+ this->bbt_md->maxblocks = INFTL_BBT_RESERVED_BLOCKS;
+ this->bbt_md->reserved_block_code = 0x01;
+ this->bbt_md->pattern = "TBB_SYSM";
+
+ /* It's safe to set bd=NULL below because NAND_BBT_CREATE is not set.
+ At least as nand_bbt.c is currently written. */
+ if ((ret = nand_scan_bbt(mtd, NULL)))
+ return ret;
+ memset((char *) parts, 0, sizeof(parts));
+ numparts = inftl_partscan(mtd, parts);
+ /* At least for now, require the INFTL Media Header. We could probably
+ do without it for non-INFTL use, since all it gives us is
+ autopartitioning, but I want to give it more thought. */
+ if (!numparts) return -EIO;
+ add_mtd_device(mtd);
+#if defined(CONFIG_MTD_PARTITIONS) || defined(CONFIG_MTD_PARTITIONS_MODULE)
+ if (!no_autopart) add_mtd_partitions(mtd, parts, numparts);
+#endif
+ return 0;
+}
+
+static inline int __init doc2000_init(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+
+ this->write_byte = doc2000_write_byte;
+ this->read_byte = doc2000_read_byte;
+ this->write_buf = doc2000_writebuf;
+ this->read_buf = doc2000_readbuf;
+ this->verify_buf = doc2000_verifybuf;
+ this->scan_bbt = nftl_scan_bbt;
+
+ doc->CDSNControl = CDSN_CTRL_FLASH_IO | CDSN_CTRL_ECC_IO;
+ doc2000_count_chips(mtd);
+ mtd->name = "DiskOnChip 2000 (NFTL Model)";
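+ /* Tell nand_scan to probe up to four chips on each floor */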
+ return (4 * doc->chips_per_floor);
+}
+
+static inline int __init doc2001_init(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+
+ this->write_byte = doc2001_write_byte;
+ this->read_byte = doc2001_read_byte;
+ this->write_buf = doc2001_writebuf;
+ this->read_buf = doc2001_readbuf;
+ this->verify_buf = doc2001_verifybuf;
+ this->scan_bbt = inftl_scan_bbt;
+
+ ReadDOC(doc->virtadr, ChipID);
+ ReadDOC(doc->virtadr, ChipID);
+ ReadDOC(doc->virtadr, ChipID);
+ if (ReadDOC(doc->virtadr, ChipID) != DOC_ChipID_DocMil) {
+ /* It's not a Millennium; it's one of the newer
+ DiskOnChip 2000 units with a similar ASIC.
+ Treat it like a Millennium, except that it
+ can have multiple chips. */
+ doc2000_count_chips(mtd);
+ mtd->name = "DiskOnChip 2000 (INFTL Model)";
+ return (4 * doc->chips_per_floor);
+ } else {
+ /* Bog-standard Millennium */
+ doc->chips_per_floor = 1;
+ mtd->name = "DiskOnChip Millennium";
+ return 1;
+ }
+}
+
+static inline int __init doc_probe(unsigned long physadr)
+{
+ unsigned char ChipID;
+ struct mtd_info *mtd;
+ struct nand_chip *nand;
+ struct doc_priv *doc;
+ unsigned long virtadr;
+ unsigned char save_control;
+ unsigned char tmp, tmpb, tmpc;
+ int reg, len, numchips;
+ int ret = 0;
+
+ virtadr = (unsigned long)ioremap(physadr, DOC_IOREMAP_LEN);
+ if (!virtadr) {
+ printk(KERN_ERR "Diskonchip ioremap failed: 0x%x bytes at 0x%lx\n", DOC_IOREMAP_LEN, physadr);
+ return -EIO;
+ }
+
+ /* It's not possible to cleanly detect the DiskOnChip - the
+ * bootup procedure will put the device into reset mode, and
+ * it's not possible to talk to it without actually writing
+ * to the DOCControl register. So we store the current contents
+ * of the DOCControl register's location, in case we later decide
+ * that it's not a DiskOnChip, and want to put it back how we
+ * found it.
+ */
+ save_control = ReadDOC(virtadr, DOCControl);
+
+ /* Reset the DiskOnChip ASIC */
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_RESET,
+ virtadr, DOCControl);
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_RESET,
+ virtadr, DOCControl);
+
+ /* Enable the DiskOnChip ASIC */
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_NORMAL,
+ virtadr, DOCControl);
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_NORMAL,
+ virtadr, DOCControl);
+
+ ChipID = ReadDOC(virtadr, ChipID);
+
+ switch(ChipID) {
+ case DOC_ChipID_Doc2k:
+ reg = DoC_2k_ECCStatus;
+ break;
+ case DOC_ChipID_DocMil:
+ reg = DoC_ECCConf;
+ break;
+ default:
+ ret = -ENODEV;
+ goto notfound;
+ }
+ /* Check the TOGGLE bit in the ECC register */
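+ /* The toggle bit flips on every read, so two consecutive reads
+ must differ and the first and third reads must match */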
+ tmp = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ tmpb = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ tmpc = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ if ((tmp == tmpb) || (tmp != tmpc)) {
+ printk(KERN_WARNING "Possible DiskOnChip at 0x%lx failed TOGGLE test, dropping.\n", physadr);
+ ret = -ENODEV;
+ goto notfound;
+ }
+
+ for (mtd = doclist; mtd; mtd = doc->nextdoc) {
+ unsigned char oldval, newval;
+
+ nand = mtd->priv;
+ doc = (void *)nand->priv;
+ /* Use the alias resolution register to determine if this is
+ in fact the same DOC aliased to a new address. If writes
+ to one chip's alias resolution register change the value on
+ the other chip, they're the same chip. */
+ oldval = ReadDOC(doc->virtadr, AliasResolution);
+ newval = ReadDOC(virtadr, AliasResolution);
+ if (oldval != newval)
+ continue;
+ WriteDOC(~newval, virtadr, AliasResolution);
+ oldval = ReadDOC(doc->virtadr, AliasResolution);
+ WriteDOC(newval, virtadr, AliasResolution); // restore it
+ newval = ~newval;
+ if (oldval == newval) {
+ //printk(KERN_DEBUG "Found alias of DOC at 0x%lx to 0x%lx\n", doc->physadr, physadr);
+ goto notfound;
+ }
+ }
+
+ printk(KERN_NOTICE "DiskOnChip found at 0x%lx\n", physadr);
+
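+ /* Allocate one contiguous block holding the mtd_info, nand_chip,
+ doc_priv and the two BBT descriptors; the pointers below carve
+ it up in that order. */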
+ len = sizeof(struct mtd_info) +
+ sizeof(struct nand_chip) +
+ sizeof(struct doc_priv) +
+ (2 * sizeof(struct nand_bbt_descr));
+ mtd = kmalloc(len, GFP_KERNEL);
+ if (!mtd) {
+ printk(KERN_ERR "DiskOnChip kmalloc (%d bytes) failed!\n", len);
+ ret = -ENOMEM;
+ goto fail;
+ }
+ memset(mtd, 0, len);
+
+ nand = (struct nand_chip *) (mtd + 1);
+ doc = (struct doc_priv *) (nand + 1);
+ nand->bbt_td = (struct nand_bbt_descr *) (doc + 1);
+ nand->bbt_md = nand->bbt_td + 1;
+
+ mtd->priv = (void *) nand;
+ mtd->owner = THIS_MODULE;
+
+ nand->priv = (void *) doc;
+ nand->select_chip = doc200x_select_chip;
+ nand->hwcontrol = doc200x_hwcontrol;
+ nand->dev_ready = doc200x_dev_ready;
+ nand->waitfunc = doc200x_wait;
+ nand->block_bad = doc200x_block_bad;
+ nand->enable_hwecc = doc200x_enable_hwecc;
+ nand->calculate_ecc = doc200x_calculate_ecc;
+ nand->correct_data = doc200x_correct_data;
+ //nand->data_buf
+ nand->autooob = &doc200x_oobinfo;
+ nand->eccmode = NAND_ECC_HW6_512;
+ nand->options = NAND_USE_FLASH_BBT | NAND_HWECC_SYNDROME;
+
+ doc->physadr = physadr;
+ doc->virtadr = virtadr;
+ doc->ChipID = ChipID;
+ doc->curfloor = -1;
+ doc->curchip = -1;
+ doc->mh0_page = -1;
+ doc->mh1_page = -1;
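+ /* Chain onto the front of the driver-global list; doclist itself
+ is only updated once the probe succeeds below. */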
+ doc->nextdoc = doclist;
+
+ if (ChipID == DOC_ChipID_Doc2k)
+ numchips = doc2000_init(mtd);
+ else
+ numchips = doc2001_init(mtd);
+
+ if ((ret = nand_scan(mtd, numchips))) {
+ /* DBB note: I believe nand_release is necessary here, as
+ buffers may have been allocated in nand_base. Check with
+ Thomas. FIX ME! */
+ /* nand_release will call del_mtd_device, but we haven't yet
+ added it. This is handled without incident by
+ del_mtd_device, as far as I can tell. */
+ nand_release(mtd);
+ kfree(mtd);
+ goto fail;
+ }
+
+ /* Success! */
+ doclist = mtd;
+ return 0;
+
+notfound:
+ /* Put back the contents of the DOCControl register, in case it's not
+ actually a DiskOnChip. */
+ WriteDOC(save_control, virtadr, DOCControl);
+fail:
+ iounmap((void *)virtadr);
+ return ret;
+}
+
+int __init init_nanddoc(void)
+{
+ int i;
+
+ if (doc_config_location) {
+ printk(KERN_INFO "Using configured DiskOnChip probe address 0x%lx\n", doc_config_location);
+ return doc_probe(doc_config_location);
+ } else {
+ for (i=0; (doc_locations[i] != 0xffffffff); i++) {
+ doc_probe(doc_locations[i]);
+ }
+ }
+ /* No banner message any more. Print a message if no DiskOnChip
+ found, so the user knows we at least tried. */
+ if (!doclist) {
+ printk(KERN_INFO "No valid DiskOnChip devices found\n");
+ return -ENODEV;
+ }
+ return 0;
+}
+
+void __exit cleanup_nanddoc(void)
+{
+ struct mtd_info *mtd, *nextmtd;
+ struct nand_chip *nand;
+ struct doc_priv *doc;
+
+ for (mtd = doclist; mtd; mtd = nextmtd) {
+ nand = mtd->priv;
+ doc = (void *)nand->priv;
+
+ nextmtd = doc->nextdoc;
+ nand_release(mtd);
+ iounmap((void *)doc->virtadr);
+ kfree(mtd);
+ }
+}
+
+module_init(init_nanddoc);
+module_exit(cleanup_nanddoc);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>");
+MODULE_DESCRIPTION("M-Systems DiskOnChip 2000 and Millennium device driver");
--- /dev/null
+/*
+ * drivers/mtd/nand.c
+ *
+ * Overview:
+ * This is the generic MTD driver for NAND flash devices. It should be
+ * capable of working with almost all NAND chips currently available.
+ * Basic support for AG-AND chips is provided.
+ *
+ * Additional technical information is available on
+ * http://www.linux-mtd.infradead.org/tech/nand.html
+ *
+ * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
+ * 2002 Thomas Gleixner (tglx@linutronix.de)
+ *
+ * 02-08-2004 tglx: support for strange chips, which cannot auto increment
+ * pages on read / read_oob
+ *
+ * 03-17-2004 tglx: Check ready before auto increment check. Simon Bayes
+ * pointed this out, as he marked an auto increment capable chip
+ * as NOAUTOINCR in the board driver.
+ * Make reads over block boundaries work too
+ *
+ * 04-14-2004 tglx: first working version for 2k page size chips
+ *
+ * 05-19-2004 tglx: Basic support for Renesas AG-AND chips
+ *
+ * Credits:
+ * David Woodhouse for adding multichip support
+ *
+ * Aleph One Ltd. and Toby Churchill Ltd. for supporting the
+ * rework for 2K page size chips
+ *
+ * TODO:
+ * Enable cached programming for 2k page size chips
+ * Check, if mtd->ecctype should be set to MTD_ECC_HW
+ * if we have HW ecc support.
+ * The AG-AND chips have nice features for speed improvement,
+ * which are not supported yet. Read / program 4 pages in one go.
+ *
+ * $Id: nand_base.c,v 1.113 2004/07/14 16:31:31 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <asm/io.h>
+
+#if defined(CONFIG_MTD_PARTITIONS) || defined(CONFIG_MTD_PARTITIONS_MODULE)
+#include <linux/mtd/partitions.h>
+#endif
+
+/* Define default oob placement schemes for large and small page devices */
+static struct nand_oobinfo nand_oob_8 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 3,
+ .eccpos = {0, 1, 2},
+ .oobfree = { {3, 2}, {6, 2} }
+};
+
+static struct nand_oobinfo nand_oob_16 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 6,
+ .eccpos = {0, 1, 2, 3, 6, 7},
+ .oobfree = { {8, 8} }
+};
+
+static struct nand_oobinfo nand_oob_64 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 24,
+ .eccpos = {
+ 40, 41, 42, 43, 44, 45, 46, 47,
+ 48, 49, 50, 51, 52, 53, 54, 55,
+ 56, 57, 58, 59, 60, 61, 62, 63},
+ .oobfree = { {2, 38} }
+};
+
+/* This is used for padding purposes in nand_write_oob */
+static u_char ffchars[] = {
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+};
+
+/*
+ * NAND low-level MTD interface functions
+ */
+static void nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len);
+static void nand_read_buf(struct mtd_info *mtd, u_char *buf, int len);
+static int nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len);
+
+static int nand_read (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf);
+static int nand_read_ecc (struct mtd_info *mtd, loff_t from, size_t len,
+ size_t * retlen, u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel);
+static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf);
+static int nand_write (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf);
+static int nand_write_ecc (struct mtd_info *mtd, loff_t to, size_t len,
+ size_t * retlen, const u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel);
+static int nand_write_oob (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char *buf);
+static int nand_writev (struct mtd_info *mtd, const struct kvec *vecs,
+ unsigned long count, loff_t to, size_t * retlen);
+static int nand_writev_ecc (struct mtd_info *mtd, const struct kvec *vecs,
+ unsigned long count, loff_t to, size_t * retlen, u_char *eccbuf, struct nand_oobinfo *oobsel);
+static int nand_erase (struct mtd_info *mtd, struct erase_info *instr);
+static void nand_sync (struct mtd_info *mtd);
+
+/* Some internal functions */
+static int nand_write_page (struct mtd_info *mtd, struct nand_chip *this, int page, u_char *oob_buf,
+ struct nand_oobinfo *oobsel, int mode);
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+static int nand_verify_pages (struct mtd_info *mtd, struct nand_chip *this, int page, int numpages,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int chipnr, int oobmode);
+#else
+#define nand_verify_pages(...) (0)
+#endif
+
+static void nand_get_chip (struct nand_chip *this, struct mtd_info *mtd, int new_state);
+
+/**
+ * nand_release_chip - [GENERIC] release chip
+ * @mtd: MTD device structure
+ *
+ * Deselect, release chip lock and wake up anyone waiting on the device
+ */
+static void nand_release_chip (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* De-select the NAND device */
+ this->select_chip(mtd, -1);
+ /* Release the chip */
+ spin_lock_bh (&this->chip_lock);
+ this->state = FL_READY;
+ wake_up (&this->wq);
+ spin_unlock_bh (&this->chip_lock);
+}
+
+/**
+ * nand_read_byte - [DEFAULT] read one byte from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 8bit buswidth
+ */
+static u_char nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return readb(this->IO_ADDR_R);
+}
+
+/**
+ * nand_write_byte - [DEFAULT] write one byte to the chip
+ * @mtd: MTD device structure
+ * @byte: data byte to write
+ *
+ * Default write function for 8bit buswidth
+ */
+static void nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writeb(byte, this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_byte16 - [DEFAULT] read one byte endianness aware from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 16bit buswidth with
+ * endianness conversion
+ */
+static u_char nand_read_byte16(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return (u_char) cpu_to_le16(readw(this->IO_ADDR_R));
+}
+
+/**
+ * nand_write_byte16 - [DEFAULT] write one byte endianness aware to the chip
+ * @mtd: MTD device structure
+ * @byte: data byte to write
+ *
+ * Default write function for 16bit buswidth with
+ * endianness conversion
+ */
+static void nand_write_byte16(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(le16_to_cpu((u16) byte), this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_word - [DEFAULT] read one word from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 16bit buswidth without
+ * endianness conversion
+ */
+static u16 nand_read_word(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return readw(this->IO_ADDR_R);
+}
+
+/**
+ * nand_write_word - [DEFAULT] write one word to the chip
+ * @mtd: MTD device structure
+ * @word: data word to write
+ *
+ * Default write function for 16bit buswidth without
+ * endianness conversion
+ */
+static void nand_write_word(struct mtd_info *mtd, u16 word)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(word, this->IO_ADDR_W);
+}
+
+/**
+ * nand_select_chip - [DEFAULT] control CE line
+ * @mtd: MTD device structure
+ * @chip: chipnumber to select, -1 for deselect
+ *
+ * Default select function for 1 chip devices.
+ */
+static void nand_select_chip(struct mtd_info *mtd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ switch(chip) {
+ case -1:
+ this->hwcontrol(mtd, NAND_CTL_CLRNCE);
+ break;
+ case 0:
+ this->hwcontrol(mtd, NAND_CTL_SETNCE);
+ break;
+
+ default:
+ BUG();
+ }
+}
+
+/**
+ * nand_write_buf - [DEFAULT] write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * Default write function for 8bit buswidth
+ */
+static void nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ writeb(buf[i], this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_buf - [DEFAULT] read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * Default read function for 8bit buswidth
+ */
+static void nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = readb(this->IO_ADDR_R);
+}
+
+/**
+ * nand_verify_buf - [DEFAULT] Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * Default verify function for 8bit buswidth
+ */
+static int nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != readb(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/**
+ * nand_write_buf16 - [DEFAULT] write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * Default write function for 16bit buswidth
+ */
+static void nand_write_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ writew(p[i], this->IO_ADDR_W);
+
+}
+
+/**
+ * nand_read_buf16 - [DEFAULT] read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * Default read function for 16bit buswidth
+ */
+static void nand_read_buf16(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ p[i] = readw(this->IO_ADDR_R);
+}
+
+/**
+ * nand_verify_buf16 - [DEFAULT] Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * Default verify function for 16bit buswidth
+ */
+static int nand_verify_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ if (p[i] != readw(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/**
+ * nand_block_bad - [DEFAULT] Read bad block marker from the chip
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ * @getchip: 0, if the chip is already selected
+ *
+ * Check, if the block is bad.
+ */
+static int nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip)
+{
+ int page, chipnr, res = 0;
+ struct nand_chip *this = mtd->priv;
+ u16 bad;
+
+ if (getchip) {
+ page = (int)(ofs >> this->page_shift);
+ chipnr = (int)(ofs >> this->chip_shift);
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_READING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+ } else
+ page = (int) ofs;
+
+ if (this->options & NAND_BUSWIDTH_16) {
+ this->cmdfunc (mtd, NAND_CMD_READOOB, this->badblockpos & 0xFE, page & this->pagemask);
+ bad = cpu_to_le16(this->read_word(mtd));
+ if (this->badblockpos & 0x1)
+ bad >>= 1;
+ if ((bad & 0xFF) != 0xff)
+ res = 1;
+ } else {
+ this->cmdfunc (mtd, NAND_CMD_READOOB, this->badblockpos, page & this->pagemask);
+ if (this->read_byte(mtd) != 0xff)
+ res = 1;
+ }
+
+ if (getchip) {
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+ }
+
+ return res;
+}
+
+/**
+ * nand_default_block_markbad - [DEFAULT] mark a block bad
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ *
+ * This is the default implementation, which can be overridden by
+ * a hardware specific driver.
+*/
+static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs)
+{
+ struct nand_chip *this = mtd->priv;
+ u_char buf[2] = {0, 0};
+ size_t retlen;
+ int block;
+
+ /* Get block number */
+ block = ((int) ofs) >> this->bbt_erase_shift;
+ this->bbt[block >> 2] |= 0x01 << ((block & 0x03) << 1);
+
+ /* Do we have a flash based bad block table ? */
+ if (this->options & NAND_USE_FLASH_BBT)
+ return nand_update_bbt (mtd, ofs);
+
+ /* We write two bytes, so we don't have to mess with 16 bit access */
+ ofs += mtd->oobsize + (this->badblockpos & ~0x01);
+ return nand_write_oob (mtd, ofs , 2, &retlen, buf);
+}
+
+/**
+ * nand_check_wp - [GENERIC] check if the chip is write protected
+ * @mtd: MTD device structure
+ * Check, if the device is write protected
+ *
+ * The function expects, that the device is already selected
+ */
+static int nand_check_wp (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Check the WP bit */
+ this->cmdfunc (mtd, NAND_CMD_STATUS, -1, -1);
+ return (this->read_byte(mtd) & 0x80) ? 0 : 1;
+}
+
+/**
+ * nand_block_checkbad - [GENERIC] Check if a block is marked bad
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ * @getchip: 0, if the chip is already selected
+ * @allowbbt: 1, if it's allowed to access the bbt area
+ *
+ * Check, if the block is bad. Either by reading the bad block table or
+ * calling of the scan function.
+ */
+static int nand_block_checkbad (struct mtd_info *mtd, loff_t ofs, int getchip, int allowbbt)
+{
+ struct nand_chip *this = mtd->priv;
+
+ if (!this->bbt)
+ return this->block_bad(mtd, ofs, getchip);
+
+ /* Return info from the table */
+ return nand_isbad_bbt (mtd, ofs, allowbbt);
+}
+
+/**
+ * nand_command - [DEFAULT] Send command to NAND device
+ * @mtd: MTD device structure
+ * @command: the command to be sent
+ * @column: the column address for this command, -1 if none
+ * @page_addr: the page address for this command, -1 if none
+ *
+ * Send command to NAND device. This function is used for small page
+ * devices (256/512 Bytes per page)
+ */
+static void nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
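+ /*
+ * A SEQIN (page program) on small page devices must be preceded
+ * by a pointer command (READ0/READ1/READOOB) that selects which
+ * 256 byte half of the page, or the OOB area, the column address
+ * refers to.
+ */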
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1) {
+ /* Adjust columns for 16 bit buswidth */
+ if (this->options & NAND_BUSWIDTH_16)
+ column >>= 1;
+ this->write_byte(mtd, column);
+ }
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (this->chipsize & 0x0c000000)
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
+
+/**
+ * nand_command_lp - [DEFAULT] Send command to NAND large page device
+ * @mtd: MTD device structure
+ * @command: the command to be sent
+ * @column: the column address for this command, -1 if none
+ * @page_addr: the page address for this command, -1 if none
+ *
+ * Send command to NAND device. This is the version for the new large page
+ * devices. We don't have the separate regions as we have in the small page
+ * devices. We must emulate NAND_CMD_READOOB to keep the code compatible.
+ *
+ */
+static void nand_command_lp (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Emulate NAND_CMD_READOOB */
+ if (command == NAND_CMD_READOOB) {
+ column += mtd->oobblock;
+ command = NAND_CMD_READ0;
+ }
+
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /* Write out the command to the device. */
+ this->write_byte(mtd, command);
+ /* End command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1) {
+ /* Adjust columns for 16 bit buswidth */
+ if (this->options & NAND_BUSWIDTH_16)
+ column >>= 1;
+ this->write_byte(mtd, column & 0xff);
+ this->write_byte(mtd, column >> 8);
+ }
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for devices > 128MiB */
+ if (this->chipsize > (128 << 20))
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0xff));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_CACHEDPROG:
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ return;
+
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ case NAND_CMD_READ0:
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /* Write out the start read command */
+ this->write_byte(mtd, NAND_CMD_READSTART);
+ /* End command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ /* Fall through into ready check */
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
+
+/**
+ * nand_get_chip - [GENERIC] Get chip for selected access
+ * @this: the nand chip descriptor
+ * @mtd: MTD device structure
+ * @new_state: the state which is requested
+ *
+ * Get the device and lock it for exclusive access
+ */
+static void nand_get_chip (struct nand_chip *this, struct mtd_info *mtd, int new_state)
+{
+
+ DECLARE_WAITQUEUE (wait, current);
+
+ /*
+ * Grab the lock and see if the device is available
+ */
+retry:
+ spin_lock_bh (&this->chip_lock);
+
+ if (this->state == FL_READY) {
+ this->state = new_state;
+ spin_unlock_bh (&this->chip_lock);
+ return;
+ }
+
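+ /* Device is busy: sleep on the chip's waitqueue and retry after wakeup */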
+ set_current_state (TASK_UNINTERRUPTIBLE);
+ add_wait_queue (&this->wq, &wait);
+ spin_unlock_bh (&this->chip_lock);
+ schedule ();
+ remove_wait_queue (&this->wq, &wait);
+ goto retry;
+}
+
+/**
+ * nand_wait - [DEFAULT] wait until the command is done
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @state: state to select the max. timeout value
+ *
+ * Wait for command done. This applies to erase and program only
+ * Erase can take up to 400ms and program up to 20ms according to
+ * general NAND and SmartMedia specs
+ *
+*/
+static int nand_wait(struct mtd_info *mtd, struct nand_chip *this, int state)
+{
+
+ unsigned long timeo = jiffies;
+ int status;
+
+ if (state == FL_ERASING)
+ timeo += (HZ * 400) / 1000;
+ else
+ timeo += (HZ * 20) / 1000;
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+
+ spin_lock_bh (&this->chip_lock);
+ if ((state == FL_ERASING) && (this->options & NAND_IS_AND))
+ this->cmdfunc (mtd, NAND_CMD_STATUS_MULTI, -1, -1);
+ else
+ this->cmdfunc (mtd, NAND_CMD_STATUS, -1, -1);
+
+ while (time_before(jiffies, timeo)) {
+ /* Check, if we were interrupted */
+ if (this->state != state) {
+ spin_unlock_bh (&this->chip_lock);
+ return 0;
+ }
+ if (this->dev_ready) {
+ if (this->dev_ready(mtd))
+ break;
+ }
+ if (this->read_byte(mtd) & NAND_STATUS_READY)
+ break;
+
+ spin_unlock_bh (&this->chip_lock);
+ yield ();
+ spin_lock_bh (&this->chip_lock);
+ }
+ status = (int) this->read_byte(mtd);
+ spin_unlock_bh (&this->chip_lock);
+
+ return status;
+}
+
+/**
+ * nand_write_page - [GENERIC] write one page
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @page: startpage inside the chip, must be called with (page & this->pagemask)
+ * @oob_buf: out of band data buffer
+ * @oobsel: out of band selection structure
+ * @cached: 1 = enable cached programming if supported by chip
+ *
+ * This function is used for both write and writev!
+ * This function will always program a full page of data
+ * If you call it with a non page aligned buffer, you're lost :)
+ *
+ * Cached programming is not supported yet.
+ */
+static int nand_write_page (struct mtd_info *mtd, struct nand_chip *this, int page,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int cached)
+{
+ int i, status;
+ u_char ecc_code[8];
+ int eccmode = oobsel->useecc ? this->eccmode : NAND_ECC_NONE;
+ int *oob_config = oobsel->eccpos;
+ int datidx = 0, eccidx = 0, eccsteps = this->eccsteps;
+ int eccbytes = 0;
+
+ /* FIXME: Enable cached programming */
+ cached = 0;
+
+ /* Send command to begin auto page programming */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, 0x00, page);
+
+ /* Write out complete page of data, take care of eccmode */
+ switch (eccmode) {
+ /* No ecc, write all */
+ case NAND_ECC_NONE:
+ printk (KERN_WARNING "Writing data without ECC to NAND-FLASH is not recommended\n");
+ this->write_buf(mtd, this->data_poi, mtd->oobblock);
+ break;
+
+ /* Software ecc 3/256, write all */
+ case NAND_ECC_SOFT:
+ for (; eccsteps; eccsteps--) {
+ this->calculate_ecc(mtd, &this->data_poi[datidx], ecc_code);
+ for (i = 0; i < 3; i++, eccidx++)
+ oob_buf[oob_config[eccidx]] = ecc_code[i];
+ datidx += this->eccsize;
+ }
+ this->write_buf(mtd, this->data_poi, mtd->oobblock);
+ break;
+
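+ /* The cases below fall through deliberately: eccbytes accumulates
+ to 8, 6 or 3 depending on the ECC mode */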
+ /* Hardware ecc 8 byte / 512 byte data */
+ case NAND_ECC_HW8_512:
+ eccbytes += 2;
+ /* Hardware ecc 6 byte / 512 byte data */
+ case NAND_ECC_HW6_512:
+ eccbytes += 3;
+ /* Hardware ecc 3 byte / 256 data */
+ /* Hardware ecc 3 byte / 512 byte data */
+ case NAND_ECC_HW3_256:
+ case NAND_ECC_HW3_512:
+ eccbytes += 3;
+ for (; eccsteps; eccsteps--) {
+ /* enable hardware ecc logic for write */
+ this->enable_hwecc(mtd, NAND_ECC_WRITE);
+ this->write_buf(mtd, &this->data_poi[datidx], this->eccsize);
+ this->calculate_ecc(mtd, &this->data_poi[datidx], ecc_code);
+ for (i = 0; i < eccbytes; i++, eccidx++)
+ oob_buf[oob_config[eccidx]] = ecc_code[i];
+ /* If the hardware ecc provides syndromes then
+ * the ecc code must be written immediately after
+ * the data bytes (words) */
+ if (this->options & NAND_HWECC_SYNDROME)
+ this->write_buf(mtd, ecc_code, eccbytes);
+
+ datidx += this->eccsize;
+ }
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ /* Write out OOB data */
+ if (this->options & NAND_HWECC_SYNDROME)
+ this->write_buf(mtd, &oob_buf[oobsel->eccbytes], mtd->oobsize - oobsel->eccbytes);
+ else
+ this->write_buf(mtd, oob_buf, mtd->oobsize);
+
+ /* Send command to actually program the data */
+ this->cmdfunc (mtd, cached ? NAND_CMD_CACHEDPROG : NAND_CMD_PAGEPROG, -1, -1);
+
+ if (!cached) {
+ /* call wait ready function */
+ status = this->waitfunc (mtd, this, FL_WRITING);
+ /* See if device thinks it succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write, page 0x%08x, ", __FUNCTION__, page);
+ return -EIO;
+ }
+ } else {
+ /* FIXME: Implement cached programming ! */
+ /* wait until cache is ready*/
+ // status = this->waitfunc (mtd, this, FL_CACHEDRPG);
+ }
+ return 0;
+}
+
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+/**
+ * nand_verify_pages - [GENERIC] verify the chip contents after a write
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @page: startpage inside the chip, must be called with (page & this->pagemask)
+ * @numpages: number of pages to verify
+ * @oob_buf: out of band data buffer
+ * @oobsel: out of band selection structure
+ * @chipnr: number of the current chip
+ * @oobmode: 1 = full buffer verify, 0 = ecc only
+ *
+ * The NAND device assumes that it is always writing to a cleanly erased page.
+ * Hence, it performs its internal write verification only on bits that
+ * transitioned from 1 to 0. The device does NOT verify the whole page on a
+ * byte by byte basis. It is possible that the page was not completely erased
+ * or the page is becoming unusable due to wear. The read with ECC would catch
+ * the error later when the ECC page check fails, but we would rather catch
+ * it early in the page write stage. Better to write no data than invalid data.
+ */
+static int nand_verify_pages (struct mtd_info *mtd, struct nand_chip *this, int page, int numpages,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int chipnr, int oobmode)
+{
+ int i, j, datidx = 0, oobofs = 0, res = -EIO;
+ int eccsteps = this->eccsteps;
+ int hweccbytes;
+ u_char oobdata[64];
+
+ hweccbytes = (this->options & NAND_HWECC_SYNDROME) ? (oobsel->eccbytes / eccsteps) : 0;
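+ /* With a syndrome-style HW ECC generator the ECC bytes sit directly
+ behind each data chunk, so they are verified inside the per-step
+ loop below rather than with the rest of the OOB. */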
+
+ /* Send command to read back the first page */
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0, page);
+
+ for(;;) {
+ for (j = 0; j < eccsteps; j++) {
+ /* Loop through and verify the data */
+ if (this->verify_buf(mtd, &this->data_poi[datidx], mtd->eccsize)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ datidx += mtd->eccsize;
+ /* Do we have a hw generator layout? */
+ if (!hweccbytes)
+ continue;
+ if (this->verify_buf(mtd, &this->oob_buf[oobofs], hweccbytes)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ oobofs += hweccbytes;
+ }
+
+ /* check, if we must compare all data or if we just have to
+ * compare the ecc bytes
+ */
+ if (oobmode) {
+ if (this->verify_buf(mtd, &oob_buf[oobofs], mtd->oobsize - hweccbytes * eccsteps)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ } else {
+ /* Read always, else autoincrement fails */
+ this->read_buf(mtd, oobdata, mtd->oobsize - hweccbytes * eccsteps);
+
+ if (oobsel->useecc != MTD_NANDECC_OFF && !hweccbytes) {
+ int ecccnt = oobsel->eccbytes;
+
+ for (i = 0; i < ecccnt; i++) {
+ int idx = oobsel->eccpos[i];
+ if (oobdata[idx] != oob_buf[oobofs + idx] ) {
+ DEBUG (MTD_DEBUG_LEVEL0,
+ "%s: Failed ECC write "
+ "verify, page 0x%08x, " "%6i bytes were succesful\n", __FUNCTION__, page, i);
+ goto out;
+ }
+ }
+ }
+ }
+ oobofs += mtd->oobsize - hweccbytes * eccsteps;
+ page++;
+ numpages--;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ * Do this also before returning, so the chip is
+ * ready for the next command.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* All done, return happy */
+ if (!numpages)
+ return 0;
+
+
+ /* Check, if the chip supports auto page increment */
+ if (!NAND_CANAUTOINCR(this))
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0x00, page);
+ }
+ /*
+ * Terminate the read command. We come here in case of an error
+ * So we must issue a reset command.
+ */
+out:
+ this->cmdfunc (mtd, NAND_CMD_RESET, -1, -1);
+ return res;
+}
+#endif
+
+/**
+ * nand_read - [MTD Interface] MTD compatibility function for nand_read_ecc
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ *
+ * This function simply calls nand_read_ecc with oob buffer and oobsel = NULL
+*/
+static int nand_read (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf)
+{
+ return nand_read_ecc (mtd, from, len, retlen, buf, NULL, NULL);
+}
+
+
+/**
+ * nand_read_ecc - [MTD Interface] Read data with ECC
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ * @oob_buf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND read with ECC
+ */
+static int nand_read_ecc (struct mtd_info *mtd, loff_t from, size_t len,
+ size_t * retlen, u_char * buf, u_char * oob_buf, struct nand_oobinfo *oobsel)
+{
+ int i, j, col, realpage, page, end, ecc, chipnr, sndcmd = 1;
+ int read = 0, oob = 0, ecc_status = 0, ecc_failed = 0;
+ struct nand_chip *this = mtd->priv;
+ u_char *data_poi, *oob_data = oob_buf;
+ u_char ecc_calc[32];
+ u_char ecc_code[32];
+ int eccmode, eccsteps;
+ int *oob_config, datidx;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+ int eccbytes = 3;
+ int compareecc = 1;
+ int oobreadlen;
+
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_read_ecc: from = 0x%08x, len = %i\n", (unsigned int) from, (int) len);
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: Attempt read beyond end of device\n");
+ *retlen = 0;
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd ,FL_READING);
+
+ /* Use the mtd default oobinfo if the caller supplied none */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE)
+ oobsel = this->autooob;
+
+ eccmode = oobsel->useecc ? this->eccmode : NAND_ECC_NONE;
+ oob_config = oobsel->eccpos;
+
+ /* Select the NAND device */
+ chipnr = (int)(from >> this->chip_shift);
+ this->select_chip(mtd, chipnr);
+
+ /* First we calculate the starting page */
+ realpage = (int) (from >> this->page_shift);
+ page = realpage & this->pagemask;
+
+ /* Get raw starting column */
+ col = from & (mtd->oobblock - 1);
+
+ end = mtd->oobblock;
+ ecc = this->eccsize;
+ switch (eccmode) {
+ case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
+ eccbytes = 6;
+ break;
+ case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
+ eccbytes = 8;
+ break;
+ case NAND_ECC_NONE:
+ compareecc = 0;
+ break;
+ }
+
+ if (this->options & NAND_HWECC_SYNDROME)
+ compareecc = 0;
+
+ oobreadlen = mtd->oobsize;
+ if (this->options & NAND_HWECC_SYNDROME)
+ oobreadlen -= oobsel->eccbytes;
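+ /* With syndrome ECC the ecc bytes are already consumed during the
+ data phase, so only the remaining oob bytes are read afterwards */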
+
+ /* Loop until all data read */
+ while (read < len) {
+
+ int aligned = (!col && (len - read) >= end);
+ /*
+ * If the read is not page aligned, we have to read into data buffer
+ * due to ecc, else we read into return buffer direct
+ */
+ if (aligned)
+ data_poi = &buf[read];
+ else
+ data_poi = this->data_buf;
+
+ /* Check, if we have this page in the buffer
+ *
+ * FIXME: Make it work when we must provide oob data too,
+ * check the usage of data_buf oob field
+ */
+ if (realpage == this->pagebuf && !oob_buf) {
+ /* aligned read ? */
+ if (aligned)
+ memcpy (data_poi, this->data_buf, end);
+ goto readdata;
+ }
+
+ /* Check, if we must send the read command */
+ if (sndcmd) {
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0x00, page);
+ sndcmd = 0;
+ }
+
+ /* get oob area, if we have no oob buffer from fs-driver */
+ if (!oob_buf || oobsel->useecc == MTD_NANDECC_AUTOPLACE)
+ oob_data = &this->data_buf[end];
+
+ eccsteps = this->eccsteps;
+
+ switch (eccmode) {
+ case NAND_ECC_NONE: { /* No ECC, Read in a page */
+ static unsigned long lastwhinge = 0;
+ if ((lastwhinge / HZ) != (jiffies / HZ)) {
+ printk (KERN_WARNING "Reading data from NAND FLASH without ECC is not recommended\n");
+ lastwhinge = jiffies;
+ }
+ this->read_buf(mtd, data_poi, end);
+ break;
+ }
+
+ case NAND_ECC_SOFT: /* Software ECC 3/256: Read in a page + oob data */
+ this->read_buf(mtd, data_poi, end);
+ for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=3, datidx += ecc)
+ this->calculate_ecc(mtd, &data_poi[datidx], &ecc_calc[i]);
+ break;
+
+ case NAND_ECC_HW3_256: /* Hardware ECC 3 byte /256 byte data */
+ case NAND_ECC_HW3_512: /* Hardware ECC 3 byte /512 byte data */
+ case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
+ case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
+ for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=eccbytes, datidx += ecc) {
+ this->enable_hwecc(mtd, NAND_ECC_READ);
+ this->read_buf(mtd, &data_poi[datidx], ecc);
+
+ /* HW ecc with syndrome calculation must read the
+ * syndrome from flash immediately after the data */
+ if (!compareecc) {
+ /* Some hw ecc generators need to know when the
+ * syndrome is read from flash */
+ this->enable_hwecc(mtd, NAND_ECC_READSYN);
+ this->read_buf(mtd, &oob_data[i], eccbytes);
+ /* We do the error correction directly: it checks the hw
+ * generator for an error, reads back the syndrome and
+ * does the error correction on the fly */
+ if (this->correct_data(mtd, &data_poi[datidx], &oob_data[i], &ecc_code[i]) == -1) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: "
+ "Failed ECC read, page 0x%08x on chip %d\n", page, chipnr);
+ ecc_failed++;
+ }
+ } else {
+ this->calculate_ecc(mtd, &data_poi[datidx], &ecc_calc[i]);
+ }
+ }
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ /* read oobdata */
+ this->read_buf(mtd, &oob_data[mtd->oobsize - oobreadlen], oobreadlen);
+
+ /* Skip ECC check, if not requested (ECC_NONE or HW_ECC with syndromes) */
+ if (!compareecc)
+ goto readoob;
+
+ /* Pick the ECC bytes out of the oob data */
+ for (j = 0; j < oobsel->eccbytes; j++)
+ ecc_code[j] = oob_data[oob_config[j]];
+
+ /* Correct the data, if necessary */
+ for (i = 0, j = 0, datidx = 0; i < this->eccsteps; i++, datidx += ecc) {
+ ecc_status = this->correct_data(mtd, &data_poi[datidx], &ecc_code[j], &ecc_calc[j]);
+
+ /* Get next chunk of ecc bytes */
+ j += eccbytes;
+
+ /* Check, if we have a fs supplied oob-buffer,
+ * This is the legacy mode. Used by YAFFS1
+ * Should go away some day
+ */
+ if (oob_buf && oobsel->useecc == MTD_NANDECC_PLACE) {
+ int *p = (int *)(&oob_data[mtd->oobsize]);
+ p[i] = ecc_status;
+ }
+
+ if (ecc_status == -1) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: " "Failed ECC read, page 0x%08x\n", page);
+ ecc_failed++;
+ }
+ }
+
+ readoob:
+ /* check, if we have a fs supplied oob-buffer */
+ if (oob_buf) {
+ /* without autoplace. Legacy mode used by YAFFS1 */
+ switch(oobsel->useecc) {
+ case MTD_NANDECC_AUTOPLACE:
+ /* Walk through the autoplace chunks */
+ for (i = 0, j = 0; j < mtd->oobavail; i++) {
+ int from = oobsel->oobfree[i][0];
+ int num = oobsel->oobfree[i][1];
+ memcpy(&oob_buf[oob + j], &oob_data[from], num);
+ j += num;
+ }
+ oob += mtd->oobavail;
+ break;
+ case MTD_NANDECC_PLACE:
+ /* YAFFS1 legacy mode */
+ oob_data += this->eccsteps * sizeof (int);
+ /* fall through */
+ default:
+ oob_data += mtd->oobsize;
+ }
+ }
+ readdata:
+ /* Partial page read, transfer data into fs buffer */
+ if (!aligned) {
+ for (j = col; j < end && read < len; j++)
+ buf[read++] = data_poi[j];
+ this->pagebuf = realpage;
+ } else
+ read += mtd->oobblock;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ if (read == len)
+ break;
+
+ /* For subsequent reads align to page boundary. */
+ col = 0;
+ /* Increment page address */
+ realpage++;
+
+ page = realpage & this->pagemask;
+ /* Check, if we cross a chip boundary */
+ if (!page) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ /* Check, if the chip supports auto page increment
+ * or if we have hit a block boundary.
+ */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck))
+ sndcmd = 1;
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ /*
+ * Return success, if no ECC failures occurred, else -EIO.
+ * The fs driver can sort this out, because retlen still
+ * equals the requested length while the result is -EIO.
+ */
+ *retlen = read;
+ return ecc_failed ? -EIO : 0;
+}
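+
+/*
+ * A minimal usage sketch (illustrative, not part of the driver): read
+ * the first page through the ECC path with the default oob placement
+ * (oob_buf and oobsel both NULL). Note the -EIO convention described
+ * above: on an uncorrectable error retlen still equals the requested
+ * length.
+ *
+ *    size_t retlen;
+ *    u_char *pagebuf = kmalloc(mtd->oobblock, GFP_KERNEL);
+ *
+ *    if (pagebuf &&
+ *        mtd->read_ecc(mtd, 0, mtd->oobblock, &retlen, pagebuf,
+ *                      NULL, NULL) == -EIO)
+ *        printk("uncorrectable bit errors in page 0\n");
+ */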
+
+/**
+ * nand_read_oob - [MTD Interface] NAND read out-of-band
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ *
+ * NAND read out-of-band data from the spare area
+ */
+static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf)
+{
+ int i, col, page, chipnr;
+ struct nand_chip *this = mtd->priv;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_read_oob: from = 0x%08x, len = %i\n", (unsigned int) from, (int) len);
+
+ /* Shift to get page */
+ page = (int)(from >> this->page_shift);
+ chipnr = (int)(from >> this->chip_shift);
+
+ /* Mask to get column */
+ col = from & (mtd->oobsize - 1);
+
+ /* Initialize return length value */
+ *retlen = 0;
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_oob: Attempt read beyond end of device\n");
+ *retlen = 0;
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_READING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Send the read command */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, col, page & this->pagemask);
+ /*
+ * Read the data. If the read spans more than one page of
+ * oob data, let the device's auto increment transfer the data.
+ */
+ i = 0;
+ while (i < len) {
+ int thislen = mtd->oobsize - col;
+ thislen = min_t(int, thislen, len - i);
+ this->read_buf(mtd, &buf[i], thislen);
+ i += thislen;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* Read more ? */
+ if (i < len) {
+ page++;
+ col = 0;
+
+ /* Check, if we cross a chip boundary */
+ if (!(page & this->pagemask)) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+
+ /* Check, if the chip supports auto page increment
+ * or if we have hit a block boundary.
+ */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck)) {
+ /* For subsequent page reads set offset to 0 */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, 0x0, page & this->pagemask);
+ }
+ }
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ /* Return happy */
+ *retlen = len;
+ return 0;
+}
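+
+/*
+ * A usage sketch (illustrative): read the whole spare area of the
+ * first page. oob[] is sized by assumption; real callers should size
+ * it from mtd->oobsize.
+ *
+ *    size_t retlen;
+ *    u_char oob[64];
+ *
+ *    if (mtd->oobsize <= sizeof(oob))
+ *        mtd->read_oob(mtd, 0, mtd->oobsize, &retlen, oob);
+ */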
+
+/**
+ * nand_read_raw - [GENERIC] Read raw data including oob into buffer
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @ooblen: number of oob data bytes to read
+ *
+ * Read raw data including oob into buffer
+ */
+int nand_read_raw (struct mtd_info *mtd, uint8_t *buf, loff_t from, size_t len, size_t ooblen)
+{
+ struct nand_chip *this = mtd->priv;
+ int page = (int) (from >> this->page_shift);
+ int chip = (int) (from >> this->chip_shift);
+ int sndcmd = 1;
+ int cnt = 0;
+ int pagesize = mtd->oobblock + mtd->oobsize;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_raw: Attempt read beyond end of device\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_READING);
+
+ this->select_chip (mtd, chip);
+
+ /* Add requested oob length */
+ len += ooblen;
+
+ while (len) {
+ if (sndcmd)
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0, page & this->pagemask);
+ sndcmd = 0;
+
+ this->read_buf (mtd, &buf[cnt], pagesize);
+
+ len -= pagesize;
+ cnt += pagesize;
+ page++;
+
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* Check, if the chip supports auto page increment */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck))
+ sndcmd = 1;
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+ return 0;
+}
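+
+/*
+ * A usage sketch (illustrative): the buffer must hold page data and
+ * oob data back to back, which is how the bad block table code below
+ * calls this function.
+ *
+ *    uint8_t *buf = kmalloc(mtd->oobblock + mtd->oobsize, GFP_KERNEL);
+ *
+ *    if (buf)
+ *        nand_read_raw(mtd, buf, 0, mtd->oobblock, mtd->oobsize);
+ */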
+
+
+/**
+ * nand_prepare_oobbuf - [GENERIC] Prepare the out of band buffer
+ * @mtd: MTD device structure
+ * @fsbuf: buffer given by fs driver
+ * @oobsel: out of band selection structure
+ * @autoplace: 1 = place given buffer into the oob bytes
+ * @numpages: number of pages to prepare
+ *
+ * Return:
+ * 1. Filesystem buffer available and autoplacement is off:
+ * return the filesystem buffer
+ * 2. No filesystem buffer available:
+ * return the internal buffer
+ * 3. Filesystem buffer is given and autoplace selected:
+ * put data from the fs buffer into the internal buffer and
+ * return the internal buffer
+ *
+ * Note: The internal buffer is filled with 0xff. This must
+ * be done only once, when no autoplacement happens.
+ * Autoplacement sets the buffer dirty flag, which
+ * forces the 0xff fill before the buffer is used again.
+ *
+*/
+static u_char * nand_prepare_oobbuf (struct mtd_info *mtd, u_char *fsbuf, struct nand_oobinfo *oobsel,
+ int autoplace, int numpages)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, len, ofs;
+
+ /* Zero copy fs supplied buffer */
+ if (fsbuf && !autoplace)
+ return fsbuf;
+
+ /* Check, if the buffer must be filled with ff again */
+ if (this->oobdirty) {
+ memset (this->oob_buf, 0xff,
+ mtd->oobsize << (this->phys_erase_shift - this->page_shift));
+ this->oobdirty = 0;
+ }
+
+ /* If we have no autoplacement or no fs buffer use the internal one */
+ if (!autoplace || !fsbuf)
+ return this->oob_buf;
+
+ /* Walk through the pages and place the data */
+ this->oobdirty = 1;
+ ofs = 0;
+ while (numpages--) {
+ for (i = 0, len = 0; len < mtd->oobavail; i++) {
+ int to = ofs + oobsel->oobfree[i][0];
+ int num = oobsel->oobfree[i][1];
+ memcpy (&this->oob_buf[to], fsbuf, num);
+ len += num;
+ fsbuf += num;
+ }
+ ofs += mtd->oobsize;
+ }
+ return this->oob_buf;
+}
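+
+/*
+ * A worked example (assuming a 16 byte oob layout with one free chunk
+ * of 8 bytes at offset 8): preparing two pages of fs data copies
+ * fsbuf[0..7] to oob_buf[8..15] and fsbuf[8..15] to oob_buf[24..31],
+ * while the ecc positions keep their 0xff fill.
+ */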
+
+#define NOTALIGNED(x) (((x) & (mtd->oobblock - 1)) != 0)
+
+/**
+ * nand_write - [MTD Interface] compatibility function for nand_write_ecc
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ *
+ * This function simply calls nand_write_ecc with eccbuf and oobsel set to NULL
+ *
+*/
+static int nand_write (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf)
+{
+ return (nand_write_ecc (mtd, to, len, retlen, buf, NULL, NULL));
+}
+
+/**
+ * nand_write_ecc - [MTD Interface] NAND write with ECC
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ * @eccbuf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND write with ECC
+ */
+static int nand_write_ecc (struct mtd_info *mtd, loff_t to, size_t len,
+ size_t * retlen, const u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel)
+{
+ int startpage, page, ret = -EIO, oob = 0, written = 0, chipnr;
+ int autoplace = 0, numpages, totalpages;
+ struct nand_chip *this = mtd->priv;
+ u_char *oobbuf, *bufstart;
+ int ppblock = (1 << (this->phys_erase_shift - this->page_shift));
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_write_ecc: to = 0x%08x, len = %i\n", (unsigned int) to, (int) len);
+
+ /* Initialize retlen, in case of early exit */
+ *retlen = 0;
+
+ /* Do not allow write past end of device */
+ if ((to + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: Attempt to write past end of page\n");
+ return -EINVAL;
+ }
+
+ /* Reject writes which are not page aligned */
+ if (NOTALIGNED (to) || NOTALIGNED(len)) {
+ printk (KERN_NOTICE "nand_write_ecc: Attempt to write data which is not page aligned\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_WRITING);
+
+ /* Calculate chipnr */
+ chipnr = (int)(to >> this->chip_shift);
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* if oobsel is NULL, use chip defaults */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE) {
+ oobsel = this->autooob;
+ autoplace = 1;
+ }
+
+ /* Setup variables and oob buffer */
+ totalpages = len >> this->page_shift;
+ page = (int) (to >> this->page_shift);
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page <= this->pagebuf && this->pagebuf < (page + totalpages))
+ this->pagebuf = -1;
+
+ /* Set it relative to chip */
+ page &= this->pagemask;
+ startpage = page;
+ /* Calc number of pages we can write in one go */
+ numpages = min (ppblock - (startpage & (ppblock - 1)), totalpages);
+ oobbuf = nand_prepare_oobbuf (mtd, eccbuf, oobsel, autoplace, numpages);
+ bufstart = (u_char *)buf;
+
+ /* Loop until all data is written */
+ while (written < len) {
+
+ this->data_poi = (u_char*) &buf[written];
+ /* Write one page. If this is the last page to write
+ * or the last page in this block, then use the
+ * real pageprogram command, else select cached programming
+ * if supported by the chip.
+ */
+ ret = nand_write_page (mtd, this, page, &oobbuf[oob], oobsel, (--numpages > 0));
+ if (ret) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: write_page failed %d\n", ret);
+ goto out;
+ }
+ /* Next oob page */
+ oob += mtd->oobsize;
+ /* Update written bytes count */
+ written += mtd->oobblock;
+ if (written == len)
+ goto cmp;
+
+ /* Increment page address */
+ page++;
+
+ /* Have we hit a block boundary ? Then verify the pages written
+ * so far and, if the verify succeeds, set up the oob buffer for
+ * the next pages.
+ */
+ if (!(page & (ppblock - 1))){
+ int ofs;
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage,
+ page - startpage,
+ oobbuf, oobsel, chipnr, (eccbuf != NULL));
+ if (ret) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: verify_pages failed %d\n", ret);
+ goto out;
+ }
+ *retlen = written;
+
+ ofs = autoplace ? mtd->oobavail : mtd->oobsize;
+ if (eccbuf)
+ eccbuf += (page - startpage) * ofs;
+ totalpages -= page - startpage;
+ numpages = min (totalpages, ppblock);
+ page &= this->pagemask;
+ startpage = page;
+ oobbuf = nand_prepare_oobbuf (mtd, eccbuf, oobsel,
+ autoplace, numpages);
+ /* Check, if we cross a chip boundary */
+ if (!page) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ }
+ /* Verify the remaining pages */
+cmp:
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage, totalpages,
+ oobbuf, oobsel, chipnr, (eccbuf != NULL));
+ if (!ret)
+ *retlen = written;
+ else
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: verify_pages failed %d\n", ret);
+
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ return ret;
+}
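+
+/*
+ * A minimal write sketch (illustrative; ofs is assumed to be a page
+ * aligned target offset): offset and length must both be page
+ * aligned, so partial data has to be padded to a full page by the
+ * caller.
+ *
+ *    size_t retlen;
+ *    u_char *pagebuf = kmalloc(mtd->oobblock, GFP_KERNEL);
+ *
+ *    if (pagebuf) {
+ *        memset(pagebuf, 0xff, mtd->oobblock);
+ *        (fill in the payload here)
+ *        mtd->write_ecc(mtd, ofs, mtd->oobblock, &retlen,
+ *                       pagebuf, NULL, NULL);
+ *    }
+ */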
+
+
+/**
+ * nand_write_oob - [MTD Interface] NAND write out-of-band
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ *
+ * NAND write out-of-band
+ */
+static int nand_write_oob (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf)
+{
+ int column, page, status, ret = -EIO, chipnr;
+ struct nand_chip *this = mtd->priv;
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_write_oob: to = 0x%08x, len = %i\n", (unsigned int) to, (int) len);
+
+ /* Shift to get page */
+ page = (int) (to >> this->page_shift);
+ chipnr = (int) (to >> this->chip_shift);
+
+ /* Mask to get column */
+ column = to & (mtd->oobsize - 1);
+
+ /* Initialize return length value */
+ *retlen = 0;
+
+ /* Do not allow write past end of page */
+ if ((column + len) > mtd->oobsize) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: Attempt to write past end of page\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_WRITING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Reset the chip. Some chips (like the Toshiba TC5832DC found
+ in one of my DiskOnChip 2000 test units) will clear the whole
+ data page too if we don't do this. I have no clue why, but
+ I seem to have 'fixed' it in the doc2000 driver in
+ August 1999. dwmw2. */
+ this->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page == this->pagebuf)
+ this->pagebuf = -1;
+
+ if (NAND_MUST_PAD(this)) {
+ /* Write out desired data */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, mtd->oobblock, page & this->pagemask);
+ /* prepad 0xff for partial programming */
+ this->write_buf(mtd, ffchars, column);
+ /* write data */
+ this->write_buf(mtd, buf, len);
+ /* postpad 0xff for partial programming */
+ this->write_buf(mtd, ffchars, mtd->oobsize - (len+column));
+ } else {
+ /* Write out desired data */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, mtd->oobblock + column, page & this->pagemask);
+ /* write data */
+ this->write_buf(mtd, buf, len);
+ }
+ /* Send command to program the OOB data */
+ this->cmdfunc (mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+ status = this->waitfunc (mtd, this, FL_WRITING);
+
+ /* See if device thinks it succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: " "Failed write, page 0x%08x\n", page);
+ ret = -EIO;
+ goto out;
+ }
+ /* Return happy */
+ *retlen = len;
+
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+ /* Send command to read back the data */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, column, page & this->pagemask);
+
+ if (this->verify_buf(mtd, buf, len)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: " "Failed write verify, page 0x%08x\n", page);
+ ret = -EIO;
+ goto out;
+ }
+#endif
+ ret = 0;
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ return ret;
+}
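+
+/*
+ * A usage sketch (illustrative; page_ofs names a page aligned target
+ * offset): write two bytes at the start of the spare area. column +
+ * len must stay within mtd->oobsize.
+ *
+ *    size_t retlen;
+ *    u_char marker[2] = { 0xde, 0xad };
+ *
+ *    mtd->write_oob(mtd, page_ofs, sizeof(marker), &retlen, marker);
+ */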
+
+
+/**
+ * nand_writev - [MTD Interface] compatibility function for nand_writev_ecc
+ * @mtd: MTD device structure
+ * @vecs: the iovectors to write
+ * @count: number of vectors
+ * @to: offset to write to
+ * @retlen: pointer to variable to store the number of written bytes
+ *
+ * NAND write with kvec. This just calls the ecc function
+ */
+static int nand_writev (struct mtd_info *mtd, const struct kvec *vecs, unsigned long count,
+ loff_t to, size_t * retlen)
+{
+ return (nand_writev_ecc (mtd, vecs, count, to, retlen, NULL, NULL));
+}
+
+/**
+ * nand_writev_ecc - [MTD Interface] write with iovec with ecc
+ * @mtd: MTD device structure
+ * @vecs: the iovectors to write
+ * @count: number of vectors
+ * @to: offset to write to
+ * @retlen: pointer to variable to store the number of written bytes
+ * @eccbuf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND write with iovec with ecc
+ */
+static int nand_writev_ecc (struct mtd_info *mtd, const struct kvec *vecs, unsigned long count,
+ loff_t to, size_t * retlen, u_char *eccbuf, struct nand_oobinfo *oobsel)
+{
+ int i, page, len, total_len, ret = -EIO, written = 0, chipnr;
+ int oob, numpages, autoplace = 0, startpage;
+ struct nand_chip *this = mtd->priv;
+ int ppblock = (1 << (this->phys_erase_shift - this->page_shift));
+ u_char *oobbuf, *bufstart;
+
+ /* Preset written len for early exit */
+ *retlen = 0;
+
+ /* Calculate total length of data */
+ total_len = 0;
+ for (i = 0; i < count; i++)
+ total_len += (int) vecs[i].iov_len;
+
+ DEBUG (MTD_DEBUG_LEVEL3,
+ "nand_writev: to = 0x%08x, len = %i, count = %ld\n", (unsigned int) to, (unsigned int) total_len, count);
+
+ /* Do not allow write past end of page */
+ if ((to + total_len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_writev: Attempted write past end of device\n");
+ return -EINVAL;
+ }
+
+ /* Reject writes which are not page aligned */
+ if (NOTALIGNED (to) || NOTALIGNED(total_len)) {
+ printk (KERN_NOTICE "nand_writev_ecc: Attempt to write data which is not page aligned\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_WRITING);
+
+ /* Get the current chip-nr */
+ chipnr = (int) (to >> this->chip_shift);
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* if oobsel is NULL, use chip defaults */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE) {
+ oobsel = this->autooob;
+ autoplace = 1;
+ }
+
+ /* Setup start page */
+ page = (int) (to >> this->page_shift);
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page <= this->pagebuf && this->pagebuf < ((to + total_len) >> this->page_shift))
+ this->pagebuf = -1;
+
+ startpage = page & this->pagemask;
+
+ /* Loop until all kvec' data has been written */
+ len = 0;
+ while (count) {
+ /* If the given tuple is >= pagesize then
+ * write it out from the iov
+ */
+ if ((vecs->iov_len - len) >= mtd->oobblock) {
+ /* Calc number of pages we can write
+ * out of this iov in one go */
+ numpages = (vecs->iov_len - len) >> this->page_shift;
+ /* Do not cross block boundaries */
+ numpages = min (ppblock - (startpage & (ppblock - 1)), numpages);
+ oobbuf = nand_prepare_oobbuf (mtd, NULL, oobsel, autoplace, numpages);
+ bufstart = (u_char *)vecs->iov_base;
+ bufstart += len;
+ this->data_poi = bufstart;
+ oob = 0;
+ for (i = 1; i <= numpages; i++) {
+ /* Write one page. If this is the last page to write
+ * then use the real pageprogram command, else select
+ * cached programming if supported by the chip.
+ */
+ ret = nand_write_page (mtd, this, page & this->pagemask,
+ &oobbuf[oob], oobsel, i != numpages);
+ if (ret)
+ goto out;
+ this->data_poi += mtd->oobblock;
+ len += mtd->oobblock;
+ oob += mtd->oobsize;
+ page++;
+ }
+ /* Check, if we have to switch to the next tuple */
+ if (len >= (int) vecs->iov_len) {
+ vecs++;
+ len = 0;
+ count--;
+ }
+ } else {
+ /* We must use the internal buffer, read data out of each
+ * tuple until we have a full page to write
+ */
+ int cnt = 0;
+ while (cnt < mtd->oobblock) {
+ if (vecs->iov_base != NULL && vecs->iov_len)
+ this->data_buf[cnt++] = ((u_char *) vecs->iov_base)[len++];
+ /* Check, if we have to switch to the next tuple */
+ if (len >= (int) vecs->iov_len) {
+ vecs++;
+ len = 0;
+ count--;
+ }
+ }
+ this->pagebuf = page;
+ this->data_poi = this->data_buf;
+ bufstart = this->data_poi;
+ numpages = 1;
+ oobbuf = nand_prepare_oobbuf (mtd, NULL, oobsel, autoplace, numpages);
+ ret = nand_write_page (mtd, this, page & this->pagemask,
+ oobbuf, oobsel, 0);
+ if (ret)
+ goto out;
+ page++;
+ }
+
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage, numpages, oobbuf, oobsel, chipnr, 0);
+ if (ret)
+ goto out;
+
+ written += mtd->oobblock * numpages;
+ /* All done ? */
+ if (!count)
+ break;
+
+ startpage = page & this->pagemask;
+ /* Check, if we cross a chip boundary */
+ if (!startpage) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ ret = 0;
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ *retlen = written;
+ return ret;
+}
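+
+/*
+ * A kvec sketch (illustrative; hdr and body are assumed buffers): two
+ * fragments which together form exactly one page are gathered through
+ * the internal buffer, while fragments of a page or more are written
+ * directly from the iovec.
+ *
+ *    struct kvec vecs[2] = {
+ *        { .iov_base = hdr,  .iov_len = 64 },
+ *        { .iov_base = body, .iov_len = mtd->oobblock - 64 },
+ *    };
+ *    size_t retlen;
+ *
+ *    mtd->writev_ecc(mtd, vecs, 2, to, &retlen, NULL, NULL);
+ */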
+
+/**
+ * single_erase_cmd - [GENERIC] NAND standard block erase command function
+ * @mtd: MTD device structure
+ * @page: the page address of the block which will be erased
+ *
+ * Standard erase command for NAND chips
+ */
+static void single_erase_cmd (struct mtd_info *mtd, int page)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Send commands to erase a block */
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page);
+ this->cmdfunc (mtd, NAND_CMD_ERASE2, -1, -1);
+}
+
+/**
+ * multi_erase_cmd - [GENERIC] AND specific block erase command function
+ * @mtd: MTD device structure
+ * @page: the page address of the block which will be erased
+ *
+ * AND multi block erase command function
+ * Erase 4 consecutive blocks
+ */
+static void multi_erase_cmd (struct mtd_info *mtd, int page)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Send commands to erase a block */
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page);
+ this->cmdfunc (mtd, NAND_CMD_ERASE2, -1, -1);
+}
+
+/**
+ * nand_erase - [MTD Interface] erase block(s)
+ * @mtd: MTD device structure
+ * @instr: erase instruction
+ *
+ * Erase one or more blocks
+ */
+static int nand_erase (struct mtd_info *mtd, struct erase_info *instr)
+{
+ return nand_erase_nand (mtd, instr, 0);
+}
+
+/**
+ * nand_erase_nand - [NAND Interface] erase block(s)
+ * @mtd: MTD device structure
+ * @instr: erase instruction
+ * @allowbbt: allow erasing the bbt area
+ *
+ * Erase one or more blocks
+ */
+int nand_erase_nand (struct mtd_info *mtd, struct erase_info *instr, int allowbbt)
+{
+ int page, len, status, pages_per_block, ret, chipnr;
+ struct nand_chip *this = mtd->priv;
+
+ DEBUG (MTD_DEBUG_LEVEL3,
+ "nand_erase: start = 0x%08x, len = %i\n", (unsigned int) instr->addr, (unsigned int) instr->len);
+
+ /* Start address must align on block boundary */
+ if (instr->addr & ((1 << this->phys_erase_shift) - 1)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Unaligned address\n");
+ return -EINVAL;
+ }
+
+ /* Length must align on block boundary */
+ if (instr->len & ((1 << this->phys_erase_shift) - 1)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Length not block aligned\n");
+ return -EINVAL;
+ }
+
+ /* Do not allow erase past end of device */
+ if ((instr->len + instr->addr) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Erase past end of device\n");
+ return -EINVAL;
+ }
+
+ instr->fail_addr = 0xffffffff;
+
+ /* Grab the lock and see if the device is available */
+ nand_get_chip (this, mtd, FL_ERASING);
+
+ /* Shift to get first page */
+ page = (int) (instr->addr >> this->page_shift);
+ chipnr = (int) (instr->addr >> this->chip_shift);
+
+ /* Calculate pages in each block */
+ pages_per_block = 1 << (this->phys_erase_shift - this->page_shift);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Check, if the device is write protected */
+ if (nand_check_wp(mtd)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Device is write protected!!!\n");
+ instr->state = MTD_ERASE_FAILED;
+ goto erase_exit;
+ }
+
+ /* Loop through the pages */
+ len = instr->len;
+
+ instr->state = MTD_ERASING;
+
+ while (len) {
+ /* Check if we have a bad block, we do not erase bad blocks ! */
+ if (nand_block_checkbad(mtd, ((loff_t) page) << this->page_shift, 0, allowbbt)) {
+ printk (KERN_WARNING "nand_erase: attempt to erase a bad block at page 0x%08x\n", page);
+ instr->state = MTD_ERASE_FAILED;
+ goto erase_exit;
+ }
+
+ /* Invalidate the page cache, if we erase the block which contains
+ the current cached page */
+ if (page <= this->pagebuf && this->pagebuf < (page + pages_per_block))
+ this->pagebuf = -1;
+
+ this->erase_cmd (mtd, page & this->pagemask);
+
+ status = this->waitfunc (mtd, this, FL_ERASING);
+
+ /* See if block erase succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: " "Failed erase, page 0x%08x\n", page);
+ instr->state = MTD_ERASE_FAILED;
+ instr->fail_addr = (page << this->page_shift);
+ goto erase_exit;
+ }
+
+ /* Increment page address and decrement length */
+ len -= (1 << this->phys_erase_shift);
+ page += pages_per_block;
+
+ /* Check, if we cross a chip boundary */
+ if (len && !(page & this->pagemask)) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ instr->state = MTD_ERASE_DONE;
+
+erase_exit:
+
+ ret = instr->state == MTD_ERASE_DONE ? 0 : -EIO;
+ /* Do call back function */
+ if (!ret && instr->callback)
+ instr->callback (instr);
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_chip(mtd);
+
+ /* Return more or less happy */
+ return ret;
+}
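+
+/*
+ * An erase sketch (illustrative): erase the first block. addr and len
+ * must be block aligned; on failure fail_addr points at the block
+ * which could not be erased.
+ *
+ *    struct erase_info einfo;
+ *
+ *    memset(&einfo, 0, sizeof(einfo));
+ *    einfo.mtd = mtd;
+ *    einfo.addr = 0;
+ *    einfo.len = mtd->erasesize;
+ *    if (mtd->erase(mtd, &einfo))
+ *        printk("erase failed at 0x%08x\n", einfo.fail_addr);
+ */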
+
+/**
+ * nand_sync - [MTD Interface] sync
+ * @mtd: MTD device structure
+ *
+ * Sync is actually a wait for chip ready function
+ */
+static void nand_sync (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ DECLARE_WAITQUEUE (wait, current);
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_sync: called\n");
+
+retry:
+ /* Grab the spinlock */
+ spin_lock_bh (&this->chip_lock);
+
+ /* See what's going on */
+ switch (this->state) {
+ case FL_READY:
+ case FL_SYNCING:
+ this->state = FL_SYNCING;
+ spin_unlock_bh (&this->chip_lock);
+ break;
+
+ default:
+ /* Not an idle state */
+ add_wait_queue (&this->wq, &wait);
+ spin_unlock_bh (&this->chip_lock);
+ schedule ();
+
+ remove_wait_queue (&this->wq, &wait);
+ goto retry;
+ }
+
+ /* Lock the device */
+ spin_lock_bh (&this->chip_lock);
+
+ /* Set the device to be ready again */
+ if (this->state == FL_SYNCING) {
+ this->state = FL_READY;
+ wake_up (&this->wq);
+ }
+
+ /* Unlock the device */
+ spin_unlock_bh (&this->chip_lock);
+}
+
+
+/**
+ * nand_block_isbad - [MTD Interface] Check whether the block at the given offset is bad
+ * @mtd: MTD device structure
+ * @ofs: offset relative to mtd start
+ */
+static int nand_block_isbad (struct mtd_info *mtd, loff_t ofs)
+{
+ /* Check for invalid offset */
+ if (ofs >= mtd->size)
+ return -EINVAL;
+
+ return nand_block_checkbad (mtd, ofs, 1, 0);
+}
+
+/**
+ * nand_block_markbad - [MTD Interface] Mark the block at the given offset as bad
+ * @mtd: MTD device structure
+ * @ofs: offset relative to mtd start
+ */
+static int nand_block_markbad (struct mtd_info *mtd, loff_t ofs)
+{
+ struct nand_chip *this = mtd->priv;
+ int ret;
+
+ if ((ret = nand_block_isbad(mtd, ofs))) {
+ /* If it was bad already, return success and do nothing. */
+ if (ret > 0)
+ return 0;
+ return ret;
+ }
+
+ return this->block_markbad(mtd, ofs);
+}
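+
+/*
+ * A bad block handling sketch (illustrative): callers typically skip
+ * blocks which are already marked bad and mark a block bad after a
+ * failed write or erase.
+ *
+ *    if (mtd->block_isbad(mtd, ofs) > 0)
+ *        continue;
+ *    if (mtd->erase(mtd, &einfo) == -EIO)
+ *        mtd->block_markbad(mtd, ofs);
+ */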
+
+/**
+ * nand_scan - [NAND Interface] Scan for the NAND device
+ * @mtd: MTD device structure
+ * @maxchips: Number of chips to scan for
+ *
+ * This fills out all the uninitialized function pointers
+ * with default values.
+ * The flash ID is read and the mtd/chip structures are
+ * filled with the appropriate values. Buffers are allocated if
+ * they are not provided by the board driver
+ *
+ */
+int nand_scan (struct mtd_info *mtd, int maxchips)
+{
+ int i, j, nand_maf_id, nand_dev_id, busw;
+ struct nand_chip *this = mtd->priv;
+
+ /* Get buswidth to select the correct functions*/
+ busw = this->options & NAND_BUSWIDTH_16;
+
+ /* check for proper chip_delay setup, set 20us if not */
+ if (!this->chip_delay)
+ this->chip_delay = 20;
+
+ /* check, if a user supplied command function was given */
+ if (this->cmdfunc == NULL)
+ this->cmdfunc = nand_command;
+
+ /* check, if a user supplied wait function was given */
+ if (this->waitfunc == NULL)
+ this->waitfunc = nand_wait;
+
+ if (!this->select_chip)
+ this->select_chip = nand_select_chip;
+ if (!this->write_byte)
+ this->write_byte = busw ? nand_write_byte16 : nand_write_byte;
+ if (!this->read_byte)
+ this->read_byte = busw ? nand_read_byte16 : nand_read_byte;
+ if (!this->write_word)
+ this->write_word = nand_write_word;
+ if (!this->read_word)
+ this->read_word = nand_read_word;
+ if (!this->block_bad)
+ this->block_bad = nand_block_bad;
+ if (!this->block_markbad)
+ this->block_markbad = nand_default_block_markbad;
+ if (!this->write_buf)
+ this->write_buf = busw ? nand_write_buf16 : nand_write_buf;
+ if (!this->read_buf)
+ this->read_buf = busw ? nand_read_buf16 : nand_read_buf;
+ if (!this->verify_buf)
+ this->verify_buf = busw ? nand_verify_buf16 : nand_verify_buf;
+ if (!this->scan_bbt)
+ this->scan_bbt = nand_default_bbt;
+
+ /* Select the device */
+ this->select_chip(mtd, 0);
+
+ /* Send the command for reading device ID */
+ this->cmdfunc (mtd, NAND_CMD_READID, 0x00, -1);
+
+ /* Read manufacturer and device IDs */
+ nand_maf_id = this->read_byte(mtd);
+ nand_dev_id = this->read_byte(mtd);
+
+ /* Print and store flash device information */
+ for (i = 0; nand_flash_ids[i].name != NULL; i++) {
+
+ if (nand_dev_id != nand_flash_ids[i].id)
+ continue;
+
+ if (!mtd->name) mtd->name = nand_flash_ids[i].name;
+ this->chipsize = nand_flash_ids[i].chipsize << 20;
+
+ /* New devices have all the information in additional id bytes */
+ if (!nand_flash_ids[i].pagesize) {
+ int extid;
+ /* The 3rd id byte holds no relevant data at the moment */
+ extid = this->read_byte(mtd);
+ /* The 4th id byte is the important one */
+ extid = this->read_byte(mtd);
+ /* Calc pagesize */
+ mtd->oobblock = 1024 << (extid & 0x3);
+ extid >>= 2;
+ /* Calc oobsize */
+ mtd->oobsize = (8 << (extid & 0x03)) * (mtd->oobblock / 512);
+ extid >>= 2;
+ /* Calc blocksize. Blocksize is multiples of 64KiB */
+ mtd->erasesize = (64 * 1024) << (extid & 0x03);
+ extid >>= 2;
+ /* Get buswidth information */
+ busw = (extid & 0x01) ? NAND_BUSWIDTH_16 : 0;
+
+ } else {
+ /* Old devices have this data hardcoded in the
+ * device id table */
+ mtd->erasesize = nand_flash_ids[i].erasesize;
+ mtd->oobblock = nand_flash_ids[i].pagesize;
+ mtd->oobsize = mtd->oobblock / 32;
+ busw = nand_flash_ids[i].options & NAND_BUSWIDTH_16;
+ }
+
+ /* Check, if buswidth is correct. Hardware drivers should set
+ * this correct ! */
+ if (busw != (this->options & NAND_BUSWIDTH_16)) {
+ printk (KERN_INFO "NAND device: Manufacturer ID:"
+ " 0x%02x, Chip ID: 0x%02x (%s %s)\n", nand_maf_id, nand_dev_id,
+ nand_manuf_ids[i].name , mtd->name);
+ printk (KERN_WARNING
+ "NAND bus width %d instead %d bit\n",
+ (this->options & NAND_BUSWIDTH_16) ? 16 : 8,
+ busw ? 16 : 8);
+ this->select_chip(mtd, -1);
+ return 1;
+ }
+
+ /* Calculate the address shift from the page size */
+ this->page_shift = ffs(mtd->oobblock) - 1;
+ this->bbt_erase_shift = this->phys_erase_shift = ffs(mtd->erasesize) - 1;
+ this->chip_shift = ffs(this->chipsize) - 1;
+
+ /* Set the bad block position */
+ this->badblockpos = mtd->oobblock > 512 ?
+ NAND_LARGE_BADBLOCK_POS : NAND_SMALL_BADBLOCK_POS;
+
+ /* Get chip options, preserve non chip based options */
+ this->options &= ~NAND_CHIPOPTIONS_MSK;
+ this->options |= nand_flash_ids[i].options & NAND_CHIPOPTIONS_MSK;
+ /* Set this as a default. Board drivers can override it, if necessary */
+ this->options |= NAND_NO_AUTOINCR;
+ /* Check, if this is not a Samsung device. Do not clear the options
+ * for chips which do not have an extended id.
+ */
+ if (nand_maf_id != NAND_MFR_SAMSUNG && !nand_flash_ids[i].pagesize)
+ this->options &= ~NAND_SAMSUNG_LP_OPTIONS;
+
+ /* Check for AND chips with 4 page planes */
+ if (this->options & NAND_4PAGE_ARRAY)
+ this->erase_cmd = multi_erase_cmd;
+ else
+ this->erase_cmd = single_erase_cmd;
+
+ /* Do not replace user supplied command function ! */
+ if (mtd->oobblock > 512 && this->cmdfunc == nand_command)
+ this->cmdfunc = nand_command_lp;
+
+ /* Try to identify manufacturer */
+ for (j = 0; nand_manuf_ids[j].id != 0x0; j++) {
+ if (nand_manuf_ids[j].id == nand_maf_id)
+ break;
+ }
+ printk (KERN_INFO "NAND device: Manufacturer ID:"
+ " 0x%02x, Chip ID: 0x%02x (%s %s)\n", nand_maf_id, nand_dev_id,
+ nand_manuf_ids[j].name , nand_flash_ids[i].name);
+ break;
+ }
+
+ if (!nand_flash_ids[i].name) {
+ printk (KERN_WARNING "No NAND device found!!!\n");
+ this->select_chip(mtd, -1);
+ return 1;
+ }
+
+ for (i=1; i < maxchips; i++) {
+ this->select_chip(mtd, i);
+
+ /* Send the command for reading device ID */
+ this->cmdfunc (mtd, NAND_CMD_READID, 0x00, -1);
+
+ /* Read manufacturer and device IDs */
+ if (nand_maf_id != this->read_byte(mtd) ||
+ nand_dev_id != this->read_byte(mtd))
+ break;
+ }
+ if (i > 1)
+ printk(KERN_INFO "%d NAND chips detected\n", i);
+
+ /* Allocate buffers, if necessary */
+ if (!this->oob_buf) {
+ size_t len;
+ len = mtd->oobsize << (this->phys_erase_shift - this->page_shift);
+ this->oob_buf = kmalloc (len, GFP_KERNEL);
+ if (!this->oob_buf) {
+ printk (KERN_ERR "nand_scan(): Cannot allocate oob_buf\n");
+ return -ENOMEM;
+ }
+ this->options |= NAND_OOBBUF_ALLOC;
+ }
+
+ if (!this->data_buf) {
+ size_t len;
+ len = mtd->oobblock + mtd->oobsize;
+ this->data_buf = kmalloc (len, GFP_KERNEL);
+ if (!this->data_buf) {
+ if (this->options & NAND_OOBBUF_ALLOC)
+ kfree (this->oob_buf);
+ printk (KERN_ERR "nand_scan(): Cannot allocate data_buf\n");
+ return -ENOMEM;
+ }
+ this->options |= NAND_DATABUF_ALLOC;
+ }
+
+ /* Store the number of chips and calc total size for mtd */
+ this->numchips = i;
+ mtd->size = i * this->chipsize;
+ /* Convert chipsize to number of pages per chip -1. */
+ this->pagemask = (this->chipsize >> this->page_shift) - 1;
+ /* Preset the internal oob buffer */
+ memset(this->oob_buf, 0xff, mtd->oobsize << (this->phys_erase_shift - this->page_shift));
+
+ /* If no default placement scheme is given, select an
+ * appropriate one */
+ if (!this->autooob) {
+ /* Select the appropriate default oob placement scheme for
+ * placement agnostic filesystems */
+ switch (mtd->oobsize) {
+ case 8:
+ this->autooob = &nand_oob_8;
+ break;
+ case 16:
+ this->autooob = &nand_oob_16;
+ break;
+ case 64:
+ this->autooob = &nand_oob_64;
+ break;
+ default:
+ printk (KERN_WARNING "No oob scheme defined for oobsize %d\n",
+ mtd->oobsize);
+ BUG();
+ }
+ }
+
+ /* The number of bytes available for the filesystem to place fs
+ * dependent oob data */
+ if (this->options & NAND_BUSWIDTH_16) {
+ mtd->oobavail = mtd->oobsize - (this->autooob->eccbytes + 2);
+ if (this->autooob->eccbytes & 0x01)
+ mtd->oobavail--;
+ } else
+ mtd->oobavail = mtd->oobsize - (this->autooob->eccbytes + 1);
+
+ /*
+ * check ECC mode, default to software:
+ * if 3 byte / 512 byte hardware ECC is selected and we have a
+ * 256 byte pagesize, fall back to software ECC
+ */
+ this->eccsize = 256; /* set default eccsize */
+
+ switch (this->eccmode) {
+
+ case NAND_ECC_HW3_512:
+ case NAND_ECC_HW6_512:
+ case NAND_ECC_HW8_512:
+ if (mtd->oobblock == 256) {
+ printk (KERN_WARNING "512 byte HW ECC not possible on 256 Byte pagesize, fallback to SW ECC \n");
+ this->eccmode = NAND_ECC_SOFT;
+ this->calculate_ecc = nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ break;
+ } else
+ this->eccsize = 512; /* set eccsize to 512 and fall through for function check */
+
+ case NAND_ECC_HW3_256:
+ if (this->calculate_ecc && this->correct_data && this->enable_hwecc)
+ break;
+ printk (KERN_WARNING "No ECC functions supplied, Hardware ECC not possible\n");
+ BUG();
+
+ case NAND_ECC_NONE:
+ printk (KERN_WARNING "NAND_ECC_NONE selected by board driver. This is not recommended !!\n");
+ this->eccmode = NAND_ECC_NONE;
+ break;
+
+ case NAND_ECC_SOFT:
+ this->calculate_ecc = nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ mtd->eccsize = this->eccsize;
+
+ /* Set the number of read / write steps for one page to ensure ECC generation */
+ switch (this->eccmode) {
+ case NAND_ECC_HW3_512:
+ case NAND_ECC_HW6_512:
+ case NAND_ECC_HW8_512:
+ this->eccsteps = mtd->oobblock / 512;
+ break;
+ case NAND_ECC_HW3_256:
+ case NAND_ECC_SOFT:
+ this->eccsteps = mtd->oobblock / 256;
+ break;
+
+ case NAND_ECC_NONE:
+ this->eccsteps = 1;
+ break;
+ }
+
+ /* Initialize state, waitqueue and spinlock */
+ this->state = FL_READY;
+ init_waitqueue_head (&this->wq);
+ spin_lock_init (&this->chip_lock);
+
+ /* De-select the device */
+ this->select_chip(mtd, -1);
+
+ /* Invalidate the pagebuffer reference */
+ this->pagebuf = -1;
+
+ /* Fill in remaining MTD driver data */
+ mtd->type = MTD_NANDFLASH;
+ mtd->flags = MTD_CAP_NANDFLASH | MTD_ECC;
+ mtd->ecctype = MTD_ECC_SW;
+ mtd->erase = nand_erase;
+ mtd->point = NULL;
+ mtd->unpoint = NULL;
+ mtd->read = nand_read;
+ mtd->write = nand_write;
+ mtd->read_ecc = nand_read_ecc;
+ mtd->write_ecc = nand_write_ecc;
+ mtd->read_oob = nand_read_oob;
+ mtd->write_oob = nand_write_oob;
+ mtd->readv = NULL;
+ mtd->writev = nand_writev;
+ mtd->writev_ecc = nand_writev_ecc;
+ mtd->sync = nand_sync;
+ mtd->lock = NULL;
+ mtd->unlock = NULL;
+ mtd->suspend = NULL;
+ mtd->resume = NULL;
+ mtd->block_isbad = nand_block_isbad;
+ mtd->block_markbad = nand_block_markbad;
+
+ /* and make the autooob the default one */
+ memcpy(&mtd->oobinfo, this->autooob, sizeof(mtd->oobinfo));
+
+ mtd->owner = THIS_MODULE;
+
+ /* Build bad block table */
+ return this->scan_bbt (mtd);
+}
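+
+/*
+ * A board driver sketch (a minimal, hypothetical setup; io_base
+ * stands for the board specific chip mapping): fill in the few
+ * mandatory fields, let nand_scan() provide the defaults above, then
+ * register the resulting device.
+ *
+ *    static struct mtd_info board_mtd;
+ *    static struct nand_chip board_chip;
+ *
+ *    board_mtd.priv = &board_chip;
+ *    board_chip.IO_ADDR_R = io_base;
+ *    board_chip.IO_ADDR_W = io_base;
+ *    board_chip.eccmode = NAND_ECC_SOFT;
+ *    board_chip.chip_delay = 25;
+ *    if (!nand_scan(&board_mtd, 1))
+ *        add_mtd_device(&board_mtd);
+ */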
+
+/**
+ * nand_release - [NAND Interface] Free resources held by the NAND device
+ * @mtd: MTD device structure
+*/
+void nand_release (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+#if defined(CONFIG_MTD_PARTITIONS) || defined(CONFIG_MTD_PARTITIONS_MODULE)
+ /* Unregister partitions */
+ del_mtd_partitions (mtd);
+#endif
+ /* Unregister the device */
+ del_mtd_device (mtd);
+
+ /* Free bad block table memory, if allocated */
+ if (this->bbt)
+ kfree (this->bbt);
+ /* Buffer allocated by nand_scan ? */
+ if (this->options & NAND_OOBBUF_ALLOC)
+ kfree (this->oob_buf);
+ /* Buffer allocated by nand_scan ? */
+ if (this->options & NAND_DATABUF_ALLOC)
+ kfree (this->data_buf);
+}
+
+EXPORT_SYMBOL (nand_scan);
+EXPORT_SYMBOL (nand_release);
+
+MODULE_LICENSE ("GPL");
+MODULE_AUTHOR ("Steven J. Hill <sjhill@realitydiluted.com>, Thomas Gleixner <tglx@linutronix.de>");
+MODULE_DESCRIPTION ("Generic NAND flash driver code");
--- /dev/null
+/*
+ * drivers/mtd/nand_bbt.c
+ *
+ * Overview:
+ * Bad block table support for the NAND driver
+ *
+ * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)
+ *
+ * $Id: nand_bbt.c,v 1.24 2004/06/28 08:25:35 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Description:
+ *
+ * When nand_scan_bbt is called, it tries to find the bad block table
+ * depending on the options in the bbt descriptor(s). If a bbt is found
+ * then the contents are read and the memory based bbt is created. If a
+ * mirrored bbt is selected then the mirror is searched too and the
+ * versions are compared. If the mirror has a greater version number
+ * then the mirror bbt is used to build the memory based bbt.
+ * If the tables are not versioned, then we "or" the bad block information.
+ * If one of the bbt's is out of date or does not exist it is (re)created.
+ * If no bbt exists at all then the device is scanned for factory marked
+ * good / bad blocks and the bad block tables are created.
+ *
+ * For manufacturer created bbts like the one found on M-SYS DOC devices
+ * the bbt is searched and read but never created
+ *
+ * The autogenerated bad block table is located in the last good blocks
+ * of the device. The table is mirrored, so it can be updated safely.
+ * The table is marked in the oob area with an ident pattern and a version
+ * number which indicates which of both tables is more up to date.
+ *
+ * The table uses 2 bits per block
+ * 11b: block is good
+ * 00b: block is factory marked bad
+ * 01b, 10b: block is marked bad due to wear
+ *
+ * The memory bad block table uses the following scheme:
+ * 00b: block is good
+ * 01b: block is marked bad due to wear
+ * 10b: block is reserved (to protect the bbt area)
+ * 11b: block is factory marked bad
+ *
+ * Multichip devices like DOC store the bad block info per floor.
+ *
+ * Following assumptions are made:
+ * - bbts start at a page boundary, if autolocated on a block boundary
+ * - the space necessary for a bbt in FLASH does not exceed a block boundary
+ *
+ */
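+
+/*
+ * A decoding example (illustrative): with the 2 bit in-memory scheme
+ * the state of block n is recovered as
+ *
+ *    state = (bbt[n >> 2] >> (2 * (n & 0x03))) & 0x03;
+ *
+ * which is the same lookup the table write code below performs.
+ */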
+
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+
+
+/**
+ * check_pattern - [GENERIC] check if a pattern is in the buffer
+ * @buf: the buffer to search
+ * @len: the length of buffer to search
+ * @paglen: the pagelength
+ * @td: search pattern descriptor
+ *
+ * Check for a pattern at the given place. Used to search bad block
+ * tables and good / bad block identifiers.
+ * If the NAND_BBT_SCANEMPTY option is set then check, if all bytes
+ * except the pattern area contain 0xff.
+ *
+*/
+static int check_pattern (uint8_t *buf, int len, int paglen, struct nand_bbt_descr *td)
+{
+ int i, end;
+ uint8_t *p = buf;
+
+ end = paglen + td->offs;
+ if (td->options & NAND_BBT_SCANEMPTY) {
+ for (i = 0; i < end; i++) {
+ if (p[i] != 0xff)
+ return -1;
+ }
+ }
+ p += end;
+
+ /* Compare the pattern */
+ for (i = 0; i < td->len; i++) {
+ if (p[i] != td->pattern[i])
+ return -1;
+ }
+
+ p += td->len;
+ end += td->len;
+ if (td->options & NAND_BBT_SCANEMPTY) {
+ for (i = end; i < len; i++) {
+ if (*p++ != 0xff)
+ return -1;
+ }
+ }
+ return 0;
+}
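+
+/*
+ * A descriptor sketch (hypothetical values): search the last blocks
+ * of the device for the 4 byte ident "Bbt0" at offset 8 of the oob
+ * area, with a version byte following the pattern.
+ *
+ *    static uint8_t bbt_pattern[] = { 'B', 'b', 't', '0' };
+ *    static struct nand_bbt_descr bbt_main_descr = {
+ *        .options = NAND_BBT_LASTBLOCK | NAND_BBT_2BIT
+ *                 | NAND_BBT_VERSION,
+ *        .offs = 8,
+ *        .len = 4,
+ *        .veroffs = 12,
+ *        .maxblocks = 4,
+ *        .pattern = bbt_pattern,
+ *    };
+ */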
+
+/**
+ * read_bbt - [GENERIC] Read the bad block table starting from page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @page: the starting page
+ * @num: the number of bbt descriptors to read
+ * @bits: number of bits per block
+ * @offs: offset in the memory table
+ * @reserved_block_code: the block value which marks reserved blocks
+ *
+ * Read the bad block table starting from page.
+ *
+ */
+static int read_bbt (struct mtd_info *mtd, uint8_t *buf, int page, int num,
+ int bits, int offs, int reserved_block_code)
+{
+ int res, i, j, act = 0;
+ struct nand_chip *this = mtd->priv;
+ size_t retlen, len, totlen;
+ loff_t from;
+ uint8_t msk = (uint8_t) ((1 << bits) - 1);
+
+ totlen = (num * bits) >> 3;
+ from = ((loff_t)page) << this->page_shift;
+
+ while (totlen) {
+ len = min (totlen, (size_t) (1 << this->bbt_erase_shift));
+ res = mtd->read_ecc (mtd, from, len, &retlen, buf, NULL, this->autooob);
+ if (res < 0) {
+ if (retlen != len) {
+ printk (KERN_INFO "nand_bbt: Error reading bad block table\n");
+ return res;
+ }
+ printk (KERN_WARNING "nand_bbt: ECC error while reading bad block table\n");
+ }
+
+ /* Analyse data */
+ for (i = 0; i < len; i++) {
+ uint8_t dat = buf[i];
+ for (j = 0; j < 8; j += bits, act += 2) {
+ uint8_t tmp = (dat >> j) & msk;
+ if (tmp == msk)
+ continue;
+ if (reserved_block_code &&
+ (tmp == reserved_block_code)) {
+ printk (KERN_DEBUG "nand_read_bbt: Reserved block at 0x%08x\n",
+ ((offs << 2) + (act >> 1)) << this->bbt_erase_shift);
+ this->bbt[offs + (act >> 3)] |= 0x2 << (act & 0x06);
+ continue;
+ }
+ /* Leave it for now, if it has matured we can move this
+ * message to MTD_DEBUG_LEVEL0 */
+ printk (KERN_DEBUG "nand_read_bbt: Bad block at 0x%08x\n",
+ ((offs << 2) + (act >> 1)) << this->bbt_erase_shift);
+ /* Factory marked bad or worn out ? */
+ if (tmp == 0)
+ this->bbt[offs + (act >> 3)] |= 0x3 << (act & 0x06);
+ else
+ this->bbt[offs + (act >> 3)] |= 0x1 << (act & 0x06);
+ }
+ }
+ totlen -= len;
+ from += len;
+ }
+ return 0;
+}
+
+/**
+ * read_abs_bbt - [GENERIC] Read the bad block table starting at a given page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @chip: read the table for a specific chip, -1 reads all chips.
+ * Applies only if NAND_BBT_PERCHIP option is set
+ *
+ * Read the bad block table for all chips starting at a given page
+ * We assume that the bbt bits are in consecutive order.
+*/
+static int read_abs_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ int res = 0, i;
+ int bits;
+
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+ if (td->options & NAND_BBT_PERCHIP) {
+ int offs = 0;
+ for (i = 0; i < this->numchips; i++) {
+ if (chip == -1 || chip == i)
+ res = read_bbt (mtd, buf, td->pages[i], this->chipsize >> this->bbt_erase_shift, bits, offs, td->reserved_block_code);
+ if (res)
+ return res;
+ offs += this->chipsize >> (this->bbt_erase_shift + 2);
+ }
+ } else {
+ res = read_bbt (mtd, buf, td->pages[0], mtd->size >> this->bbt_erase_shift, bits, 0, td->reserved_block_code);
+ if (res)
+ return res;
+ }
+ return 0;
+}
+
+/**
+ * read_abs_bbts - [GENERIC] Read the bad block table(s) for all chips starting at a given page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ *
+ * Read the bad block table(s) for all chips starting at a given page
+ * We assume that the bbt bits are in consecutive order.
+ *
+*/
+static int read_abs_bbts (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td,
+ struct nand_bbt_descr *md)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Read the primary version, if available */
+ if (td->options & NAND_BBT_VERSION) {
+ nand_read_raw (mtd, buf, td->pages[0] << this->page_shift, mtd->oobblock, mtd->oobsize);
+ td->version[0] = buf[mtd->oobblock + td->veroffs];
+ printk (KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", td->pages[0], td->version[0]);
+ }
+
+ /* Read the mirror version, if available */
+ if (md && (md->options & NAND_BBT_VERSION)) {
+ nand_read_raw (mtd, buf, md->pages[0] << this->page_shift, mtd->oobblock, mtd->oobsize);
+ md->version[0] = buf[mtd->oobblock + md->veroffs];
+ printk (KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", md->pages[0], md->version[0]);
+ }
+
+ return 1;
+}
+
+/**
+ * create_bbt - [GENERIC] Create a bad block table by scanning the device
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @bd: descriptor for the good/bad block search pattern
+ * @chip: create the table for a specific chip, -1 reads all chips.
+ * Applies only if NAND_BBT_PERCHIP option is set
+ *
+ * Create a bad block table by scanning the device
+ * for the given good/bad block identify pattern
+ */
+static void create_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, j, numblocks, len, scanlen;
+ int startblock;
+ loff_t from;
+ size_t readlen, ooblen;
+
+ printk (KERN_INFO "Scanning device for bad blocks\n");
+
+ if (bd->options & NAND_BBT_SCANALLPAGES)
+ len = 1 << (this->bbt_erase_shift - this->page_shift);
+ else {
+ if (bd->options & NAND_BBT_SCAN2NDPAGE)
+ len = 2;
+ else
+ len = 1;
+ }
+ scanlen = mtd->oobblock + mtd->oobsize;
+ readlen = len * mtd->oobblock;
+ ooblen = len * mtd->oobsize;
+
+ if (chip == -1) {
+ /* Note that numblocks is 2 * (real numblocks) here; see the
+ * i += 2 below, as it makes shifting and masking less painful */
+ numblocks = mtd->size >> (this->bbt_erase_shift - 1);
+ startblock = 0;
+ from = 0;
+ } else {
+ if (chip >= this->numchips) {
+ printk (KERN_WARNING "create_bbt(): chipnr (%d) > available chips (%d)\n",
+ chip + 1, this->numchips);
+ return;
+ }
+ numblocks = this->chipsize >> (this->bbt_erase_shift - 1);
+ startblock = chip * numblocks;
+ numblocks += startblock;
+ from = startblock << (this->bbt_erase_shift - 1);
+ }
+
+ for (i = startblock; i < numblocks;) {
+ nand_read_raw (mtd, buf, from, readlen, ooblen);
+ for (j = 0; j < len; j++) {
+ if (check_pattern (&buf[j * scanlen], scanlen, mtd->oobblock, bd)) {
+ this->bbt[i >> 3] |= 0x03 << (i & 0x6);
+ printk (KERN_WARNING "Bad eraseblock %d at 0x%08x\n",
+ i >> 1, (unsigned int) from);
+ break;
+ }
+ }
+ i += 2;
+ from += (1 << this->bbt_erase_shift);
+ }
+}
+
+/**
+ * search_bbt - [GENERIC] scan the device for a specific bad block table
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ *
+ * Read the bad block table by searching for a given ident pattern.
+ * Search is performed either from the beginning up or from the end of
+ * the device downwards. The search starts always at the start of a
+ * block.
+ * If the option NAND_BBT_PERCHIP is given, each chip is searched
+ * for a bbt, which contains the bad block information of this chip.
+ * This is necessary to provide support for certain DOC devices.
+ *
+ * The bbt ident pattern resides in the oob area of the first page
+ * in a block.
+ */
+static int search_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, chips;
+ int bits, startblock, block, dir;
+ int scanlen = mtd->oobblock + mtd->oobsize;
+ int bbtblocks;
+
+ /* Search direction top -> down ? */
+ if (td->options & NAND_BBT_LASTBLOCK) {
+ startblock = (mtd->size >> this->bbt_erase_shift) -1;
+ dir = -1;
+ } else {
+ startblock = 0;
+ dir = 1;
+ }
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chips = this->numchips;
+ bbtblocks = this->chipsize >> this->bbt_erase_shift;
+ startblock &= bbtblocks - 1;
+ } else {
+ chips = 1;
+ bbtblocks = mtd->size >> this->bbt_erase_shift;
+ }
+
+ /* Number of bits for each erase block in the bbt */
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+
+ for (i = 0; i < chips; i++) {
+ /* Reset version information */
+ td->version[i] = 0;
+ td->pages[i] = -1;
+ /* Scan the maximum number of blocks */
+ for (block = 0; block < td->maxblocks; block++) {
+ int actblock = startblock + dir * block;
+ /* Read first page */
+ nand_read_raw (mtd, buf, actblock << this->bbt_erase_shift, mtd->oobblock, mtd->oobsize);
+ if (!check_pattern(buf, scanlen, mtd->oobblock, td)) {
+ td->pages[i] = actblock << (this->bbt_erase_shift - this->page_shift);
+ if (td->options & NAND_BBT_VERSION) {
+ td->version[i] = buf[mtd->oobblock + td->veroffs];
+ }
+ break;
+ }
+ }
+ startblock += this->chipsize >> this->bbt_erase_shift;
+ }
+ /* Check, if we found a bbt for each requested chip */
+ for (i = 0; i < chips; i++) {
+ if (td->pages[i] == -1)
+ printk (KERN_WARNING "Bad block table not found for chip %d\n", i);
+ else
+ printk (KERN_DEBUG "Bad block table found at page %d, version 0x%02X\n", td->pages[i], td->version[i]);
+ }
+ return 0;
+}
+
+/**
+ * search_read_bbts - [GENERIC] scan the device for bad block table(s)
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ *
+ * Search and read the bad block table(s)
+*/
+static int search_read_bbts (struct mtd_info *mtd, uint8_t *buf,
+ struct nand_bbt_descr *td, struct nand_bbt_descr *md)
+{
+ /* Search the primary table */
+ search_bbt (mtd, buf, td);
+
+ /* Search the mirror table */
+ if (md)
+ search_bbt (mtd, buf, md);
+
+ /* Force result check */
+ return 1;
+}
+
+
+/**
+ * write_bbt - [GENERIC] (Re)write the bad block table
+ *
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ * @chipsel: selector for a specific chip, -1 for all
+ *
+ * (Re)write the bad block table
+ *
+*/
+static int write_bbt (struct mtd_info *mtd, uint8_t *buf,
+ struct nand_bbt_descr *td, struct nand_bbt_descr *md, int chipsel)
+{
+ struct nand_chip *this = mtd->priv;
+ struct nand_oobinfo oobinfo;
+ struct erase_info einfo;
+ int i, j, res, chip = 0;
+ int bits, startblock, dir, page, offs, numblocks, sft, sftmsk;
+ int nrchips, bbtoffs, pageoffs;
+ uint8_t msk[4];
+ uint8_t rcode = td->reserved_block_code;
+ size_t retlen, len = 0;
+ loff_t to;
+
+ if (!rcode)
+ rcode = 0xff;
+ /* Write bad block table per chip rather than per device ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ numblocks = (int) (this->chipsize >> this->bbt_erase_shift);
+ /* Full device write or specific chip ? */
+ if (chipsel == -1) {
+ nrchips = this->numchips;
+ } else {
+ nrchips = chipsel + 1;
+ chip = chipsel;
+ }
+ } else {
+ numblocks = (int) (mtd->size >> this->bbt_erase_shift);
+ nrchips = 1;
+ }
+
+ /* Loop through the chips */
+ for (; chip < nrchips; chip++) {
+
+ /* If there was already a version of the table, reuse the page.
+ * This applies to absolute placement too, as we have the
+ * page nr. in td->pages.
+ */
+ if (td->pages[chip] != -1) {
+ page = td->pages[chip];
+ goto write;
+ }
+
+ /* Automatic placement of the bad block table */
+ /* Search direction top -> down ? */
+ if (td->options & NAND_BBT_LASTBLOCK) {
+ startblock = numblocks * (chip + 1) - 1;
+ dir = -1;
+ } else {
+ startblock = chip * numblocks;
+ dir = 1;
+ }
+
+ for (i = 0; i < td->maxblocks; i++) {
+ int block = startblock + dir * i;
+ /* Check if the block is bad */
+ switch ((this->bbt[block >> 2] >> (2 * (block & 0x03))) & 0x03) {
+ case 0x01:
+ case 0x03:
+ continue;
+ }
+ page = block << (this->bbt_erase_shift - this->page_shift);
+ /* Check if the block is used by the mirror table */
+ if (!md || md->pages[chip] != page)
+ goto write;
+ }
+ printk (KERN_ERR "No space left to write bad block table\n");
+ return -ENOSPC;
+write:
+
+ /* Set up shift count and masks for the flash table */
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+ switch (bits) {
+ case 1: sft = 3; sftmsk = 0x07; msk[0] = 0x00; msk[1] = 0x01; msk[2] = ~rcode; msk[3] = 0x01; break;
+ case 2: sft = 2; sftmsk = 0x06; msk[0] = 0x00; msk[1] = 0x01; msk[2] = ~rcode; msk[3] = 0x03; break;
+ case 4: sft = 1; sftmsk = 0x04; msk[0] = 0x00; msk[1] = 0x0C; msk[2] = ~rcode; msk[3] = 0x0f; break;
+ case 8: sft = 0; sftmsk = 0x00; msk[0] = 0x00; msk[1] = 0x0F; msk[2] = ~rcode; msk[3] = 0xff; break;
+ default: return -EINVAL;
+ }
+
+ bbtoffs = chip * (numblocks >> 2);
+
+ to = ((loff_t) page) << this->page_shift;
+
+ memcpy (&oobinfo, this->autooob, sizeof(oobinfo));
+ oobinfo.useecc = MTD_NANDECC_PLACEONLY;
+
+ /* Must we save the block contents ? */
+ if (td->options & NAND_BBT_SAVECONTENT) {
+ /* Make it block aligned */
+ to &= ~((loff_t) ((1 << this->bbt_erase_shift) - 1));
+ len = 1 << this->bbt_erase_shift;
+ res = mtd->read_ecc (mtd, to, len, &retlen, buf, &buf[len], &oobinfo);
+ if (res < 0) {
+ if (retlen != len) {
+ printk (KERN_INFO "nand_bbt: Error reading block for writing the bad block table\n");
+ return res;
+ }
+ printk (KERN_WARNING "nand_bbt: ECC error while reading block for writing bad block table\n");
+ }
+ /* Calc the byte offset in the buffer */
+ pageoffs = page - (int)(to >> this->page_shift);
+ offs = pageoffs << this->page_shift;
+ /* Preset the bbt area with 0xff */
+ memset (&buf[offs], 0xff, (size_t)(numblocks >> sft));
+ /* Preset the bbt's oob area with 0xff */
+ memset (&buf[len + pageoffs * mtd->oobsize], 0xff,
+ ((len >> this->page_shift) - pageoffs) * mtd->oobsize);
+ if (td->options & NAND_BBT_VERSION) {
+ buf[len + (pageoffs * mtd->oobsize) + td->veroffs] = td->version[chip];
+ }
+ } else {
+ /* Calc length */
+ len = (size_t) (numblocks >> sft);
+ /* Make it page aligned ! */
+ len = (len + (mtd->oobblock-1)) & ~(mtd->oobblock-1);
+ /* Preset the buffer with 0xff */
+ memset (buf, 0xff, len + (len >> this->page_shift) * mtd->oobsize);
+ offs = 0;
+ /* Pattern is located in oob area of first page */
+ memcpy (&buf[len + td->offs], td->pattern, td->len);
+ if (td->options & NAND_BBT_VERSION) {
+ buf[len + td->veroffs] = td->version[chip];
+ }
+ }
+
+ /* walk through the memory table */
+ for (i = 0; i < numblocks; ) {
+ uint8_t dat;
+ dat = this->bbt[bbtoffs + (i >> 2)];
+ for (j = 0; j < 4; j++ , i++) {
+ int sftcnt = (i << (3 - sft)) & sftmsk;
+ /* Do not store the reserved bbt blocks ! */
+ buf[offs + (i >> sft)] &= ~(msk[dat & 0x03] << sftcnt);
+ dat >>= 2;
+ }
+ }
+
+ memset (&einfo, 0, sizeof (einfo));
+ einfo.mtd = mtd;
+ einfo.addr = (unsigned long) to;
+ einfo.len = 1 << this->bbt_erase_shift;
+ res = nand_erase_nand (mtd, &einfo, 1);
+ if (res < 0) {
+ printk (KERN_WARNING "nand_bbt: Error during block erase: %d\n", res);
+ return res;
+ }
+
+ res = mtd->write_ecc (mtd, to, len, &retlen, buf, &buf[len], &oobinfo);
+ if (res < 0) {
+ printk (KERN_WARNING "nand_bbt: Error while writing bad block table %d\n", res);
+ return res;
+ }
+ printk (KERN_DEBUG "Bad block table written to 0x%08x, version 0x%02X\n",
+ (unsigned int) to, td->version[chip]);
+
+ /* Mark it as used */
+ td->pages[chip] = page;
+ }
+ return 0;
+}
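Editor's note on the mask logic above: for the common 2-bit table, the write loop packs four block states into each flash byte by clearing bits in a buffer preset to 0xff. The following is a standalone userspace sketch, not part of the patch, mirroring the bits = 2 case with the default reserved_block_code (so rcode = 0xff); the example values are invented for illustration.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* bits = 2 case above: sft = 2, sftmsk = 0x06, msk = {0x00, 0x01, ~0xff, 0x03} */
            const uint8_t msk[4] = { 0x00, 0x01, 0x00, 0x03 };
            uint8_t flash = 0xff;   /* erased flash byte holds four blocks */
            uint8_t mem = 0xB4;     /* in-memory states 00,01,11,10 for blocks 0..3 */
            int i;

            for (i = 0; i < 4; i++) {
                    int sftcnt = (i << 1) & 0x06;   /* (i << (3 - sft)) & sftmsk */
                    flash &= ~(msk[(mem >> (2 * i)) & 0x03] << sftcnt);
            }
            /* Prints 0xcb: good -> 11, worn bad -> 10, factory bad -> 00,
               reserved -> 11 (not recorded with the default rcode) */
            printf("stored byte: 0x%02x\n", flash);
            return 0;
    }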
+
+/**
+ * nand_memory_bbt - [GENERIC] create a memory based bad block table
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function creates a memory based bbt by scanning the device
+ * for manufacturer / software marked good / bad blocks
+*/
+static int nand_memory_bbt (struct mtd_info *mtd, struct nand_bbt_descr *bd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Ensure that we only scan for the pattern and nothing else */
+ bd->options = 0;
+ create_bbt (mtd, this->data_buf, bd, -1);
+ return 0;
+}
+
+/**
+ * check_create - [GENERIC] create and write bbt(s) if necessary
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function checks the results of the previous call to read_bbt
+ * and creates / updates the bbt(s) if necessary.
+ * Creation is necessary if no bbt was found for the chip/device.
+ * Update is necessary if one of the tables is missing or the
+ * version nr. of one table is less than the other.
+*/
+static int check_create (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd)
+{
+ int i, chips, writeops, chipsel, res;
+ struct nand_chip *this = mtd->priv;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+ struct nand_bbt_descr *rd, *rd2;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP)
+ chips = this->numchips;
+ else
+ chips = 1;
+
+ for (i = 0; i < chips; i++) {
+ writeops = 0;
+ rd = NULL;
+ rd2 = NULL;
+ /* Per chip or per device ? */
+ chipsel = (td->options & NAND_BBT_PERCHIP) ? i : -1;
+ /* Mirrored table available ? */
+ if (md) {
+ if (td->pages[i] == -1 && md->pages[i] == -1) {
+ writeops = 0x03;
+ goto create;
+ }
+
+ if (td->pages[i] == -1) {
+ rd = md;
+ td->version[i] = md->version[i];
+ writeops = 1;
+ goto writecheck;
+ }
+
+ if (md->pages[i] == -1) {
+ rd = td;
+ md->version[i] = td->version[i];
+ writeops = 2;
+ goto writecheck;
+ }
+
+ if (td->version[i] == md->version[i]) {
+ rd = td;
+ if (!(td->options & NAND_BBT_VERSION))
+ rd2 = md;
+ goto writecheck;
+ }
+
+ if (((int8_t) (td->version[i] - md->version[i])) > 0) {
+ rd = td;
+ md->version[i] = td->version[i];
+ writeops = 2;
+ } else {
+ rd = md;
+ td->version[i] = md->version[i];
+ writeops = 1;
+ }
+
+ goto writecheck;
+
+ } else {
+ if (td->pages[i] == -1) {
+ writeops = 0x01;
+ goto create;
+ }
+ rd = td;
+ goto writecheck;
+ }
+create:
+ /* Create the bad block table by scanning the device ? */
+ if (!(td->options & NAND_BBT_CREATE))
+ continue;
+
+ /* Create the table in memory by scanning the chip(s) */
+ create_bbt (mtd, buf, bd, chipsel);
+
+ td->version[i] = 1;
+ if (md)
+ md->version[i] = 1;
+writecheck:
+ /* read back first ? */
+ if (rd)
+ read_abs_bbt (mtd, buf, rd, chipsel);
+ /* If they weren't versioned, read both. */
+ if (rd2)
+ read_abs_bbt (mtd, buf, rd2, chipsel);
+
+ /* Write the bad block table to the device ? */
+ if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, td, md, chipsel);
+ if (res < 0)
+ return res;
+ }
+
+ /* Write the mirror bad block table to the device ? */
+ if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, md, td, chipsel);
+ if (res < 0)
+ return res;
+ }
+ }
+ return 0;
+}
+
+/**
+ * mark_bbt_region - [GENERIC] mark the bad block table regions
+ * @mtd: MTD device structure
+ * @td: bad block table descriptor
+ *
+ * The bad block table regions are marked as "bad" to prevent
+ * accidental erasures / writes. The regions are identified by
+ * the mark 0x02.
+*/
+static void mark_bbt_region (struct mtd_info *mtd, struct nand_bbt_descr *td)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, j, chips, block, nrblocks, update;
+ uint8_t oldval, newval;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chips = this->numchips;
+ nrblocks = (int)(this->chipsize >> this->bbt_erase_shift);
+ } else {
+ chips = 1;
+ nrblocks = (int)(mtd->size >> this->bbt_erase_shift);
+ }
+
+ for (i = 0; i < chips; i++) {
+ if ((td->options & NAND_BBT_ABSPAGE) ||
+ !(td->options & NAND_BBT_WRITE)) {
+ if (td->pages[i] == -1) continue;
+ block = td->pages[i] >> (this->bbt_erase_shift - this->page_shift);
+ block <<= 1;
+ oldval = this->bbt[(block >> 3)];
+ newval = oldval | (0x2 << (block & 0x06));
+ this->bbt[(block >> 3)] = newval;
+ if ((oldval != newval) && td->reserved_block_code)
+ nand_update_bbt(mtd, block << (this->bbt_erase_shift - 1));
+ continue;
+ }
+ update = 0;
+ if (td->options & NAND_BBT_LASTBLOCK)
+ block = ((i + 1) * nrblocks) - td->maxblocks;
+ else
+ block = i * nrblocks;
+ block <<= 1;
+ for (j = 0; j < td->maxblocks; j++) {
+ oldval = this->bbt[(block >> 3)];
+ newval = oldval | (0x2 << (block & 0x06));
+ this->bbt[(block >> 3)] = newval;
+ if (oldval != newval) update = 1;
+ block += 2;
+ }
+ /* If we want reserved blocks to be recorded to flash, and some
+ new ones have been marked, then we need to update the stored
+ bbts. This should only happen once. */
+ if (update && td->reserved_block_code)
+ nand_update_bbt(mtd, (block - 2) << (this->bbt_erase_shift - 1));
+ }
+}
+
+/**
+ * nand_scan_bbt - [NAND Interface] scan, find, read and maybe create bad block table(s)
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function checks if a bad block table is already
+ * available. If not, it scans the device for manufacturer
+ * marked good / bad blocks and writes the bad block table(s) to
+ * the selected place.
+ *
+ * The bad block table memory is allocated here. It must be freed
+ * by calling the nand_free_bbt function.
+ *
+*/
+int nand_scan_bbt (struct mtd_info *mtd, struct nand_bbt_descr *bd)
+{
+ struct nand_chip *this = mtd->priv;
+ int len, res = 0;
+ uint8_t *buf;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+
+ len = mtd->size >> (this->bbt_erase_shift + 2);
+ /* Allocate memory (2bit per block) */
+ this->bbt = (uint8_t *) kmalloc (len, GFP_KERNEL);
+ if (!this->bbt) {
+ printk (KERN_ERR "nand_scan_bbt: Out of memory\n");
+ return -ENOMEM;
+ }
+ /* Clear the memory bad block table */
+ memset (this->bbt, 0x00, len);
+
+ /* If no primary table descriptor is given, scan the device
+ * to build a memory based bad block table
+ */
+ if (!td)
+ return nand_memory_bbt(mtd, bd);
+
+ /* Allocate a temporary buffer for one eraseblock incl. oob */
+ len = (1 << this->bbt_erase_shift);
+ len += (len >> this->page_shift) * mtd->oobsize;
+ buf = kmalloc (len, GFP_KERNEL);
+ if (!buf) {
+ printk (KERN_ERR "nand_bbt: Out of memory\n");
+ kfree (this->bbt);
+ this->bbt = NULL;
+ return -ENOMEM;
+ }
+
+ /* Is the bbt at a given page ? */
+ if (td->options & NAND_BBT_ABSPAGE) {
+ res = read_abs_bbts (mtd, buf, td, md);
+ } else {
+ /* Search the bad block table using a pattern in oob */
+ res = search_read_bbts (mtd, buf, td, md);
+ }
+
+ if (res)
+ res = check_create (mtd, buf, bd);
+
+ /* Prevent the bbt regions from erasing / writing */
+ mark_bbt_region (mtd, td);
+ if (md)
+ mark_bbt_region (mtd, md);
+
+ kfree (buf);
+ return res;
+}
+
+
+/**
+ * nand_update_bbt - [NAND Interface] update bad block table(s)
+ * @mtd: MTD device structure
+ * @offs: the offset of the newly marked block
+ *
+ * The function updates the bad block table(s)
+*/
+int nand_update_bbt (struct mtd_info *mtd, loff_t offs)
+{
+ struct nand_chip *this = mtd->priv;
+ int len, res = 0, writeops = 0;
+ int chip, chipsel;
+ uint8_t *buf;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+
+ if (!this->bbt || !td)
+ return -EINVAL;
+
+ /* Allocate a temporary buffer for one eraseblock incl. oob */
+ len = (1 << this->bbt_erase_shift);
+ len += (len >> this->page_shift) * mtd->oobsize;
+ buf = kmalloc (len, GFP_KERNEL);
+ if (!buf) {
+ printk (KERN_ERR "nand_update_bbt: Out of memory\n");
+ return -ENOMEM;
+ }
+
+ writeops = md != NULL ? 0x03 : 0x01;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chip = (int) (offs >> this->chip_shift);
+ chipsel = chip;
+ } else {
+ chip = 0;
+ chipsel = -1;
+ }
+
+ td->version[chip]++;
+ if (md)
+ md->version[chip]++;
+
+ /* Write the bad block table to the device ? */
+ if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, td, md, chipsel);
+ if (res < 0)
+ goto out;
+ }
+ /* Write the mirror bad block table to the device ? */
+ if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, md, td, chipsel);
+ }
+
+out:
+ kfree (buf);
+ return res;
+}
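For context, nand_update_bbt() is what persists a table after a block has been marked bad in the in-memory map. The helper below is a hypothetical sketch of a caller (the myboard_markbad() name is invented); it mirrors the NAND core's markbad path of setting the 01 wear-bad state before rewriting the stored table.

    /* Hypothetical helper: record a wear-bad block, then rewrite the stored bbt */
    static int myboard_markbad(struct mtd_info *mtd, loff_t ofs)
    {
            struct nand_chip *this = mtd->priv;
            int block = (int)(ofs >> (this->bbt_erase_shift - 1)); /* block number * 2 */

            this->bbt[block >> 3] |= 0x01 << (block & 0x06);       /* 01 = marked bad */
            return nand_update_bbt(mtd, ofs);
    }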
+
+/* Define some generic bad / good block scan patterns which are used
+ * while scanning a device for factory marked good / bad blocks
+ */
+static uint8_t scan_ff_pattern[] = { 0xff, 0xff };
+
+static struct nand_bbt_descr smallpage_memorybased = {
+ .options = 0,
+ .offs = 5,
+ .len = 1,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr largepage_memorybased = {
+ .options = 0,
+ .offs = 0,
+ .len = 2,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr smallpage_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 5,
+ .len = 1,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr largepage_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 0,
+ .len = 2,
+ .pattern = scan_ff_pattern
+};
+
+static uint8_t scan_agand_pattern[] = { 0x1C, 0x71, 0xC7, 0x1C, 0x71, 0xC7 };
+
+static struct nand_bbt_descr agand_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 0x20,
+ .len = 6,
+ .pattern = scan_agand_pattern
+};
+
+/* Generic flash bbt descriptors
+*/
+static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' };
+static uint8_t mirror_pattern[] = {'1', 't', 'b', 'B' };
+
+static struct nand_bbt_descr bbt_main_descr = {
+ .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
+ | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
+ .offs = 8,
+ .len = 4,
+ .veroffs = 12,
+ .maxblocks = 4,
+ .pattern = bbt_pattern
+};
+
+static struct nand_bbt_descr bbt_mirror_descr = {
+ .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
+ | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
+ .offs = 8,
+ .len = 4,
+ .veroffs = 12,
+ .maxblocks = 4,
+ .pattern = mirror_pattern
+};
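These two descriptors are the defaults that nand_default_bbt() below falls back to. A board driver that wants its own on-flash table layout can install descriptors before nand_scan() runs; the following is a minimal sketch. All myboard_* names are hypothetical, and the mirror descriptor is deliberately omitted, which the code above tolerates since every md access is NULL-checked.

    static uint8_t myboard_pattern[] = { 'M', 'y', 'B', '0' };

    static struct nand_bbt_descr myboard_main_descr = {
            .options   = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
                       | NAND_BBT_2BIT | NAND_BBT_VERSION,
            .offs      = 8,                 /* pattern at OOB offset 8 */
            .len       = 4,
            .veroffs   = 12,                /* version byte right after the pattern */
            .maxblocks = 4,                 /* search at most 4 blocks from the end */
            .pattern   = myboard_pattern,
    };

    static void myboard_setup_bbt(struct nand_chip *this)
    {
            this->bbt_td = &myboard_main_descr;
            this->bbt_md = NULL;                    /* no mirror table */
            this->options |= NAND_USE_FLASH_BBT;    /* honored by nand_default_bbt() */
    }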
+
+/**
+ * nand_default_bbt - [NAND Interface] Select a default bad block table for the device
+ * @mtd: MTD device structure
+ *
+ * This function selects the default bad block table
+ * support for the device and calls the nand_scan_bbt function
+ *
+*/
+int nand_default_bbt (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Default for AG-AND. We must use a flash based
+ * bad block table as the devices have factory marked
+ * _good_ blocks. Erasing those blocks leads to loss
+ * of the good / bad information, so we _must_ store
+ * this information in a good / bad table during
+ * startup
+ */
+ if (this->options & NAND_IS_AND) {
+ /* Use the default pattern descriptors */
+ if (!this->bbt_td) {
+ this->bbt_td = &bbt_main_descr;
+ this->bbt_md = &bbt_mirror_descr;
+ }
+ this->options |= NAND_USE_FLASH_BBT;
+ return nand_scan_bbt (mtd, &agand_flashbased);
+ }
+
+ /* Is a flash based bad block table requested ? */
+ if (this->options & NAND_USE_FLASH_BBT) {
+ /* Use the default pattern descriptors */
+ if (!this->bbt_td) {
+ this->bbt_td = &bbt_main_descr;
+ this->bbt_md = &bbt_mirror_descr;
+ }
+ if (mtd->oobblock > 512)
+ return nand_scan_bbt (mtd, &largepage_flashbased);
+ else
+ return nand_scan_bbt (mtd, &smallpage_flashbased);
+ } else {
+ this->bbt_td = NULL;
+ this->bbt_md = NULL;
+ if (mtd->oobblock > 512)
+ return nand_scan_bbt (mtd, &largepage_memorybased);
+ else
+ return nand_scan_bbt (mtd, &smallpage_memorybased);
+ }
+}
+
+/**
+ * nand_isbad_bbt - [NAND Interface] Check if a block is bad
+ * @mtd: MTD device structure
+ * @offs: offset in the device
+ * @allowbbt: allow access to bad block table region
+ *
+*/
+int nand_isbad_bbt (struct mtd_info *mtd, loff_t offs, int allowbbt)
+{
+ struct nand_chip *this = mtd->priv;
+ int block;
+ uint8_t res;
+
+ /* Get block number * 2 */
+ block = (int) (offs >> (this->bbt_erase_shift - 1));
+ res = (this->bbt[block >> 3] >> (block & 0x06)) & 0x03;
+
+ DEBUG (MTD_DEBUG_LEVEL2, "nand_isbad_bbt(): bbt info for offs 0x%08x: (block %d) 0x%02x\n",
+ (unsigned int)offs, block >> 1, res);
+
+ switch ((int)res) {
+ case 0x00: return 0;
+ case 0x01: return 1;
+ case 0x02: return allowbbt ? 0 : 1;
+ }
+ return 1;
+}
+
+EXPORT_SYMBOL (nand_scan_bbt);
+EXPORT_SYMBOL (nand_default_bbt);
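To summarize the in-memory encoding that nand_isbad_bbt() consumes: two bits per block, where 00 is good, 01 is marked bad due to wear, 10 is reserved for the table itself, and 11 is factory marked bad. A standalone userspace sketch of the lookup follows; the table contents are invented for illustration.

    #include <stdio.h>
    #include <stdint.h>

    static const char *state[] = {
            "good", "marked bad (wear)", "reserved (bbt)", "factory bad"
    };

    int main(void)
    {
            uint8_t bbt[] = { 0xB4 };       /* four blocks: states 00, 01, 11, 10 */
            int block;

            for (block = 0; block < 4; block++) {
                    int code = (bbt[block >> 2] >> (2 * (block & 0x03))) & 0x03;
                    printf("block %d: %s\n", block, state[code]);
            }
            return 0;
    }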
--- /dev/null
+/*
+ * drivers/mtd/nand/ppchameleonevb.c
+ *
+ * Copyright (C) 2003 DAVE Srl (info@wawnet.biz)
+ *
+ * Derived from drivers/mtd/nand/edb7312.c
+ *
+ *
+ * $Id: ppchameleonevb.c,v 1.2 2004/05/05 22:09:54 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Overview:
+ * This is a device driver for the NAND flash devices found on the
+ * PPChameleon/PPChameleonEVB system.
+ * PPChameleon options (autodetected):
+ * - BA model: no NAND
+ * - ME model: 32MB (Samsung K9F5608U0B)
+ * - HI model: 128MB (Samsung K9F1G08U0M)
+ * PPChameleonEVB options:
+ * - 32MB (Samsung K9F5608U0B)
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <platforms/PPChameleonEVB.h>
+
+#undef USE_READY_BUSY_PIN
+#define USE_READY_BUSY_PIN
+/* see datasheets (tR) */
+#define NAND_BIG_DELAY_US 25
+#define NAND_SMALL_DELAY_US 10
+
+/* handy sizes */
+#define SZ_4M 0x00400000
+#define NAND_SMALL_SIZE 0x02000000
+#define NAND_MTD_NAME "ppchameleon-nand"
+#define NAND_EVB_MTD_NAME "ppchameleonevb-nand"
+
+/* GPIO pins used to drive NAND chip mounted on processor module */
+#define NAND_nCE_GPIO_PIN (0x80000000 >> 1)
+#define NAND_CLE_GPIO_PIN (0x80000000 >> 2)
+#define NAND_ALE_GPIO_PIN (0x80000000 >> 3)
+#define NAND_RB_GPIO_PIN (0x80000000 >> 4)
+/* GPIO pins used to drive NAND chip mounted on EVB */
+#define NAND_EVB_nCE_GPIO_PIN (0x80000000 >> 14)
+#define NAND_EVB_CLE_GPIO_PIN (0x80000000 >> 15)
+#define NAND_EVB_ALE_GPIO_PIN (0x80000000 >> 16)
+#define NAND_EVB_RB_GPIO_PIN (0x80000000 >> 31)
+
+/*
+ * MTD structure for PPChameleonEVB board
+ */
+static struct mtd_info *ppchameleon_mtd = NULL;
+static struct mtd_info *ppchameleonevb_mtd = NULL;
+
+/*
+ * Module stuff
+ */
+static int ppchameleon_fio_pbase = CFG_NAND0_PADDR;
+static int ppchameleonevb_fio_pbase = CFG_NAND1_PADDR;
+
+#ifdef MODULE
+MODULE_PARM(ppchameleon_fio_pbase, "i");
+__setup("ppchameleon_fio_pbase=",ppchameleon_fio_pbase);
+MODULE_PARM(ppchameleonevb_fio_pbase, "i");
+__setup("ppchameleonevb_fio_pbase=",ppchameleonevb_fio_pbase);
+#endif
+
+/* Internal buffers. Page buffer and oob buffer for one block */
+static u_char data_buf[2048 + 64];
+static u_char oob_buf[64 * 64];
+static u_char data_buf_evb[512 + 16];
+static u_char oob_buf_evb[16 * 32];
+
+#ifdef CONFIG_MTD_PARTITIONS
+/*
+ * Define static partitions for flash devices
+ */
+static struct mtd_partition partition_info_hi[] = {
+ { name: "PPChameleon HI Nand Flash",
+ offset: 0,
+ size: 128*1024*1024 }
+};
+
+static struct mtd_partition partition_info_me[] = {
+ { name: "PPChameleon ME Nand Flash",
+ offset: 0,
+ size: 32*1024*1024 }
+};
+
+static struct mtd_partition partition_info_evb[] = {
+ { name: "PPChameleonEVB Nand Flash",
+ offset: 0,
+ size: 32*1024*1024 }
+};
+
+#define NUM_PARTITIONS 1
+
+extern int parse_cmdline_partitions(struct mtd_info *master,
+ struct mtd_partition **pparts,
+ const char *mtd_id);
+#endif
+
+
+/*
+ * hardware specific access to control-lines
+ */
+static void ppchameleon_hwcontrol(struct mtd_info *mtdinfo, int cmd)
+{
+ switch(cmd) {
+
+ case NAND_CTL_SETCLE:
+ MACRO_NAND_CTL_SETCLE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRCLE:
+ MACRO_NAND_CTL_CLRCLE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_SETALE:
+ MACRO_NAND_CTL_SETALE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRALE:
+ MACRO_NAND_CTL_CLRALE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_SETNCE:
+ MACRO_NAND_ENABLE_CE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRNCE:
+ MACRO_NAND_DISABLE_CE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ }
+}
+
+static void ppchameleonevb_hwcontrol(struct mtd_info *mtdinfo, int cmd)
+{
+ switch(cmd) {
+
+ case NAND_CTL_SETCLE:
+ MACRO_NAND_CTL_SETCLE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRCLE:
+ MACRO_NAND_CTL_CLRCLE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_SETALE:
+ MACRO_NAND_CTL_SETALE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRALE:
+ MACRO_NAND_CTL_CLRALE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_SETNCE:
+ MACRO_NAND_ENABLE_CE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRNCE:
+ MACRO_NAND_DISABLE_CE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ }
+}
+
+#ifdef USE_READY_BUSY_PIN
+/*
+ * read device ready pin
+ */
+static int ppchameleon_device_ready(struct mtd_info *minfo)
+{
+ if (in_be32((volatile unsigned*)GPIO0_IR) & NAND_RB_GPIO_PIN)
+ return 1;
+ return 0;
+}
+
+static int ppchameleonevb_device_ready(struct mtd_info *minfo)
+{
+ if (in_be32((volatile unsigned*)GPIO0_IR) & NAND_EVB_RB_GPIO_PIN)
+ return 1;
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_MTD_PARTITIONS
+const char *part_probes[] = { "cmdlinepart", NULL };
+const char *part_probes_evb[] = { "cmdlinepart", NULL };
+#endif
+
+/*
+ * Main initialization routine
+ */
+static int __init ppchameleonevb_init (void)
+{
+ struct nand_chip *this;
+ const char *part_type = 0;
+ int mtd_parts_nb = 0;
+ struct mtd_partition *mtd_parts = 0;
+ int ppchameleon_fio_base;
+ int ppchameleonevb_fio_base;
+
+
+ /*********************************
+ * Processor module NAND (if any) *
+ *********************************/
+ /* Allocate memory for MTD device structure and private data */
+ ppchameleon_mtd = kmalloc(sizeof(struct mtd_info) +
+ sizeof(struct nand_chip),
+ GFP_KERNEL);
+ if (!ppchameleon_mtd) {
+ printk("Unable to allocate PPChameleon NAND MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* map physical address */
+ ppchameleon_fio_base = (unsigned long)ioremap(ppchameleon_fio_pbase, SZ_4M);
+ if(!ppchameleon_fio_base) {
+ printk("ioremap PPChameleon NAND flash failed\n");
+ kfree(ppchameleon_mtd);
+ return -EIO;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&ppchameleon_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) ppchameleon_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ ppchameleon_mtd->priv = this;
+
+ /* Initialize GPIOs */
+ /* Pin mapping for NAND chip */
+ /*
+ CE GPIO_01
+ CLE GPIO_02
+ ALE GPIO_03
+ R/B GPIO_04
+ */
+ /* output select */
+ out_be32((volatile unsigned*)GPIO0_OSRH, in_be32((volatile unsigned*)GPIO0_OSRH) & 0xC0FFFFFF);
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xC0FFFFFF);
+ /* enable output driver */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) | NAND_nCE_GPIO_PIN | NAND_CLE_GPIO_PIN | NAND_ALE_GPIO_PIN);
+#ifdef USE_READY_BUSY_PIN
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xFF3FFFFF);
+ /* high impedance */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) & (~NAND_RB_GPIO_PIN));
+ /* input select */
+ out_be32((volatile unsigned*)GPIO0_ISR1H, (in_be32((volatile unsigned*)GPIO0_ISR1H) & 0xFF3FFFFF) | 0x00400000);
+#endif
+
+ /* insert callbacks */
+ this->IO_ADDR_R = ppchameleon_fio_base;
+ this->IO_ADDR_W = ppchameleon_fio_base;
+ this->hwcontrol = ppchameleon_hwcontrol;
+#ifdef USE_READY_BUSY_PIN
+ this->dev_ready = ppchameleon_device_ready;
+#endif
+ this->chip_delay = NAND_BIG_DELAY_US;
+ /* ECC mode */
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Set internal data buffer */
+ this->data_buf = data_buf;
+ this->oob_buf = oob_buf;
+
+ /* Scan to find existence of the device (the chip may not be mounted) */
+ if (nand_scan (ppchameleon_mtd, 1)) {
+ iounmap((void *)ppchameleon_fio_base);
+ kfree (ppchameleon_mtd);
+ goto nand_evb_init;
+ }
+
+#ifndef USE_READY_BUSY_PIN
+ /* Adjust delay if necessary */
+ if (ppchameleon_mtd->size == NAND_SMALL_SIZE)
+ this->chip_delay = NAND_SMALL_DELAY_US;
+#endif
+
+#ifdef CONFIG_MTD_PARTITIONS
+ ppchameleon_mtd->name = "ppchameleon-nand";
+ mtd_parts_nb = parse_mtd_partitions(ppchameleon_mtd, part_probes, &mtd_parts, 0);
+ if (mtd_parts_nb > 0)
+ part_type = "command line";
+ else
+ mtd_parts_nb = 0;
+#endif
+ if (mtd_parts_nb == 0)
+ {
+ if (ppchameleon_mtd->size == NAND_SMALL_SIZE)
+ mtd_parts = partition_info_me;
+ else
+ mtd_parts = partition_info_hi;
+ mtd_parts_nb = NUM_PARTITIONS;
+ part_type = "static";
+ }
+
+ /* Register the partitions */
+ printk(KERN_NOTICE "Using %s partition definition\n", part_type);
+ add_mtd_partitions(ppchameleon_mtd, mtd_parts, mtd_parts_nb);
+
+nand_evb_init:
+ /****************************
+ * EVB NAND (always present) *
+ ****************************/
+ /* Allocate memory for MTD device structure and private data */
+ ppchameleonevb_mtd = kmalloc(sizeof(struct mtd_info) +
+ sizeof(struct nand_chip),
+ GFP_KERNEL);
+ if (!ppchameleonevb_mtd) {
+ printk("Unable to allocate PPChameleonEVB NAND MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* map physical address */
+ ppchameleonevb_fio_base = (unsigned long)ioremap(ppchameleonevb_fio_pbase, SZ_4M);
+ if(!ppchameleonevb_fio_base) {
+ printk("ioremap PPChameleonEVB NAND flash failed\n");
+ kfree(ppchameleonevb_mtd);
+ return -EIO;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&ppchameleonevb_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) ppchameleonevb_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ ppchameleonevb_mtd->priv = this;
+
+ /* Initialize GPIOs */
+ /* Pin mapping for NAND chip */
+ /*
+ CE GPIO_14
+ CLE GPIO_15
+ ALE GPIO_16
+ R/B GPIO_31
+ */
+ /* output select */
+ out_be32((volatile unsigned*)GPIO0_OSRH, in_be32((volatile unsigned*)GPIO0_OSRH) & 0xFFFFFFF0);
+ out_be32((volatile unsigned*)GPIO0_OSRL, in_be32((volatile unsigned*)GPIO0_OSRL) & 0x3FFFFFFF);
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xFFFFFFF0);
+ out_be32((volatile unsigned*)GPIO0_TSRL, in_be32((volatile unsigned*)GPIO0_TSRL) & 0x3FFFFFFF);
+ /* enable output driver */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) | NAND_EVB_nCE_GPIO_PIN | NAND_EVB_CLE_GPIO_PIN | NAND_EVB_ALE_GPIO_PIN);
+#ifdef USE_READY_BUSY_PIN
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRL, in_be32((volatile unsigned*)GPIO0_TSRL) & 0xFFFFFFFC);
+ /* high impedance */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) & (~NAND_EVB_RB_GPIO_PIN));
+ /* input select */
+ out_be32((volatile unsigned*)GPIO0_ISR1L, (in_be32((volatile unsigned*)GPIO0_ISR1L) & 0xFFFFFFFC) | 0x00000001);
+#endif
+
+
+ /* insert callbacks */
+ this->IO_ADDR_R = ppchameleonevb_fio_base;
+ this->IO_ADDR_W = ppchameleonevb_fio_base;
+ this->hwcontrol = ppchameleonevb_hwcontrol;
+#ifdef USE_READY_BUSY_PIN
+ this->dev_ready = ppchameleonevb_device_ready;
+#endif
+ this->chip_delay = NAND_SMALL_DELAY_US;
+
+ /* ECC mode */
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Set internal data buffer */
+ this->data_buf = data_buf_evb;
+ this->oob_buf = oob_buf_evb;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (ppchameleonevb_mtd, 1)) {
+ iounmap((void *)ppchameleonevb_fio_base);
+ kfree (ppchameleonevb_mtd);
+ return -ENXIO;
+ }
+
+#ifdef CONFIG_MTD_PARTITIONS
+ ppchameleonevb_mtd->name = NAND_EVB_MTD_NAME;
+ mtd_parts_nb = parse_mtd_partitions(ppchameleonevb_mtd, part_probes_evb, &mtd_parts, 0);
+ if (mtd_parts_nb > 0)
+ part_type = "command line";
+ else
+ mtd_parts_nb = 0;
+#endif
+ if (mtd_parts_nb == 0)
+ {
+ mtd_parts = partition_info_evb;
+ mtd_parts_nb = NUM_PARTITIONS;
+ part_type = "static";
+ }
+
+ /* Register the partitions */
+ printk(KERN_NOTICE "Using %s partition definition\n", part_type);
+ add_mtd_partitions(ppchameleonevb_mtd, mtd_parts, mtd_parts_nb);
+
+ /* Return happy */
+ return 0;
+}
+module_init(ppchameleonevb_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit ppchameleonevb_cleanup (void)
+{
+ /* Unregister the partitions registered at init time */
+ del_mtd_partitions (ppchameleonevb_mtd);
+
+ /* The data / oob buffers are static arrays; there is nothing to free here */
+
+ /* Free the MTD device structure */
+ kfree (ppchameleonevb_mtd);
+}
+module_exit(ppchameleonevb_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("DAVE Srl <support-ppchameleon@dave-tech.it>");
+MODULE_DESCRIPTION("MTD map driver for DAVE Srl PPChameleonEVB board");
--- /dev/null
+/*
+ * drivers/mtd/nand/toto.c
+ *
+ * Copyright (c) 2003 Texas Instruments
+ *
+ * Derived from drivers/mtd/autcpu12.c
+ *
+ * Copyright (c) 2002 Thomas Gleixner <tgxl@linutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device found on the
+ * TI fido board. It supports 32MiB and 64MiB cards
+ *
+ * $Id: toto.c,v 1.2 2003/10/21 10:04:58 dwmw2 Exp $
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <asm/arch/hardware.h>
+#include <asm/sizes.h>
+#include <asm/arch/toto.h>
+#include <asm/arch-omap1510/hardware.h>
+#include <asm/arch/gpio.h>
+
+/*
+ * MTD structure for TOTO board
+ */
+static struct mtd_info *toto_mtd = NULL;
+
+static int toto_io_base = OMAP_FLASH_1_BASE;
+
+#define CONFIG_NAND_WORKAROUND 1
+
+#define NAND_NCE 0x4000
+#define NAND_CLE 0x1000
+#define NAND_ALE 0x0002
+#define NAND_MASK (NAND_CLE | NAND_ALE | NAND_NCE)
+
+#define T_NAND_CTL_CLRALE(iob) gpiosetout(NAND_ALE, 0)
+#define T_NAND_CTL_SETALE(iob) gpiosetout(NAND_ALE, NAND_ALE)
+#ifdef CONFIG_NAND_WORKAROUND /* "some" dev boards busted, blue wired to rts2 :( */
+#define T_NAND_CTL_CLRCLE(iob) gpiosetout(NAND_CLE, 0); rts2setout(2, 2)
+#define T_NAND_CTL_SETCLE(iob) gpiosetout(NAND_CLE, NAND_CLE); rts2setout(2, 0)
+#else
+#define T_NAND_CTL_CLRCLE(iob) gpiosetout(NAND_CLE, 0)
+#define T_NAND_CTL_SETCLE(iob) gpiosetout(NAND_CLE, NAND_CLE)
+#endif
+#define T_NAND_CTL_SETNCE(iob) gpiosetout(NAND_NCE, 0)
+#define T_NAND_CTL_CLRNCE(iob) gpiosetout(NAND_NCE, NAND_NCE)
+
+/*
+ * Define partitions for flash devices
+ */
+
+static struct mtd_partition partition_info64M[] = {
+ { .name = "toto kernel partition 1",
+ .offset = 0,
+ .size = 2 * SZ_1M },
+ { .name = "toto file sys partition 2",
+ .offset = 2 * SZ_1M,
+ .size = 14 * SZ_1M },
+ { .name = "toto user partition 3",
+ .offset = 16 * SZ_1M,
+ .size = 16 * SZ_1M },
+ { .name = "toto devboard extra partition 4",
+ .offset = 32 * SZ_1M,
+ .size = 32 * SZ_1M },
+};
+
+static struct mtd_partition partition_info32M[] = {
+ { .name = "toto kernel partition 1",
+ .offset = 0,
+ .size = 2 * SZ_1M },
+ { .name = "toto file sys partition 2",
+ .offset = 2 * SZ_1M,
+ .size = 14 * SZ_1M },
+ { .name = "toto user partition 3",
+ .offset = 16 * SZ_1M,
+ .size = 16 * SZ_1M },
+};
+
+#define NUM_PARTITIONS32M 3
+#define NUM_PARTITIONS64M 4
+/*
+ * hardware specific access to control-lines
+*/
+
+static void toto_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+
+ udelay(1); /* hopefully enough time for the preceding write to clear */
+ switch(cmd){
+
+ case NAND_CTL_SETCLE: T_NAND_CTL_SETCLE(cmd); break;
+ case NAND_CTL_CLRCLE: T_NAND_CTL_CLRCLE(cmd); break;
+
+ case NAND_CTL_SETALE: T_NAND_CTL_SETALE(cmd); break;
+ case NAND_CTL_CLRALE: T_NAND_CTL_CLRALE(cmd); break;
+
+ case NAND_CTL_SETNCE: T_NAND_CTL_SETNCE(cmd); break;
+ case NAND_CTL_CLRNCE: T_NAND_CTL_CLRNCE(cmd); break;
+ }
+ udelay(1); /* allow time for the GPIO state to take effect before the next memory write */
+}
+
+/*
+ * Main initialization routine
+ */
+int __init toto_init (void)
+{
+ struct nand_chip *this;
+ int err = 0;
+
+ /* Allocate memory for MTD device structure and private data */
+ toto_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!toto_mtd) {
+ printk (KERN_WARNING "Unable to allocate toto NAND MTD device structure.\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&toto_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) toto_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ toto_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = toto_io_base;
+ this->IO_ADDR_W = toto_io_base;
+ this->hwcontrol = toto_hwcontrol;
+ this->dev_ready = NULL;
+ /* 30 us command delay time */
+ this->chip_delay = 30;
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (toto_mtd, 1)) {
+ err = -ENXIO;
+ goto out_mtd;
+ }
+
+ /* Allocate memory for internal data buffer */
+ this->data_buf = kmalloc (sizeof(u_char) * (toto_mtd->oobblock + toto_mtd->oobsize), GFP_KERNEL);
+ if (!this->data_buf) {
+ printk (KERN_WARNING "Unable to allocate NAND data buffer for toto.\n");
+ err = -ENOMEM;
+ goto out_mtd;
+ }
+
+ /* Register the partitions */
+ switch(toto_mtd->size){
+ case SZ_64M: add_mtd_partitions(toto_mtd, partition_info64M, NUM_PARTITIONS64M); break;
+ case SZ_32M: add_mtd_partitions(toto_mtd, partition_info32M, NUM_PARTITIONS32M); break;
+ default: {
+ printk (KERN_WARNING "Unsupported Nand device\n");
+ err = -ENXIO;
+ goto out_buf;
+ }
+ }
+
+ gpioreserve(NAND_MASK); /* claim our gpios */
+ archflashwp(0,0); /* open up flash for writing */
+
+ goto out;
+
+out_buf:
+ kfree (this->data_buf);
+out_mtd:
+ kfree (toto_mtd);
+out:
+ return err;
+}
+
+module_init(toto_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit toto_cleanup (void)
+{
+ struct nand_chip *this = (struct nand_chip *) &toto_mtd[1];
+
+ /* Unregister partitions */
+ del_mtd_partitions(toto_mtd);
+
+ /* Unregister the device */
+ del_mtd_device (toto_mtd);
+
+ /* Free internal data buffers */
+ kfree (this->data_buf);
+
+ /* Free the MTD device structure */
+ kfree (toto_mtd);
+
+ /* stop flash writes */
+ archflashwp(0,1);
+
+ /* release gpios to system */
+ gpiorelease(NAND_MASK);
+}
+module_exit(toto_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Richard Woodruff <r-woodruff2@ti.com>");
+MODULE_DESCRIPTION("Glue layer for NAND flash on toto board");
--- /dev/null
+/*
+ * drivers/mtd/tx4925ndfmc.c
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device found on the
+ * Toshiba RBTX4925 reference board, which uses a SmartMedia card. It supports
+ * 16MiB, 32MiB and 64MiB cards.
+ *
+ * Author: MontaVista Software, Inc. source@mvista.com
+ *
+ * Derived from drivers/mtd/autcpu12.c
+ * Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
+ *
+ * $Id: tx4925ndfmc.c,v 1.2 2004/03/27 19:55:53 gleixner Exp $
+ *
+ * Copyright (C) 2001 Toshiba Corporation
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/delay.h>
+#include <asm/io.h>
+#include <asm/tx4925/tx4925_nand.h>
+
+extern struct nand_oobinfo jffs2_oobinfo;
+
+/*
+ * MTD structure for RBTX4925 board
+ */
+static struct mtd_info *tx4925ndfmc_mtd = NULL;
+
+/*
+ * Module stuff
+ */
+#if LINUX_VERSION_CODE < 0x20212 && defined(MODULE)
+#define tx4925ndfmc_init init_module
+#define tx4925ndfmc_cleanup cleanup_module
+#endif
+
+/*
+ * Define partitions for flash devices
+ */
+
+static struct mtd_partition partition_info16k[] = {
+ { .name = "RBTX4925 flash partition 1",
+ .offset = 0,
+ .size = 8 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 8 * 0x00100000,
+ .size = 8 * 0x00100000 },
+};
+
+static struct mtd_partition partition_info32k[] = {
+ { .name = "RBTX4925 flash partition 1",
+ .offset = 0,
+ .size = 8 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 8 * 0x00100000,
+ .size = 24 * 0x00100000 },
+};
+
+static struct mtd_partition partition_info64k[] = {
+ { .name = "User FS",
+ .offset = 0,
+ .size = 16 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 16 * 0x00100000,
+ .size = 48 * 0x00100000},
+};
+
+static struct mtd_partition partition_info128k[] = {
+ { .name = "Skip bad section",
+ .offset = 0,
+ .size = 16 * 0x00100000 },
+ { .name = "User FS",
+ .offset = 16 * 0x00100000,
+ .size = 112 * 0x00100000 },
+};
+#define NUM_PARTITIONS16K 2
+#define NUM_PARTITIONS32K 2
+#define NUM_PARTITIONS64K 2
+#define NUM_PARTITIONS128K 2
+
+/*
+ * hardware specific access to control-lines
+*/
+static void tx4925ndfmc_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+
+ switch(cmd){
+
+ case NAND_CTL_SETCLE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ALE;
+ break;
+ case NAND_CTL_SETNCE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_CE;
+ break;
+ case NAND_CTL_SETWP:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_WE;
+ break;
+ case NAND_CTL_CLRWP:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_WE;
+ break;
+ }
+}
+
+/*
+* read device ready pin
+*/
+static int tx4925ndfmc_device_ready(struct mtd_info *mtd)
+{
+ int ready;
+ ready = (tx4925_ndfmcptr->sr & TX4925_NDSFR_BUSY) ? 0 : 1;
+ return ready;
+}
+void tx4925ndfmc_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ /* reset first */
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_ENAB;
+}
+static void tx4925ndfmc_disable_ecc(void)
+{
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+}
+static void tx4925ndfmc_enable_read_ecc(void)
+{
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_READ;
+}
+void tx4925ndfmc_readecc(struct mtd_info *mtd, const u_char *dat, u_char *ecc_code)
+{
+ int i;
+ u_char *ecc = ecc_code;
+
+ tx4925ndfmc_enable_read_ecc();
+ for (i = 0; i < 6; i++, ecc++)
+ *ecc = tx4925_read_nfmc(&(tx4925_ndfmcptr->dtr));
+ tx4925ndfmc_disable_ecc();
+}
+void tx4925ndfmc_device_setup(void)
+{
+
+ *(unsigned char *)0xbb005000 &= ~0x08;
+
+ /* reset NDFMC */
+ tx4925_ndfmcptr->rstr |= TX4925_NDFRSTR_RST;
+ while (tx4925_ndfmcptr->rstr & TX4925_NDFRSTR_RST);
+
+ /* setup BusSeparete, Hold Time, Strobe Pulse Width */
+ tx4925_ndfmcptr->mcr = TX4925_BSPRT ? TX4925_NDFMCR_BSPRT : 0;
+ tx4925_ndfmcptr->spr = TX4925_HOLD << 4 | TX4925_SPW;
+}
+static u_char tx4925ndfmc_nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return tx4925_read_nfmc(this->IO_ADDR_R);
+}
+
+static void tx4925ndfmc_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ tx4925_write_nfmc(byte, this->IO_ADDR_W);
+}
+
+static void tx4925ndfmc_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ tx4925_write_nfmc(buf[i], this->IO_ADDR_W);
+}
+
+static void tx4925ndfmc_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = tx4925_read_nfmc(this->IO_ADDR_R);
+}
+
+static int tx4925ndfmc_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != tx4925_read_nfmc(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * Send command to NAND device
+ */
+static void tx4925ndfmc_nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1)
+ this->write_byte(mtd, column);
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (mtd->size & 0x0c000000)
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ /* Turn off WE */
+ this->hwcontrol (mtd, NAND_CTL_CLRWP);
+ return;
+
+ case NAND_CMD_SEQIN:
+ /* Turn on WE */
+ this->hwcontrol (mtd, NAND_CTL_SETWP);
+ return;
+
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
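The address phase of tx4925ndfmc_nand_command() clocks the column and page address out one byte at a time. Below is a small standalone sketch of the bytes that end up on the bus, assuming a 512-byte-page device as the 256-byte READ0/READ1 split implies; the show_address_cycles() name is invented.

    #include <stdio.h>

    /* Mirror the serial address cycles issued in the command function above */
    static void show_address_cycles(int column, int page_addr, int large_device)
    {
            if (column != -1)
                    printf("cycle: column   0x%02x\n", column & 0xff);
            if (page_addr != -1) {
                    printf("cycle: page lo  0x%02x\n", page_addr & 0xff);
                    printf("cycle: page hi  0x%02x\n", (page_addr >> 8) & 0xff);
                    if (large_device)       /* mtd->size & 0x0c000000 in the driver */
                            printf("cycle: page ext 0x%02x\n", (page_addr >> 16) & 0x0f);
            }
    }

    int main(void)
    {
            show_address_cycles(0, 0x12345, 1);     /* prints 4 address cycles */
            return 0;
    }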
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+extern int parse_cmdline_partitions(struct mtd_info *master, struct mtd_partition **pparts, char *);
+#endif
+
+/*
+ * Main initialization routine
+ */
+extern int nand_correct_data(struct mtd_info *mtd, u_char *dat, u_char *read_ecc, u_char *calc_ecc);
+int __init tx4925ndfmc_init (void)
+{
+ struct nand_chip *this;
+ int err = 0;
+
+ /* Allocate memory for MTD device structure and private data */
+ tx4925ndfmc_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!tx4925ndfmc_mtd) {
+ printk ("Unable to allocate RBTX4925 NAND MTD device structure.\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+ tx4925ndfmc_device_setup();
+
+ /* io is indirect via a register so don't need to ioremap address */
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&tx4925ndfmc_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) tx4925ndfmc_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ tx4925ndfmc_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = (unsigned long)&(tx4925_ndfmcptr->dtr);
+ this->IO_ADDR_W = (unsigned long)&(tx4925_ndfmcptr->dtr);
+ this->hwcontrol = tx4925ndfmc_hwcontrol;
+ this->enable_hwecc = tx4925ndfmc_enable_hwecc;
+ this->calculate_ecc = tx4925ndfmc_readecc;
+ this->correct_data = nand_correct_data;
+ this->eccmode = NAND_ECC_HW6_512;
+ this->dev_ready = tx4925ndfmc_device_ready;
+ /* 20 us command delay time */
+ this->chip_delay = 20;
+ this->read_byte = tx4925ndfmc_nand_read_byte;
+ this->write_byte = tx4925ndfmc_nand_write_byte;
+ this->cmdfunc = tx4925ndfmc_nand_command;
+ this->write_buf = tx4925ndfmc_nand_write_buf;
+ this->read_buf = tx4925ndfmc_nand_read_buf;
+ this->verify_buf = tx4925ndfmc_nand_verify_buf;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (tx4925ndfmc_mtd, 1)) {
+ err = -ENXIO;
+ goto out_ior;
+ }
+
+ /* Allocate memory for internal data buffer */
+ this->data_buf = kmalloc (sizeof(u_char) * (tx4925ndfmc_mtd->oobblock + tx4925ndfmc_mtd->oobsize), GFP_KERNEL);
+ if (!this->data_buf) {
+ printk ("Unable to allocate NAND data buffer for RBTX4925.\n");
+ err = -ENOMEM;
+ goto out_ior;
+ }
+
+ /* Register the partitions */
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+ {
+ int mtd_parts_nb = 0;
+ struct mtd_partition *mtd_parts = 0;
+ mtd_parts_nb = parse_cmdline_partitions(tx4925ndfmc_mtd, &mtd_parts, "tx4925ndfmc");
+ if (mtd_parts_nb > 0)
+ add_mtd_partitions(tx4925ndfmc_mtd, mtd_parts, mtd_parts_nb);
+ else
+ add_mtd_device(tx4925ndfmc_mtd);
+ }
+#else /* ifdef CONFIG_MTD_CMDLINE_PARTS */
+ switch(tx4925ndfmc_mtd->size){
+ case 0x01000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info16k, NUM_PARTITIONS16K); break;
+ case 0x02000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info32k, NUM_PARTITIONS32K); break;
+ case 0x04000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info64k, NUM_PARTITIONS64K); break;
+ case 0x08000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info128k, NUM_PARTITIONS128K); break;
+ default: {
+ printk ("Unsupported SmartMedia device\n");
+ err = -ENXIO;
+ goto out_buf;
+ }
+ }
+#endif /* ifdef CONFIG_MTD_CMDLINE_PARTS */
+ goto out;
+
+out_buf:
+ kfree (this->data_buf);
+out_ior:
+out:
+ return err;
+}
+
+module_init(tx4925ndfmc_init);
+
+/*
+ * Clean up routine
+ */
+#ifdef MODULE
+static void __exit tx4925ndfmc_cleanup (void)
+{
+ struct nand_chip *this = (struct nand_chip *) &tx4925ndfmc_mtd[1];
+
+ /* Unregister partitions */
+ del_mtd_partitions(tx4925ndfmc_mtd);
+
+ /* Unregister the device */
+ del_mtd_device (tx4925ndfmc_mtd);
+
+ /* Free internal data buffers */
+ kfree (this->data_buf);
+
+ /* Free the MTD device structure */
+ kfree (tx4925ndfmc_mtd);
+}
+module_exit(tx4925ndfmc_cleanup);
+#endif
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alice Hennessy <ahennessy@mvista.com>");
+MODULE_DESCRIPTION("Glue layer for SmartMediaCard on Toshiba RBTX4925");
--- /dev/null
+/*
+ * drivers/mtd/nand/tx4938ndfmc.c
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device connected to
+ * the TX4938 internal NAND Memory Controller.
+ * The TX4938 NDFMC is almost the same as the TX4925 NDFMC, but its registers are 64 bits wide.
+ *
+ * Author: source@mvista.com
+ *
+ * Based on spia.c by Steven J. Hill
+ *
+ * $Id: tx4938ndfmc.c,v 1.2 2004/03/27 19:55:53 gleixner Exp $
+ *
+ * Copyright (C) 2000-2001 Toshiba Corporation
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <asm/bootinfo.h>
+#include <linux/delay.h>
+#include <asm/tx4938/rbtx4938.h>
+
+extern struct nand_oobinfo jffs2_oobinfo;
+
+/*
+ * MTD structure for TX4938 NDFMC
+ */
+static struct mtd_info *tx4938ndfmc_mtd;
+
+#define flush_wb() ((void)tx4938_ndfmcptr->mcr)
+
+/*
+ * Define partitions for flash device
+ */
+#define NUM_PARTITIONS 3
+#define NUMBER_OF_CIS_BLOCKS 24
+#define SIZE_OF_BLOCK 0x00004000
+#define NUMBER_OF_BLOCK_PER_ZONE 1024
+#define SIZE_OF_ZONE (NUMBER_OF_BLOCK_PER_ZONE * SIZE_OF_BLOCK)
+#ifndef CONFIG_MTD_CMDLINE_PARTS
+/*
+ * You can use the following sample of MTD partitions
+ * on the NAND Flash Memory 32MB or more.
+ *
+ * The following figure shows the image of the sample partition on
+ * the 32MB NAND Flash Memory.
+ *
+ * Block No.
+ * 0 +-----------------------------+ ------
+ * | CIS | ^
+ * 24 +-----------------------------+ |
+ * | kernel image | | Zone 0
+ * | | |
+ * +-----------------------------+ |
+ * 1023 | unused area | v
+ * +-----------------------------+ ------
+ * 1024 | JFFS2 | ^
+ * | | |
+ * | | | Zone 1
+ * | | |
+ * | | |
+ * | | v
+ * 2047 +-----------------------------+ ------
+ *
+ */
+static struct mtd_partition partition_info[NUM_PARTITIONS] = {
+ {
+ .name = "RBTX4938 CIS Area",
+ .offset = 0,
+ .size = (NUMBER_OF_CIS_BLOCKS * SIZE_OF_BLOCK),
+ .mask_flags = MTD_WRITEABLE /* This partition is NOT writable */
+ },
+ {
+ .name = "RBTX4938 kernel image",
+ .offset = MTDPART_OFS_APPEND,
+ .size = 8 * 0x00100000, /* 8MB (Depends on size of kernel image) */
+ .mask_flags = MTD_WRITEABLE /* This partition is NOT writable */
+ },
+ {
+ .name = "Root FS (JFFS2)",
+ .offset = (0 + SIZE_OF_ZONE), /* start address of next zone */
+ .size = MTDPART_SIZ_FULL
+ },
+};
+#endif
+
+static void tx4938ndfmc_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ switch (cmd) {
+ case NAND_CTL_SETCLE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_ALE;
+ break;
+ /* TX4938_NDFMCR_CE bit is 0:high 1:low */
+ case NAND_CTL_SETNCE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_CE;
+ break;
+ case NAND_CTL_SETWP:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_WE;
+ break;
+ case NAND_CTL_CLRWP:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_WE;
+ break;
+ }
+}
+static int tx4938ndfmc_dev_ready(struct mtd_info *mtd)
+{
+ flush_wb();
+ return !(tx4938_ndfmcptr->sr & TX4938_NDFSR_BUSY);
+}
+static void tx4938ndfmc_calculate_ecc(struct mtd_info *mtd, const u_char *dat, u_char *ecc_code)
+{
+ u32 mcr = tx4938_ndfmcptr->mcr;
+ mcr &= ~TX4938_NDFMCR_ECC_ALL;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_READ;
+ ecc_code[1] = tx4938_ndfmcptr->dtr;
+ ecc_code[0] = tx4938_ndfmcptr->dtr;
+ ecc_code[2] = tx4938_ndfmcptr->dtr;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+}
+static void tx4938ndfmc_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ u32 mcr = tx4938_ndfmcptr->mcr;
+ mcr &= ~TX4938_NDFMCR_ECC_ALL;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_RESET;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_ON;
+}
+
+static u_char tx4938ndfmc_nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return tx4938_read_nfmc(this->IO_ADDR_R);
+}
+
+static void tx4938ndfmc_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ tx4938_write_nfmc(byte, this->IO_ADDR_W);
+}
+
+static void tx4938ndfmc_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ tx4938_write_nfmc(buf[i], this->IO_ADDR_W);
+}
+
+static void tx4938ndfmc_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = tx4938_read_nfmc(this->IO_ADDR_R);
+}
+
+static int tx4938ndfmc_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != tx4938_read_nfmc(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * Send command to NAND device
+ */
+static void tx4938ndfmc_nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1)
+ this->write_byte(mtd, column);
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (mtd->size & 0x0c000000)
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ /* Turn off WE */
+ this->hwcontrol (mtd, NAND_CTL_CLRWP);
+ return;
+
+ case NAND_CMD_SEQIN:
+ /* Turn on WE */
+ this->hwcontrol (mtd, NAND_CTL_SETWP);
+ return;
+
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+extern int parse_cmdline_partitions(struct mtd_info *master, struct mtd_partition **pparts, char *);
+#endif
+/*
+ * Main initialization routine
+ */
+int __init tx4938ndfmc_init (void)
+{
+ struct nand_chip *this;
+ int bsprt = 0, hold = 0xf, spw = 0xf;
+ int protected = 0;
+
+ if ((*rbtx4938_piosel_ptr & 0x0c) != 0x08) {
+ printk("TX4938 NDFMC: disabled by IOC PIOSEL\n");
+ return -ENODEV;
+ }
+ bsprt = 1;
+ hold = 2;
+ spw = 9 - 1; /* 8 GBUSCLK = 80ns (@ GBUSCLK 100MHz) */
+
+ if ((tx4938_ccfgptr->pcfg &
+ (TX4938_PCFG_ATA_SEL|TX4938_PCFG_ISA_SEL|TX4938_PCFG_NDF_SEL))
+ != TX4938_PCFG_NDF_SEL) {
+ printk("TX4938 NDFMC: disabled by PCFG.\n");
+ return -ENODEV;
+ }
+
+ /* reset NDFMC */
+ tx4938_ndfmcptr->rstr |= TX4938_NDFRSTR_RST;
+ while (tx4938_ndfmcptr->rstr & TX4938_NDFRSTR_RST)
+ ;
+ /* setup BusSeparete, Hold Time, Strobe Pulse Width */
+ tx4938_ndfmcptr->mcr = bsprt ? TX4938_NDFMCR_BSPRT : 0;
+ tx4938_ndfmcptr->spr = hold << 4 | spw;
+
+ /* Allocate memory for MTD device structure and private data */
+ tx4938ndfmc_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!tx4938ndfmc_mtd) {
+ printk ("Unable to allocate TX4938 NDFMC MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&tx4938ndfmc_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) tx4938ndfmc_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ tx4938ndfmc_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = (unsigned long)&tx4938_ndfmcptr->dtr;
+ this->IO_ADDR_W = (unsigned long)&tx4938_ndfmcptr->dtr;
+ this->hwcontrol = tx4938ndfmc_hwcontrol;
+ this->dev_ready = tx4938ndfmc_dev_ready;
+ this->calculate_ecc = tx4938ndfmc_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ this->enable_hwecc = tx4938ndfmc_enable_hwecc;
+ this->eccmode = NAND_ECC_HW3_256;
+ this->chip_delay = 100;
+ this->read_byte = tx4938ndfmc_nand_read_byte;
+ this->write_byte = tx4938ndfmc_nand_write_byte;
+ this->cmdfunc = tx4938ndfmc_nand_command;
+ this->write_buf = tx4938ndfmc_nand_write_buf;
+ this->read_buf = tx4938ndfmc_nand_read_buf;
+ this->verify_buf = tx4938ndfmc_nand_verify_buf;
+
+	/* Scan to find existence of the device */
+ if (nand_scan (tx4938ndfmc_mtd, 1)) {
+ kfree (tx4938ndfmc_mtd);
+ return -ENXIO;
+ }
+
+ /* Allocate memory for internal data buffer */
+ this->data_buf = kmalloc (sizeof(u_char) * (tx4938ndfmc_mtd->oobblock + tx4938ndfmc_mtd->oobsize), GFP_KERNEL);
+ if (!this->data_buf) {
+ printk ("Unable to allocate NAND data buffer for TX4938.\n");
+ kfree (tx4938ndfmc_mtd);
+ return -ENOMEM;
+ }
+
+ if (protected) {
+ printk(KERN_INFO "TX4938 NDFMC: write protected.\n");
+ tx4938ndfmc_mtd->flags &= ~(MTD_WRITEABLE | MTD_ERASEABLE);
+ }
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+ {
+ int mtd_parts_nb = 0;
+	struct mtd_partition *mtd_parts = NULL;
+ mtd_parts_nb = parse_cmdline_partitions(tx4938ndfmc_mtd, &mtd_parts, "tx4938ndfmc");
+ if (mtd_parts_nb > 0)
+ add_mtd_partitions(tx4938ndfmc_mtd, mtd_parts, mtd_parts_nb);
+ else
+ add_mtd_device(tx4938ndfmc_mtd);
+ }
+#else
+	add_mtd_partitions(tx4938ndfmc_mtd, partition_info, NUM_PARTITIONS);
+#endif
+
+ return 0;
+}
+module_init(tx4938ndfmc_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit tx4938ndfmc_cleanup (void)
+{
+ struct nand_chip *this = (struct nand_chip *) tx4938ndfmc_mtd->priv;
+
+ /* Unregister the device */
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+ del_mtd_partitions(tx4938ndfmc_mtd);
+#endif
+ del_mtd_device (tx4938ndfmc_mtd);
+
+	/* Free the internal data buffer first; 'this' points inside the
+	   tx4938ndfmc_mtd allocation, so it must still be valid here */
+	kfree (this->data_buf);
+
+	/* Free the MTD device structure */
+	kfree (tx4938ndfmc_mtd);
+}
+module_exit(tx4938ndfmc_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alice Hennessy <ahennessy@mvista.com>");
+MODULE_DESCRIPTION("Board-specific glue layer for NAND flash on TX4938 NDFMC");
--- /dev/null
+config FEC_8XX
+ tristate "Motorola 8xx FEC driver"
+ depends on NET_ETHERNET && 8xx && (NETTA || NETPHONE)
+ select MII
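+	help
+	  Driver for the Fast Ethernet Controller (FEC) found on Motorola
+	  MPC8xx processors, as used on the NETTA and NETPHONE boards.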
+
+config FEC_8XX_GENERIC_PHY
+ bool "Support any generic PHY"
+ depends on FEC_8XX
+ default y
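+	help
+	  Generic PHY support. This should work with any PHY; link changes
+	  are detected by polling.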
+
+config FEC_8XX_DM9161_PHY
+ bool "Support DM9161 PHY"
+ depends on FEC_8XX
+ default n
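+	help
+	  Support for the Davicom DM9161 PHY, as used on the NETTA board.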
--- /dev/null
+#
+# Makefile for the Motorola 8xx FEC ethernet controller
+#
+
+obj-$(CONFIG_FEC_8XX) += fec_8xx.o
+
+fec_8xx-objs := fec_main.o fec_mii.o
+
+# the platform instantiation objects
+ifeq ($(CONFIG_NETTA),y)
+fec_8xx-objs += fec_8xx-netta.o
+endif
--- /dev/null
+/*
+ * FEC instantiation file for NETTA
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+static struct fec_platform_info fec1_info = {
+ .fec_no = 0,
+ .use_mdio = 1,
+ .phy_addr = 8,
+ .fec_irq = SIU_LEVEL1,
+ .phy_irq = CPM_IRQ_OFFSET + CPMVEC_PIO_PC6,
+ .rx_ring = 128,
+ .tx_ring = 16,
+ .rx_copybreak = 240,
+ .use_napi = 1,
+ .napi_weight = 17,
+};
+
+static struct fec_platform_info fec2_info = {
+ .fec_no = 1,
+ .use_mdio = 1,
+ .phy_addr = 2,
+ .fec_irq = SIU_LEVEL3,
+ .phy_irq = CPM_IRQ_OFFSET + CPMVEC_PIO_PC7,
+ .rx_ring = 128,
+ .tx_ring = 16,
+ .rx_copybreak = 240,
+ .use_napi = 1,
+ .napi_weight = 17,
+};
+
+static struct net_device *fec1_dev;
+static struct net_device *fec2_dev;
+
+/* XXX custom u-boot & Linux startup needed */
+extern const char *__fw_getenv(const char *var);
+
+/* access ports */
+#define setbits32(_addr, _v) __fec_out32(&(_addr), __fec_in32(&(_addr)) | (_v))
+#define clrbits32(_addr, _v) __fec_out32(&(_addr), __fec_in32(&(_addr)) & ~(_v))
+
+#define setbits16(_addr, _v) __fec_out16(&(_addr), __fec_in16(&(_addr)) | (_v))
+#define clrbits16(_addr, _v) __fec_out16(&(_addr), __fec_in16(&(_addr)) & ~(_v))
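+
+/*
+ * Each of these expands to a read-modify-write of a memory-mapped
+ * register through the __fec_in/__fec_out accessors, e.g.
+ *
+ *   setbits16(immap->im_ioport.iop_pdpar, 0x0080);
+ *
+ * reads iop_pdpar, ORs in bit 0x0080 and writes the result back.
+ */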
+
+int fec_8xx_platform_init(void)
+{
+ immap_t *immap = (immap_t *)IMAP_ADDR;
+ bd_t *bd = (bd_t *) __res;
+ const char *s;
+ char *e;
+ int i;
+
+ /* use MDC for MII */
+ setbits16(immap->im_ioport.iop_pdpar, 0x0080);
+ clrbits16(immap->im_ioport.iop_pddir, 0x0080);
+
+ /* configure FEC1 pins */
+ setbits16(immap->im_ioport.iop_papar, 0xe810);
+ setbits16(immap->im_ioport.iop_padir, 0x0810);
+ clrbits16(immap->im_ioport.iop_padir, 0xe000);
+
+ setbits32(immap->im_cpm.cp_pbpar, 0x00000001);
+ clrbits32(immap->im_cpm.cp_pbdir, 0x00000001);
+
+ setbits32(immap->im_cpm.cp_cptr, 0x00000100);
+ clrbits32(immap->im_cpm.cp_cptr, 0x00000050);
+
+ clrbits16(immap->im_ioport.iop_pcpar, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcdir, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcso, 0x0200);
+ setbits16(immap->im_ioport.iop_pcint, 0x0200);
+
+ /* configure FEC2 pins */
+ setbits32(immap->im_cpm.cp_pepar, 0x00039620);
+ setbits32(immap->im_cpm.cp_pedir, 0x00039620);
+ setbits32(immap->im_cpm.cp_peso, 0x00031000);
+ clrbits32(immap->im_cpm.cp_peso, 0x00008620);
+
+ setbits32(immap->im_cpm.cp_cptr, 0x00000080);
+ clrbits32(immap->im_cpm.cp_cptr, 0x00000028);
+
+ clrbits16(immap->im_ioport.iop_pcpar, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcdir, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcso, 0x0200);
+ setbits16(immap->im_ioport.iop_pcint, 0x0200);
+
+ /* fill up */
+ fec1_info.sys_clk = bd->bi_intfreq;
+ fec2_info.sys_clk = bd->bi_intfreq;
+
+ s = __fw_getenv("ethaddr");
+ if (s != NULL) {
+ for (i = 0; i < 6; i++) {
+ fec1_info.macaddr[i] = simple_strtoul(s, &e, 16);
+ if (*e)
+ s = e + 1;
+ }
+ }
+
+ s = __fw_getenv("eth1addr");
+ if (s != NULL) {
+ for (i = 0; i < 6; i++) {
+ fec2_info.macaddr[i] = simple_strtoul(s, &e, 16);
+ if (*e)
+ s = e + 1;
+ }
+ }
+
+ fec_8xx_init_one(&fec1_info, &fec1_dev);
+ fec_8xx_init_one(&fec2_info, &fec2_dev);
+
+ return fec1_dev != NULL && fec2_dev != NULL ? 0 : -1;
+}
+
+void fec_8xx_platform_cleanup(void)
+{
+ if (fec2_dev != NULL)
+ fec_8xx_cleanup_one(fec2_dev);
+
+ if (fec1_dev != NULL)
+ fec_8xx_cleanup_one(fec1_dev);
+}
--- /dev/null
+#ifndef FEC_8XX_H
+#define FEC_8XX_H
+
+#include <linux/mii.h>
+#include <linux/netdevice.h>
+
+#include <linux/types.h>
+
+/* HW info */
+
+/* CRC polynomial used by the FEC for multicast group filtering */
+#define FEC_CRC_POLY 0x04C11DB7
+
+#define MII_ADVERTISE_HALF (ADVERTISE_100HALF | \
+ ADVERTISE_10HALF | ADVERTISE_CSMA)
+#define MII_ADVERTISE_ALL (ADVERTISE_100FULL | \
+ ADVERTISE_10FULL | MII_ADVERTISE_HALF)
+
+/* Interrupt events/masks.
+*/
+#define FEC_ENET_HBERR 0x80000000U /* Heartbeat error */
+#define FEC_ENET_BABR 0x40000000U /* Babbling receiver */
+#define FEC_ENET_BABT 0x20000000U /* Babbling transmitter */
+#define FEC_ENET_GRA 0x10000000U /* Graceful stop complete */
+#define FEC_ENET_TXF 0x08000000U /* Full frame transmitted */
+#define FEC_ENET_TXB 0x04000000U /* A buffer was transmitted */
+#define FEC_ENET_RXF 0x02000000U /* Full frame received */
+#define FEC_ENET_RXB 0x01000000U /* A buffer was received */
+#define FEC_ENET_MII 0x00800000U /* MII interrupt */
+#define FEC_ENET_EBERR 0x00400000U /* SDMA bus error */
+
+#define FEC_ECNTRL_PINMUX 0x00000004
+#define FEC_ECNTRL_ETHER_EN 0x00000002
+#define FEC_ECNTRL_RESET 0x00000001
+
+#define FEC_RCNTRL_BC_REJ 0x00000010
+#define FEC_RCNTRL_PROM 0x00000008
+#define FEC_RCNTRL_MII_MODE 0x00000004
+#define FEC_RCNTRL_DRT 0x00000002
+#define FEC_RCNTRL_LOOP 0x00000001
+
+#define FEC_TCNTRL_FDEN 0x00000004
+#define FEC_TCNTRL_HBC 0x00000002
+#define FEC_TCNTRL_GTS 0x00000001
+
+/* values for MII phy_status */
+
+#define PHY_CONF_ANE 0x0001 /* 1 auto-negotiation enabled */
+#define PHY_CONF_LOOP 0x0002 /* 1 loopback mode enabled */
+#define PHY_CONF_SPMASK 0x00f0 /* mask for speed */
+#define PHY_CONF_10HDX 0x0010 /* 10 Mbit half duplex supported */
+#define PHY_CONF_10FDX 0x0020 /* 10 Mbit full duplex supported */
+#define PHY_CONF_100HDX 0x0040 /* 100 Mbit half duplex supported */
+#define PHY_CONF_100FDX 0x0080 /* 100 Mbit full duplex supported */
+
+#define PHY_STAT_LINK 0x0100 /* 1 up - 0 down */
+#define PHY_STAT_FAULT 0x0200 /* 1 remote fault */
+#define PHY_STAT_ANC 0x0400 /* 1 auto-negotiation complete */
+#define PHY_STAT_SPMASK 0xf000 /* mask for speed */
+#define PHY_STAT_10HDX 0x1000 /* 10 Mbit half duplex selected */
+#define PHY_STAT_10FDX 0x2000 /* 10 Mbit full duplex selected */
+#define PHY_STAT_100HDX 0x4000 /* 100 Mbit half duplex selected */
+#define PHY_STAT_100FDX 0x8000 /* 100 Mbit full duplex selected */
+
+typedef struct phy_info {
+ unsigned int id;
+ const char *name;
+ void (*startup) (struct net_device * dev);
+ void (*shutdown) (struct net_device * dev);
+ void (*ack_int) (struct net_device * dev);
+} phy_info_t;
+
+/* The FEC stores dest/src/type, data, and checksum for receive packets.
+ */
+#define MAX_MTU 1508		/* Allow full-sized PPPoE packets over VLAN */
+#define MIN_MTU 46 /* this is data size */
+#define CRC_LEN 4
+
+#define PKT_MAXBUF_SIZE (MAX_MTU+ETH_HLEN+CRC_LEN)
+#define PKT_MINBUF_SIZE (MIN_MTU+ETH_HLEN+CRC_LEN)
+
+/* Must be a multiple of 4 */
+#define PKT_MAXBLR_SIZE ((PKT_MAXBUF_SIZE+3) & ~3)
+/* This is needed so that invalidate_xxx won't invalidate too much */
+#define ENET_RX_FRSIZE L1_CACHE_ALIGN(PKT_MAXBUF_SIZE)
+
+/* platform interface */
+
+struct fec_platform_info {
+ int fec_no; /* FEC index */
+ int use_mdio; /* use external MII */
+ int phy_addr; /* the phy address */
+	int fec_irq, phy_irq;	/* the controller and PHY irqs */
+	int rx_ring, tx_ring;	/* number of rx and tx ring buffers */
+ int sys_clk; /* system clock */
+ __u8 macaddr[6]; /* mac address */
+	int rx_copybreak;	/* copy frames at or below this size */
+ int use_napi; /* use NAPI */
+ int napi_weight; /* NAPI weight */
+};
+
+/* forward declaration */
+struct fec;
+
+struct fec_enet_private {
+	spinlock_t lock;	/* during all ops except TX packet processing */
+ spinlock_t tx_lock; /* during fec_start_xmit and fec_tx */
+ int fecno;
+ struct fec *fecp;
+ const struct fec_platform_info *fpi;
+ int rx_ring, tx_ring;
+ dma_addr_t ring_mem_addr;
+ void *ring_base;
+ struct sk_buff **rx_skbuff;
+ struct sk_buff **tx_skbuff;
+ cbd_t *rx_bd_base; /* Address of Rx and Tx buffers. */
+ cbd_t *tx_bd_base;
+ cbd_t *dirty_tx; /* ring entries to be free()ed. */
+ cbd_t *cur_rx;
+ cbd_t *cur_tx;
+ int tx_free;
+ struct net_device_stats stats;
+ struct timer_list phy_timer_list;
+ const struct phy_info *phy;
+ unsigned int fec_phy_speed;
+ __u32 msg_enable;
+ struct mii_if_info mii_if;
+};
+
+/***************************************************************************/
+
+void fec_restart(struct net_device *dev, int duplex, int speed);
+void fec_stop(struct net_device *dev);
+
+/***************************************************************************/
+
+int fec_mii_read(struct net_device *dev, int phy_id, int location);
+void fec_mii_write(struct net_device *dev, int phy_id, int location, int value);
+
+int fec_mii_phy_id_detect(struct net_device *dev);
+void fec_mii_startup(struct net_device *dev);
+void fec_mii_shutdown(struct net_device *dev);
+void fec_mii_ack_int(struct net_device *dev);
+
+void fec_mii_link_status_change_check(struct net_device *dev, int init_media);
+
+/***************************************************************************/
+
+#define FEC1_NO 0x00
+#define FEC2_NO 0x01
+#define FEC3_NO 0x02
+
+int fec_8xx_init_one(const struct fec_platform_info *fpi,
+ struct net_device **devp);
+int fec_8xx_cleanup_one(struct net_device *dev);
+
+/***************************************************************************/
+
+#define DRV_MODULE_NAME "fec_8xx"
+#define PFX DRV_MODULE_NAME ": "
+#define DRV_MODULE_VERSION "0.1"
+#define DRV_MODULE_RELDATE "May 6, 2004"
+
+/***************************************************************************/
+
+int fec_8xx_platform_init(void);
+void fec_8xx_platform_cleanup(void);
+
+/***************************************************************************/
+
+/* FEC access macros */
+#if defined(CONFIG_8xx)
+/* for a 8xx __raw_xxx's are sufficient */
+#define __fec_out32(addr, x) __raw_writel(x, addr)
+#define __fec_out16(addr, x) __raw_writew(x, addr)
+#define __fec_in32(addr) __raw_readl(addr)
+#define __fec_in16(addr) __raw_readw(addr)
+#else
+/* for others play it safe */
+#define __fec_out32(addr, x) out_be32(addr, x)
+#define __fec_out16(addr, x) out_be16(addr, x)
+#define __fec_in32(addr) in_be32(addr)
+#define __fec_in16(addr) in_be16(addr)
+#endif
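+
+/*
+ * On the 8xx the FEC is an on-chip, big-endian peripheral like the
+ * core itself, so the raw accessors need no byte swapping; elsewhere
+ * the explicitly big-endian out_be/in_be variants are used instead.
+ */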
+
+/* write */
+#define FW(_fecp, _reg, _v) __fec_out32(&(_fecp)->fec_ ## _reg, (_v))
+
+/* read */
+#define FR(_fecp, _reg) __fec_in32(&(_fecp)->fec_ ## _reg)
+
+/* set bits */
+#define FS(_fecp, _reg, _v) FW(_fecp, _reg, FR(_fecp, _reg) | (_v))
+
+/* clear bits */
+#define FC(_fecp, _reg, _v) FW(_fecp, _reg, FR(_fecp, _reg) & ~(_v))
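+
+/*
+ * The _reg token is pasted onto a fec_ prefix, so the register struct
+ * is expected to name its fields fec_<reg>; e.g.
+ *
+ *   FS(fecp, imask, FEC_ENET_RXF);
+ *
+ * ORs FEC_ENET_RXF into fecp->fec_imask.
+ */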
+
+/* buffer descriptor access macros */
+
+/* write */
+#define CBDW_SC(_cbd, _sc) __fec_out16(&(_cbd)->cbd_sc, (_sc))
+#define CBDW_DATLEN(_cbd, _datlen) __fec_out16(&(_cbd)->cbd_datlen, (_datlen))
+#define CBDW_BUFADDR(_cbd, _bufaddr) __fec_out32(&(_cbd)->cbd_bufaddr, (_bufaddr))
+
+/* read */
+#define CBDR_SC(_cbd) __fec_in16(&(_cbd)->cbd_sc)
+#define CBDR_DATLEN(_cbd) __fec_in16(&(_cbd)->cbd_datlen)
+#define CBDR_BUFADDR(_cbd) __fec_in32(&(_cbd)->cbd_bufaddr)
+
+/* set bits */
+#define CBDS_SC(_cbd, _sc) CBDW_SC(_cbd, CBDR_SC(_cbd) | (_sc))
+
+/* clear bits */
+#define CBDC_SC(_cbd, _sc) CBDW_SC(_cbd, CBDR_SC(_cbd) & ~(_sc))
+
+/***************************************************************************/
+
+#endif
--- /dev/null
+/*
+ * Fast Ethernet Controller (FEC) driver for Motorola MPC8xx.
+ *
+ * Copyright (c) 2003 Intracom S.A.
+ * by Pantelis Antoniou <panto@intracom.gr>
+ *
+ * Heavily based on original FEC driver by Dan Malek <dan@embeddededge.com>
+ * and modifications by Joakim Tjernlund <joakim.tjernlund@lumentis.se>
+ *
+ * Released under the GPL
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+#include <asm/dma-mapping.h>
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+#define FEC_MAX_MULTICAST_ADDRS 64
+
+/*************************************************/
+
+static char version[] __devinitdata =
+ DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")" "\n";
+
+MODULE_AUTHOR("Pantelis Antoniou <panto@intracom.gr>");
+MODULE_DESCRIPTION("Motorola 8xx FEC ethernet driver");
+MODULE_LICENSE("GPL");
+
+MODULE_PARM(fec_8xx_debug, "i");
+MODULE_PARM_DESC(fec_8xx_debug,
+ "FEC 8xx bitmapped debugging message enable value");
+
+int fec_8xx_debug = -1; /* -1 == use FEC_8XX_DEF_MSG_ENABLE as value */
+
+/*************************************************/
+
+/*
+ * Delay to wait for FEC reset command to complete (in us)
+ */
+#define FEC_RESET_DELAY 50
+
+/*****************************************************************************************/
+
+static void fec_whack_reset(fec_t * fecp)
+{
+ int i;
+
+ /*
+ * Whack a reset. We should wait for this.
+ */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_RESET);
+ for (i = 0;
+ (FR(fecp, ecntrl) & FEC_ECNTRL_RESET) != 0 && i < FEC_RESET_DELAY;
+ i++)
+ udelay(1);
+
+ if (i == FEC_RESET_DELAY)
+ printk(KERN_WARNING "FEC Reset timeout!\n");
+
+}
+
+/****************************************************************************/
+
+/*
+ * Transmitter timeout.
+ */
+#define TX_TIMEOUT (2*HZ)
+
+/****************************************************************************/
+
+/*
+ * Returns the CRC needed when filling in the hash table for
+ * multicast group filtering
+ * pAddr must point to a MAC address (6 bytes)
+ */
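+/*
+ * This is the standard bit-serial Ethernet CRC-32: each address byte
+ * is consumed least significant bit first, the order in which the bits
+ * appear on the wire when the FEC hashes the destination address.
+ */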
+static __u32 fec_multicast_calc_crc(char *pAddr)
+{
+ u8 byte;
+ int byte_count;
+ int bit_count;
+ __u32 crc = 0xffffffff;
+ u8 msb;
+
+ for (byte_count = 0; byte_count < 6; byte_count++) {
+ byte = pAddr[byte_count];
+ for (bit_count = 0; bit_count < 8; bit_count++) {
+ msb = crc >> 31;
+ crc <<= 1;
+ if (msb ^ (byte & 0x1)) {
+ crc ^= FEC_CRC_POLY;
+ }
+ byte >>= 1;
+ }
+ }
+ return (crc);
+}
+
+/*
+ * Set or clear the multicast filter for this adaptor.
+ * Skeleton taken from sunlance driver.
+ * The CPM Ethernet implementation allows Multicast as well as individual
+ * MAC address filtering. Some of the drivers check to make sure it is
+ * a group multicast address, and discard those that are not. I guess I
+ * will do the same for now, but just remove the test if you want
+ * individual filtering as well (do the upper net layers want or support
+ * this kind of feature?).
+ */
+static void fec_set_multicast_list(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ struct dev_mc_list *pmc;
+ __u32 crc;
+ int temp;
+ __u32 csrVal;
+ int hash_index;
+ __u32 hthi, htlo;
+ unsigned long flags;
+
+
+ if ((dev->flags & IFF_PROMISC) != 0) {
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FS(fecp, r_cntrl, FEC_RCNTRL_PROM);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ /*
+ * Log any net taps.
+ */
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s: Promiscuous mode enabled.\n", dev->name);
+ return;
+
+ }
+
+ if ((dev->flags & IFF_ALLMULTI) != 0 ||
+ dev->mc_count > FEC_MAX_MULTICAST_ADDRS) {
+ /*
+ * Catch all multicast addresses, set the filter to all 1's.
+ */
+ hthi = 0xffffffffU;
+ htlo = 0xffffffffU;
+ } else {
+ hthi = 0;
+ htlo = 0;
+
+ /*
+ * Now populate the hash table
+ */
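+		/*
+		 * The low CRC bit selects the high or low hash register;
+		 * CRC bits 5:1 (bit-reversed below) select the bit within
+		 * the chosen 32-bit register.
+		 */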
+ for (pmc = dev->mc_list; pmc != NULL; pmc = pmc->next) {
+			crc = fec_multicast_calc_crc(pmc->dmi_addr);
+ temp = (crc & 0x3f) >> 1;
+ hash_index = ((temp & 0x01) << 4) |
+ ((temp & 0x02) << 2) |
+ ((temp & 0x04)) |
+ ((temp & 0x08) >> 2) |
+ ((temp & 0x10) >> 4);
+ csrVal = (1 << hash_index);
+ if (crc & 1)
+ hthi |= csrVal;
+ else
+ htlo |= csrVal;
+ }
+ }
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FC(fecp, r_cntrl, FEC_RCNTRL_PROM);
+ FW(fecp, hash_table_high, hthi);
+ FW(fecp, hash_table_low, htlo);
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+static int fec_set_mac_address(struct net_device *dev, void *addr)
+{
+ struct sockaddr *mac = addr;
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec *fecp = fep->fecp;
+ int i;
+ __u32 addrhi, addrlo;
+ unsigned long flags;
+
+	/* Copy the new station address into the device structure. */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = mac->sa_data[i];
+
+ /*
+ * Set station address.
+ */
+ addrhi = ((__u32) dev->dev_addr[0] << 24) |
+ ((__u32) dev->dev_addr[1] << 16) |
+ ((__u32) dev->dev_addr[2] << 8) |
+ (__u32) dev->dev_addr[3];
+ addrlo = ((__u32) dev->dev_addr[4] << 24) |
+ ((__u32) dev->dev_addr[5] << 16);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FW(fecp, addr_low, addrhi);
+ FW(fecp, addr_high, addrlo);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return 0;
+}
+
+/*
+ * This function is called to start or restart the FEC during a link
+ * change. This only happens when switching between half and full
+ * duplex.
+ */
+void fec_restart(struct net_device *dev, int duplex, int speed)
+{
+#ifdef CONFIG_DUET
+ immap_t *immap = (immap_t *) IMAP_ADDR;
+ __u32 cptr;
+#endif
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+ cbd_t *bdp;
+ struct sk_buff *skb;
+ int i;
+ __u32 addrhi, addrlo;
+
+ fec_whack_reset(fep->fecp);
+
+ /*
+ * Set station address.
+ */
+ addrhi = ((__u32) dev->dev_addr[0] << 24) |
+ ((__u32) dev->dev_addr[1] << 16) |
+ ((__u32) dev->dev_addr[2] << 8) |
+ (__u32) dev->dev_addr[3];
+ addrlo = ((__u32) dev->dev_addr[4] << 24) |
+ ((__u32) dev->dev_addr[5] << 16);
+ FW(fecp, addr_low, addrhi);
+ FW(fecp, addr_high, addrlo);
+
+ /*
+ * Reset all multicast.
+ */
+ FW(fecp, hash_table_high, 0);
+ FW(fecp, hash_table_low, 0);
+
+ /*
+ * Set maximum receive buffer size.
+ */
+ FW(fecp, r_buff_size, PKT_MAXBLR_SIZE);
+ FW(fecp, r_hash, PKT_MAXBUF_SIZE);
+
+ /*
+ * Set receive and transmit descriptor base.
+ */
+ FW(fecp, r_des_start, iopa((__u32) (fep->rx_bd_base)));
+ FW(fecp, x_des_start, iopa((__u32) (fep->tx_bd_base)));
+
+ fep->dirty_tx = fep->cur_tx = fep->tx_bd_base;
+ fep->tx_free = fep->tx_ring;
+ fep->cur_rx = fep->rx_bd_base;
+
+ /*
+ * Reset SKB receive buffers
+ */
+ for (i = 0; i < fep->rx_ring; i++) {
+ if ((skb = fep->rx_skbuff[i]) == NULL)
+ continue;
+ fep->rx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * Initialize the receive buffer descriptors.
+ */
+ for (i = 0, bdp = fep->rx_bd_base; i < fep->rx_ring; i++, bdp++) {
+ skb = dev_alloc_skb(ENET_RX_FRSIZE);
+ if (skb == NULL) {
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s Memory squeeze, unable to allocate skb\n",
+ dev->name);
+ fep->stats.rx_dropped++;
+ break;
+ }
+ fep->rx_skbuff[i] = skb;
+ skb->dev = dev;
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skb->data,
+ L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
+ DMA_FROM_DEVICE));
+ CBDW_DATLEN(bdp, 0); /* zero */
+ CBDW_SC(bdp, BD_ENET_RX_EMPTY |
+ ((i < fep->rx_ring - 1) ? 0 : BD_SC_WRAP));
+ }
+	/*
+	 * if allocation failed, fill up the remainder of the ring
+	 */
+ for (; i < fep->rx_ring; i++, bdp++) {
+ fep->rx_skbuff[i] = NULL;
+ CBDW_SC(bdp, (i < fep->rx_ring - 1) ? 0 : BD_SC_WRAP);
+ }
+
+ /*
+ * Reset SKB transmit buffers.
+ */
+ for (i = 0; i < fep->tx_ring; i++) {
+ if ((skb = fep->tx_skbuff[i]) == NULL)
+ continue;
+ fep->tx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * ...and the same for transmit.
+ */
+ for (i = 0, bdp = fep->tx_bd_base; i < fep->tx_ring; i++, bdp++) {
+ fep->tx_skbuff[i] = NULL;
+ CBDW_BUFADDR(bdp, virt_to_bus(NULL));
+ CBDW_DATLEN(bdp, 0);
+ CBDW_SC(bdp, (i < fep->tx_ring - 1) ? 0 : BD_SC_WRAP);
+ }
+
+ /*
+ * Enable big endian and don't care about SDMA FC.
+ */
+ FW(fecp, fun_code, 0x78000000);
+
+ /*
+ * Set MII speed.
+ */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+
+ /*
+ * Clear any outstanding interrupt.
+ */
+ FW(fecp, ievent, 0xffc0);
+ FW(fecp, ivec, (fpi->fec_irq / 2) << 29);
+
+ /*
+ * adjust to speed (only for DUET & RMII)
+ */
+#ifdef CONFIG_DUET
+ cptr = in_be32(&immap->im_cpm.cp_cptr);
+ switch (fpi->fec_no) {
+ case 0:
+ /*
+ * check if in RMII mode
+ */
+ if ((cptr & 0x100) == 0)
+ break;
+
+ if (speed == 10)
+ cptr |= 0x0000010;
+ else if (speed == 100)
+ cptr &= ~0x0000010;
+ break;
+ case 1:
+ /*
+ * check if in RMII mode
+ */
+ if ((cptr & 0x80) == 0)
+ break;
+
+ if (speed == 10)
+ cptr |= 0x0000008;
+ else if (speed == 100)
+ cptr &= ~0x0000008;
+ break;
+ default:
+ break;
+ }
+ out_be32(&immap->im_cpm.cp_cptr, cptr);
+#endif
+
+ FW(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ /*
+ * adjust to duplex mode
+ */
+ if (duplex) {
+ FC(fecp, r_cntrl, FEC_RCNTRL_DRT);
+ FS(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD enable */
+ } else {
+ FS(fecp, r_cntrl, FEC_RCNTRL_DRT);
+ FC(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD disable */
+ }
+
+ /*
+ * Enable interrupts we wish to service.
+ */
+ FW(fecp, imask, FEC_ENET_TXF | FEC_ENET_TXB |
+ FEC_ENET_RXF | FEC_ENET_RXB);
+
+ /*
+ * And last, enable the transmit and receive processing.
+ */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
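+	/*
+	 * Writing the receive descriptor active register tells the FEC
+	 * that the rx ring now holds empty buffers to fill.
+	 */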
+ FW(fecp, r_des_active, 0x01000000);
+}
+
+void fec_stop(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ struct sk_buff *skb;
+ int i;
+
+ if ((FR(fecp, ecntrl) & FEC_ECNTRL_ETHER_EN) == 0)
+ return; /* already down */
+
+ FW(fecp, x_cntrl, 0x01); /* Graceful transmit stop */
+ for (i = 0; ((FR(fecp, ievent) & 0x10000000) == 0) &&
+ i < FEC_RESET_DELAY; i++)
+ udelay(1);
+
+ if (i == FEC_RESET_DELAY)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s FEC timeout on graceful transmit stop\n",
+ dev->name);
+	/*
+	 * Disable the FEC and mask all of its interrupts.
+	 */
+ FW(fecp, imask, 0);
+ FW(fecp, ecntrl, ~FEC_ECNTRL_ETHER_EN);
+
+ /*
+ * Reset SKB transmit buffers.
+ */
+ for (i = 0; i < fep->tx_ring; i++) {
+ if ((skb = fep->tx_skbuff[i]) == NULL)
+ continue;
+ fep->tx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * Reset SKB receive buffers
+ */
+ for (i = 0; i < fep->rx_ring; i++) {
+ if ((skb = fep->rx_skbuff[i]) == NULL)
+ continue;
+ fep->rx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+}
+
+/* common receive function */
+static int fec_enet_rx_common(struct net_device *dev, int *budget)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+ cbd_t *bdp;
+ struct sk_buff *skb, *skbn, *skbt;
+ int received = 0;
+ __u16 pkt_len, sc;
+ int curidx;
+ int rx_work_limit;
+
+ if (fpi->use_napi) {
+ rx_work_limit = min(dev->quota, *budget);
+
+ if (!netif_running(dev))
+ return 0;
+ }
+
+ /*
+ * First, grab all of the stats for the incoming packet.
+ * These get messed up if we get called due to a busy condition.
+ */
+ bdp = fep->cur_rx;
+
+	/* clear RX status bits for NAPI */
+ if (fpi->use_napi)
+ FW(fecp, ievent, FEC_ENET_RXF | FEC_ENET_RXB);
+
+ while (((sc = CBDR_SC(bdp)) & BD_ENET_RX_EMPTY) == 0) {
+
+ curidx = bdp - fep->rx_bd_base;
+
+ /*
+ * Since we have allocated space to hold a complete frame,
+ * the last indicator should be set.
+ */
+ if ((sc & BD_ENET_RX_LAST) == 0)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s rcv is not +last\n",
+ dev->name);
+
+ /*
+ * Check for errors.
+ */
+ if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_CL |
+ BD_ENET_RX_NO | BD_ENET_RX_CR | BD_ENET_RX_OV)) {
+ fep->stats.rx_errors++;
+ /* Frame too long or too short. */
+ if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH))
+ fep->stats.rx_length_errors++;
+ /* Frame alignment */
+ if (sc & (BD_ENET_RX_NO | BD_ENET_RX_CL))
+ fep->stats.rx_frame_errors++;
+ /* CRC Error */
+ if (sc & BD_ENET_RX_CR)
+ fep->stats.rx_crc_errors++;
+			/* FIFO overrun */
+			if (sc & BD_ENET_RX_OV)
+				fep->stats.rx_fifo_errors++;
+
+ skbn = fep->rx_skbuff[curidx];
+ BUG_ON(skbn == NULL);
+
+ } else {
+
+ /* napi, got packet but no quota */
+ if (fpi->use_napi && --rx_work_limit < 0)
+ break;
+
+ skb = fep->rx_skbuff[curidx];
+ BUG_ON(skb == NULL);
+
+ /*
+ * Process the incoming frame.
+ */
+ fep->stats.rx_packets++;
+ pkt_len = CBDR_DATLEN(bdp) - 4; /* remove CRC */
+ fep->stats.rx_bytes += pkt_len + 4;
+
+ if (pkt_len <= fpi->rx_copybreak) {
+				/* reserve 2 bytes so the IP header is aligned after the 14-byte Ethernet header */
+ skbn = dev_alloc_skb(pkt_len + 2);
+ if (skbn != NULL) {
+ skb_reserve(skbn, 2); /* align IP header */
+ memcpy(skbn->data, skb->data, pkt_len);
+ /* swap */
+ skbt = skb;
+ skb = skbn;
+ skbn = skbt;
+ }
+ } else
+ skbn = dev_alloc_skb(ENET_RX_FRSIZE);
+
+ if (skbn != NULL) {
+ skb->dev = dev;
+ skb_put(skb, pkt_len); /* Make room */
+ skb->protocol = eth_type_trans(skb, dev);
+ received++;
+ if (!fpi->use_napi)
+ netif_rx(skb);
+ else
+ netif_receive_skb(skb);
+ } else {
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s Memory squeeze, dropping packet.\n",
+ dev->name);
+ fep->stats.rx_dropped++;
+ skbn = skb;
+ }
+ }
+
+ fep->rx_skbuff[curidx] = skbn;
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skbn->data,
+ L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
+ DMA_FROM_DEVICE));
+ CBDW_DATLEN(bdp, 0);
+ CBDW_SC(bdp, (sc & ~BD_ENET_RX_STATS) | BD_ENET_RX_EMPTY);
+
+ /*
+ * Update BD pointer to next entry.
+ */
+ if ((sc & BD_ENET_RX_WRAP) == 0)
+ bdp++;
+ else
+ bdp = fep->rx_bd_base;
+
+ /*
+ * Doing this here will keep the FEC running while we process
+ * incoming frames. On a heavily loaded network, we should be
+ * able to keep up at the expense of system resources.
+ */
+ FW(fecp, r_des_active, 0x01000000);
+ }
+
+ fep->cur_rx = bdp;
+
+ if (fpi->use_napi) {
+ dev->quota -= received;
+ *budget -= received;
+
+ if (rx_work_limit < 0)
+ return 1; /* not done */
+
+ /* done */
+ netif_rx_complete(dev);
+
+ /* enable RX interrupt bits */
+ FS(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ }
+
+ return 0;
+}
+
+static void fec_enet_tx(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ cbd_t *bdp;
+ struct sk_buff *skb;
+ int dirtyidx, do_wake;
+ __u16 sc;
+
+ spin_lock(&fep->lock);
+ bdp = fep->dirty_tx;
+
+ do_wake = 0;
+ while (((sc = CBDR_SC(bdp)) & BD_ENET_TX_READY) == 0) {
+
+ dirtyidx = bdp - fep->tx_bd_base;
+
+ if (fep->tx_free == fep->tx_ring)
+ break;
+
+ skb = fep->tx_skbuff[dirtyidx];
+
+ /*
+ * Check for errors.
+ */
+ if (sc & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+ BD_ENET_TX_RL | BD_ENET_TX_UN | BD_ENET_TX_CSL)) {
+ fep->stats.tx_errors++;
+ if (sc & BD_ENET_TX_HB) /* No heartbeat */
+ fep->stats.tx_heartbeat_errors++;
+ if (sc & BD_ENET_TX_LC) /* Late collision */
+ fep->stats.tx_window_errors++;
+ if (sc & BD_ENET_TX_RL) /* Retrans limit */
+ fep->stats.tx_aborted_errors++;
+ if (sc & BD_ENET_TX_UN) /* Underrun */
+ fep->stats.tx_fifo_errors++;
+ if (sc & BD_ENET_TX_CSL) /* Carrier lost */
+ fep->stats.tx_carrier_errors++;
+ } else
+ fep->stats.tx_packets++;
+
+ if (sc & BD_ENET_TX_READY)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s HEY! Enet xmit interrupt and TX_READY.\n",
+ dev->name);
+
+ /*
+ * Deferred means some collisions occurred during transmit,
+ * but we eventually sent the packet OK.
+ */
+ if (sc & BD_ENET_TX_DEF)
+ fep->stats.collisions++;
+
+ /*
+ * Free the sk buffer associated with this last transmit.
+ */
+ dev_kfree_skb_irq(skb);
+ fep->tx_skbuff[dirtyidx] = NULL;
+
+ /*
+ * Update pointer to next buffer descriptor to be transmitted.
+ */
+ if ((sc & BD_ENET_TX_WRAP) == 0)
+ bdp++;
+ else
+ bdp = fep->tx_bd_base;
+
+ /*
+ * Since we have freed up a buffer, the ring is no longer
+ * full.
+ */
+ if (!fep->tx_free++)
+ do_wake = 1;
+ }
+
+ fep->dirty_tx = bdp;
+
+ spin_unlock(&fep->lock);
+
+ if (do_wake && netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+}
+
+/*
+ * The interrupt handler.
+ * This is called from the MPC core interrupt.
+ */
+static irqreturn_t
+fec_enet_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ struct fec_enet_private *fep;
+ const struct fec_platform_info *fpi;
+ fec_t *fecp;
+ __u32 int_events;
+ __u32 int_events_napi;
+
+ if (unlikely(dev == NULL))
+ return IRQ_NONE;
+
+ fep = netdev_priv(dev);
+ fecp = fep->fecp;
+ fpi = fep->fpi;
+
+ /*
+ * Get the interrupt events that caused us to be here.
+ */
+ while ((int_events = FR(fecp, ievent) & FR(fecp, imask)) != 0) {
+
+ if (!fpi->use_napi)
+ FW(fecp, ievent, int_events);
+ else {
+ int_events_napi = int_events & ~(FEC_ENET_RXF | FEC_ENET_RXB);
+ FW(fecp, ievent, int_events_napi);
+ }
+
+ if ((int_events & (FEC_ENET_HBERR | FEC_ENET_BABR |
+ FEC_ENET_BABT | FEC_ENET_EBERR)) != 0)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s FEC ERROR(s) 0x%x\n",
+ dev->name, int_events);
+
+ if ((int_events & FEC_ENET_RXF) != 0) {
+ if (!fpi->use_napi)
+ fec_enet_rx_common(dev, NULL);
+ else {
+ if (netif_rx_schedule_prep(dev)) {
+ /* disable rx interrupts */
+ FC(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ __netif_rx_schedule(dev);
+ } else {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s driver bug! interrupt while in poll!\n",
+ dev->name);
+ FC(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ }
+ }
+ }
+
+ if ((int_events & FEC_ENET_TXF) != 0)
+ fec_enet_tx(dev);
+ }
+
+ return IRQ_HANDLED;
+}
+
+/* This interrupt occurs when the PHY detects a link change. */
+static irqreturn_t
+fec_mii_link_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ struct fec_enet_private *fep;
+ const struct fec_platform_info *fpi;
+
+ if (unlikely(dev == NULL))
+ return IRQ_NONE;
+
+ fep = netdev_priv(dev);
+ fpi = fep->fpi;
+
+ if (!fpi->use_mdio)
+ return IRQ_NONE;
+
+ /*
+ * Acknowledge the interrupt if possible. If we have not
+ * found the PHY yet we can't process or acknowledge the
+ * interrupt now. Instead we ignore this interrupt for now,
+ * which we can do since it is edge triggered. It will be
+ * acknowledged later by fec_enet_open().
+ */
+ if (!fep->phy)
+ return IRQ_NONE;
+
+ fec_mii_ack_int(dev);
+ fec_mii_link_status_change_check(dev, 0);
+
+ return IRQ_HANDLED;
+}
+
+
+/**********************************************************************************/
+
+static int fec_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ cbd_t *bdp;
+ int curidx;
+ unsigned long flags;
+
+ spin_lock_irqsave(&fep->tx_lock, flags);
+
+ /*
+ * Fill in a Tx ring entry
+ */
+ bdp = fep->cur_tx;
+
+ if (!fep->tx_free || (CBDR_SC(bdp) & BD_ENET_TX_READY)) {
+ netif_stop_queue(dev);
+ spin_unlock_irqrestore(&fep->tx_lock, flags);
+
+		/*
+		 * Oops. All transmit buffers are full. Bail out.
+		 * This should not happen, since the tx queue should be stopped.
+		 */
+		printk(KERN_WARNING DRV_MODULE_NAME
+		       ": %s tx queue full!\n", dev->name);
+ return 1;
+ }
+
+ curidx = bdp - fep->tx_bd_base;
+ /*
+ * Clear all of the status flags.
+ */
+ CBDC_SC(bdp, BD_ENET_TX_STATS);
+
+ /*
+ * Save skb pointer.
+ */
+ fep->tx_skbuff[curidx] = skb;
+
+ fep->stats.tx_bytes += skb->len;
+
+ /*
+ * Push the data cache so the CPM does not get stale memory data.
+ */
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skb->data,
+ skb->len, DMA_TO_DEVICE));
+ CBDW_DATLEN(bdp, skb->len);
+
+ dev->trans_start = jiffies;
+
+ /*
+ * If this was the last BD in the ring, start at the beginning again.
+ */
+ if ((CBDR_SC(bdp) & BD_ENET_TX_WRAP) == 0)
+ fep->cur_tx++;
+ else
+ fep->cur_tx = fep->tx_bd_base;
+
+ if (!--fep->tx_free)
+ netif_stop_queue(dev);
+
+ /*
+ * Trigger transmission start
+ */
+ CBDS_SC(bdp, BD_ENET_TX_READY | BD_ENET_TX_INTR |
+ BD_ENET_TX_LAST | BD_ENET_TX_TC);
+ FW(fecp, x_des_active, 0x01000000);
+
+ spin_unlock_irqrestore(&fep->tx_lock, flags);
+
+ return 0;
+}
+
+static void fec_timeout(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->stats.tx_errors++;
+
+ if (fep->tx_free)
+ netif_wake_queue(dev);
+
+ /* check link status again */
+ fec_mii_link_status_change_check(dev, 0);
+}
+
+static int fec_enet_open(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ unsigned long flags;
+
+ /* Install our interrupt handler. */
+ if (request_irq(fpi->fec_irq, fec_enet_interrupt, 0, "fec", dev) != 0) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s Could not allocate FEC IRQ!", dev->name);
+ return -EINVAL;
+ }
+
+ /* Install our phy interrupt handler */
+ if (fpi->phy_irq != -1 &&
+ request_irq(fpi->phy_irq, fec_mii_link_interrupt, 0, "fec-phy",
+ dev) != 0) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s Could not allocate PHY IRQ!", dev->name);
+ free_irq(fpi->fec_irq, dev);
+ return -EINVAL;
+ }
+
+ if (fpi->use_mdio) {
+ fec_mii_startup(dev);
+ netif_carrier_off(dev);
+ fec_mii_link_status_change_check(dev, 1);
+ } else {
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_restart(dev, 1, 100); /* XXX this sucks */
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ netif_carrier_on(dev);
+ netif_start_queue(dev);
+ }
+ return 0;
+}
+
+static int fec_enet_close(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ unsigned long flags;
+
+ netif_stop_queue(dev);
+ netif_carrier_off(dev);
+
+ if (fpi->use_mdio)
+ fec_mii_shutdown(dev);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_stop(dev);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ /* release any irqs */
+ if (fpi->phy_irq != -1)
+ free_irq(fpi->phy_irq, dev);
+ free_irq(fpi->fec_irq, dev);
+
+ return 0;
+}
+
+static struct net_device_stats *fec_enet_get_stats(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return &fep->stats;
+}
+
+static int fec_enet_poll(struct net_device *dev, int *budget)
+{
+ return fec_enet_rx_common(dev, budget);
+}
+
+/*************************************************************************/
+
+static void fec_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_MODULE_NAME);
+ strcpy(info->version, DRV_MODULE_VERSION);
+}
+
+static int fec_get_regs_len(struct net_device *dev)
+{
+ return sizeof(fec_t);
+}
+
+static void fec_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len < sizeof(fec_t))
+ return;
+
+ regs->version = 0;
+ spin_lock_irqsave(&fep->lock, flags);
+ memcpy_fromio(p, fep->fecp, sizeof(fec_t));
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+static int fec_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = mii_ethtool_gset(&fep->mii_if, cmd);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return rc;
+}
+
+static int fec_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = mii_ethtool_sset(&fep->mii_if, cmd);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return rc;
+}
+
+static int fec_nway_reset(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return mii_nway_restart(&fep->mii_if);
+}
+
+static __u32 fec_get_msglevel(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return fep->msg_enable;
+}
+
+static void fec_set_msglevel(struct net_device *dev, __u32 value)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fep->msg_enable = value;
+}
+
+static struct ethtool_ops fec_ethtool_ops = {
+ .get_drvinfo = fec_get_drvinfo,
+ .get_regs_len = fec_get_regs_len,
+ .get_settings = fec_get_settings,
+ .set_settings = fec_set_settings,
+ .nway_reset = fec_nway_reset,
+ .get_link = ethtool_op_get_link,
+ .get_msglevel = fec_get_msglevel,
+ .set_msglevel = fec_set_msglevel,
+ .get_tx_csum = ethtool_op_get_tx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum, /* local! */
+ .get_sg = ethtool_op_get_sg,
+ .set_sg = ethtool_op_set_sg,
+ .get_regs = fec_get_regs,
+};
+
+static int fec_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&rq->ifr_data;
+ unsigned long flags;
+ int rc;
+
+ if (!netif_running(dev))
+ return -EINVAL;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = generic_mii_ioctl(&fep->mii_if, mii, cmd, NULL);
+ spin_unlock_irqrestore(&fep->lock, flags);
+ return rc;
+}
+
+int fec_8xx_init_one(const struct fec_platform_info *fpi,
+ struct net_device **devp)
+{
+ immap_t *immap = (immap_t *) IMAP_ADDR;
+ static int fec_8xx_version_printed = 0;
+ struct net_device *dev = NULL;
+ struct fec_enet_private *fep = NULL;
+ fec_t *fecp = NULL;
+ int i;
+ int err = 0;
+ int registered = 0;
+ __u32 siel;
+
+ *devp = NULL;
+
+ switch (fpi->fec_no) {
+ case 0:
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+ break;
+#ifdef CONFIG_DUET
+ case 1:
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec2;
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+
+ if (fec_8xx_version_printed++ == 0)
+ printk(KERN_INFO "%s", version);
+
+ i = sizeof(*fep) + (sizeof(struct sk_buff **) *
+ (fpi->rx_ring + fpi->tx_ring));
+
+ dev = alloc_etherdev(i);
+ if (!dev) {
+ err = -ENOMEM;
+ goto err;
+ }
+ SET_MODULE_OWNER(dev);
+
+ fep = netdev_priv(dev);
+
+ /* partial reset of FEC */
+ fec_whack_reset(fecp);
+
+ /* point rx_skbuff, tx_skbuff */
+ fep->rx_skbuff = (struct sk_buff **)&fep[1];
+ fep->tx_skbuff = fep->rx_skbuff + fpi->rx_ring;
+
+ fep->fecp = fecp;
+ fep->fpi = fpi;
+
+ /* init locks */
+ spin_lock_init(&fep->lock);
+ spin_lock_init(&fep->tx_lock);
+
+ /*
+ * Set the Ethernet address.
+ */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = fpi->macaddr[i];
+
+ fep->ring_base = dma_alloc_coherent(NULL,
+ (fpi->tx_ring + fpi->rx_ring) *
+ sizeof(cbd_t), &fep->ring_mem_addr,
+ GFP_KERNEL);
+ if (fep->ring_base == NULL) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s dma alloc failed.\n", dev->name);
+ err = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set receive and transmit descriptor base.
+ */
+ fep->rx_bd_base = fep->ring_base;
+ fep->tx_bd_base = fep->rx_bd_base + fpi->rx_ring;
+
+ /* initialize ring size variables */
+ fep->tx_ring = fpi->tx_ring;
+ fep->rx_ring = fpi->rx_ring;
+
+ /* SIU interrupt */
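+	/*
+	 * Program the SIU interrupt edge/level register so the PHY IRQ
+	 * line is edge-sensitive; only the even-numbered (IRQx) positions
+	 * carry an edge select bit, hence the phy_irq & 1 test below.
+	 */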
+ if (fpi->phy_irq != -1 &&
+ (fpi->phy_irq >= SIU_IRQ0 && fpi->phy_irq < SIU_LEVEL7)) {
+
+ siel = in_be32(&immap->im_siu_conf.sc_siel);
+ if ((fpi->phy_irq & 1) == 0)
+ siel |= (0x80000000 >> fpi->phy_irq);
+ else
+ siel &= ~(0x80000000 >> (fpi->phy_irq & ~1));
+ out_be32(&immap->im_siu_conf.sc_siel, siel);
+ }
+
+ /*
+ * The FEC Ethernet specific entries in the device structure.
+ */
+ dev->open = fec_enet_open;
+ dev->hard_start_xmit = fec_enet_start_xmit;
+ dev->tx_timeout = fec_timeout;
+ dev->watchdog_timeo = TX_TIMEOUT;
+ dev->stop = fec_enet_close;
+ dev->get_stats = fec_enet_get_stats;
+ dev->set_multicast_list = fec_set_multicast_list;
+ dev->set_mac_address = fec_set_mac_address;
+ if (fpi->use_napi) {
+ dev->poll = fec_enet_poll;
+ dev->weight = fpi->napi_weight;
+ }
+ dev->ethtool_ops = &fec_ethtool_ops;
+ dev->do_ioctl = fec_ioctl;
+
+ fep->fec_phy_speed =
+ ((((fpi->sys_clk + 4999999) / 2500000) / 2) & 0x3F) << 1;
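+	/*
+	 * fec_phy_speed is the MII_SPEED divider: roughly
+	 * ceil(sys_clk / 5 MHz) shifted into bits 6:1, which keeps the
+	 * MDC management clock at or below the 2.5 MHz MII limit.
+	 */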
+
+ init_timer(&fep->phy_timer_list);
+
+ /* partial reset of FEC so that only MII works */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+ FW(fecp, ievent, 0xffc0);
+ FW(fecp, ivec, (fpi->fec_irq / 2) << 29);
+ FW(fecp, imask, 0);
+ FW(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+
+ netif_carrier_off(dev);
+
+ err = register_netdev(dev);
+ if (err != 0)
+ goto err;
+ registered = 1;
+
+ if (fpi->use_mdio) {
+ fep->mii_if.dev = dev;
+ fep->mii_if.mdio_read = fec_mii_read;
+ fep->mii_if.mdio_write = fec_mii_write;
+ fep->mii_if.phy_id_mask = 0x1f;
+ fep->mii_if.reg_num_mask = 0x1f;
+ fep->mii_if.phy_id = fec_mii_phy_id_detect(dev);
+ }
+
+ *devp = dev;
+
+ return 0;
+
+ err:
+ if (dev != NULL) {
+ if (fecp != NULL)
+ fec_whack_reset(fecp);
+
+ if (registered)
+ unregister_netdev(dev);
+
+ if (fep != NULL) {
+ if (fep->ring_base)
+ dma_free_coherent(NULL,
+ (fpi->tx_ring +
+ fpi->rx_ring) *
+ sizeof(cbd_t), fep->ring_base,
+ fep->ring_mem_addr);
+ }
+ free_netdev(dev);
+ }
+ return err;
+}
+
+int fec_8xx_cleanup_one(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ fec_whack_reset(fecp);
+
+ unregister_netdev(dev);
+
+ dma_free_coherent(NULL, (fpi->tx_ring + fpi->rx_ring) * sizeof(cbd_t),
+ fep->ring_base, fep->ring_mem_addr);
+
+ free_netdev(dev);
+
+ return 0;
+}
+
+/**************************************************************************************/
+/**************************************************************************************/
+/**************************************************************************************/
+
+static int __init fec_8xx_init(void)
+{
+ return fec_8xx_platform_init();
+}
+
+static void __exit fec_8xx_cleanup(void)
+{
+ fec_8xx_platform_cleanup();
+}
+
+/**************************************************************************************/
+/**************************************************************************************/
+/**************************************************************************************/
+
+module_init(fec_8xx_init);
+module_exit(fec_8xx_cleanup);
--- /dev/null
+/*
+ * Fast Ethernet Controller (FEC) driver for Motorola MPC8xx.
+ *
+ * Copyright (c) 2003 Intracom S.A.
+ * by Pantelis Antoniou <panto@intracom.gr>
+ *
+ * Heavily based on original FEC driver by Dan Malek <dan@embeddededge.com>
+ * and modifications by Joakim Tjernlund <joakim.tjernlund@lumentis.se>
+ *
+ * Released under the GPL
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/bitops.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+
+/*************************************************/
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+/* Make MII read/write commands for the FEC.
+*/
+#define mk_mii_read(REG) (0x60020000 | ((REG & 0x1f) << 18))
+#define mk_mii_write(REG, VAL) (0x50020000 | ((REG & 0x1f) << 18) | (VAL & 0xffff))
+#define mk_mii_end 0
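+
+/*
+ * Each macro yields a complete 32-bit management frame for the
+ * mii_data register: ST=01, OP=10 (read) or 01 (write), TA=10, with
+ * the register number in bits 22:18. The callers below OR the PHY
+ * address into bits 27:23, e.g.
+ *
+ *   FW(fecp, mii_data, (phy_id << 23) | mk_mii_read(location));
+ */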
+
+/*************************************************/
+
+/* XXX both FECs use the MII interface of FEC1 */
+static spinlock_t fec_mii_lock = SPIN_LOCK_UNLOCKED;
+
+#define FEC_MII_LOOPS 10000
+
+int fec_mii_read(struct net_device *dev, int phy_id, int location)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp;
+ int i, ret = -1;
+ unsigned long flags;
+
+ /* XXX MII interface is only connected to FEC1 */
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+
+ spin_lock_irqsave(&fec_mii_lock, flags);
+
+ if ((FR(fecp, r_cntrl) & FEC_RCNTRL_MII_MODE) == 0) {
+ FS(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FS(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+ FW(fecp, ievent, FEC_ENET_MII);
+ }
+
+ /* Add PHY address to register command. */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+ FW(fecp, mii_data, (phy_id << 23) | mk_mii_read(location));
+
+ for (i = 0; i < FEC_MII_LOOPS; i++)
+ if ((FR(fecp, ievent) & FEC_ENET_MII) != 0)
+ break;
+
+ if (i < FEC_MII_LOOPS) {
+ FW(fecp, ievent, FEC_ENET_MII);
+ ret = FR(fecp, mii_data) & 0xffff;
+ }
+
+ spin_unlock_irqrestore(&fec_mii_lock, flags);
+
+ return ret;
+}
+
+void fec_mii_write(struct net_device *dev, int phy_id, int location, int value)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp;
+ unsigned long flags;
+ int i;
+
+ /* XXX MII interface is only connected to FEC1 */
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+
+ spin_lock_irqsave(&fec_mii_lock, flags);
+
+ if ((FR(fecp, r_cntrl) & FEC_RCNTRL_MII_MODE) == 0) {
+ FS(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FS(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+ FW(fecp, ievent, FEC_ENET_MII);
+ }
+
+ /* Add PHY address to register command. */
+ FW(fecp, mii_speed, fep->fec_phy_speed); /* always adapt mii speed */
+ FW(fecp, mii_data, (phy_id << 23) | mk_mii_write(location, value));
+
+ for (i = 0; i < FEC_MII_LOOPS; i++)
+ if ((FR(fecp, ievent) & FEC_ENET_MII) != 0)
+ break;
+
+ if (i < FEC_MII_LOOPS)
+ FW(fecp, ievent, FEC_ENET_MII);
+
+ spin_unlock_irqrestore(&fec_mii_lock, flags);
+}
+
+/*************************************************/
+
+#ifdef CONFIG_FEC_8XX_GENERIC_PHY
+
+/*
+ * Generic PHY support.
+ * Should work for all PHYs, but link change is detected by polling
+ */
+
+static void generic_timer_callback(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->phy_timer_list.expires = jiffies + HZ / 2;
+
+ add_timer(&fep->phy_timer_list);
+
+ fec_mii_link_status_change_check(dev, 0);
+}
+
+static void generic_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->phy_timer_list.expires = jiffies + HZ / 2; /* every 500ms */
+ fep->phy_timer_list.data = (unsigned long)dev;
+ fep->phy_timer_list.function = generic_timer_callback;
+ add_timer(&fep->phy_timer_list);
+}
+
+static void generic_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ del_timer_sync(&fep->phy_timer_list);
+}
+
+#endif
+
+#ifdef CONFIG_FEC_8XX_DM9161_PHY
+
+/* ------------------------------------------------------------------------- */
+/* The Davicom DM9161 is used on the NETTA board */
+
+/* register definitions */
+
+#define MII_DM9161_ACR 16 /* Aux. Config Register */
+#define MII_DM9161_ACSR 17 /* Aux. Config/Status Register */
+#define MII_DM9161_10TCSR 18 /* 10BaseT Config/Status Reg. */
+#define MII_DM9161_INTR 21 /* Interrupt Register */
+#define MII_DM9161_RECR 22 /* Receive Error Counter Reg. */
+#define MII_DM9161_DISCR 23 /* Disconnect Counter Register */
+
+static void dm9161_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_write(dev, fep->mii_if.phy_id, MII_DM9161_INTR, 0x0000);
+}
+
+static void dm9161_ack_int(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_read(dev, fep->mii_if.phy_id, MII_DM9161_INTR);
+}
+
+static void dm9161_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_write(dev, fep->mii_if.phy_id, MII_DM9161_INTR, 0x0f00);
+}
+
+#endif
+
+/**********************************************************************************/
+
+static const struct phy_info phy_info[] = {
+#ifdef CONFIG_FEC_8XX_DM9161_PHY
+ {
+ .id = 0x00181b88,
+ .name = "DM9161",
+ .startup = dm9161_startup,
+ .ack_int = dm9161_ack_int,
+ .shutdown = dm9161_shutdown,
+ },
+#endif
+#ifdef CONFIG_FEC_8XX_GENERIC_PHY
+ {
+ .id = 0,
+ .name = "GENERIC",
+ .startup = generic_startup,
+ .shutdown = generic_shutdown,
+ },
+#endif
+};
+
+/**********************************************************************************/
+
+int fec_mii_phy_id_detect(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ int i, r, start, end, phytype, physubtype;
+ const struct phy_info *phy;
+ int phy_hwid, phy_id;
+
+ /* if no MDIO */
+ if (fpi->use_mdio == 0)
+ return -1;
+
+ phy_hwid = -1;
+ fep->phy = NULL;
+
+ /* auto-detect? */
+ if (fpi->phy_addr == -1) {
+ start = 0;
+ end = 32;
+ } else { /* direct */
+ start = fpi->phy_addr;
+ end = start + 1;
+ }
+
+ for (phy_id = start; phy_id < end; phy_id++) {
+ r = fec_mii_read(dev, phy_id, MII_PHYSID1);
+ if (r == -1 || (phytype = (r & 0xffff)) == 0xffff)
+ continue;
+ r = fec_mii_read(dev, phy_id, MII_PHYSID2);
+ if (r == -1 || (physubtype = (r & 0xffff)) == 0xffff)
+ continue;
+ phy_hwid = (phytype << 16) | physubtype;
+ if (phy_hwid != -1)
+ break;
+ }
+
+ if (phy_hwid == -1) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s No PHY detected!\n", dev->name);
+ return -1;
+ }
+
+	for (i = 0, phy = phy_info; i < ARRAY_SIZE(phy_info); i++, phy++)
+ if (phy->id == (phy_hwid >> 4) || phy->id == 0)
+ break;
+
+	if (i >= ARRAY_SIZE(phy_info)) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s PHY id 0x%08x is not supported!\n",
+ dev->name, phy_hwid);
+ return -1;
+ }
+
+ fep->phy = phy;
+
+ printk(KERN_INFO DRV_MODULE_NAME
+ ": %s Phy @ 0x%x, type %s (0x%08x)\n",
+ dev->name, phy_id, fep->phy->name, phy_hwid);
+
+ return phy_id;
+}
+
+void fec_mii_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->startup == NULL)
+ return;
+
+ (*fep->phy->startup) (dev);
+}
+
+void fec_mii_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->shutdown == NULL)
+ return;
+
+ (*fep->phy->shutdown) (dev);
+}
+
+void fec_mii_ack_int(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->ack_int == NULL)
+ return;
+
+ (*fep->phy->ack_int) (dev);
+}
+
+/* helper function */
+static int mii_negotiated(struct mii_if_info *mii)
+{
+ int advert, lpa, val;
+
+ if (!mii_link_ok(mii))
+ return 0;
+
+ val = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_BMSR);
+ if ((val & BMSR_ANEGCOMPLETE) == 0)
+ return 0;
+
+ advert = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_ADVERTISE);
+ lpa = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_LPA);
+
+ return mii_nway_result(advert & lpa);
+}
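+
+/*
+ * mii_nway_result() reports the best common mode as LPA_* bits; the
+ * 10/100 LPA_* and ADVERTISE_* masks share bit positions, which is why
+ * fec_mii_link_status_change_check() below can test the result against
+ * ADVERTISE_FULL and ADVERTISE_100FULL.
+ */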
+
+void fec_mii_link_status_change_check(struct net_device *dev, int init_media)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned int media;
+ unsigned long flags;
+
+ if (mii_check_media(&fep->mii_if, netif_msg_link(fep), init_media) == 0)
+ return;
+
+ media = mii_negotiated(&fep->mii_if);
+
+ if (netif_carrier_ok(dev)) {
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_restart(dev, !!(media & ADVERTISE_FULL),
+ (media & (ADVERTISE_100FULL | ADVERTISE_100HALF)) ?
+ 100 : 10);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ netif_start_queue(dev);
+ } else {
+ netif_stop_queue(dev);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_stop(dev);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ }
+}
--- /dev/null
+/*
+ * ibm_emac.h
+ *
+ *
+ * Armin Kuster akuster@mvista.com
+ * June, 2002
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_H_
+#define _IBM_EMAC_H_
+/* General defines needed for the driver */
+
+/* Emac */
+typedef struct emac_regs {
+ u32 em0mr0;
+ u32 em0mr1;
+ u32 em0tmr0;
+ u32 em0tmr1;
+ u32 em0rmr;
+ u32 em0isr;
+ u32 em0iser;
+ u32 em0iahr;
+ u32 em0ialr;
+ u32 em0vtpid;
+ u32 em0vtci;
+ u32 em0ptr;
+ u32 em0iaht1;
+ u32 em0iaht2;
+ u32 em0iaht3;
+ u32 em0iaht4;
+ u32 em0gaht1;
+ u32 em0gaht2;
+ u32 em0gaht3;
+ u32 em0gaht4;
+ u32 em0lsah;
+ u32 em0lsal;
+ u32 em0ipgvr;
+ u32 em0stacr;
+ u32 em0trtr;
+ u32 em0rwmr;
+} emac_t;
+
+/* MODE REG 0 */
+#define EMAC_M0_RXI 0x80000000
+#define EMAC_M0_TXI 0x40000000
+#define EMAC_M0_SRST 0x20000000
+#define EMAC_M0_TXE 0x10000000
+#define EMAC_M0_RXE 0x08000000
+#define EMAC_M0_WKE 0x04000000
+
+/* MODE Reg 1 */
+#define EMAC_M1_FDE 0x80000000
+#define EMAC_M1_ILE 0x40000000
+#define EMAC_M1_VLE 0x20000000
+#define EMAC_M1_EIFC 0x10000000
+#define EMAC_M1_APP 0x08000000
+#define EMAC_M1_AEMI 0x02000000
+#define EMAC_M1_IST 0x01000000
+#define EMAC_M1_MF_1000GPCS 0x00c00000 /* Internal GPCS */
+#define EMAC_M1_MF_1000MBPS 0x00800000 /* External GPCS */
+#define EMAC_M1_MF_100MBPS 0x00400000
+#define EMAC_M1_RFS_16K 0x00280000 /* 000 for 512 byte */
+#define EMAC_M1_TR 0x00008000
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_M1_RFS_8K 0x00200000
+#define EMAC_M1_RFS_4K 0x00180000
+#define EMAC_M1_RFS_2K 0x00100000
+#define EMAC_M1_RFS_1K 0x00080000
+#define EMAC_M1_TX_FIFO_16K 0x00050000 /* 0's for 512 byte */
+#define EMAC_M1_TX_FIFO_8K 0x00040000
+#define EMAC_M1_TX_FIFO_4K 0x00030000
+#define EMAC_M1_TX_FIFO_2K 0x00020000
+#define EMAC_M1_TX_FIFO_1K 0x00010000
+#define EMAC_M1_TX_TR 0x00008000
+#define EMAC_M1_TX_MWSW 0x00001000 /* 0 wait for status */
+#define EMAC_M1_JUMBO_ENABLE 0x00000800 /* Up to 9K jumbo frames */
+#define EMAC_M1_OPB_CLK_66 0x00000008 /* 66 MHz */
+#define EMAC_M1_OPB_CLK_83 0x00000010 /* 83 MHz */
+#define EMAC_M1_OPB_CLK_100 0x00000018 /* 100 MHz */
+#define EMAC_M1_OPB_CLK_100P 0x00000020 /* 100 MHz+ */
+#else /* CONFIG_IBM_EMAC4 */
+#define EMAC_M1_RFS_4K 0x00300000 /* ~4k for 512 byte */
+#define EMAC_M1_RFS_2K 0x00200000
+#define EMAC_M1_RFS_1K 0x00100000
+#define EMAC_M1_TX_FIFO_2K 0x00080000 /* 0's for 512 byte */
+#define EMAC_M1_TX_FIFO_1K 0x00040000
+#define EMAC_M1_TR0_DEPEND 0x00010000 /* 0'x for single packet */
+#define EMAC_M1_TR1_DEPEND 0x00004000
+#define EMAC_M1_TR1_MULTI 0x00002000
+#define EMAC_M1_JUMBO_ENABLE 0x00001000
+#endif /* CONFIG_IBM_EMAC4 */
+#define EMAC_M1_BASE (EMAC_M1_TX_FIFO_2K | \
+ EMAC_M1_APP | \
+ EMAC_M1_TR)
+
+/* Transmit Mode Register 0 */
+#define EMAC_TMR0_GNP0 0x80000000
+#define EMAC_TMR0_GNP1 0x40000000
+#define EMAC_TMR0_GNPD 0x20000000
+#define EMAC_TMR0_FC 0x10000000
+#define EMAC_TMR0_TFAE_2_32 0x00000001
+#define EMAC_TMR0_TFAE_4_64 0x00000002
+#define EMAC_TMR0_TFAE_8_128 0x00000003
+#define EMAC_TMR0_TFAE_16_256 0x00000004
+#define EMAC_TMR0_TFAE_32_512 0x00000005
+#define EMAC_TMR0_TFAE_64_1024 0x00000006
+#define EMAC_TMR0_TFAE_128_2048 0x00000007
+
+/* Receive Mode Register */
+#define EMAC_RMR_SP 0x80000000
+#define EMAC_RMR_SFCS 0x40000000
+#define EMAC_RMR_ARRP 0x20000000
+#define EMAC_RMR_ARP 0x10000000
+#define EMAC_RMR_AROP 0x08000000
+#define EMAC_RMR_ARPI 0x04000000
+#define EMAC_RMR_PPP 0x02000000
+#define EMAC_RMR_PME 0x01000000
+#define EMAC_RMR_PMME 0x00800000
+#define EMAC_RMR_IAE 0x00400000
+#define EMAC_RMR_MIAE 0x00200000
+#define EMAC_RMR_BAE 0x00100000
+#define EMAC_RMR_MAE 0x00080000
+#define EMAC_RMR_RFAF_2_32 0x00000001
+#define EMAC_RMR_RFAF_4_64 0x00000002
+#define EMAC_RMR_RFAF_8_128 0x00000003
+#define EMAC_RMR_RFAF_16_256 0x00000004
+#define EMAC_RMR_RFAF_32_512 0x00000005
+#define EMAC_RMR_RFAF_64_1024 0x00000006
+#define EMAC_RMR_RFAF_128_2048 0x00000007
+#define EMAC_RMR_BASE (EMAC_RMR_IAE | EMAC_RMR_BAE)
+
+/* Interrupt Status & enable Regs */
+#define EMAC_ISR_OVR 0x02000000
+#define EMAC_ISR_PP 0x01000000
+#define EMAC_ISR_BP 0x00800000
+#define EMAC_ISR_RP 0x00400000
+#define EMAC_ISR_SE 0x00200000
+#define EMAC_ISR_ALE 0x00100000
+#define EMAC_ISR_BFCS 0x00080000
+#define EMAC_ISR_PTLE 0x00040000
+#define EMAC_ISR_ORE 0x00020000
+#define EMAC_ISR_IRE 0x00010000
+#define EMAC_ISR_DBDM 0x00000200
+#define EMAC_ISR_DB0 0x00000100
+#define EMAC_ISR_SE0 0x00000080
+#define EMAC_ISR_TE0 0x00000040
+#define EMAC_ISR_DB1 0x00000020
+#define EMAC_ISR_SE1 0x00000010
+#define EMAC_ISR_TE1 0x00000008
+#define EMAC_ISR_MOS 0x00000002
+#define EMAC_ISR_MOF 0x00000001
+
+/* STA CONTROL REG */
+#define EMAC_STACR_OC 0x00008000
+#define EMAC_STACR_PHYE 0x00004000
+#define EMAC_STACR_WRITE 0x00002000
+#define EMAC_STACR_READ 0x00001000
+#define EMAC_STACR_CLK_83MHZ 0x00000800 /* 0's for 50 MHz */
+#define EMAC_STACR_CLK_66MHZ 0x00000400
+#define EMAC_STACR_CLK_100MHZ 0x00000C00
+
+/* Transmit Request Threshold Register */
+#define EMAC_TRTR_1600 0x18000000 /* 0's for 64 Bytes */
+#define EMAC_TRTR_1024 0x0f000000
+#define EMAC_TRTR_512 0x07000000
+#define EMAC_TRTR_256 0x03000000
+#define EMAC_TRTR_192 0x10000000 /* XXX: breaks the ((bytes/64)-1)<<24 pattern of the other thresholds; 192 bytes would be 0x02000000 */
+#define EMAC_TRTR_128 0x01000000
+
+#define EMAC_TX_CTRL_GFCS 0x0200
+#define EMAC_TX_CTRL_GP 0x0100
+#define EMAC_TX_CTRL_ISA 0x0080
+#define EMAC_TX_CTRL_RSA 0x0040
+#define EMAC_TX_CTRL_IVT 0x0020
+#define EMAC_TX_CTRL_RVT 0x0010
+#define EMAC_TX_CTRL_TAH_CSUM 0x000e /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG4 0x000a /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG3 0x0008 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG2 0x0006 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG1 0x0004 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG0 0x0002 /* TAH only */
+#define EMAC_TX_CTRL_TAH_DIS 0x0000 /* TAH only */
+
+#define EMAC_TX_CTRL_DFLT ( \
+ MAL_TX_CTRL_INTR | EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP )
+
+/* madmal transmit status / Control bits */
+#define EMAC_TX_ST_BFCS 0x0200
+#define EMAC_TX_ST_BPP 0x0100
+#define EMAC_TX_ST_LCS 0x0080
+#define EMAC_TX_ST_ED 0x0040
+#define EMAC_TX_ST_EC 0x0020
+#define EMAC_TX_ST_LC 0x0010
+#define EMAC_TX_ST_MC 0x0008
+#define EMAC_TX_ST_SC 0x0004
+#define EMAC_TX_ST_UR 0x0002
+#define EMAC_TX_ST_SQE 0x0001
+
+/* madmal receive status / Control bits */
+#define EMAC_RX_ST_OE 0x0200
+#define EMAC_RX_ST_PP 0x0100
+#define EMAC_RX_ST_BP 0x0080
+#define EMAC_RX_ST_RP 0x0040
+#define EMAC_RX_ST_SE 0x0020
+#define EMAC_RX_ST_AE 0x0010
+#define EMAC_RX_ST_BFCS 0x0008
+#define EMAC_RX_ST_PTL 0x0004
+#define EMAC_RX_ST_ORE 0x0002
+#define EMAC_RX_ST_IRE 0x0001
+#define EMAC_BAD_RX_PACKET 0x02ff
+#define EMAC_CSUM_VER_ERROR 0x0003
+
+/* identify a bad rx packet dependent on emac features */
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_IS_BAD_RX_PACKET(desc) \
+ (((desc & (EMAC_BAD_RX_PACKET & ~EMAC_CSUM_VER_ERROR)) || \
+ ((desc & EMAC_CSUM_VER_ERROR) == EMAC_RX_ST_ORE) || \
+ ((desc & EMAC_CSUM_VER_ERROR) == EMAC_RX_ST_IRE)))
+#else
+#define EMAC_IS_BAD_RX_PACKET(desc) \
+ (desc & EMAC_BAD_RX_PACKET)
+#endif
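+
+/* A reading of the EMAC4 variant above, inferred only from the masks
+ * (not from the manual): a descriptor is bad when any error bit other
+ * than ORE/IRE is set, or when the two-bit ORE/IRE field holds exactly
+ * one of the two bits. Both bits set together is presumed to be the
+ * checksum-verification indication rather than a genuine receive
+ * error, so that case is not rejected.
+ */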
+
+/* Revision specific EMAC register defaults */
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_M1_DEFAULT (EMAC_M1_BASE | \
+ EMAC_M1_OPB_CLK_83 | \
+ EMAC_M1_TX_MWSW)
+#define EMAC_RMR_DEFAULT (EMAC_RMR_BASE | \
+ EMAC_RMR_RFAF_128_2048)
+#define EMAC_TMR0_XMIT (EMAC_TMR0_GNP0 | \
+ EMAC_TMR0_TFAE_128_2048)
+#define EMAC_TRTR_DEFAULT EMAC_TRTR_1024
+#else /* !CONFIG_IBM_EMAC4 */
+#define EMAC_M1_DEFAULT EMAC_M1_BASE
+#define EMAC_RMR_DEFAULT EMAC_RMR_BASE
+#define EMAC_TMR0_XMIT EMAC_TMR0_GNP0
+#define EMAC_TRTR_DEFAULT EMAC_TRTR_1600
+#endif /* CONFIG_IBM_EMAC4 */
+
+/* SoC implementation specific EMAC register defaults */
+#if defined(CONFIG_440GP)
+#define EMAC_RWMR_DEFAULT 0x80009000
+#define EMAC_TMR0_DEFAULT 0x00000000
+#define EMAC_TMR1_DEFAULT 0xf8640000
+#elif defined(CONFIG_440GX)
+#define EMAC_RWMR_DEFAULT 0x1000a200
+#define EMAC_TMR0_DEFAULT EMAC_TMR0_TFAE_128_2048
+#define EMAC_TMR1_DEFAULT 0x88810000
+#else
+#define EMAC_RWMR_DEFAULT 0x0f002000
+#define EMAC_TMR0_DEFAULT 0x00000000
+#define EMAC_TMR1_DEFAULT 0x380f0000
+#endif /* CONFIG_440GP */
+
+#endif
--- /dev/null
+/*
+ * ibm_emac_core.h
+ *
+ * Ethernet driver for the built-in Ethernet on the IBM 405 PowerPC
+ * processor.
+ *
+ * Armin Kuster akuster@mvista.com
+ * Sept, 2001
+ *
+ * Original driver
+ * Johnnie Peters
+ * jpeters@mvista.com
+ *
+ * Copyright 2000 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_CORE_H_
+#define _IBM_EMAC_CORE_H_
+
+#include <linux/netdevice.h>
+#include <asm/ocp.h>
+#include <asm/mmu.h> /* For phys_addr_t */
+
+#include "ibm_emac.h"
+#include "ibm_emac_phy.h"
+#include "ibm_emac_rgmii.h"
+#include "ibm_emac_zmii.h"
+#include "ibm_emac_mal.h"
+#include "ibm_emac_tah.h"
+
+#ifndef CONFIG_IBM_EMAC_TXB
+#define NUM_TX_BUFF 64
+#define NUM_RX_BUFF 64
+#else
+#define NUM_TX_BUFF CONFIG_IBM_EMAC_TXB
+#define NUM_RX_BUFF CONFIG_IBM_EMAC_RXB
+#endif
+
+/* This does 16 byte alignment, exactly what we need.
+ * The packet length includes FCS, but we don't want to
+ * include that when passing upstream as it messes up
+ * bridging applications.
+ */
+#ifndef CONFIG_IBM_EMAC_SKBRES
+#define SKB_RES 2
+#else
+#define SKB_RES CONFIG_IBM_EMAC_SKBRES
+#endif
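+
+/* Minimal sketch (hypothetical call site, not part of this header) of
+ * how SKB_RES is meant to be used: reserving 2 bytes puts the IP
+ * header at offset 16 of a 16-byte-aligned buffer, which is the
+ * alignment the comment above refers to:
+ *
+ *	skb = alloc_skb(len + SKB_RES, GFP_ATOMIC);
+ *	skb_reserve(skb, SKB_RES);
+ */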
+
+/* Note about alignment. alloc_skb() returns a cache line
+ * aligned buffer. However, dev_alloc_skb() will add 16 more
+ * bytes and "reserve" them, so our buffer will actually end
+ * on a half cache line. What we do is to use directly
+ * alloc_skb, allocate 16 more bytes to match the total amount
+ * allocated by dev_alloc_skb(), but we don't reserve.
+ */
+#define MAX_NUM_BUF_DESC 255
+#define DESC_BUF_SIZE 4080 /* max 4096-16 */
+#define DESC_BUF_SIZE_REG (DESC_BUF_SIZE / 16)
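+
+/* The /16 above assumes the MAL receive channel buffer size registers
+ * (RCBS) count in 16-byte units, so the 4080-byte buffer is programmed
+ * as 255; mal_set_rcbs() writes the value straight into the RCBS DCR,
+ * so callers presumably pass DESC_BUF_SIZE_REG.
+ */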
+
+/* Transmitter timeout. */
+#define TX_TIMEOUT (2*HZ)
+
+/* MDIO latency delay */
+#define MDIO_DELAY 50
+
+/* Power management shift values */
+#define IBM_CPM_EMMII 0 /* Shift value for MII */
+#define IBM_CPM_EMRX 1 /* Shift value for recv */
+#define IBM_CPM_EMTX 2 /* Shift value for MAC */
+#define IBM_CPM_EMAC(x) (((x)>>IBM_CPM_EMMII) | ((x)>>IBM_CPM_EMRX) | ((x)>>IBM_CPM_EMTX))
+
+#define ENET_HEADER_SIZE 14
+#define ENET_FCS_SIZE 4
+#define ENET_DEF_MTU_SIZE 1500
+#define ENET_DEF_BUF_SIZE (ENET_DEF_MTU_SIZE + ENET_HEADER_SIZE + ENET_FCS_SIZE)
+#define EMAC_MIN_FRAME 64
+#define EMAC_MAX_FRAME 9018
+#define EMAC_MIN_MTU (EMAC_MIN_FRAME - ENET_HEADER_SIZE - ENET_FCS_SIZE)
+#define EMAC_MAX_MTU (EMAC_MAX_FRAME - ENET_HEADER_SIZE - ENET_FCS_SIZE)
+
+#ifdef CONFIG_IBM_EMAC_ERRMSG
+void emac_serr_dump_0(struct net_device *dev);
+void emac_serr_dump_1(struct net_device *dev);
+void emac_err_dump(struct net_device *dev, int em0isr);
+void emac_phy_dump(struct net_device *);
+void emac_desc_dump(struct net_device *);
+void emac_mac_dump(struct net_device *);
+void emac_mal_dump(struct net_device *);
+#else
+#define emac_serr_dump_0(dev) do { } while (0)
+#define emac_serr_dump_1(dev) do { } while (0)
+#define emac_err_dump(dev,x) do { } while (0)
+#define emac_phy_dump(dev) do { } while (0)
+#define emac_desc_dump(dev) do { } while (0)
+#define emac_mac_dump(dev) do { } while (0)
+#define emac_mal_dump(dev) do { } while (0)
+#endif
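+
+/* The do { } while (0) bodies above keep the disabled stubs usable as
+ * complete statements (e.g. as the sole body of an un-braced if/else)
+ * while compiling away to nothing.
+ */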
+
+struct ocp_enet_private {
+ struct sk_buff *tx_skb[NUM_TX_BUFF];
+ struct sk_buff *rx_skb[NUM_RX_BUFF];
+ struct mal_descriptor *tx_desc;
+ struct mal_descriptor *rx_desc;
+ struct mal_descriptor *rx_dirty;
+ struct net_device_stats stats;
+ int tx_cnt;
+ int rx_slot;
+ int dirty_rx;
+ int tx_slot;
+ int ack_slot;
+ int rx_buffer_size;
+
+ struct mii_phy phy_mii;
+ int mii_phy_addr;
+ int want_autoneg;
+ int timer_ticks;
+ struct timer_list link_timer;
+ struct net_device *mdio_dev;
+
+ struct ocp_device *rgmii_dev;
+ int rgmii_input;
+
+ struct ocp_device *zmii_dev;
+ int zmii_input;
+
+ struct ibm_ocp_mal *mal;
+ int mal_tx_chan, mal_rx_chan;
+ struct mal_commac commac;
+
+ struct ocp_device *tah_dev;
+
+ int opened;
+ int going_away;
+ int wol_irq;
+ emac_t *emacp;
+ struct ocp_device *ocpdev;
+ struct net_device *ndev;
+ spinlock_t lock;
+};
+#endif /* _IBM_EMAC_CORE_H_ */
--- /dev/null
+/*
+ * ibm_ocp_debug.c
+ *
+ * This has all the debug routines that were in *_enet.c
+ *
+ * Armin Kuster akuster@mvista.com
+ * April , 2002
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <asm/io.h>
+#include "ibm_ocp_mal.h"
+#include "ibm_ocp_zmii.h"
+#include "ibm_ocp_enet.h"
+
+extern int emac_phy_read(struct net_device *dev, int mii_id, int reg);
+
+void emac_phy_dump(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ unsigned long i;
+ uint data;
+
+ printk(KERN_DEBUG " Prepare for Phy dump....\n");
+ for (i = 0; i < 0x1A; i++) {
+ data = emac_phy_read(dev, fep->mii_phy_addr, i);
+ printk(KERN_DEBUG "Phy reg 0x%lx ==> %4x\n", i, data);
+ if (i == 0x07)
+ i = 0x0f;
+ }
+}
+
+void emac_desc_dump(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ int curr_slot;
+
+ printk(KERN_DEBUG
+ "dumping the receive descriptors: current slot is %d\n",
+ fep->rx_slot);
+ for (curr_slot = 0; curr_slot < NUM_RX_BUFF; curr_slot++) {
+ printk(KERN_DEBUG
+ "Desc %02d: status 0x%04x, length %3d, addr 0x%x\n",
+ curr_slot, fep->rx_desc[curr_slot].ctrl,
+ fep->rx_desc[curr_slot].data_len,
+ (unsigned int)fep->rx_desc[curr_slot].data_ptr);
+ }
+}
+
+void emac_mac_dump(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ volatile emac_t *emacp = fep->emacp;
+
+ printk(KERN_DEBUG "EMAC DEBUG ********** \n");
+ printk(KERN_DEBUG "EMAC_M0 ==> 0x%x\n", in_be32(&emacp->em0mr0));
+ printk(KERN_DEBUG "EMAC_M1 ==> 0x%x\n", in_be32(&emacp->em0mr1));
+ printk(KERN_DEBUG "EMAC_TXM0==> 0x%x\n", in_be32(&emacp->em0tmr0));
+ printk(KERN_DEBUG "EMAC_TXM1==> 0x%x\n", in_be32(&emacp->em0tmr1));
+ printk(KERN_DEBUG "EMAC_RXM ==> 0x%x\n", in_be32(&emacp->em0rmr));
+ printk(KERN_DEBUG "EMAC_ISR ==> 0x%x\n", in_be32(&emacp->em0isr));
+ printk(KERN_DEBUG "EMAC_IER ==> 0x%x\n", in_be32(&emacp->em0iser));
+ printk(KERN_DEBUG "EMAC_IAH ==> 0x%x\n", in_be32(&emacp->em0iahr));
+ printk(KERN_DEBUG "EMAC_IAL ==> 0x%x\n", in_be32(&emacp->em0ialr));
+ printk(KERN_DEBUG "EMAC_VLAN_TPID_REG ==> 0x%x\n",
+ in_be32(&emacp->em0vtpid));
+}
+
+void emac_mal_dump(struct net_device *dev)
+{
+ struct ibm_ocp_mal *mal = ((struct ocp_enet_private *)dev->priv)->mal;
+
+ printk(KERN_DEBUG " MAL DEBUG ********** \n");
+ printk(KERN_DEBUG " MCR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALCR));
+ printk(KERN_DEBUG " ESR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALESR));
+ printk(KERN_DEBUG " IER ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALIER));
+#ifdef CONFIG_40x
+ printk(KERN_DEBUG " DBR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALDBR));
+#endif /* CONFIG_40x */
+ printk(KERN_DEBUG " TXCASR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCASR));
+ printk(KERN_DEBUG " TXCARR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCARR));
+ printk(KERN_DEBUG " TXEOBISR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXEOBISR));
+ printk(KERN_DEBUG " TXDEIR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXDEIR));
+ printk(KERN_DEBUG " RXCASR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXCASR));
+ printk(KERN_DEBUG " RXCARR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXCARR));
+ printk(KERN_DEBUG " RXEOBISR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXEOBISR));
+ printk(KERN_DEBUG " RXDEIR ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXDEIR));
+ printk(KERN_DEBUG " TXCTP0R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCTP0R));
+ printk(KERN_DEBUG " TXCTP1R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCTP1R));
+ printk(KERN_DEBUG " TXCTP2R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCTP2R));
+ printk(KERN_DEBUG " TXCTP3R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALTXCTP3R));
+ printk(KERN_DEBUG " RXCTP0R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXCTP0R));
+ printk(KERN_DEBUG " RXCTP1R ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRXCTP1R));
+ printk(KERN_DEBUG " RCBS0 ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRCBS0));
+ printk(KERN_DEBUG " RCBS1 ==> 0x%x\n",
+ (unsigned int)get_mal_dcrn(mal, DCRN_MALRCBS1));
+}
+
+void emac_serr_dump_0(struct net_device *dev)
+{
+ struct ibm_ocp_mal *mal = ((struct ocp_enet_private *)dev->priv)->mal;
+ unsigned long int mal_error, plb_error, plb_addr;
+
+ mal_error = get_mal_dcrn(mal, DCRN_MALESR);
+ printk(KERN_DEBUG "ppc405_eth_serr: %s channel %ld \n",
+ (mal_error & 0x40000000) ? "Receive" :
+ "Transmit", (mal_error & 0x3e000000) >> 25);
+ printk(KERN_DEBUG " ----- latched error -----\n");
+ if (mal_error & MALESR_DE)
+ printk(KERN_DEBUG " DE: descriptor error\n");
+ if (mal_error & MALESR_OEN)
+ printk(KERN_DEBUG " ONE: OPB non-fullword error\n");
+ if (mal_error & MALESR_OTE)
+ printk(KERN_DEBUG " OTE: OPB timeout error\n");
+ if (mal_error & MALESR_OSE)
+ printk(KERN_DEBUG " OSE: OPB slave error\n");
+
+ if (mal_error & MALESR_PEIN) {
+ plb_error = mfdcr(DCRN_PLB0_BESR);
+ printk(KERN_DEBUG
+ " PEIN: PLB error, PLB0_BESR is 0x%x\n",
+ (unsigned int)plb_error);
+ plb_addr = mfdcr(DCRN_PLB0_BEAR);
+ printk(KERN_DEBUG
+ " PEIN: PLB error, PLB0_BEAR is 0x%x\n",
+ (unsigned int)plb_addr);
+ }
+}
+
+void emac_serr_dump_1(struct net_device *dev)
+{
+ struct ibm_ocp_mal *mal = ((struct ocp_enet_private *)dev->priv)->mal;
+ int mal_error = get_mal_dcrn(mal, DCRN_MALESR);
+
+ printk(KERN_DEBUG " ----- cumulative errors -----\n");
+ if (mal_error & MALESR_DEI)
+ printk(KERN_DEBUG " DEI: descriptor error interrupt\n");
+ if (mal_error & MALESR_ONEI)
+ printk(KERN_DEBUG " OPB non-fullword error interrupt\n");
+ if (mal_error & MALESR_OTEI)
+ printk(KERN_DEBUG " OTEI: timeout error interrupt\n");
+ if (mal_error & MALESR_OSEI)
+ printk(KERN_DEBUG " OSEI: slave error interrupt\n");
+ if (mal_error & MALESR_PBEI)
+ printk(KERN_DEBUG " PBEI: PLB bus error interrupt\n");
+}
+
+void emac_err_dump(struct net_device *dev, int em0isr)
+{
+ printk(KERN_DEBUG "%s: on-chip ethernet error:\n", dev->name);
+
+ if (em0isr & EMAC_ISR_OVR)
+ printk(KERN_DEBUG " OVR: overrun\n");
+ if (em0isr & EMAC_ISR_PP)
+ printk(KERN_DEBUG " PP: control pause packet\n");
+ if (em0isr & EMAC_ISR_BP)
+ printk(KERN_DEBUG " BP: packet error\n");
+ if (em0isr & EMAC_ISR_RP)
+ printk(KERN_DEBUG " RP: runt packet\n");
+ if (em0isr & EMAC_ISR_SE)
+ printk(KERN_DEBUG " SE: short event\n");
+ if (em0isr & EMAC_ISR_ALE)
+ printk(KERN_DEBUG " ALE: odd number of nibbles in packet\n");
+ if (em0isr & EMAC_ISR_BFCS)
+ printk(KERN_DEBUG " BFCS: bad FCS\n");
+ if (em0isr & EMAC_ISR_PTLE)
+ printk(KERN_DEBUG " PTLE: oversized packet\n");
+ if (em0isr & EMAC_ISR_ORE)
+ printk(KERN_DEBUG
+ " ORE: packet length field > max allowed LLC\n");
+ if (em0isr & EMAC_ISR_IRE)
+ printk(KERN_DEBUG " IRE: In Range error\n");
+ if (em0isr & EMAC_ISR_DBDM)
+ printk(KERN_DEBUG " DBDM: xmit error or SQE\n");
+ if (em0isr & EMAC_ISR_DB0)
+ printk(KERN_DEBUG " DB0: xmit error or SQE on TX channel 0\n");
+ if (em0isr & EMAC_ISR_SE0)
+ printk(KERN_DEBUG
+ " SE0: Signal Quality Error test failure from TX channel 0\n");
+ if (em0isr & EMAC_ISR_TE0)
+ printk(KERN_DEBUG " TE0: xmit channel 0 aborted\n");
+ if (em0isr & EMAC_ISR_DB1)
+ printk(KERN_DEBUG " DB1: xmit error or SQE on TX channel \n");
+ if (em0isr & EMAC_ISR_SE1)
+ printk(KERN_DEBUG
+ " SE1: Signal Quality Error test failure from TX channel 1\n");
+ if (em0isr & EMAC_ISR_TE1)
+ printk(KERN_DEBUG " TE1: xmit channel 1 aborted\n");
+ if (em0isr & EMAC_ISR_MOS)
+ printk(KERN_DEBUG " MOS\n");
+ if (em0isr & EMAC_ISR_MOF)
+ printk(KERN_DEBUG " MOF\n");
+
+ emac_mac_dump(dev);
+ emac_mal_dump(dev);
+}
--- /dev/null
+/*
+ * ibm_ocp_mal.c
+ *
+ * Armin Kuster akuster@mvista.com
+ * June, 2002
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/netdevice.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/ocp.h>
+
+#include "ibm_emac_mal.h"
+
+/* Locking: should we share a lock with the client? The client could
+ * provide a lock pointer (optionally) in the commac structure... I
+ * don't think this is really necessary, though.
+ */
+
+/* This lock protects the commac list. On today's UP implementations, it's
+ * really only used as IRQ protection in mal_{register,unregister}_commac().
+ */
+static rwlock_t mal_list_lock = RW_LOCK_UNLOCKED;
+
+int mal_register_commac(struct ibm_ocp_mal *mal, struct mal_commac *commac)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&mal_list_lock, flags);
+
+ /* Don't let multiple commacs claim the same channel */
+ if ((mal->tx_chan_mask & commac->tx_chan_mask) ||
+ (mal->rx_chan_mask & commac->rx_chan_mask)) {
+ write_unlock_irqrestore(&mal_list_lock, flags);
+ return -EBUSY;
+ }
+
+ mal->tx_chan_mask |= commac->tx_chan_mask;
+ mal->rx_chan_mask |= commac->rx_chan_mask;
+
+ list_add(&commac->list, &mal->commac);
+
+ write_unlock_irqrestore(&mal_list_lock, flags);
+
+ MOD_INC_USE_COUNT;
+
+ return 0;
+}
+
+int mal_unregister_commac(struct ibm_ocp_mal *mal, struct mal_commac *commac)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&mal_list_lock, flags);
+
+ mal->tx_chan_mask &= ~commac->tx_chan_mask;
+ mal->rx_chan_mask &= ~commac->rx_chan_mask;
+
+ list_del_init(&commac->list);
+
+ write_unlock_irqrestore(&mal_list_lock, flags);
+
+ MOD_DEC_USE_COUNT;
+
+ return 0;
+}
+
+int mal_set_rcbs(struct ibm_ocp_mal *mal, int channel, unsigned long size)
+{
+ switch (channel) {
+ case 0:
+ set_mal_dcrn(mal, DCRN_MALRCBS0, size);
+ break;
+#ifdef DCRN_MALRCBS1
+ case 1:
+ set_mal_dcrn(mal, DCRN_MALRCBS1, size);
+ break;
+#endif
+#ifdef DCRN_MALRCBS2
+ case 2:
+ set_mal_dcrn(mal, DCRN_MALRCBS2, size);
+ break;
+#endif
+#ifdef DCRN_MALRCBS3
+ case 3:
+ set_mal_dcrn(mal, DCRN_MALRCBS3, size);
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static irqreturn_t mal_serr(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ unsigned long mal_error;
+
+ /*
+ * This SERR applies to one of the devices on the MAL, here we charge
+ * it against the first EMAC registered for the MAL.
+ */
+
+ mal_error = get_mal_dcrn(mal, DCRN_MALESR);
+
+ printk(KERN_ERR "%s: System Error (MALESR=%lx)\n",
+ "MAL" /* FIXME: get the name right */ , mal_error);
+
+ /* FIXME: decipher error */
+ /* FIXME: distribute to commacs, if possible */
+
+ /* Clear the error status register */
+ set_mal_dcrn(mal, DCRN_MALESR, mal_error);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_txeob(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long isr;
+
+ isr = get_mal_dcrn(mal, DCRN_MALTXEOBISR);
+ set_mal_dcrn(mal, DCRN_MALTXEOBISR, isr);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (isr & mc->tx_chan_mask) {
+ mc->ops->txeob(mc->dev, isr & mc->tx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_rxeob(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long isr;
+
+ isr = get_mal_dcrn(mal, DCRN_MALRXEOBISR);
+ set_mal_dcrn(mal, DCRN_MALRXEOBISR, isr);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (isr & mc->rx_chan_mask) {
+ mc->ops->rxeob(mc->dev, isr & mc->rx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_txde(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long deir;
+
+ deir = get_mal_dcrn(mal, DCRN_MALTXDEIR);
+
+ /* FIXME: print which MAL correctly */
+ printk(KERN_WARNING "%s: Tx descriptor error (MALTXDEIR=%lx)\n",
+ "MAL", deir);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (deir & mc->tx_chan_mask) {
+ mc->ops->txde(mc->dev, deir & mc->tx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * This interrupt should be very rare at best. It occurs when
+ * the hardware has a problem with the receive descriptors. The manual
+ * states that it happens when the hardware cannot find a receive
+ * descriptor with the empty bit set. The recovery mechanism is to
+ * traverse the descriptors, handle any that are marked to be
+ * handled, and reinitialize each along the way. At that point the
+ * driver will be restarted.
+ */
+static irqreturn_t mal_rxde(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long deir;
+
+ deir = get_mal_dcrn(mal, DCRN_MALRXDEIR);
+
+ /*
+ * This really is needed. This case encountered in stress testing.
+ */
+ if (deir == 0)
+ return IRQ_HANDLED;
+
+ /* FIXME: print which MAL correctly */
+ printk(KERN_WARNING "%s: Rx descriptor error (MALRXDEIR=%lx)\n",
+ "MAL", deir);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (deir & mc->rx_chan_mask) {
+ mc->ops->rxde(mc->dev, deir & mc->rx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static int __init mal_probe(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_mal *mal = NULL;
+ struct ocp_func_mal_data *maldata;
+ int err = 0;
+
+ maldata = (struct ocp_func_mal_data *)ocpdev->def->additions;
+ if (maldata == NULL) {
+ printk(KERN_ERR "mal%d: Missing additional datas !\n",
+ ocpdev->def->index);
+ return -ENODEV;
+ }
+
+ mal = kmalloc(sizeof(struct ibm_ocp_mal), GFP_KERNEL);
+ if (mal == NULL) {
+ printk(KERN_ERR
+ "mal%d: Out of memory allocating MAL structure !\n",
+ ocpdev->def->index);
+ return -ENOMEM;
+ }
+ memset(mal, 0, sizeof(*mal));
+
+ switch (ocpdev->def->index) {
+ case 0:
+ mal->dcrbase = DCRN_MAL_BASE;
+ break;
+#ifdef DCRN_MAL1_BASE
+ case 1:
+ mal->dcrbase = DCRN_MAL1_BASE;
+ break;
+#endif
+ default:
+ BUG();
+ }
+
+ /**************************/
+
+ INIT_LIST_HEAD(&mal->commac);
+
+ set_mal_dcrn(mal, DCRN_MALRXCARR, 0xFFFFFFFF);
+ set_mal_dcrn(mal, DCRN_MALTXCARR, 0xFFFFFFFF);
+
+ set_mal_dcrn(mal, DCRN_MALCR, MALCR_MMSR); /* 384 */
+ /* FIXME: Add delay */
+
+ /* Set the MAL configuration register */
+ set_mal_dcrn(mal, DCRN_MALCR,
+ MALCR_PLBB | MALCR_OPBBL | MALCR_LEA |
+ MALCR_PLBLT_DEFAULT);
+
+ /* It would be nice to allocate buffers separately for each
+ * channel, but we can't because the channels share the upper
+ * 13 bits of address lines. Each channel's buffer must also
+ * be 4k aligned, so we allocate 4k for each channel. This is
+ * inefficient. FIXME: do better, if possible */
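+ /* Worked example of the layout chosen here: with MAL_DT_ALIGN ==
+ * 4096, channel n's descriptor table sits at tx_phys_addr +
+ * n * 4096 inside one contiguous allocation, so each table is
+ * naturally 4k aligned (see the per-channel MALTXCTPnR writes
+ * below).
+ */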
+ mal->tx_virt_addr = dma_alloc_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN *
+ maldata->num_tx_chans,
+ &mal->tx_phys_addr, GFP_KERNEL);
+ if (mal->tx_virt_addr == NULL) {
+ printk(KERN_ERR
+ "mal%d: Out of memory allocating MAL descriptors !\n",
+ ocpdev->def->index);
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ /* God, oh, god, I hate DCRs */
+ set_mal_dcrn(mal, DCRN_MALTXCTP0R, mal->tx_phys_addr);
+#ifdef DCRN_MALTXCTP1R
+ if (maldata->num_tx_chans > 1)
+ set_mal_dcrn(mal, DCRN_MALTXCTP1R,
+ mal->tx_phys_addr + MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP1R */
+#ifdef DCRN_MALTXCTP2R
+ if (maldata->num_tx_chans > 2)
+ set_mal_dcrn(mal, DCRN_MALTXCTP2R,
+ mal->tx_phys_addr + 2 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP2R */
+#ifdef DCRN_MALTXCTP3R
+ if (maldata->num_tx_chans > 3)
+ set_mal_dcrn(mal, DCRN_MALTXCTP3R,
+ mal->tx_phys_addr + 3 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP3R */
+#ifdef DCRN_MALTXCTP4R
+ if (maldata->num_tx_chans > 4)
+ set_mal_dcrn(mal, DCRN_MALTXCTP4R,
+ mal->tx_phys_addr + 4 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP4R */
+#ifdef DCRN_MALTXCTP5R
+ if (maldata->num_tx_chans > 5)
+ set_mal_dcrn(mal, DCRN_MALTXCTP5R,
+ mal->tx_phys_addr + 5 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP5R */
+#ifdef DCRN_MALTXCTP6R
+ if (maldata->num_tx_chans > 6)
+ set_mal_dcrn(mal, DCRN_MALTXCTP6R,
+ mal->tx_phys_addr + 6 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP6R */
+#ifdef DCRN_MALTXCTP7R
+ if (maldata->num_tx_chans > 7)
+ set_mal_dcrn(mal, DCRN_MALTXCTP7R,
+ mal->tx_phys_addr + 7 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP7R */
+
+ mal->rx_virt_addr = dma_alloc_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN *
+ maldata->num_rx_chans,
+ &mal->rx_phys_addr, GFP_KERNEL);
+
+ set_mal_dcrn(mal, DCRN_MALRXCTP0R, mal->rx_phys_addr);
+#ifdef DCRN_MALRXCTP1R
+ if (maldata->num_rx_chans > 1)
+ set_mal_dcrn(mal, DCRN_MALRXCTP1R,
+ mal->rx_phys_addr + MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP1R */
+#ifdef DCRN_MALRXCTP2R
+ if (maldata->num_rx_chans > 2)
+ set_mal_dcrn(mal, DCRN_MALRXCTP2R,
+ mal->rx_phys_addr + 2 * MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP2R */
+#ifdef DCRN_MALRXCTP3R
+ if (maldata->num_rx_chans > 3)
+ set_mal_dcrn(mal, DCRN_MALRXCTP3R,
+ mal->rx_phys_addr + 3 * MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP3R */
+
+ err = request_irq(maldata->serr_irq, mal_serr, 0, "MAL SERR", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->txde_irq, mal_txde, 0, "MAL TX DE ", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->txeob_irq, mal_txeob, 0, "MAL TX EOB", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->rxde_irq, mal_rxde, 0, "MAL RX DE", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->rxeob_irq, mal_rxeob, 0, "MAL RX EOB", mal);
+ if (err)
+ goto fail;
+
+ set_mal_dcrn(mal, DCRN_MALIER,
+ MALIER_DE | MALIER_NE | MALIER_TE |
+ MALIER_OPBE | MALIER_PLBE);
+
+ /* Advertise me to the rest of the world */
+ ocp_set_drvdata(ocpdev, mal);
+
+ printk(KERN_INFO "mal%d: Initialized, %d tx channels, %d rx channels\n",
+ ocpdev->def->index, maldata->num_tx_chans,
+ maldata->num_rx_chans);
+
+ return 0;
+
+ fail:
+ /* FIXME: dispose requested IRQs ! */
+ if (mal->tx_virt_addr)
+ dma_free_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN * maldata->num_tx_chans,
+ mal->tx_virt_addr, mal->tx_phys_addr);
+ kfree(mal);
+ return err;
+}
+
+static void __exit mal_remove(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_mal *mal = ocp_get_drvdata(ocpdev);
+ struct ocp_func_mal_data *maldata = ocpdev->def->additions;
+
+ BUG_ON(!maldata);
+
+ ocp_set_drvdata(ocpdev, NULL);
+
+ /* FIXME: shut down the MAL, deal with dependency with emac */
+ free_irq(maldata->serr_irq, mal);
+ free_irq(maldata->txde_irq, mal);
+ free_irq(maldata->txeob_irq, mal);
+ free_irq(maldata->rxde_irq, mal);
+ free_irq(maldata->rxeob_irq, mal);
+
+ if (mal->tx_virt_addr)
+ dma_free_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN * maldata->num_tx_chans,
+ mal->tx_virt_addr, mal->tx_phys_addr);
+
+ if (mal->rx_virt_addr)
+ dma_free_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN * maldata->num_rx_chans,
+ mal->rx_virt_addr, mal->rx_phys_addr);
+
+ kfree(mal);
+}
+
+/* Structure for a device driver */
+static struct ocp_device_id mal_ids[] = {
+ { .vendor = OCP_ANY_ID, .function = OCP_FUNC_MAL },
+ { .vendor = OCP_VENDOR_INVALID }
+};
+
+static struct ocp_driver mal_driver = {
+ .name = "mal",
+ .id_table = mal_ids,
+
+ .probe = mal_probe,
+ .remove = mal_remove,
+};
+
+static int __init init_mals(void)
+{
+ int rc;
+
+ rc = ocp_register_driver(&mal_driver);
+ if (rc < 0) {
+ ocp_unregister_driver(&mal_driver);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static void __exit exit_mals(void)
+{
+ ocp_unregister_driver(&mal_driver);
+}
+
+module_init(init_mals);
+module_exit(exit_mals);
--- /dev/null
+#ifndef _IBM_EMAC_MAL_H
+#define _IBM_EMAC_MAL_H
+
+#include <linux/list.h>
+
+#define MAL_DT_ALIGN (4096) /* Alignment for each channel's descriptor table */
+
+#define MAL_CHAN_MASK(chan) (0x80000000 >> (chan))
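+
+/* Example: MAL_CHAN_MASK(0) == 0x80000000 and MAL_CHAN_MASK(2) ==
+ * 0x20000000; channel bits are numbered from the most significant
+ * bit down, matching the chanmask values written to the CASR/CARR
+ * registers by the helpers at the end of this header.
+ */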
+
+/* MAL Buffer Descriptor structure */
+struct mal_descriptor {
+ unsigned short ctrl; /* MAL / Commac status control bits */
+ short data_len; /* Max length is 4K-1 (12 bits) */
+ unsigned char *data_ptr; /* pointer to actual data buffer */
+} __attribute__ ((packed));
+
+/* the following defines are for the MadMAL status and control registers. */
+/* MADMAL transmit and receive status/control bits */
+#define MAL_RX_CTRL_EMPTY 0x8000
+#define MAL_RX_CTRL_WRAP 0x4000
+#define MAL_RX_CTRL_CM 0x2000
+#define MAL_RX_CTRL_LAST 0x1000
+#define MAL_RX_CTRL_FIRST 0x0800
+#define MAL_RX_CTRL_INTR 0x0400
+
+#define MAL_TX_CTRL_READY 0x8000
+#define MAL_TX_CTRL_WRAP 0x4000
+#define MAL_TX_CTRL_CM 0x2000
+#define MAL_TX_CTRL_LAST 0x1000
+#define MAL_TX_CTRL_INTR 0x0400
+
+struct mal_commac_ops {
+ void (*txeob) (void *dev, u32 chanmask);
+ void (*txde) (void *dev, u32 chanmask);
+ void (*rxeob) (void *dev, u32 chanmask);
+ void (*rxde) (void *dev, u32 chanmask);
+};
+
+struct mal_commac {
+ struct mal_commac_ops *ops;
+ void *dev;
+ u32 tx_chan_mask, rx_chan_mask;
+ struct list_head list;
+};
+
+struct ibm_ocp_mal {
+ int dcrbase;
+
+ struct list_head commac;
+ u32 tx_chan_mask, rx_chan_mask;
+
+ dma_addr_t tx_phys_addr;
+ struct mal_descriptor *tx_virt_addr;
+
+ dma_addr_t rx_phys_addr;
+ struct mal_descriptor *rx_virt_addr;
+};
+
+#define GET_MAL_STANZA(base,dcrn) \
+ case base: \
+ x = mfdcr(dcrn(base)); \
+ break;
+
+#define SET_MAL_STANZA(base,dcrn, val) \
+ case base: \
+ mtdcr(dcrn(base), (val)); \
+ break;
+
+#define GET_MAL0_STANZA(dcrn) GET_MAL_STANZA(DCRN_MAL_BASE,dcrn)
+#define SET_MAL0_STANZA(dcrn,val) SET_MAL_STANZA(DCRN_MAL_BASE,dcrn,val)
+
+#ifdef DCRN_MAL1_BASE
+#define GET_MAL1_STANZA(dcrn) GET_MAL_STANZA(DCRN_MAL1_BASE,dcrn)
+#define SET_MAL1_STANZA(dcrn,val) SET_MAL_STANZA(DCRN_MAL1_BASE,dcrn,val)
+#else /* ! DCRN_MAL1_BASE */
+#define GET_MAL1_STANZA(dcrn)
+#define SET_MAL1_STANZA(dcrn,val)
+#endif
+
+#define get_mal_dcrn(mal, dcrn) ({ \
+ u32 x; \
+ switch ((mal)->dcrbase) { \
+ GET_MAL0_STANZA(dcrn) \
+ GET_MAL1_STANZA(dcrn) \
+ default: \
+ BUG(); \
+ } \
+x; })
+
+#define set_mal_dcrn(mal, dcrn, val) do { \
+ switch ((mal)->dcrbase) { \
+ SET_MAL0_STANZA(dcrn,val) \
+ SET_MAL1_STANZA(dcrn,val) \
+ default: \
+ BUG(); \
+ } } while (0)
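+
+/* Expansion sketch (assuming the DCRN_* names are themselves
+ * base-parameterised macros from the platform headers, as the stanzas
+ * imply): get_mal_dcrn(mal, DCRN_MALESR) becomes a switch on
+ * mal->dcrbase whose DCRN_MAL_BASE case does
+ *
+ *	x = mfdcr(DCRN_MALESR(DCRN_MAL_BASE));
+ *
+ * DCR numbers are encoded as immediates in the mfdcr/mtdcr
+ * instructions on PPC4xx, so they cannot be computed at run time;
+ * the stanza macros enumerate every possibility instead.
+ */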
+
+static inline void mal_enable_tx_channels(struct ibm_ocp_mal *mal, u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALTXCASR,
+ get_mal_dcrn(mal, DCRN_MALTXCASR) | chanmask);
+}
+
+static inline void mal_disable_tx_channels(struct ibm_ocp_mal *mal,
+ u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALTXCARR, chanmask);
+}
+
+static inline void mal_enable_rx_channels(struct ibm_ocp_mal *mal, u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALRXCASR,
+ get_mal_dcrn(mal, DCRN_MALRXCASR) | chanmask);
+}
+
+static inline void mal_disable_rx_channels(struct ibm_ocp_mal *mal,
+ u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALRXCARR, chanmask);
+}
+
+extern int mal_register_commac(struct ibm_ocp_mal *mal,
+ struct mal_commac *commac);
+extern int mal_unregister_commac(struct ibm_ocp_mal *mal,
+ struct mal_commac *commac);
+
+extern int mal_set_rcbs(struct ibm_ocp_mal *mal, int channel,
+ unsigned long size);
+
+#endif /* _IBM_EMAC_MAL_H */
--- /dev/null
+/*
+ * ibm_ocp_phy.c
+ *
+ * PHY drivers for the IBM OCP Ethernet driver. Borrowed
+ * from sungem_phy.c, though I only kept the generic MII
+ * driver for now.
+ *
+ * This file should be shared with other drivers or eventually
+ * merged as the "low level" part of miilib
+ *
+ * (c) 2003, Benjamin Herrenschmidt (benh@kernel.crashing.org)
+ *
+ */
+
+#include <linux/config.h>
+
+#include <linux/module.h>
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/types.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/delay.h>
+
+#include "ibm_emac_phy.h"
+
+static int reset_one_mii_phy(struct mii_phy *phy, int phy_id)
+{
+ u16 val;
+ int limit = 10000;
+
+ val = __phy_read(phy, phy_id, MII_BMCR);
+ val &= ~BMCR_ISOLATE;
+ val |= BMCR_RESET;
+ __phy_write(phy, phy_id, MII_BMCR, val);
+
+ udelay(100);
+
+ while (limit--) {
+ val = __phy_read(phy, phy_id, MII_BMCR);
+ if ((val & BMCR_RESET) == 0)
+ break;
+ udelay(10);
+ }
+ if ((val & BMCR_ISOLATE) && limit > 0)
+ __phy_write(phy, phy_id, MII_BMCR, val & ~BMCR_ISOLATE);
+
+ return (limit <= 0);
+}
+
+static int cis8201_init(struct mii_phy *phy)
+{
+ u16 epcr;
+
+ epcr = phy_read(phy, MII_CIS8201_EPCR);
+ epcr &= ~EPCR_MODE_MASK;
+
+ switch (phy->mode) {
+ case PHY_MODE_TBI:
+ epcr |= EPCR_TBI_MODE;
+ break;
+ case PHY_MODE_RTBI:
+ epcr |= EPCR_RTBI_MODE;
+ break;
+ case PHY_MODE_GMII:
+ epcr |= EPCR_GMII_MODE;
+ break;
+ case PHY_MODE_RGMII:
+ default:
+ epcr |= EPCR_RGMII_MODE;
+ }
+
+ phy_write(phy, MII_CIS8201_EPCR, epcr);
+
+ return 0;
+}
+
+static int genmii_setup_aneg(struct mii_phy *phy, u32 advertise)
+{
+ u16 ctl, adv;
+
+ phy->autoneg = 1;
+ phy->speed = SPEED_10;
+ phy->duplex = DUPLEX_HALF;
+ phy->pause = 0;
+ phy->advertising = advertise;
+
+ /* Setup standard advertise */
+ adv = phy_read(phy, MII_ADVERTISE);
+ adv &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
+ if (advertise & ADVERTISED_10baseT_Half)
+ adv |= ADVERTISE_10HALF;
+ if (advertise & ADVERTISED_10baseT_Full)
+ adv |= ADVERTISE_10FULL;
+ if (advertise & ADVERTISED_100baseT_Half)
+ adv |= ADVERTISE_100HALF;
+ if (advertise & ADVERTISED_100baseT_Full)
+ adv |= ADVERTISE_100FULL;
+ phy_write(phy, MII_ADVERTISE, adv);
+
+ /* Start/Restart aneg */
+ ctl = phy_read(phy, MII_BMCR);
+ ctl |= (BMCR_ANENABLE | BMCR_ANRESTART);
+ phy_write(phy, MII_BMCR, ctl);
+
+ return 0;
+}
+
+static int genmii_setup_forced(struct mii_phy *phy, int speed, int fd)
+{
+ u16 ctl;
+
+ phy->autoneg = 0;
+ phy->speed = speed;
+ phy->duplex = fd;
+ phy->pause = 0;
+
+ ctl = phy_read(phy, MII_BMCR);
+ ctl &= ~(BMCR_FULLDPLX | BMCR_SPEED100 | BMCR_ANENABLE);
+
+ /* First reset the PHY */
+ phy_write(phy, MII_BMCR, ctl | BMCR_RESET);
+
+ /* Select speed & duplex */
+ switch (speed) {
+ case SPEED_10:
+ break;
+ case SPEED_100:
+ ctl |= BMCR_SPEED100;
+ break;
+ case SPEED_1000:
+ default:
+ return -EINVAL;
+ }
+ if (fd == DUPLEX_FULL)
+ ctl |= BMCR_FULLDPLX;
+ phy_write(phy, MII_BMCR, ctl);
+
+ return 0;
+}
+
+static int genmii_poll_link(struct mii_phy *phy)
+{
+ u16 status;
+
+ (void)phy_read(phy, MII_BMSR);
+ status = phy_read(phy, MII_BMSR);
+ if ((status & BMSR_LSTATUS) == 0)
+ return 0;
+ if (phy->autoneg && !(status & BMSR_ANEGCOMPLETE))
+ return 0;
+ return 1;
+}
+
+#define MII_CIS8201_ACSR 0x1c
+#define ACSR_DUPLEX_STATUS 0x0020
+#define ACSR_SPEED_1000BASET 0x0010
+#define ACSR_SPEED_100BASET 0x0008
+
+static int cis8201_read_link(struct mii_phy *phy)
+{
+ u16 acsr;
+
+ if (phy->autoneg) {
+ acsr = phy_read(phy, MII_CIS8201_ACSR);
+
+ if (acsr & ACSR_DUPLEX_STATUS)
+ phy->duplex = DUPLEX_FULL;
+ else
+ phy->duplex = DUPLEX_HALF;
+ if (acsr & ACSR_SPEED_1000BASET) {
+ phy->speed = SPEED_1000;
+ } else if (acsr & ACSR_SPEED_100BASET)
+ phy->speed = SPEED_100;
+ else
+ phy->speed = SPEED_10;
+ phy->pause = 0;
+ }
+ /* On non-aneg, we assume what we put in BMCR is the speed,
+ * though magic-aneg shouldn't prevent this case from occurring
+ */
+
+ return 0;
+}
+
+static int genmii_read_link(struct mii_phy *phy)
+{
+ u16 lpa;
+
+ if (phy->autoneg) {
+ lpa = phy_read(phy, MII_LPA);
+
+ if (lpa & (LPA_10FULL | LPA_100FULL))
+ phy->duplex = DUPLEX_FULL;
+ else
+ phy->duplex = DUPLEX_HALF;
+ if (lpa & (LPA_100FULL | LPA_100HALF))
+ phy->speed = SPEED_100;
+ else
+ phy->speed = SPEED_10;
+ phy->pause = 0;
+ }
+ /* On non-aneg, we assume what we put in BMCR is the speed,
+ * though magic-aneg shouldn't prevent this case from occurring
+ */
+
+ return 0;
+}
+
+#define MII_BASIC_FEATURES (SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | \
+ SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | \
+ SUPPORTED_Autoneg | SUPPORTED_TP | SUPPORTED_MII)
+#define MII_GBIT_FEATURES (MII_BASIC_FEATURES | \
+ SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full)
+
+/* CIS8201 phy ops */
+static struct mii_phy_ops cis8201_phy_ops = {
+ .init = cis8201_init,
+ .setup_aneg = genmii_setup_aneg,
+ .setup_forced = genmii_setup_forced,
+ .poll_link = genmii_poll_link,
+ .read_link = cis8201_read_link
+};
+
+/* Generic implementation for most 10/100 PHYs */
+static struct mii_phy_ops generic_phy_ops = {
+ .setup_aneg = genmii_setup_aneg,
+ .setup_forced = genmii_setup_forced,
+ .poll_link = genmii_poll_link,
+ .read_link = genmii_read_link
+};
+
+static struct mii_phy_def cis8201_phy_def = {
+ .phy_id = 0x000fc410,
+ .phy_id_mask = 0x000ffff0,
+ .name = "CIS8201 Gigabit Ethernet",
+ .features = MII_GBIT_FEATURES,
+ .magic_aneg = 0,
+ .ops = &cis8201_phy_ops
+};
+
+static struct mii_phy_def genmii_phy_def = {
+ .phy_id = 0x00000000,
+ .phy_id_mask = 0x00000000,
+ .name = "Generic MII",
+ .features = MII_BASIC_FEATURES,
+ .magic_aneg = 0,
+ .ops = &generic_phy_ops
+};
+
+static struct mii_phy_def *mii_phy_table[] = {
+ &cis8201_phy_def,
+ &genmii_phy_def,
+ NULL
+};
+
+int mii_phy_probe(struct mii_phy *phy, int mii_id)
+{
+ int rc;
+ u32 id;
+ struct mii_phy_def *def;
+ int i;
+
+ phy->autoneg = 0;
+ phy->advertising = 0;
+ phy->mii_id = mii_id;
+ phy->speed = 0;
+ phy->duplex = 0;
+ phy->pause = 0;
+
+ /* Take PHY out of isolate mode and reset it. */
+ rc = reset_one_mii_phy(phy, mii_id);
+ if (rc)
+ return -ENODEV;
+
+ /* Read ID and find matching entry */
+ id = (phy_read(phy, MII_PHYSID1) << 16 | phy_read(phy, MII_PHYSID2))
+ & 0xfffffff0;
+ for (i = 0; (def = mii_phy_table[i]) != NULL; i++)
+ if ((id & def->phy_id_mask) == def->phy_id)
+ break;
+ /* Should never be NULL (we have a generic entry), but... */
+ if (def == NULL)
+ return -ENODEV;
+
+ phy->def = def;
+
+ /* Setup default advertising */
+ phy->advertising = def->features;
+
+ return 0;
+}
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Defines for the IBM RGMII bridge
+ *
+ * Based on ocp_zmii.h/ibm_emac_zmii.h
+ * Armin Kuster akuster@mvista.com
+ *
+ * Copyright 2004 MontaVista Software, Inc.
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_RGMII_H_
+#define _IBM_EMAC_RGMII_H_
+
+#include <linux/config.h>
+
+/* RGMII bridge */
+typedef struct rgmii_regs {
+ u32 fer; /* Function enable register */
+ u32 ssr; /* Speed select register */
+} rgmii_t;
+
+#define RGMII_INPUTS 4
+
+/* RGMII device */
+struct ibm_ocp_rgmii {
+ struct rgmii_regs *base;
+ int mode[RGMII_INPUTS];
+ int users; /* number of EMACs using this RGMII bridge */
+};
+
+/* Functional Enable Reg */
+#define RGMII_FER_MASK(x) (0x00000007 << (4*(x)))
+#define RGMII_RTBI 0x00000004
+#define RGMII_RGMII 0x00000005
+#define RGMII_TBI 0x00000006
+#define RGMII_GMII 0x00000007
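+
+/* Example (derived from the defines above): RGMII_FER_MASK(2) ==
+ * 0x00000700, isolating input 2's 3-bit mode field; presumably the
+ * bridge is configured by writing one of RGMII_RTBI..RGMII_GMII into
+ * that field.
+ */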
+
+/* Speed Selection reg */
+
+#define RGMII_SP2_100 0x00000002
+#define RGMII_SP2_1000 0x00000004
+#define RGMII_SP3_100 0x00000200
+#define RGMII_SP3_1000 0x00000400
+
+#define RGMII_MII2_SPDMASK 0x00000007
+#define RGMII_MII3_SPDMASK 0x00000700
+
+#define RGMII_MII2_100MB (RGMII_SP2_100 & ~RGMII_SP2_1000)
+#define RGMII_MII2_1000MB (RGMII_SP2_1000 & ~RGMII_SP2_100)
+#define RGMII_MII2_10MB (~(RGMII_SP2_100 | RGMII_SP2_1000))
+#define RGMII_MII3_100MB (RGMII_SP3_100 & ~RGMII_SP3_1000)
+#define RGMII_MII3_1000MB (RGMII_SP3_1000 & ~RGMII_SP3_100)
+#define RGMII_MII3_10MB (~(RGMII_SP3_100 | RGMII_SP3_1000))
+
+#define RTBI 0
+#define RGMII 1
+#define TBI 2
+#define GMII 3
+
+#endif /* _IBM_EMAC_RGMII_H_ */
--- /dev/null
+/*
+ * Defines for the IBM TAH
+ *
+ * Copyright 2004 MontaVista Software, Inc.
+ * Matt Porter <mporter@kernel.crashing.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_TAH_H
+#define _IBM_EMAC_TAH_H
+
+/* TAH */
+typedef struct tah_regs {
+ u32 tah_revid;
+ u32 pad[3];
+ u32 tah_mr;
+ u32 tah_ssr0;
+ u32 tah_ssr1;
+ u32 tah_ssr2;
+ u32 tah_ssr3;
+ u32 tah_ssr4;
+ u32 tah_ssr5;
+ u32 tah_tsr;
+} tah_t;
+
+/* TAH engine */
+#define TAH_MR_CVR 0x80000000
+#define TAH_MR_SR 0x40000000
+#define TAH_MR_ST_256 0x01000000
+#define TAH_MR_ST_512 0x02000000
+#define TAH_MR_ST_768 0x03000000
+#define TAH_MR_ST_1024 0x04000000
+#define TAH_MR_ST_1280 0x05000000
+#define TAH_MR_ST_1536 0x06000000
+#define TAH_MR_TFS_16KB 0x00000000
+#define TAH_MR_TFS_2KB 0x00200000
+#define TAH_MR_TFS_4KB 0x00400000
+#define TAH_MR_TFS_6KB 0x00600000
+#define TAH_MR_TFS_8KB 0x00800000
+#define TAH_MR_TFS_10KB 0x00a00000
+#define TAH_MR_DTFP 0x00100000
+#define TAH_MR_DIG 0x00080000
+
+#endif /* _IBM_EMAC_TAH_H */
--- /dev/null
+/*
+ * ocp_zmii.h
+ *
+ * Defines for the IBM ZMII bridge
+ *
+ * Armin Kuster akuster@mvista.com
+ * Dec, 2001
+ *
+ * Copyright 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_ZMII_H_
+#define _IBM_EMAC_ZMII_H_
+
+#include <linux/config.h>
+
+/* ZMII bridge registers */
+struct zmii_regs {
+ u32 fer; /* Function enable reg */
+ u32 ssr; /* Speed select reg */
+ u32 smiirs; /* SMII status reg */
+};
+
+#define ZMII_INPUTS 4
+
+/* ZMII device */
+struct ibm_ocp_zmii {
+ struct zmii_regs *base;
+ int mode[ZMII_INPUTS];
+ int users; /* number of EMACs using this ZMII bridge */
+};
+
+/* Functional Enable Reg */
+
+#define ZMII_FER_MASK(x) (0xf0000000 >> (4*(x)))
+
+#define ZMII_MDI0 0x80000000
+#define ZMII_SMII0 0x40000000
+#define ZMII_RMII0 0x20000000
+#define ZMII_MII0 0x10000000
+#define ZMII_MDI1 0x08000000
+#define ZMII_SMII1 0x04000000
+#define ZMII_RMII1 0x02000000
+#define ZMII_MII1 0x01000000
+#define ZMII_MDI2 0x00800000
+#define ZMII_SMII2 0x00400000
+#define ZMII_RMII2 0x00200000
+#define ZMII_MII2 0x00100000
+#define ZMII_MDI3 0x00080000
+#define ZMII_SMII3 0x00040000
+#define ZMII_RMII3 0x00020000
+#define ZMII_MII3 0x00010000
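+
+/* Example (pure arithmetic on the defines above): ZMII_FER_MASK(1) ==
+ * 0x0f000000, which covers exactly EMAC1's four mode bits
+ * (ZMII_MDI1 | ZMII_SMII1 | ZMII_RMII1 | ZMII_MII1).
+ */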
+
+/* Speed Selection reg */
+
+#define ZMII_SCI0 0x40000000
+#define ZMII_FSS0 0x20000000
+#define ZMII_SP0 0x10000000
+#define ZMII_SCI1 0x04000000
+#define ZMII_FSS1 0x02000000
+#define ZMII_SP1 0x01000000
+#define ZMII_SCI2 0x00400000
+#define ZMII_FSS2 0x00200000
+#define ZMII_SP2 0x00100000
+#define ZMII_SCI3 0x00040000
+#define ZMII_FSS3 0x00020000
+#define ZMII_SP3 0x00010000
+
+#define ZMII_MII0_100MB ZMII_SP0
+#define ZMII_MII0_10MB ~ZMII_SP0
+#define ZMII_MII1_100MB ZMII_SP1
+#define ZMII_MII1_10MB ~ZMII_SP1
+#define ZMII_MII2_100MB ZMII_SP2
+#define ZMII_MII2_10MB ~ZMII_SP2
+#define ZMII_MII3_100MB ZMII_SP3
+#define ZMII_MII3_10MB ~ZMII_SP3
+
+/* SMII Status reg */
+
+#define ZMII_STS0 0xFF000000 /* EMAC0 smii status mask */
+#define ZMII_STS1 0x00FF0000 /* EMAC1 smii status mask */
+
+#define SMII 0
+#define RMII 1
+#define MII 2
+#define MDI 3
+
+#endif /* _IBM_EMAC_ZMII_H_ */
--- /dev/null
+/* ne-h8300.c: An NE2000 clone driver for Linux on the H8/300. */
+/*
+ original ne.c
+ Written 1992-94 by Donald Becker.
+
+ Copyright 1993 United States Government as represented by the
+ Director, National Security Agency.
+
+ This software may be used and distributed according to the terms
+ of the GNU General Public License, incorporated herein by reference.
+
+ The author may be reached as becker@scyld.com, or C/O
+ Scyld Computing Corporation, 410 Severn Ave., Suite 210, Annapolis MD 21403
+
+ H8/300 modified
+ Yoshinori Sato <ysato@users.sourceforge.jp>
+*/
+
+static const char version1[] =
+"ne-h8300.c:v1.00 2004/04/11 ysato\n";
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "8390.h"
+
+/* Some defines that people can play with if so inclined. */
+
+/* Do we perform extra sanity checks on stuff ? */
+/* #define NE_SANITY_CHECK */
+
+/* Do we implement the read before write bugfix ? */
+/* #define NE_RW_BUGFIX */
+
+/* Do we have a non std. amount of memory? (in units of 256 byte pages) */
+/* #define PACKETBUF_MEMSIZE 0x40 */
+
+/* A zero-terminated list of I/O addresses to be probed at boot. */
+
+/* ---- No user-serviceable parts below ---- */
+
+#define NE_BASE (dev->base_addr)
+#define NE_CMD 0x00
+#define NE_DATAPORT (ei_status.word16?0x20:0x10) /* NatSemi-defined port window offset. */
+#define NE_RESET (ei_status.word16?0x3f:0x1f) /* Issue a read to reset, a write to clear. */
+#define NE_IO_EXTENT (ei_status.word16?0x40:0x20)
+
+#define NESM_START_PG 0x40 /* First page of TX buffer */
+#define NESM_STOP_PG 0x80 /* Last page +1 of RX ring */
+
+static int ne_probe1(struct net_device *dev, int ioaddr);
+
+static int ne_open(struct net_device *dev);
+static int ne_close(struct net_device *dev);
+
+static void ne_reset_8390(struct net_device *dev);
+static void ne_get_8390_hdr(struct net_device *dev, struct e8390_pkt_hdr *hdr,
+ int ring_page);
+static void ne_block_input(struct net_device *dev, int count,
+ struct sk_buff *skb, int ring_offset);
+static void ne_block_output(struct net_device *dev, const int count,
+ const unsigned char *buf, const int start_page);
+
+
+static u32 reg_offset[16];
+
+static int __init init_reg_offset(struct net_device *dev,unsigned long base_addr)
+{
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+ int i;
+ unsigned char bus_width;
+
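+	/* Assumption from the H8/300H bus controller: ABWCR holds one
+	 * bus-width bit per 2MB external area and address bits 23..21
+	 * select the area, so this extracts the width bit for the
+	 * area the NIC is mapped into (0 => 16-bit bus).
+	 */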
+ bus_width = *(volatile unsigned char *)ABWCR;
+ bus_width &= 1 << ((base_addr >> 21) & 7);
+
+ for (i = 0; i < sizeof(reg_offset) / sizeof(u32); i++)
+ if (bus_width == 0)
+ reg_offset[i] = i * 2 + 1;
+ else
+ reg_offset[i] = i;
+
+ ei_local->reg_offset = reg_offset;
+ return 0;
+}
+
+static int __initdata h8300_ne_count = 0;
+#ifdef CONFIG_H8300H_H8MAX
+static unsigned long __initdata h8300_ne_base[] = { 0x800600 };
+static int h8300_ne_irq[] = {EXT_IRQ4};
+#endif
+#ifdef CONFIG_H8300H_AKI3068NET
+static unsigned long __initdata h8300_ne_base[] = { 0x200000 };
+static int h8300_ne_irq[] = {EXT_IRQ5};
+#endif
+
+static inline int init_dev(struct net_device *dev)
+{
+ if (h8300_ne_count < (sizeof(h8300_ne_base) / sizeof(unsigned long))) {
+ dev->base_addr = h8300_ne_base[h8300_ne_count];
+ dev->irq = h8300_ne_irq[h8300_ne_count];
+ h8300_ne_count++;
+ return 0;
+ } else
+ return -ENODEV;
+}
+
+/* Probe for various non-shared-memory ethercards.
+
+ NEx000-clone boards have a Station Address PROM (SAPROM) in the packet
+ buffer memory space. NE2000 clones have 0x57,0x57 in bytes 0x0e,0x0f of
+ the SAPROM, while other supposed NE2000 clones must be detected by their
+ SA prefix.
+
+ Reading the SAPROM from a word-wide card with the 8390 set in byte-wide
+ mode results in doubled values, which can be detected and compensated for.
+
+ The probe is also responsible for initializing the card and filling
+ in the 'dev' and 'ei_status' structures.
+
+ We use the minimum memory size for some ethercard product lines, iff we can't
+ distinguish models. You can increase the packet buffer size by setting
+ PACKETBUF_MEMSIZE. Reported Cabletron packet buffer locations are:
+ E1010 starts at 0x100 and ends at 0x2000.
+ E1010-x starts at 0x100 and ends at 0x8000. ("-x" means "more memory")
+ E2010 starts at 0x100 and ends at 0x4000.
+ E2010-x starts at 0x100 and ends at 0xffff. */
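+
+/* For reference, the doubled-value compensation mentioned above is
+ * done in the ISA ne.c driver roughly as
+ *
+ *	for (i = 0; i < 16; i++)
+ *		SA_prom[i] = SA_prom[i+i];
+ *
+ * this H8/300 port instead issues a dummy read per PROM byte in
+ * ne_probe1() below, so no halving pass is needed. */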
+
+static int __init do_ne_probe(struct net_device *dev)
+{
+ unsigned int base_addr = dev->base_addr;
+
+ SET_MODULE_OWNER(dev);
+
+ /* First check any supplied i/o locations. User knows best. <cough> */
+ if (base_addr > 0x1ff) /* Check a single specified location. */
+ return ne_probe1(dev, base_addr);
+ else if (base_addr != 0) /* Don't probe at all. */
+ return -ENXIO;
+
+ return -ENODEV;
+}
+
+static void cleanup_card(struct net_device *dev)
+{
+ free_irq(dev->irq, dev);
+ release_region(dev->base_addr, NE_IO_EXTENT);
+}
+
+struct net_device * __init ne_probe(int unit)
+{
+ struct net_device *dev = alloc_ei_netdev();
+ int err;
+
+ if (!dev)
+ return ERR_PTR(-ENOMEM);
+
+ if (init_dev(dev))
+ return ERR_PTR(-ENODEV);
+
+ sprintf(dev->name, "eth%d", unit);
+ netdev_boot_setup_check(dev);
+
+ err = init_reg_offset(dev, dev->base_addr);
+ if (err)
+ goto out;
+
+ err = do_ne_probe(dev);
+ if (err)
+ goto out;
+ err = register_netdev(dev);
+ if (err)
+ goto out1;
+ return dev;
+out1:
+ cleanup_card(dev);
+out:
+ free_netdev(dev);
+ return ERR_PTR(err);
+}
+
+static int __init ne_probe1(struct net_device *dev, int ioaddr)
+{
+ int i;
+ unsigned char SA_prom[16];
+ int wordlength = 2;
+ const char *name = NULL;
+ int start_page, stop_page;
+ int reg0, ret;
+ static unsigned version_printed;
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+ unsigned char bus_width;
+
+ if (!request_region(ioaddr, NE_IO_EXTENT, dev->name))
+ return -EBUSY;
+
+ reg0 = inb_p(ioaddr);
+ if (reg0 == 0xFF) {
+ ret = -ENODEV;
+ goto err_out;
+ }
+
+ /* Do a preliminary verification that we have a 8390. */
+ {
+ int regd;
+ outb_p(E8390_NODMA+E8390_PAGE1+E8390_STOP, ioaddr + E8390_CMD);
+ regd = inb_p(ioaddr + EI_SHIFT(0x0d));
+ outb_p(0xff, ioaddr + EI_SHIFT(0x0d));
+ outb_p(E8390_NODMA+E8390_PAGE0, ioaddr + E8390_CMD);
+ inb_p(ioaddr + EN0_COUNTER0); /* Clear the counter by reading. */
+ if (inb_p(ioaddr + EN0_COUNTER0) != 0) {
+ outb_p(reg0, ioaddr + EI_SHIFT(0));
+ outb_p(regd, ioaddr + EI_SHIFT(0x0d)); /* Restore the old values. */
+ ret = -ENODEV;
+ goto err_out;
+ }
+ }
+
+ if (ei_debug && version_printed++ == 0)
+ printk(KERN_INFO "%s", version1);
+
+ printk(KERN_INFO "NE*000 ethercard probe at %08x:", ioaddr);
+
+ /* Read the 16 bytes of station address PROM.
+ We must first initialize registers, similar to NS8390_init(eifdev, 0).
+ We can't reliably read the SAPROM address without this.
+ (I learned the hard way!). */
+ {
+ struct {unsigned char value, offset; } program_seq[] =
+ {
+ {E8390_NODMA+E8390_PAGE0+E8390_STOP, E8390_CMD}, /* Select page 0*/
+ {0x48, EN0_DCFG}, /* Set byte-wide (0x48) access. */
+ {0x00, EN0_RCNTLO}, /* Clear the count regs. */
+ {0x00, EN0_RCNTHI},
+ {0x00, EN0_IMR}, /* Mask completion irq. */
+ {0xFF, EN0_ISR},
+ {E8390_RXOFF, EN0_RXCR}, /* 0x20 Set to monitor */
+ {E8390_TXOFF, EN0_TXCR}, /* 0x02 and loopback mode. */
+ {32, EN0_RCNTLO},
+ {0x00, EN0_RCNTHI},
+ {0x00, EN0_RSARLO}, /* DMA starting at 0x0000. */
+ {0x00, EN0_RSARHI},
+ {E8390_RREAD+E8390_START, E8390_CMD},
+ };
+
+ for (i = 0; i < sizeof(program_seq)/sizeof(program_seq[0]); i++)
+ outb_p(program_seq[i].value, ioaddr + program_seq[i].offset);
+
+ }
+ bus_width = *(volatile unsigned char *)ABWCR;
+ bus_width &= 1 << ((ioaddr >> 21) & 7);
+ ei_status.word16 = (bus_width == 0); /* temporary setting */
+ for(i = 0; i < 16 /*sizeof(SA_prom)*/; i++) {
+ SA_prom[i] = inb_p(ioaddr + NE_DATAPORT);
+ inb_p(ioaddr + NE_DATAPORT); /* dummy read */
+ }
+
+ start_page = NESM_START_PG;
+ stop_page = NESM_STOP_PG;
+
+ if (bus_width)
+ wordlength = 1;
+ else
+ outb_p(0x49, ioaddr + EN0_DCFG);
+
+ /* Set up the rest of the parameters. */
+ name = (wordlength == 2) ? "NE2000" : "NE1000";
+
+ if (!dev->irq) {
+ printk(" failed to detect IRQ line.\n");
+ ret = -EAGAIN;
+ goto err_out;
+ }
+
+ /* Snarf the interrupt now. There's no point in waiting since we cannot
+ share and the board will usually be enabled. */
+ ret = request_irq(dev->irq, ei_interrupt, 0, name, dev);
+ if (ret) {
+ printk (" unable to get IRQ %d (errno=%d).\n", dev->irq, ret);
+ goto err_out;
+ }
+
+ dev->base_addr = ioaddr;
+
+ for(i = 0; i < ETHER_ADDR_LEN; i++) {
+ printk(" %2.2x", SA_prom[i]);
+ dev->dev_addr[i] = SA_prom[i];
+ }
+
+ printk("\n%s: %s found at %#x, using IRQ %d.\n",
+ dev->name, name, ioaddr, dev->irq);
+
+ ei_status.name = name;
+ ei_status.tx_start_page = start_page;
+ ei_status.stop_page = stop_page;
+ ei_status.word16 = (wordlength == 2);
+
+ ei_status.rx_start_page = start_page + TX_PAGES;
+#ifdef PACKETBUF_MEMSIZE
+ /* Allow the packet buffer size to be overridden by know-it-alls. */
+ ei_status.stop_page = ei_status.tx_start_page + PACKETBUF_MEMSIZE;
+#endif
+
+ ei_status.reset_8390 = &ne_reset_8390;
+ ei_status.block_input = &ne_block_input;
+ ei_status.block_output = &ne_block_output;
+ ei_status.get_8390_hdr = &ne_get_8390_hdr;
+ ei_status.priv = 0;
+ dev->open = &ne_open;
+ dev->stop = &ne_close;
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ dev->poll_controller = ei_poll;
+#endif
+ NS8390_init(dev, 0);
+ return 0;
+
+err_out:
+ release_region(ioaddr, NE_IO_EXTENT);
+ return ret;
+}
+
+static int ne_open(struct net_device *dev)
+{
+ ei_open(dev);
+ return 0;
+}
+
+static int ne_close(struct net_device *dev)
+{
+ if (ei_debug > 1)
+ printk(KERN_DEBUG "%s: Shutting down ethercard.\n", dev->name);
+ ei_close(dev);
+ return 0;
+}
+
+/* Hard reset the card. This used to pause for the same period that a
+ 8390 reset command required, but that shouldn't be necessary. */
+
+static void ne_reset_8390(struct net_device *dev)
+{
+ unsigned long reset_start_time = jiffies;
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+
+ if (ei_debug > 1)
+ printk(KERN_DEBUG "resetting the 8390 t=%ld...", jiffies);
+
+ /* DON'T change these to inb_p/outb_p or reset will fail on clones. */
+ outb(inb(NE_BASE + NE_RESET), NE_BASE + NE_RESET);
+
+ ei_status.txing = 0;
+ ei_status.dmaing = 0;
+
+ /* This check _should_not_ be necessary, omit eventually. */
+ while ((inb_p(NE_BASE+EN0_ISR) & ENISR_RESET) == 0)
+ if (jiffies - reset_start_time > 2*HZ/100) {
+ printk(KERN_WARNING "%s: ne_reset_8390() did not complete.\n", dev->name);
+ break;
+ }
+ outb_p(ENISR_RESET, NE_BASE + EN0_ISR); /* Ack intr. */
+}
+
+/* Grab the 8390 specific header. Similar to the block_input routine, but
+ we don't need to be concerned with ring wrap as the header will be at
+ the start of a page, so we optimize accordingly. */
+
+static void ne_get_8390_hdr(struct net_device *dev, struct e8390_pkt_hdr *hdr, int ring_page)
+{
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+ /* This *shouldn't* happen. If it does, it's the last thing you'll see */
+
+ if (ei_status.dmaing)
+ {
+ printk(KERN_EMERG "%s: DMAing conflict in ne_get_8390_hdr "
+ "[DMAstat:%d][irqlock:%d].\n",
+ dev->name, ei_status.dmaing, ei_status.irqlock);
+ return;
+ }
+
+ ei_status.dmaing |= 0x01;
+ outb_p(E8390_NODMA+E8390_PAGE0+E8390_START, NE_BASE + NE_CMD);
+ outb_p(sizeof(struct e8390_pkt_hdr), NE_BASE + EN0_RCNTLO);
+ outb_p(0, NE_BASE + EN0_RCNTHI);
+ outb_p(0, NE_BASE + EN0_RSARLO); /* On page boundary */
+ outb_p(ring_page, NE_BASE + EN0_RSARHI);
+ outb_p(E8390_RREAD+E8390_START, NE_BASE + NE_CMD);
+
+ if (ei_status.word16) {
+ int len;
+ unsigned short *p = (unsigned short *)hdr;
+ for (len = sizeof(struct e8390_pkt_hdr)>>1; len > 0; len--)
+ *p++ = inw(NE_BASE + NE_DATAPORT);
+ } else
+ insb(NE_BASE + NE_DATAPORT, hdr, sizeof(struct e8390_pkt_hdr));
+
+ outb_p(ENISR_RDC, NE_BASE + EN0_ISR); /* Ack intr. */
+ ei_status.dmaing &= ~0x01;
+
+ le16_to_cpus(&hdr->count);
+}
+
+/* Block input and output, similar to the Crynwr packet driver. If you
+ are porting to a new ethercard, look at the packet driver source for hints.
+ The NEx000 doesn't share the on-board packet memory -- you have to put
+ the packet out through the "remote DMA" dataport using outb. */
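+
+/* A summary of the remote DMA sequence the routines below follow (as
+ read from the code, not from the chip documentation): program the
+ transfer length into EN0_RCNTLO/HI and the ring address into
+ EN0_RSARLO/HI, issue E8390_RREAD or E8390_RWRITE, move the data
+ through the single NE_DATAPORT register, then ack ENISR_RDC. */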
+
+static void ne_block_input(struct net_device *dev, int count, struct sk_buff *skb, int ring_offset)
+{
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+#ifdef NE_SANITY_CHECK
+ int xfer_count = count;
+#endif
+ char *buf = skb->data;
+
+ /* This *shouldn't* happen. If it does, it's the last thing you'll see */
+ if (ei_status.dmaing)
+ {
+ printk(KERN_EMERG "%s: DMAing conflict in ne_block_input "
+ "[DMAstat:%d][irqlock:%d].\n",
+ dev->name, ei_status.dmaing, ei_status.irqlock);
+ return;
+ }
+ ei_status.dmaing |= 0x01;
+ outb_p(E8390_NODMA+E8390_PAGE0+E8390_START, NE_BASE + NE_CMD);
+ outb_p(count & 0xff, NE_BASE + EN0_RCNTLO);
+ outb_p(count >> 8, NE_BASE + EN0_RCNTHI);
+ outb_p(ring_offset & 0xff, NE_BASE + EN0_RSARLO);
+ outb_p(ring_offset >> 8, NE_BASE + EN0_RSARHI);
+ outb_p(E8390_RREAD+E8390_START, NE_BASE + NE_CMD);
+ if (ei_status.word16)
+ {
+ int len;
+ unsigned short *p = (unsigned short *)buf;
+ for (len = count>>1; len > 0; len--)
+ *p++ = inw(NE_BASE + NE_DATAPORT);
+ if (count & 0x01)
+ {
+ buf[count-1] = inb(NE_BASE + NE_DATAPORT);
+#ifdef NE_SANITY_CHECK
+ xfer_count++;
+#endif
+ }
+ } else {
+ insb(NE_BASE + NE_DATAPORT, buf, count);
+ }
+
+#ifdef NE_SANITY_CHECK
+ /* This was for the ALPHA version only, but enough people have
+ been encountering problems so it is still here. If you see
+ this message you either 1) have a slightly incompatible clone
+ or 2) have noise/speed problems with your bus. */
+
+ if (ei_debug > 1)
+ {
+ /* DMA termination address check... */
+ int addr, tries = 20;
+ do {
+ /* DON'T check for 'inb_p(EN0_ISR) & ENISR_RDC' here
+ -- it's broken for Rx on some cards! */
+ int high = inb_p(NE_BASE + EN0_RSARHI);
+ int low = inb_p(NE_BASE + EN0_RSARLO);
+ addr = (high << 8) + low;
+ if (((ring_offset + xfer_count) & 0xff) == low)
+ break;
+ } while (--tries > 0);
+ if (tries <= 0)
+ printk(KERN_WARNING "%s: RX transfer address mismatch,"
+ "%#4.4x (expected) vs. %#4.4x (actual).\n",
+ dev->name, ring_offset + xfer_count, addr);
+ }
+#endif
+ outb_p(ENISR_RDC, NE_BASE + EN0_ISR); /* Ack intr. */
+ ei_status.dmaing &= ~0x01;
+}
+
+static void ne_block_output(struct net_device *dev, int count,
+ const unsigned char *buf, const int start_page)
+{
+ struct ei_device *ei_local = (struct ei_device *) netdev_priv(dev);
+ unsigned long dma_start;
+#ifdef NE_SANITY_CHECK
+ int retries = 0;
+#endif
+
+ /* Round the count up for word writes. Do we need to do this?
+ What effect will an odd byte count have on the 8390?
+ I should check someday. */
+
+ if (ei_status.word16 && (count & 0x01))
+ count++;
+
+ /* This *shouldn't* happen. If it does, it's the last thing you'll see */
+ if (ei_status.dmaing)
+ {
+ printk(KERN_EMERG "%s: DMAing conflict in ne_block_output."
+ "[DMAstat:%d][irqlock:%d]\n",
+ dev->name, ei_status.dmaing, ei_status.irqlock);
+ return;
+ }
+ ei_status.dmaing |= 0x01;
+ /* We should already be in page 0, but to be safe... */
+ outb_p(E8390_PAGE0+E8390_START+E8390_NODMA, NE_BASE + NE_CMD);
+
+#ifdef NE_SANITY_CHECK
+retry:
+#endif
+
+#ifdef NE8390_RW_BUGFIX
+ /* Handle the read-before-write bug the same way as the
+ Crynwr packet driver -- the NatSemi method doesn't work.
+ Actually this doesn't always work either, but if you have
+ problems with your NEx000 this is better than nothing! */
+
+ outb_p(0x42, NE_BASE + EN0_RCNTLO);
+ outb_p(0x00, NE_BASE + EN0_RCNTHI);
+ outb_p(0x42, NE_BASE + EN0_RSARLO);
+ outb_p(0x00, NE_BASE + EN0_RSARHI);
+ outb_p(E8390_RREAD+E8390_START, NE_BASE + NE_CMD);
+ /* Make certain that the dummy read has occurred. */
+ udelay(6);
+#endif
+
+ outb_p(ENISR_RDC, NE_BASE + EN0_ISR);
+
+ /* Now the normal output. */
+ outb_p(count & 0xff, NE_BASE + EN0_RCNTLO);
+ outb_p(count >> 8, NE_BASE + EN0_RCNTHI);
+ outb_p(0x00, NE_BASE + EN0_RSARLO);
+ outb_p(start_page, NE_BASE + EN0_RSARHI);
+
+ outb_p(E8390_RWRITE+E8390_START, NE_BASE + NE_CMD);
+ if (ei_status.word16) {
+ int len;
+ unsigned short *p = (unsigned short *)buf;
+ for (len = count>>1; len > 0; len--)
+ outw(*p++, NE_BASE + NE_DATAPORT);
+ } else {
+ outsb(NE_BASE + NE_DATAPORT, buf, count);
+ }
+
+ dma_start = jiffies;
+
+#ifdef NE_SANITY_CHECK
+ /* This was for the ALPHA version only, but enough people have
+ been encountering problems so it is still here. */
+
+ if (ei_debug > 1)
+ {
+ /* DMA termination address check... */
+ int addr, tries = 20;
+ do {
+ int high = inb_p(NE_BASE + EN0_RSARHI);
+ int low = inb_p(NE_BASE + EN0_RSARLO);
+ addr = (high << 8) + low;
+ if ((start_page << 8) + count == addr)
+ break;
+ } while (--tries > 0);
+
+ if (tries <= 0)
+ {
+ printk(KERN_WARNING "%s: Tx packet transfer address mismatch,"
+ "%#4.4x (expected) vs. %#4.4x (actual).\n",
+ dev->name, (start_page << 8) + count, addr);
+ if (retries++ == 0)
+ goto retry;
+ }
+ }
+#endif
+
+ while ((inb_p(NE_BASE + EN0_ISR) & ENISR_RDC) == 0)
+ if (jiffies - dma_start > 2*HZ/100) { /* 20ms */
+ printk(KERN_WARNING "%s: timeout waiting for Tx RDC.\n", dev->name);
+ ne_reset_8390(dev);
+ NS8390_init(dev,1);
+ break;
+ }
+
+ outb_p(ENISR_RDC, NE_BASE + EN0_ISR); /* Ack intr. */
+ ei_status.dmaing &= ~0x01;
+ return;
+}
+
+\f
+#ifdef MODULE
+#define MAX_NE_CARDS 1 /* Max number of NE cards per module */
+static struct net_device *dev_ne[MAX_NE_CARDS];
+static int io[MAX_NE_CARDS];
+static int irq[MAX_NE_CARDS];
+static int bad[MAX_NE_CARDS]; /* 0xbad = bad sig or no reset ack */
+
+MODULE_PARM(io, "1-" __MODULE_STRING(MAX_NE_CARDS) "i");
+MODULE_PARM(irq, "1-" __MODULE_STRING(MAX_NE_CARDS) "i");
+MODULE_PARM(bad, "1-" __MODULE_STRING(MAX_NE_CARDS) "i");
+MODULE_PARM_DESC(io, "I/O base address(es)");
+MODULE_PARM_DESC(irq, "IRQ number(s)");
+MODULE_DESCRIPTION("H8/300 NE2000 Ethernet driver");
+MODULE_LICENSE("GPL");
+
+/* This is set up so that no ISA autoprobe takes place. We can't guarantee
+that the ne2k probe is the last 8390 based probe to take place (as it
+is at boot) and so the probe will get confused by any other 8390 cards.
+ISA device autoprobes on a running machine are not recommended anyway. */
+
+int init_module(void)
+{
+ int this_dev, found = 0;
+ int err;
+
+ for (this_dev = 0; this_dev < MAX_NE_CARDS; this_dev++) {
+ struct net_device *dev = alloc_ei_netdev();
+ if (!dev)
+ break;
+ if (io[this_dev]) {
+ dev->irq = irq[this_dev];
+ dev->mem_end = bad[this_dev];
+ dev->base_addr = io[this_dev];
+ } else {
+ dev->base_addr = h8300_ne_base[this_dev];
+ dev->irq = h8300_ne_irq[this_dev];
+ }
+ err = init_reg_offset(dev, dev->base_addr);
+ if (!err) {
+ if (do_ne_probe(dev) == 0) {
+ if (register_netdev(dev) == 0) {
+ dev_ne[found++] = dev;
+ continue;
+ }
+ cleanup_card(dev);
+ }
+ }
+ free_netdev(dev);
+ if (found)
+ break;
+ if (io[this_dev] != 0)
+ printk(KERN_WARNING "ne.c: No NE*000 card found at i/o = %#x\n", dev->base_addr);
+ else
+ printk(KERN_NOTICE "ne.c: You must supply \"io=0xNNN\" value(s) for ISA cards.\n");
+ return -ENXIO;
+ }
+ if (found)
+ return 0;
+ return -ENODEV;
+}
+
+void cleanup_module(void)
+{
+ int this_dev;
+
+ for (this_dev = 0; this_dev < MAX_NE_CARDS; this_dev++) {
+ struct net_device *dev = dev_ne[this_dev];
+ if (dev) {
+ unregister_netdev(dev);
+ cleanup_card(dev);
+ free_netdev(dev);
+ }
+ }
+}
+#endif /* MODULE */
--- /dev/null
+/*
+ * smc91x.c
+ * This is a driver for SMSC's 91C9x/91C1xx single-chip Ethernet devices.
+ *
+ * Copyright (C) 1996 by Erik Stahlman
+ * Copyright (C) 2001 Standard Microsystems Corporation
+ * Developed by Simple Network Magic Corporation
+ * Copyright (C) 2003 Monta Vista Software, Inc.
+ * Unified SMC91x driver by Nicolas Pitre
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Arguments:
+ * io = for the base address
+ * irq = for the IRQ
+ * nowait = 0 for normal wait states, 1 eliminates additional wait states
+ *
+ * original author:
+ * Erik Stahlman <erik@vt.edu>
+ *
+ * hardware multicast code:
+ * Peter Cammaert <pc@denkart.be>
+ *
+ * contributors:
+ * Daris A Nevil <dnevil@snmc.com>
+ * Nicolas Pitre <nico@cam.org>
+ * Russell King <rmk@arm.linux.org.uk>
+ *
+ * History:
+ * 08/20/00 Arnaldo Melo fix kfree(skb) in smc_hardware_send_packet
+ * 12/15/00 Christian Jullien fix "Warning: kfree_skb on hard IRQ"
+ * 03/16/01 Daris A Nevil modified smc9194.c for use with LAN91C111
+ * 08/22/01 Scott Anderson merge changes from smc9194 to smc91111
+ * 08/21/01 Pramod B Bhardwaj added support for RevB of LAN91C111
+ * 12/20/01 Jeff Sutherland initial port to Xscale PXA with DMA support
+ * 04/07/03 Nicolas Pitre unified SMC91x driver, killed irq races,
+ * more bus abstraction, big cleanup, etc.
+ * 29/09/03 Russell King - add driver model support
+ * - ethtool support
+ * - convert to use generic MII interface
+ * - add link up/down notification
+ * - don't try to handle full negotiation in
+ * smc_phy_configure
+ * - clean up (and fix stack overrun) in PHY
+ * MII read/write functions
+ */
+static const char version[] =
+ "smc91x.c: v1.0, mar 07 2003 by Nicolas Pitre <nico@cam.org>\n";
+
+/* Debugging level */
+#ifndef SMC_DEBUG
+#define SMC_DEBUG 0
+#endif
+
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/crc32.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "smc91x.h"
+
+#ifdef CONFIG_ISA
+/*
+ * the LAN91C111 can be at any of the following port addresses. To
+ * support a slightly different card, add its address to the array.
+ * Keep in mind that the array must end in zero.
+ */
+static unsigned int smc_portlist[] __initdata = {
+ 0x200, 0x220, 0x240, 0x260, 0x280, 0x2A0, 0x2C0, 0x2E0,
+ 0x300, 0x320, 0x340, 0x360, 0x380, 0x3A0, 0x3C0, 0x3E0, 0
+};
+
+#ifndef SMC_IOADDR
+# define SMC_IOADDR -1
+#endif
+static unsigned long io = SMC_IOADDR;
+module_param(io, ulong, 0400);
+MODULE_PARM_DESC(io, "I/O base address");
+
+#ifndef SMC_IRQ
+# define SMC_IRQ -1
+#endif
+static int irq = SMC_IRQ;
+module_param(irq, int, 0400);
+MODULE_PARM_DESC(irq, "IRQ number");
+
+#endif /* CONFIG_ISA */
+
+#ifndef SMC_NOWAIT
+# define SMC_NOWAIT 0
+#endif
+static int nowait = SMC_NOWAIT;
+module_param(nowait, int, 0400);
+MODULE_PARM_DESC(nowait, "set to 1 for no wait state");
+
+/*
+ * Transmit timeout, default 5 seconds.
+ */
+static int watchdog = 5000;
+module_param(watchdog, int, 0400);
+MODULE_PARM_DESC(watchdog, "transmit timeout in milliseconds");
+
+MODULE_LICENSE("GPL");
+
+/*
+ * The internal workings of the driver. If you are changing anything
+ * here with the SMC stuff, you should have the datasheet and know
+ * what you are doing.
+ */
+#define CARDNAME "smc91x"
+
+/*
+ * Use power-down feature of the chip
+ */
+#define POWER_DOWN 1
+
+/*
+ * Wait time for memory to be free. This probably shouldn't be
+ * tuned that much, as waiting for this means nothing else happens
+ * in the system.
+ */
+#define MEMORY_WAIT_TIME 16
+
+/*
+ * This selects whether TX packets are sent one by one to the SMC91x internal
+ * memory and throttled until transmission completes. This may prevent
+ * RX overruns a little by keeping much of the memory free for RX packets,
+ * but at the expense of reduced TX throughput and increased IRQ overhead.
+ * Note this is not a cure for a too slow data bus or too high IRQ latency.
+ */
+#define THROTTLE_TX_PKTS 0
+
+/*
+ * The MII clock high/low times. 2x this number gives the MII clock period
+ * in microseconds. (was 50, but this gives 6.4ms for each MII transaction!)
+ */
+#define MII_DELAY 1
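+/*
+ * Arithmetic behind the parenthetical above: a full MII read is
+ * 32 preamble + 14 command + 18 data clocks = 64 cycles, each taking
+ * 2 * MII_DELAY us, so the old value of 50 meant 64 * 100us = 6.4ms
+ * per transaction, while 1 gives 128us.
+ */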
+
+/* store this information for the driver.. */
+struct smc_local {
+ /*
+ * If I have to wait until memory is available to send a
+ * packet, I will store the skbuff here, until I get the
+ * desired memory. Then, I'll send it out and free it.
+ */
+ struct sk_buff *saved_skb;
+
+ /*
+ * these are things that the kernel wants me to keep, so users
+ * can find out semi-useless statistics of how well the card is
+ * performing
+ */
+ struct net_device_stats stats;
+
+ /* version/revision of the SMC91x chip */
+ int version;
+
+ /* Contains the current active transmission mode */
+ int tcr_cur_mode;
+
+ /* Contains the current active receive mode */
+ int rcr_cur_mode;
+
+ /* Contains the current active receive/phy mode */
+ int rpc_cur_mode;
+ int ctl_rfduplx;
+ int ctl_rspeed;
+
+ u32 msg_enable;
+ u32 phy_type;
+ struct mii_if_info mii;
+ spinlock_t lock;
+
+#ifdef SMC_USE_PXA_DMA
+ /* DMA needs the physical address of the chip */
+ u_long physaddr;
+#endif
+};
+
+#if SMC_DEBUG > 0
+#define DBG(n, args...) \
+ do { \
+ if (SMC_DEBUG >= (n)) \
+ printk(KERN_DEBUG args); \
+ } while (0)
+
+#define PRINTK(args...) printk(args)
+#else
+#define DBG(n, args...) do { } while(0)
+#define PRINTK(args...) printk(KERN_DEBUG args)
+#endif
+
+#if SMC_DEBUG > 3
+static void PRINT_PKT(u_char *buf, int length)
+{
+ int i;
+ int remainder;
+ int lines;
+
+ lines = length / 16;
+ remainder = length % 16;
+
+ for (i = 0; i < lines ; i ++) {
+ int cur;
+ for (cur = 0; cur < 8; cur++) {
+ u_char a, b;
+ a = *buf++;
+ b = *buf++;
+ printk("%02x%02x ", a, b);
+ }
+ printk("\n");
+ }
+ for (i = 0; i < remainder/2 ; i++) {
+ u_char a, b;
+ a = *buf++;
+ b = *buf++;
+ printk("%02x%02x ", a, b);
+ }
+ printk("\n");
+}
+#else
+#define PRINT_PKT(x...) do { } while(0)
+#endif
+
+
+/* this enables an interrupt in the interrupt mask register */
+#define SMC_ENABLE_INT(x) do { \
+ unsigned long flags; \
+ unsigned char mask; \
+ spin_lock_irqsave(&lp->lock, flags); \
+ mask = SMC_GET_INT_MASK(); \
+ mask |= (x); \
+ SMC_SET_INT_MASK(mask); \
+ spin_unlock_irqrestore(&lp->lock, flags); \
+} while (0)
+
+/* this disables an interrupt from the interrupt mask register */
+#define SMC_DISABLE_INT(x) do { \
+ unsigned long flags; \
+ unsigned char mask; \
+ spin_lock_irqsave(&lp->lock, flags); \
+ mask = SMC_GET_INT_MASK(); \
+ mask &= ~(x); \
+ SMC_SET_INT_MASK(mask); \
+ spin_unlock_irqrestore(&lp->lock, flags); \
+} while (0)
+
+/*
+ * Wait while MMU is busy. This is usually on the order of a few nanosecs
+ * if at all, but let's avoid deadlocking the system if the hardware
+ * decides to go south.
+ */
+#define SMC_WAIT_MMU_BUSY() do { \
+ if (unlikely(SMC_GET_MMU_CMD() & MC_BUSY)) { \
+ unsigned long timeout = jiffies + 2; \
+ while (SMC_GET_MMU_CMD() & MC_BUSY) { \
+ if (time_after(jiffies, timeout)) { \
+ printk("%s: timeout %s line %d\n", \
+ dev->name, __FILE__, __LINE__); \
+ break; \
+ } \
+ cpu_relax(); \
+ } \
+ } \
+} while (0)
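+/*
+ * Note that the helper macros above are deliberately non-hygienic:
+ * they expand references to the caller's lp and dev locals (and,
+ * apparently, to ioaddr through the SMC_GET_/SMC_SET_ accessors in
+ * smc91x.h), so they can only be used where those names are in scope.
+ */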
+
+
+/*
+ * this does a soft reset on the device
+ */
+static void smc_reset(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int ctl, cfg;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /*
+ * This resets the registers mostly to defaults, but doesn't
+ * affect EEPROM. That seems unnecessary
+ */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_SOFTRST);
+
+ /*
+ * Setup the Configuration Register
+ * This is necessary because the CONFIG_REG is not affected
+ * by a soft reset
+ */
+ SMC_SELECT_BANK(1);
+
+ cfg = CONFIG_DEFAULT;
+
+ /*
+ * Setup for fast accesses if requested. If the card/system
+ * can't handle it then there will be no recovery except for
+ * a hard reset or power cycle
+ */
+ if (nowait)
+ cfg |= CONFIG_NO_WAIT;
+
+ /*
+ * Release from possible power-down state
+ * Configuration register is not affected by Soft Reset
+ */
+ cfg |= CONFIG_EPH_POWER_EN;
+
+ SMC_SET_CONFIG(cfg);
+
+ /* this should pause enough for the chip to be happy */
+ /*
+ * elaborate? What does the chip _need_? --jgarzik
+ *
+ * This seems to be undocumented, but something the original
+ * driver(s) have always done. Suspect undocumented timing
+ * info/determined empirically. --rmk
+ */
+ udelay(1);
+
+ /* Disable transmit and receive functionality */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_CLEAR);
+ SMC_SET_TCR(TCR_CLEAR);
+
+ SMC_SELECT_BANK(1);
+ ctl = SMC_GET_CTL() | CTL_LE_ENABLE;
+
+ /*
+ * Set the control register to automatically release successfully
+ * transmitted packets, to make the best use out of our limited
+ * memory
+ */
+#if ! THROTTLE_TX_PKTS
+ ctl |= CTL_AUTO_RELEASE;
+#else
+ ctl &= ~CTL_AUTO_RELEASE;
+#endif
+ SMC_SET_CTL(ctl);
+
+ /* Disable all interrupts */
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(0);
+
+ /* Reset the MMU */
+ SMC_SET_MMU_CMD(MC_RESET);
+ SMC_WAIT_MMU_BUSY();
+}
+
+/*
+ * Enable Interrupts, Receive, and Transmit
+ */
+static void smc_enable(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ int mask;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* see the header file for options in TCR/RCR DEFAULT */
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ SMC_SET_RCR(lp->rcr_cur_mode);
+
+ /* now, enable interrupts */
+ mask = IM_EPH_INT|IM_RX_OVRN_INT|IM_RCV_INT;
+ if (lp->version >= (CHIP_91100 << 4))
+ mask |= IM_MDINT;
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(mask);
+}
+
+/*
+ * this puts the device in an inactive state
+ */
+static void smc_shutdown(unsigned long ioaddr)
+{
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ /* no more interrupts for me */
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(0);
+
+ /* and tell the card to stay away from that nasty outside world */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_CLEAR);
+ SMC_SET_TCR(TCR_CLEAR);
+
+#ifdef POWER_DOWN
+ /* finally, shut the chip down */
+ SMC_SELECT_BANK(1);
+ SMC_SET_CONFIG(SMC_GET_CONFIG() & ~CONFIG_EPH_POWER_EN);
+#endif
+}
+
+/*
+ * This is the procedure to handle the receipt of a packet.
+ */
+static inline void smc_rcv(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int packet_number, status, packet_len;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ packet_number = SMC_GET_RXFIFO();
+ if (unlikely(packet_number & RXFIFO_REMPTY)) {
+ PRINTK("%s: smc_rcv with nothing on FIFO.\n", dev->name);
+ return;
+ }
+
+ /* read from start of packet */
+ SMC_SET_PTR(PTR_READ | PTR_RCV | PTR_AUTOINC);
+
+ /* First two words are status and packet length */
+ SMC_GET_PKT_HDR(status, packet_len);
+ packet_len &= 0x07ff; /* mask off top bits */
+ DBG(2, "%s: RX PNR 0x%x STATUS 0x%04x LENGTH 0x%04x (%d)\n",
+ dev->name, packet_number, status,
+ packet_len, packet_len);
+
+ if (unlikely(status & RS_ERRORS)) {
+ lp->stats.rx_errors++;
+ if (status & RS_ALGNERR)
+ lp->stats.rx_frame_errors++;
+ if (status & (RS_TOOSHORT | RS_TOOLONG))
+ lp->stats.rx_length_errors++;
+ if (status & RS_BADCRC)
+ lp->stats.rx_crc_errors++;
+ } else {
+ struct sk_buff *skb;
+ unsigned char *data;
+ unsigned int data_len;
+
+ /* set multicast stats */
+ if (status & RS_MULTICAST)
+ lp->stats.multicast++;
+
+ /*
+ * Actual payload is packet_len - 4 (or 3 if odd byte).
+ * We want skb_reserve(2) and the final ctrl word
+ * (2 bytes, possibly containing the payload odd byte).
+ * Hence packet_len - 4 + 2 + 2.
+ */
+ skb = dev_alloc_skb(packet_len);
+ if (unlikely(skb == NULL)) {
+ printk(KERN_NOTICE "%s: Low memory, packet dropped.\n",
+ dev->name);
+ lp->stats.rx_dropped++;
+ goto done;
+ }
+
+ /* Align IP header to 32 bits */
+ skb_reserve(skb, 2);
+
+ /* BUG: the LAN91C111 rev A never sets this bit. Force it. */
+ if (lp->version == 0x90)
+ status |= RS_ODDFRAME;
+
+ /*
+ * If odd length: packet_len - 3,
+ * otherwise packet_len - 4.
+ */
+ data_len = packet_len - ((status & RS_ODDFRAME) ? 3 : 4);
+ data = skb_put(skb, data_len);
+ SMC_PULL_DATA(data, packet_len - 2);
+
+ PRINT_PKT(data, packet_len - 2);
+
+ dev->last_rx = jiffies;
+ skb->dev = dev;
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ lp->stats.rx_packets++;
+ lp->stats.rx_bytes += data_len;
+ }
+
+done:
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_RELEASE);
+}
+
+/*
+ * This is called to actually send a packet to the chip. The skb is
+ * consumed here whether or not the chip accepts the packet.
+ */
+static void smc_hardware_send_packet(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ struct sk_buff *skb = lp->saved_skb;
+ unsigned int packet_no, len;
+ unsigned char *buf;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ packet_no = SMC_GET_AR();
+ if (unlikely(packet_no & AR_FAILED)) {
+ printk("%s: Memory allocation failed.\n", dev->name);
+ lp->saved_skb = NULL;
+ lp->stats.tx_errors++;
+ lp->stats.tx_fifo_errors++;
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ /* point to the beginning of the packet */
+ SMC_SET_PN(packet_no);
+ SMC_SET_PTR(PTR_AUTOINC);
+
+ buf = skb->data;
+ len = skb->len;
+ DBG(2, "%s: TX PNR 0x%x LENGTH 0x%04x (%d) BUF 0x%p\n",
+ dev->name, packet_no, len, len, buf);
+ PRINT_PKT(buf, len);
+
+ /*
+ * Send the packet length (+6 for the status, length, and ctl words).
+ * The card will pad to 64 bytes with zeroes if the packet is too small.
+ */
+ SMC_PUT_PKT_HDR(0, len + 6);
+
+ /* send the actual data */
+ SMC_PUSH_DATA(buf, len & ~1);
+
+ /* Send final ctl word with the last byte if there is one */
+ SMC_outw(((len & 1) ? (0x2000 | buf[len-1]) : 0), ioaddr, DATA_REG);
+
+ /* and let the chipset deal with it */
+ SMC_SET_MMU_CMD(MC_ENQUEUE);
+ SMC_ACK_INT(IM_TX_EMPTY_INT);
+
+ dev->trans_start = jiffies;
+ dev_kfree_skb_any(skb);
+ lp->saved_skb = NULL;
+ lp->stats.tx_packets++;
+ lp->stats.tx_bytes += len;
+}
+
+/*
+ * Since I am not sure if I will have enough room in the chip's RAM
+ * to store the packet, I call this routine, which either sends it
+ * now or sets the card to generate an interrupt when it is ready
+ * for the packet.
+ */
+static int smc_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int numPages, poll_count, status, saved_bank;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ BUG_ON(lp->saved_skb != NULL);
+ lp->saved_skb = skb;
+
+ /*
+ * The MMU wants the number of pages to be the number of 256 bytes
+ * 'pages', minus 1 (since a packet can't ever have 0 pages :))
+ *
+ * The 91C111 ignores the size bits, but earlier models don't.
+ *
+ * Pkt size for allocating is data length +6 (for additional status
+ * words, length and ctl)
+ *
+ * If odd size then last byte is included in ctl word.
+ */
+ numPages = ((skb->len & ~1) + (6 - 1)) >> 8;
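+ /* Worked example: a 1514-byte frame gives ((1514 & ~1) + 5) >> 8 =
+ 1519 >> 8 = 5, which the MMU reads as 5 + 1 = 6 pages (1536 bytes),
+ enough for 1514 bytes of data plus the 6 header/ctl bytes. */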
+ if (unlikely(numPages > 7)) {
+ printk("%s: Far too big packet error.\n", dev->name);
+ lp->saved_skb = NULL;
+ lp->stats.tx_errors++;
+ lp->stats.tx_dropped++;
+ dev_kfree_skb(skb);
+ return 0;
+ }
+
+ /* now, try to allocate the memory */
+ saved_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(2);
+ SMC_SET_MMU_CMD(MC_ALLOC | numPages);
+
+ /*
+ * Poll the chip for a short amount of time in case the
+ * allocation succeeds quickly.
+ */
+ poll_count = MEMORY_WAIT_TIME;
+ do {
+ status = SMC_GET_INT();
+ if (status & IM_ALLOC_INT) {
+ SMC_ACK_INT(IM_ALLOC_INT);
+ break;
+ }
+ } while (--poll_count);
+
+ if (!poll_count) {
+ /* oh well, wait until the chip finds memory later */
+ netif_stop_queue(dev);
+ DBG(2, "%s: TX memory allocation deferred.\n", dev->name);
+ SMC_ENABLE_INT(IM_ALLOC_INT);
+ } else {
+ /*
+ * Allocation succeeded: push packet to the chip's own memory
+ * immediately.
+ *
+ * If THROTTLE_TX_PKTS is selected that means we don't want
+ * more than a single TX packet taking up space in the chip's
+ * internal memory at all times, in which case we stop the
+ * queue right here until we're notified of TX completion.
+ *
+ * Otherwise we're quite happy to feed more TX packets right
+ * away for better TX throughput, in which case the queue is
+ * left active.
+ */
+#if THROTTLE_TX_PKTS
+ netif_stop_queue(dev);
+#endif
+ smc_hardware_send_packet(dev);
+ SMC_ENABLE_INT(IM_TX_INT | IM_TX_EMPTY_INT);
+ }
+
+ SMC_SELECT_BANK(saved_bank);
+ return 0;
+}
+
+/*
+ * This handles a TX interrupt, which is only called when:
+ * - a TX error occurred, or
+ * - CTL_AUTO_RELEASE is not set and TX of a packet completed.
+ */
+static void smc_tx(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int saved_packet, packet_no, tx_status, pkt_len;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* If the TX FIFO is empty then nothing to do */
+ packet_no = SMC_GET_TXFIFO();
+ if (unlikely(packet_no & TXFIFO_TEMPTY)) {
+ PRINTK("%s: smc_tx with nothing on FIFO.\n", dev->name);
+ return;
+ }
+
+ /* select packet to read from */
+ saved_packet = SMC_GET_PN();
+ SMC_SET_PN(packet_no);
+
+ /* read the first word (status word) from this packet */
+ SMC_SET_PTR(PTR_AUTOINC | PTR_READ);
+ SMC_GET_PKT_HDR(tx_status, pkt_len);
+ DBG(2, "%s: TX STATUS 0x%04x PNR 0x%02x\n",
+ dev->name, tx_status, packet_no);
+
+ if (!(tx_status & TS_SUCCESS))
+ lp->stats.tx_errors++;
+ if (tx_status & TS_LOSTCAR)
+ lp->stats.tx_carrier_errors++;
+
+ if (tx_status & TS_LATCOL) {
+ PRINTK("%s: late collision occurred on last xmit\n", dev->name);
+ lp->stats.tx_window_errors++;
+ if (!(lp->stats.tx_window_errors & 63) && net_ratelimit()) {
+ printk(KERN_INFO "%s: unexpectedly large numbers of "
+ "late collisions. Please check duplex "
+ "setting.\n", dev->name);
+ }
+ }
+
+ /* kill the packet */
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_FREEPKT);
+
+ /* Don't restore Packet Number Reg until busy bit is cleared */
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_PN(saved_packet);
+
+ /* re-enable transmit */
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ SMC_SELECT_BANK(2);
+}
+
+
+/*---PHY CONTROL AND CONFIGURATION-----------------------------------------*/
+
+static void smc_mii_out(struct net_device *dev, unsigned int val, int bits)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int mii_reg, mask;
+
+ mii_reg = SMC_GET_MII() & ~(MII_MCLK | MII_MDOE | MII_MDO);
+ mii_reg |= MII_MDOE;
+
+ for (mask = 1 << (bits - 1); mask; mask >>= 1) {
+ if (val & mask)
+ mii_reg |= MII_MDO;
+ else
+ mii_reg &= ~MII_MDO;
+
+ SMC_SET_MII(mii_reg);
+ udelay(MII_DELAY);
+ SMC_SET_MII(mii_reg | MII_MCLK);
+ udelay(MII_DELAY);
+ }
+}
+
+static unsigned int smc_mii_in(struct net_device *dev, int bits)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int mii_reg, mask, val;
+
+ mii_reg = SMC_GET_MII() & ~(MII_MCLK | MII_MDOE | MII_MDO);
+ SMC_SET_MII(mii_reg);
+
+ for (mask = 1 << (bits - 1), val = 0; mask; mask >>= 1) {
+ if (SMC_GET_MII() & MII_MDI)
+ val |= mask;
+
+ SMC_SET_MII(mii_reg);
+ udelay(MII_DELAY);
+ SMC_SET_MII(mii_reg | MII_MCLK);
+ udelay(MII_DELAY);
+ }
+
+ return val;
+}
+
+/*
+ * Reads a register from the MII Management serial interface
+ */
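+/*
+ * Bit layout on the wire, as assembled by the code below (a sketch
+ * read off the code, not quoted from a datasheet):
+ *
+ *   <32 idle ones> 01 10 AAAAA RRRRR zz DDDDDDDDDDDDDDDD
+ *   preamble       ST OP addr  reg   TA 16 data bits
+ *
+ * 6 << 10 | phyaddr << 5 | phyreg packs ST+OP (0110) ahead of the two
+ * 5-bit fields; the 18-bit read covers turnaround plus data. The
+ * write path (smc_phy_write below) instead sends the whole 32-bit
+ * frame, with ST+OP = 0101 and a driven turnaround of 10, in one call.
+ */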
+static int smc_phy_read(struct net_device *dev, int phyaddr, int phyreg)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int phydata, old_bank;
+
+ /* Save the current bank, and select bank 3 */
+ old_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(3);
+
+ /* Idle - 32 ones */
+ smc_mii_out(dev, 0xffffffff, 32);
+
+ /* Start code (01) + read (10) + phyaddr + phyreg */
+ smc_mii_out(dev, 6 << 10 | phyaddr << 5 | phyreg, 14);
+
+ /* Turnaround (2 bits) + phydata */
+ phydata = smc_mii_in(dev, 18);
+
+ /* Return to idle state */
+ SMC_SET_MII(SMC_GET_MII() & ~(MII_MCLK|MII_MDOE|MII_MDO));
+
+ /* And select original bank */
+ SMC_SELECT_BANK(old_bank);
+
+ DBG(3, "%s: phyaddr=0x%x, phyreg=0x%x, phydata=0x%x\n",
+ __FUNCTION__, phyaddr, phyreg, phydata);
+
+ return phydata;
+}
+
+/*
+ * Writes a register to the MII Management serial interface
+ */
+static void smc_phy_write(struct net_device *dev, int phyaddr, int phyreg,
+ int phydata)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int old_bank;
+
+ /* Save the current bank, and select bank 3 */
+ old_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(3);
+
+ /* Idle - 32 ones */
+ smc_mii_out(dev, 0xffffffff, 32);
+
+ /* Start code (01) + write (01) + phyaddr + phyreg + turnaround + phydata */
+ smc_mii_out(dev, 5 << 28 | phyaddr << 23 | phyreg << 18 | 2 << 16 | phydata, 32);
+
+ /* Return to idle state */
+ SMC_SET_MII(SMC_GET_MII() & ~(MII_MCLK|MII_MDOE|MII_MDO));
+
+ /* And select original bank */
+ SMC_SELECT_BANK(old_bank);
+
+ DBG(3, "%s: phyaddr=0x%x, phyreg=0x%x, phydata=0x%x\n",
+ __FUNCTION__, phyaddr, phyreg, phydata);
+}
+
+/*
+ * Finds and reports the PHY address
+ */
+static void smc_detect_phy(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int phyaddr;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ lp->phy_type = 0;
+
+ /*
+ * Scan all 32 PHY addresses if necessary, trying PHY#1
+ * through PHY#31 first and PHY#0 last.
+ */
+ for (phyaddr = 1; phyaddr < 33; ++phyaddr) {
+ unsigned int id1, id2;
+
+ /* Read the PHY identifiers */
+ id1 = smc_phy_read(dev, phyaddr & 31, MII_PHYSID1);
+ id2 = smc_phy_read(dev, phyaddr & 31, MII_PHYSID2);
+
+ DBG(3, "%s: phy_id1=0x%x, phy_id2=0x%x\n",
+ dev->name, id1, id2);
+
+ /* Make sure it is a valid identifier */
+ if (id1 != 0x0000 && id1 != 0xffff && id1 != 0x8000 &&
+ id2 != 0x0000 && id2 != 0xffff && id2 != 0x8000) {
+ /* Save the PHY's address */
+ lp->mii.phy_id = phyaddr & 31;
+ lp->phy_type = id1 << 16 | id2;
+ break;
+ }
+ }
+}
+
+/*
+ * Sets the PHY to a configuration as determined by the user
+ */
+static int smc_phy_fixed(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ int phyaddr = lp->mii.phy_id;
+ int bmcr, cfg1;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* Enter Link Disable state */
+ cfg1 = smc_phy_read(dev, phyaddr, PHY_CFG1_REG);
+ cfg1 |= PHY_CFG1_LNKDIS;
+ smc_phy_write(dev, phyaddr, PHY_CFG1_REG, cfg1);
+
+ /*
+ * Set our fixed capabilities
+ * Disable auto-negotiation
+ */
+ bmcr = 0;
+
+ if (lp->ctl_rfduplx)
+ bmcr |= BMCR_FULLDPLX;
+
+ if (lp->ctl_rspeed == 100)
+ bmcr |= BMCR_SPEED100;
+
+ /* Write our capabilities to the phy control register */
+ smc_phy_write(dev, phyaddr, MII_BMCR, bmcr);
+
+ /* Re-Configure the Receive/Phy Control register */
+ SMC_SET_RPC(lp->rpc_cur_mode);
+
+ return 1;
+}
+
+/*
+ * smc_phy_reset - reset the phy
+ * @dev: net device
+ * @phy: phy address
+ *
+ * Issue a software reset for the specified PHY and
+ * wait up to 100ms for the reset to complete. We should
+ * not access the PHY for 50ms after issuing the reset.
+ *
+ * The time to wait appears to be dependent on the PHY.
+ *
+ * Must be called with lp->lock locked.
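+ *
+ * Returns non-zero if BMCR_RESET is still set once the wait expires,
+ * i.e. the PHY failed to come out of reset.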
+ */
+static int smc_phy_reset(struct net_device *dev, int phy)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int bmcr;
+ int timeout;
+
+ smc_phy_write(dev, phy, MII_BMCR, BMCR_RESET);
+
+ for (timeout = 2; timeout; timeout--) {
+ spin_unlock_irq(&lp->lock);
+ msleep(50);
+ spin_lock_irq(&lp->lock);
+
+ bmcr = smc_phy_read(dev, phy, MII_BMCR);
+ if (!(bmcr & BMCR_RESET))
+ break;
+ }
+
+ return bmcr & BMCR_RESET;
+}
+
+/*
+ * smc_phy_powerdown - powerdown phy
+ * @dev: net device
+ * @phy: phy address
+ *
+ * Power down the specified PHY
+ */
+static void smc_phy_powerdown(struct net_device *dev, int phy)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int bmcr;
+
+ spin_lock_irq(&lp->lock);
+ bmcr = smc_phy_read(dev, phy, MII_BMCR);
+ smc_phy_write(dev, phy, MII_BMCR, bmcr | BMCR_PDOWN);
+ spin_unlock_irq(&lp->lock);
+}
+
+/*
+ * smc_phy_check_media - check the media status and adjust TCR
+ * @dev: net device
+ * @init: set true for initialisation
+ *
+ * Select duplex mode depending on negotiation state. This
+ * also updates our carrier state.
+ */
+static void smc_phy_check_media(struct net_device *dev, int init)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+
+ if (mii_check_media(&lp->mii, netif_msg_link(lp), init)) {
+ unsigned int old_bank;
+
+ /* duplex state has changed */
+ if (lp->mii.full_duplex) {
+ lp->tcr_cur_mode |= TCR_SWFDUP;
+ } else {
+ lp->tcr_cur_mode &= ~TCR_SWFDUP;
+ }
+
+ old_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ SMC_SELECT_BANK(old_bank);
+ }
+}
+
+/*
+ * Configures the specified PHY through the MII management interface
+ * using Autonegotiation.
+ * Calls smc_phy_fixed() if the user has requested a certain config.
+ * If RPC ANEG bit is set, the media selection is dependent purely on
+ * the selection by the MII (either in the MII BMCR reg or the result
+ * of autonegotiation.) If the RPC ANEG bit is cleared, the selection
+ * is controlled by the RPC SPEED and RPC DPLX bits.
+ */
+static void smc_phy_configure(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ int phyaddr = lp->mii.phy_id;
+ int my_phy_caps; /* My PHY capabilities */
+ int my_ad_caps; /* My Advertised capabilities */
+ int status;
+
+ DBG(3, "%s:smc_program_phy()\n", dev->name);
+
+ spin_lock_irq(&lp->lock);
+
+ /*
+ * We should not be called if phy_type is zero.
+ */
+ if (lp->phy_type == 0)
+ goto smc_phy_configure_exit;
+
+ if (smc_phy_reset(dev, phyaddr)) {
+ printk("%s: PHY reset timed out\n", dev->name);
+ goto smc_phy_configure_exit;
+ }
+
+ /*
+ * Enable PHY Interrupts (for register 18)
+ * Interrupts listed here are disabled
+ */
+ smc_phy_write(dev, phyaddr, PHY_MASK_REG,
+ PHY_INT_LOSSSYNC | PHY_INT_CWRD | PHY_INT_SSD |
+ PHY_INT_ESD | PHY_INT_RPOL | PHY_INT_JAB |
+ PHY_INT_SPDDET | PHY_INT_DPLXDET);
+
+ /* Configure the Receive/Phy Control register */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RPC(lp->rpc_cur_mode);
+
+ /* If the user requested no auto neg, then go set his request */
+ if (lp->mii.force_media) {
+ smc_phy_fixed(dev);
+ goto smc_phy_configure_exit;
+ }
+
+ /* Copy our capabilities from MII_BMSR to MII_ADVERTISE */
+ my_phy_caps = smc_phy_read(dev, phyaddr, MII_BMSR);
+
+ if (!(my_phy_caps & BMSR_ANEGCAPABLE)) {
+ printk(KERN_INFO "Auto negotiation NOT supported\n");
+ smc_phy_fixed(dev);
+ goto smc_phy_configure_exit;
+ }
+
+ my_ad_caps = ADVERTISE_CSMA; /* I am CSMA capable */
+
+ if (my_phy_caps & BMSR_100BASE4)
+ my_ad_caps |= ADVERTISE_100BASE4;
+ if (my_phy_caps & BMSR_100FULL)
+ my_ad_caps |= ADVERTISE_100FULL;
+ if (my_phy_caps & BMSR_100HALF)
+ my_ad_caps |= ADVERTISE_100HALF;
+ if (my_phy_caps & BMSR_10FULL)
+ my_ad_caps |= ADVERTISE_10FULL;
+ if (my_phy_caps & BMSR_10HALF)
+ my_ad_caps |= ADVERTISE_10HALF;
+
+ /* Disable capabilities not selected by our user */
+ if (lp->ctl_rspeed != 100)
+ my_ad_caps &= ~(ADVERTISE_100BASE4|ADVERTISE_100FULL|ADVERTISE_100HALF);
+
+ if (!lp->ctl_rfduplx)
+ my_ad_caps &= ~(ADVERTISE_100FULL|ADVERTISE_10FULL);
+
+ /* Update our Auto-Neg Advertisement Register */
+ smc_phy_write(dev, phyaddr, MII_ADVERTISE, my_ad_caps);
+ lp->mii.advertising = my_ad_caps;
+
+ /*
+ * Read the register back. Without this, it appears that when
+ * auto-negotiation is restarted, sometimes it isn't ready and
+ * the link does not come up.
+ */
+ status = smc_phy_read(dev, phyaddr, MII_ADVERTISE);
+
+ DBG(2, "%s: phy caps=%x\n", dev->name, my_phy_caps);
+ DBG(2, "%s: phy advertised caps=%x\n", dev->name, my_ad_caps);
+
+ /* Restart auto-negotiation process in order to advertise my caps */
+ smc_phy_write(dev, phyaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+
+ smc_phy_check_media(dev, 1);
+
+smc_phy_configure_exit:
+ spin_unlock_irq(&lp->lock);
+}
+
+/*
+ * smc_phy_interrupt
+ *
+ * Purpose: Handle interrupts relating to PHY register 18. This is
+ * called from the "hard" interrupt handler under our private spinlock.
+ */
+static void smc_phy_interrupt(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int phyaddr = lp->mii.phy_id;
+ int phy18;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ if (lp->phy_type == 0)
+ return;
+
+ for(;;) {
+ smc_phy_check_media(dev, 0);
+
+ /* Read PHY Register 18, Status Output */
+ phy18 = smc_phy_read(dev, phyaddr, PHY_INT_REG);
+ if ((phy18 & PHY_INT_INT) == 0)
+ break;
+ }
+}
+
+/*--- END PHY CONTROL AND CONFIGURATION-------------------------------------*/
+
+static void smc_10bt_check_media(struct net_device *dev, int init)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int old_carrier, new_carrier, old_bank;
+
+ old_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(0);
+ old_carrier = netif_carrier_ok(dev) ? 1 : 0;
+ new_carrier = SMC_inw(ioaddr, EPH_STATUS_REG) & ES_LINK_OK ? 1 : 0;
+
+ if (init || (old_carrier != new_carrier)) {
+ if (!new_carrier) {
+ netif_carrier_off(dev);
+ } else {
+ netif_carrier_on(dev);
+ }
+ if (netif_msg_link(lp))
+ printk(KERN_INFO "%s: link %s\n", dev->name,
+ new_carrier ? "up" : "down");
+ }
+ SMC_SELECT_BANK(old_bank);
+}
+
+static void smc_eph_interrupt(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int old_bank, ctl;
+
+ smc_10bt_check_media(dev, 0);
+
+ old_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(1);
+
+ ctl = SMC_GET_CTL();
+ SMC_SET_CTL(ctl & ~CTL_LE_ENABLE);
+ SMC_SET_CTL(ctl);
+
+ SMC_SELECT_BANK(old_bank);
+}
+
+/*
+ * This is the main routine of the driver, to handle the device when
+ * it needs some attention.
+ */
+static irqreturn_t smc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ int status, mask, timeout, card_stats;
+ int saved_bank, saved_pointer;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ saved_bank = SMC_CURRENT_BANK();
+ SMC_SELECT_BANK(2);
+ saved_pointer = SMC_GET_PTR();
+ mask = SMC_GET_INT_MASK();
+ SMC_SET_INT_MASK(0);
+
+ /* set a timeout value, so I don't stay here forever */
+ timeout = 8;
+
+ do {
+ status = SMC_GET_INT();
+
+ DBG(2, "%s: IRQ 0x%02x MASK 0x%02x MEM 0x%04x FIFO 0x%04x\n",
+ dev->name, status, mask,
+ ({ int meminfo; SMC_SELECT_BANK(0);
+ meminfo = SMC_GET_MIR();
+ SMC_SELECT_BANK(2); meminfo; }),
+ SMC_GET_FIFO());
+
+ status &= mask;
+ if (!status)
+ break;
+
+ spin_lock(&lp->lock);
+
+ if (status & IM_RCV_INT) {
+ DBG(3, "%s: RX irq\n", dev->name);
+ smc_rcv(dev);
+ } else if (status & IM_TX_INT) {
+ DBG(3, "%s: TX int\n", dev->name);
+ smc_tx(dev);
+ SMC_ACK_INT(IM_TX_INT);
+#if THROTTLE_TX_PKTS
+ netif_wake_queue(dev);
+#endif
+ } else if (status & IM_ALLOC_INT) {
+ DBG(3, "%s: Allocation irq\n", dev->name);
+ smc_hardware_send_packet(dev);
+ mask |= (IM_TX_INT | IM_TX_EMPTY_INT);
+ mask &= ~IM_ALLOC_INT;
+#if ! THROTTLE_TX_PKTS
+ netif_wake_queue(dev);
+#endif
+ } else if (status & IM_TX_EMPTY_INT) {
+ DBG(3, "%s: TX empty\n", dev->name);
+ mask &= ~IM_TX_EMPTY_INT;
+
+ /* update stats */
+ SMC_SELECT_BANK(0);
+ card_stats = SMC_GET_COUNTER();
+ SMC_SELECT_BANK(2);
+
+ /* single collisions */
+ lp->stats.collisions += card_stats & 0xF;
+ card_stats >>= 4;
+
+ /* multiple collisions */
+ lp->stats.collisions += card_stats & 0xF;
+ } else if (status & IM_RX_OVRN_INT) {
+ DBG(1, "%s: RX overrun\n", dev->name);
+ SMC_ACK_INT(IM_RX_OVRN_INT);
+ lp->stats.rx_errors++;
+ lp->stats.rx_fifo_errors++;
+ } else if (status & IM_EPH_INT) {
+ smc_eph_interrupt(dev);
+ } else if (status & IM_MDINT) {
+ SMC_ACK_INT(IM_MDINT);
+ smc_phy_interrupt(dev);
+ } else if (status & IM_ERCV_INT) {
+ SMC_ACK_INT(IM_ERCV_INT);
+ PRINTK("%s: UNSUPPORTED: ERCV INTERRUPT \n", dev->name);
+ }
+
+ spin_unlock(&lp->lock);
+ } while (--timeout);
+
+ /* restore register states */
+ SMC_SET_INT_MASK(mask);
+ SMC_SET_PTR(saved_pointer);
+ SMC_SELECT_BANK(saved_bank);
+
+ DBG(3, "%s: Interrupt done (%d loops)\n", dev->name, 8-timeout);
+
+ /*
+ * We return IRQ_HANDLED unconditionally here even if there was
+ * nothing to do. There is a possibility that a packet might
+ * get enqueued into the chip right after TX_EMPTY_INT is raised
+ * but just before the CPU acknowledges the IRQ.
+ * Better to take an occasional unneeded IRQ than to complicate
+ * the code for all cases.
+ */
+ return IRQ_HANDLED;
+}
+
+/* Our watchdog timed out. Called by the networking layer */
+static void smc_timeout(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ smc_reset(dev);
+ smc_enable(dev);
+
+#if 0
+ /*
+ * Reconfiguring the PHY doesn't seem like a bad idea here, but
+ * it introduced a problem. Now that this is a timeout routine,
+ * we are getting called from within an interrupt context.
+ * smc_phy_configure() calls msleep() which calls
+ * schedule_timeout() which calls schedule(). When schedule()
+ * is called from an interrupt context, it prints out
+ * "Scheduling in interrupt" and then calls BUG(). This is
+ * obviously not desirable. This was worked around by removing
+ * the call to smc_phy_configure() here because it didn't seem
+ * absolutely necessary. Ultimately, if msleep() is
+ * supposed to be usable from an interrupt context (which it
+ * looks like it thinks it should handle), it should be fixed.
+ */
+ if (lp->phy_type != 0)
+ smc_phy_configure(dev);
+#endif
+
+ /* clear anything saved */
+ if (lp->saved_skb != NULL) {
+ dev_kfree_skb (lp->saved_skb);
+ lp->saved_skb = NULL;
+ lp->stats.tx_errors++;
+ lp->stats.tx_aborted_errors++;
+ }
+ /* We can accept TX packets again */
+ dev->trans_start = jiffies;
+ netif_wake_queue(dev);
+}
+
+/*
+ * This sets the internal hardware table to filter out unwanted multicast
+ * packets before they take up memory.
+ *
+ * The SMC chip uses a hash table where the high 6 bits of the CRC of
+ * the address are the offset into the table. If that bit is 1, then
+ * the multicast packet is accepted. Otherwise, it's dropped silently.
+ *
+ * To use the 6 bits as an offset into the table, the high 3 bits are the
+ * number of the 8 bit register, while the low 3 bits are the bit within
+ * that register.
+ *
+ * This routine is based very heavily on the one provided by Peter Cammaert.
+ */
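+/*
+ * Worked example, derived from the code below: an address whose CRC
+ * gives position = 0x2b (101011b) has position & 7 = 3 and
+ * (position >> 3) & 7 = 5; since invert3[3] = 6 and invert3[5] = 5,
+ * the code sets bit 5 of multicast_table[6]. The invert3[] lookups
+ * flip the 3-bit fields, presumably to match the bit order in which
+ * the chip walks its hash table.
+ */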
+static void
+smc_setmulticast(unsigned long ioaddr, int count, struct dev_mc_list *addrs)
+{
+ int i;
+ unsigned char multicast_table[8];
+ struct dev_mc_list *cur_addr;
+
+ /* table for flipping the order of 3 bits */
+ static unsigned char invert3[] = { 0, 4, 2, 6, 1, 5, 3, 7 };
+
+ /* start with a table of all zeros: reject all */
+ memset(multicast_table, 0, sizeof(multicast_table));
+
+ cur_addr = addrs;
+ for (i = 0; i < count; i++, cur_addr = cur_addr->next) {
+ int position;
+
+ /* do we have a pointer here? */
+ if (!cur_addr)
+ break;
+ /* make sure this is a multicast address - shouldn't this
+ be a given if we have it here? */
+ if (!(*cur_addr->dmi_addr & 1))
+ continue;
+
+ /* only use the low order bits */
+ position = crc32_le(~0, cur_addr->dmi_addr, 6) & 0x3f;
+
+ /* do some messy swapping to put the bit in the right spot */
+ multicast_table[invert3[position&7]] |=
+ (1<<invert3[(position>>3)&7]);
+
+ }
+ /* now, the table can be loaded into the chipset */
+ SMC_SELECT_BANK(3);
+ SMC_SET_MCAST(multicast_table);
+}
+
+/*
+ * This routine will, depending on the values passed to it,
+ * either make the card accept all multicast packets, go into
+ * promiscuous mode (for TCPDUMP and cousins), or accept
+ * a select set of multicast packets.
+ */
+static void smc_set_multicast_list(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ SMC_SELECT_BANK(0);
+ if (dev->flags & IFF_PROMISC) {
+ DBG(2, "%s: RCR_PRMS\n", dev->name);
+ lp->rcr_cur_mode |= RCR_PRMS;
+ SMC_SET_RCR(lp->rcr_cur_mode);
+ }
+
+/* BUG? I never disable promiscuous mode if multicasting was turned on.
+ Now, I turn off promiscuous mode, but I don't do anything to multicasting
+ when promiscuous mode is turned on.
+*/
+
+ /*
+ * Here, I am setting this to accept all multicast packets.
+ * I don't need to zero the multicast table, because the flag is
+ * checked before the table is used.
+ */
+ else if (dev->flags & IFF_ALLMULTI || dev->mc_count > 16) {
+ lp->rcr_cur_mode |= RCR_ALMUL;
+ SMC_SET_RCR(lp->rcr_cur_mode);
+ DBG(2, "%s: RCR_ALMUL\n", dev->name);
+ }
+
+ /*
+ * We just get all multicast packets even if we only want them
+ * from one source. This will be changed at some future point.
+ */
+ else if (dev->mc_count) {
+ /* support hardware multicasting */
+
+ /* be sure I get rid of flags I might have set */
+ lp->rcr_cur_mode &= ~(RCR_PRMS | RCR_ALMUL);
+ SMC_SET_RCR(lp->rcr_cur_mode);
+ /*
+ * NOTE: this has to set the bank, so make sure it is the
+ * last thing called. The bank is set to zero at the top
+ */
+ smc_setmulticast(ioaddr, dev->mc_count, dev->mc_list);
+ } else {
+ DBG(2, "%s: ~(RCR_PRMS|RCR_ALMUL)\n", dev->name);
+ lp->rcr_cur_mode &= ~(RCR_PRMS | RCR_ALMUL);
+ SMC_SET_RCR(lp->rcr_cur_mode);
+
+ /*
+ * since I'm disabling all multicast entirely, I need to
+ * clear the multicast list
+ */
+ SMC_SELECT_BANK(3);
+ SMC_CLEAR_MCAST();
+ }
+}
+
+
+/*
+ * Open and Initialize the board
+ *
+ * Set up everything, reset the card, etc..
+ */
+static int
+smc_open(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /*
+ * Check that the address is valid. If it's not, refuse
+ * to bring the device up. The user must specify an
+ * address using ifconfig eth0 hw ether xx:xx:xx:xx:xx:xx
+ */
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ DBG(2, "%s: no valid ethernet hw addr\n", __FUNCTION__);
+ return -EINVAL;
+ }
+
+ /* clear out all the junk that was put here before... */
+ lp->saved_skb = NULL;
+
+ /* Setup the default Register Modes */
+ lp->tcr_cur_mode = TCR_DEFAULT;
+ lp->rcr_cur_mode = RCR_DEFAULT;
+ lp->rpc_cur_mode = RPC_DEFAULT;
+
+ /*
+ * If we are not using a MII interface, we need to
+ * monitor our own carrier signal to detect faults.
+ */
+ if (lp->phy_type == 0)
+ lp->tcr_cur_mode |= TCR_MON_CSN;
+
+ /* reset the hardware */
+ smc_reset(dev);
+ smc_enable(dev);
+
+ SMC_SELECT_BANK(1);
+ SMC_SET_MAC_ADDR(dev->dev_addr);
+
+ /* Configure the PHY */
+ if (lp->phy_type != 0)
+ smc_phy_configure(dev);
+ else {
+ spin_lock_irq(&lp->lock);
+ smc_10bt_check_media(dev, 1);
+ spin_unlock_irq(&lp->lock);
+ }
+
+ /*
+ * make sure to initialize the link state with netif_carrier_off()
+ * somewhere, too --jgarzik
+ *
+ * smc_phy_configure() and smc_10bt_check_media() does that. --rmk
+ */
+ netif_start_queue(dev);
+ return 0;
+}
+
+/*
+ * smc_close
+ *
+ * this makes the board clean up everything that it can
+ * and not talk to the outside world. Caused by
+ * an 'ifconfig ethX down'
+ */
+static int smc_close(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ netif_stop_queue(dev);
+ netif_carrier_off(dev);
+
+ /* clear everything */
+ smc_shutdown(dev->base_addr);
+
+ if (lp->phy_type != 0)
+ smc_phy_powerdown(dev, lp->mii.phy_id);
+
+ return 0;
+}
+
+/*
+ * Get the current statistics.
+ * This may be called with the card open or closed.
+ */
+static struct net_device_stats *smc_query_statistics(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ return &lp->stats;
+}
+
+/*
+ * Ethtool support
+ */
+static int
+smc_ethtool_getsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret;
+
+ cmd->maxtxpkt = 1;
+ cmd->maxrxpkt = 1;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_ethtool_gset(&lp->mii, cmd);
+ spin_unlock_irq(&lp->lock);
+ } else {
+ cmd->supported = SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_TP | SUPPORTED_AUI;
+
+ if (lp->ctl_rspeed == 10)
+ cmd->speed = SPEED_10;
+ else if (lp->ctl_rspeed == 100)
+ cmd->speed = SPEED_100;
+
+ cmd->autoneg = AUTONEG_DISABLE;
+ cmd->transceiver = XCVR_INTERNAL;
+ cmd->port = 0;
+ cmd->duplex = lp->tcr_cur_mode & TCR_SWFDUP ? DUPLEX_FULL : DUPLEX_HALF;
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static int
+smc_ethtool_setsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_ethtool_sset(&lp->mii, cmd);
+ spin_unlock_irq(&lp->lock);
+ } else {
+ if (cmd->autoneg != AUTONEG_DISABLE ||
+ cmd->speed != SPEED_10 ||
+ (cmd->duplex != DUPLEX_HALF && cmd->duplex != DUPLEX_FULL) ||
+ (cmd->port != PORT_TP && cmd->port != PORT_AUI))
+ return -EINVAL;
+
+// lp->port = cmd->port;
+ lp->ctl_rfduplx = cmd->duplex == DUPLEX_FULL;
+
+// if (netif_running(dev))
+// smc_set_port(dev);
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static void
+smc_ethtool_getdrvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ strncpy(info->driver, CARDNAME, sizeof(info->driver));
+ strncpy(info->version, version, sizeof(info->version));
+ strncpy(info->bus_info, dev->class_dev.dev->bus_id, sizeof(info->bus_info));
+}
+
+static int smc_ethtool_nwayreset(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret = -EINVAL;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_nway_restart(&lp->mii);
+ spin_unlock_irq(&lp->lock);
+ }
+
+ return ret;
+}
+
+static u32 smc_ethtool_getmsglevel(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ return lp->msg_enable;
+}
+
+static void smc_ethtool_setmsglevel(struct net_device *dev, u32 level)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ lp->msg_enable = level;
+}
+
+static struct ethtool_ops smc_ethtool_ops = {
+ .get_settings = smc_ethtool_getsettings,
+ .set_settings = smc_ethtool_setsettings,
+ .get_drvinfo = smc_ethtool_getdrvinfo,
+
+ .get_msglevel = smc_ethtool_getmsglevel,
+ .set_msglevel = smc_ethtool_setmsglevel,
+ .nway_reset = smc_ethtool_nwayreset,
+ .get_link = ethtool_op_get_link,
+// .get_eeprom = smc_ethtool_geteeprom,
+// .set_eeprom = smc_ethtool_seteeprom,
+};
+
+/*
+ * smc_findirq
+ *
+ * This routine has a simple purpose -- make the SMC chip generate an
+ * interrupt, so an auto-detect routine can detect it and find the IRQ.
+ */
+/*
+ * does this still work?
+ *
+ * I just deleted auto_irq.c, since it was never built...
+ * --jgarzik
+ */
+static int __init smc_findirq(unsigned long ioaddr)
+{
+ int timeout = 20;
+ unsigned long cookie;
+
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ cookie = probe_irq_on();
+
+ /*
+ * What I try to do here is trigger an ALLOC_INT. This is done
+ * by allocating a small chunk of memory, which will give an interrupt
+ * when done.
+ */
+ /* enable ALLOCation interrupts ONLY */
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(IM_ALLOC_INT);
+
+ /*
+ * Allocate 512 bytes of memory. Note that the chip was just
+ * reset so all the memory is available
+ */
+ SMC_SET_MMU_CMD(MC_ALLOC | 1);
+
+ /*
+ * Wait until positive that the interrupt has been generated
+ */
+ do {
+ int int_status;
+ udelay(10);
+ int_status = SMC_GET_INT();
+ if (int_status & IM_ALLOC_INT)
+ break; /* got the interrupt */
+ } while (--timeout);
+
+ /*
+ * there is really nothing that I can do here if timeout fails,
+ * as probe_irq_off will return a 0 anyway, which is what I
+ * want in this case. Plus, the clean up is needed in both
+ * cases.
+ */
+
+ /* and disable all interrupts again */
+ SMC_SET_INT_MASK(0);
+
+ /* and return what I found */
+ return probe_irq_off(cookie);
+}
+
+/*
+ * Function: smc_probe(struct net_device *dev, unsigned long ioaddr)
+ *
+ * Purpose:
+ * Tests to see if a given ioaddr points to an SMC91x chip.
+ * Returns a 0 on success
+ *
+ * Algorithm:
+ * (1) see if the high byte of BANK_SELECT is 0x33
+ * (2) compare the ioaddr with the base register's address
+ * (3) see if I recognize the chip ID in the appropriate register
+ *
+ * Here I do typical initialization tasks.
+ *
+ * o Initialize the structure if needed
+ * o print out my vanity message if not done so already
+ * o print out what type of hardware is detected
+ * o print out the ethernet address
+ * o find the IRQ
+ * o set up my private data
+ * o configure the dev structure with my subroutines
+ * o actually GRAB the irq.
+ * o GRAB the region
+ */
+static int __init smc_probe(struct net_device *dev, unsigned long ioaddr)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ static int version_printed = 0;
+ int i, retval;
+ unsigned int val, revision_register;
+ const char *version_string;
+
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ /* First, see if the high byte is 0x33 */
+ val = SMC_CURRENT_BANK();
+ DBG(2, "%s: bank signature probe returned 0x%04x\n", CARDNAME, val);
+ if ((val & 0xFF00) != 0x3300) {
+ if ((val & 0xFF) == 0x33) {
+ printk(KERN_WARNING
+ "%s: Detected possible byte-swapped interface"
+ " at IOADDR 0x%lx\n", CARDNAME, ioaddr);
+ }
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /*
+ * The above MIGHT indicate a device, but I need to write to
+ * further test this.
+ */
+ SMC_SELECT_BANK(0);
+ val = SMC_CURRENT_BANK();
+ if ((val & 0xFF00) != 0x3300) {
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /*
+ * well, we've already written once, so hopefully another
+ * time won't hurt. This time, I need to switch the bank
+ * register to bank 1, so I can access the base address
+ * register
+ */
+ SMC_SELECT_BANK(1);
+ val = SMC_GET_BASE();
+ val = ((val & 0x1F00) >> 3) << SMC_IO_SHIFT;
+ if ((ioaddr & ((PAGE_SIZE-1)<<SMC_IO_SHIFT)) != val) {
+ printk("%s: IOADDR %lx doesn't match configuration (%x).\n",
+ CARDNAME, ioaddr, val);
+ }
+
+ /*
+ * check if the revision register is something that I
+ * recognize. These might need to be added to later,
+ * as future revisions could be added.
+ */
+ SMC_SELECT_BANK(3);
+ revision_register = SMC_GET_REV();
+ DBG(2, "%s: revision = 0x%04x\n", CARDNAME, revision_register);
+ version_string = chip_ids[ (revision_register >> 4) & 0xF];
+ if (!version_string || (revision_register & 0xff00) != 0x3300) {
+ /* I don't recognize this chip, so... */
+ printk("%s: IO 0x%lx: Unrecognized revision register 0x%04x"
+ ", Contact author.\n", CARDNAME,
+ ioaddr, revision_register);
+
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /* At this point I'll assume that the chip is an SMC91x. */
+ if (version_printed++ == 0)
+ printk("%s", version);
+
+ /* fill in some of the fields */
+ dev->base_addr = ioaddr;
+ lp->version = revision_register & 0xff;
+
+ /* Get the MAC address */
+ SMC_SELECT_BANK(1);
+ SMC_GET_MAC_ADDR(dev->dev_addr);
+
+ /* now, reset the chip, and put it into a known state */
+ smc_reset(dev);
+
+ /*
+ * If dev->irq is 0, then the device has to be banged on to see
+ * what the IRQ is.
+ *
+ * This banging doesn't always detect the IRQ, for unknown reasons.
+ * A workaround is to reset the chip and try again.
+ *
+ * Interestingly, the DOS packet driver *SETS* the IRQ on the card to
+ * be what is requested on the command line. I don't do that, mostly
+ * because the card that I have uses a non-standard method of accessing
+ * the IRQs, and because this _should_ work in most configurations.
+ *
+ * Specifying an IRQ is done with the assumption that the user knows
+ * what (s)he is doing. No checking is done!!!!
+ */
+ if (dev->irq < 1) {
+ int trials;
+
+ trials = 3;
+ while (trials--) {
+ dev->irq = smc_findirq(ioaddr);
+ if (dev->irq)
+ break;
+ /* kick the card and try again */
+ smc_reset(dev);
+ }
+ }
+ if (dev->irq == 0) {
+ printk("%s: Couldn't autodetect your IRQ. Use irq=xx.\n",
+ dev->name);
+ retval = -ENODEV;
+ goto err_out;
+ }
+ dev->irq = irq_canonicalize(dev->irq);
+
+ /* Fill in the fields of the device structure with ethernet values. */
+ ether_setup(dev);
+
+ dev->open = smc_open;
+ dev->stop = smc_close;
+ dev->hard_start_xmit = smc_hard_start_xmit;
+ dev->tx_timeout = smc_timeout;
+ dev->watchdog_timeo = msecs_to_jiffies(watchdog);
+ dev->get_stats = smc_query_statistics;
+ dev->set_multicast_list = smc_set_multicast_list;
+ dev->ethtool_ops = &smc_ethtool_ops;
+
+ spin_lock_init(&lp->lock);
+ lp->mii.phy_id_mask = 0x1f;
+ lp->mii.reg_num_mask = 0x1f;
+ lp->mii.force_media = 0;
+ lp->mii.full_duplex = 0;
+ lp->mii.dev = dev;
+ lp->mii.mdio_read = smc_phy_read;
+ lp->mii.mdio_write = smc_phy_write;
+
+ /*
+ * Locate the phy, if any.
+ */
+ if (lp->version >= (CHIP_91100 << 4))
+ smc_detect_phy(dev);
+
+ /* Set default parameters */
+ lp->msg_enable = NETIF_MSG_LINK;
+ lp->ctl_rfduplx = 0;
+ lp->ctl_rspeed = 10;
+
+ if (lp->version >= (CHIP_91100 << 4)) {
+ lp->ctl_rfduplx = 1;
+ lp->ctl_rspeed = 100;
+ }
+
+ /* Grab the IRQ */
+ retval = request_irq(dev->irq, &smc_interrupt, 0, dev->name, dev);
+ if (retval)
+ goto err_out;
+
+ set_irq_type(dev->irq, IRQT_RISING);
+#ifdef SMC_USE_PXA_DMA
+ {
+ int dma = pxa_request_dma(dev->name, DMA_PRIO_LOW,
+ smc_pxa_dma_irq, NULL);
+ if (dma >= 0)
+ dev->dma = dma;
+ }
+#endif
+
+ retval = register_netdev(dev);
+ if (retval == 0) {
+ /* now, print out the card info, in a short format. */
+ printk("%s: %s (rev %d) at %#lx IRQ %d",
+ dev->name, version_string, revision_register & 0x0f,
+ dev->base_addr, dev->irq);
+
+ if (dev->dma != (unsigned char)-1)
+ printk(" DMA %d", dev->dma);
+
+ printk("%s%s\n", nowait ? " [nowait]" : "",
+ THROTTLE_TX_PKTS ? " [throttle_tx]" : "");
+
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ printk("%s: Invalid ethernet MAC address. Please "
+ "set using ifconfig\n", dev->name);
+ } else {
+ /* Print the Ethernet address */
+ printk("%s: Ethernet addr: ", dev->name);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x\n", dev->dev_addr[5]);
+ }
+
+ if (lp->phy_type == 0) {
+ PRINTK("%s: No PHY found\n", dev->name);
+ } else if ((lp->phy_type & 0xfffffff0) == 0x0016f840) {
+ PRINTK("%s: PHY LAN83C183 (LAN91C111 Internal)\n", dev->name);
+ } else if ((lp->phy_type & 0xfffffff0) == 0x02821c50) {
+ PRINTK("%s: PHY LAN83C180\n", dev->name);
+ }
+ }
+
+err_out:
+#ifdef SMC_USE_PXA_DMA
+ if (retval && dev->dma != (unsigned char)-1)
+ pxa_free_dma(dev->dma);
+#endif
+ return retval;
+}
+
+static int smc_enable_device(unsigned long attrib_phys)
+{
+ unsigned long flags;
+ unsigned char ecor, ecsr;
+ void *addr;
+
+ /*
+ * Map the attribute space. This is overkill, but clean.
+ */
+ addr = ioremap(attrib_phys, ATTRIB_SIZE);
+ if (!addr)
+ return -ENOMEM;
+
+ /*
+ * Reset the device. We must disable IRQs around this
+ * since a reset causes the IRQ line to become active.
+ */
+ local_irq_save(flags);
+ ecor = readb(addr + (ECOR << SMC_IO_SHIFT)) & ~ECOR_RESET;
+ writeb(ecor | ECOR_RESET, addr + (ECOR << SMC_IO_SHIFT));
+ readb(addr + (ECOR << SMC_IO_SHIFT));
+
+ /*
+ * Wait 100us for the chip to reset.
+ */
+ udelay(100);
+
+ /*
+ * The device will ignore all writes to the enable bit while
+ * reset is asserted, even if the reset bit is cleared in the
+ * same write. Must clear reset first, then enable the device.
+ */
+ writeb(ecor, addr + (ECOR << SMC_IO_SHIFT));
+ writeb(ecor | ECOR_ENABLE, addr + (ECOR << SMC_IO_SHIFT));
+
+ /*
+ * Set the appropriate byte/word mode.
+ */
+ ecsr = readb(addr + (ECSR << SMC_IO_SHIFT)) & ~ECSR_IOIS8;
+#ifndef SMC_CAN_USE_16BIT
+ ecsr |= ECSR_IOIS8;
+#endif
+ writeb(ecsr, addr + (ECSR << SMC_IO_SHIFT));
+ local_irq_restore(flags);
+
+ iounmap(addr);
+
+ /*
+ * Wait for the chip to wake up. We could poll the control
+ * register in the main register space, but that isn't mapped
+ * yet. We know this is going to take 750us.
+ */
+ msleep(1);
+
+ return 0;
+}
+
+/*
+ * smc_drv_probe(dev)
+ * Probe the platform device: request the register region,
+ * map it, and let smc_probe() detect and set up the chip.
+ *
+ * Output:
+ * 0 --> there is a device
+ * anything else, error
+ */
+static int smc_drv_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev;
+ struct resource *res, *ext = NULL;
+ unsigned int *addr;
+ int ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ /*
+ * Request the regions.
+ */
+ if (!request_mem_region(res->start, SMC_IO_EXTENT, "smc91x")) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ ndev = alloc_etherdev(sizeof(struct smc_local));
+ if (!ndev) {
+ printk("%s: could not allocate device.\n", CARDNAME);
+ ret = -ENOMEM;
+ goto release_1;
+ }
+ SET_MODULE_OWNER(ndev);
+ SET_NETDEV_DEV(ndev, dev);
+
+ ndev->dma = (unsigned char)-1;
+ ndev->irq = platform_get_irq(pdev, 0);
+
+ ext = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (ext) {
+ if (!request_mem_region(ext->start, ATTRIB_SIZE, ndev->name)) {
+ ret = -EBUSY;
+ goto release_1;
+ }
+
+#if defined(CONFIG_SA1100_ASSABET)
+ NCR_0 |= NCR_ENET_OSC_EN;
+#endif
+
+ ret = smc_enable_device(ext->start);
+ if (ret)
+ goto release_both;
+ }
+
+ addr = ioremap(res->start, SMC_IO_EXTENT);
+ if (!addr) {
+ ret = -ENOMEM;
+ goto release_both;
+ }
+
+ dev_set_drvdata(dev, ndev);
+ ret = smc_probe(ndev, (unsigned long)addr);
+ if (ret != 0) {
+ dev_set_drvdata(dev, NULL);
+ iounmap(addr);
+ release_both:
+ if (ext)
+ release_mem_region(ext->start, ATTRIB_SIZE);
+ free_netdev(ndev);
+ release_1:
+ release_mem_region(res->start, SMC_IO_EXTENT);
+ out:
+ printk("%s: not found (%d).\n", CARDNAME, ret);
+ }
+#ifdef SMC_USE_PXA_DMA
+ else {
+ struct smc_local *lp = netdev_priv(ndev);
+ lp->physaddr = res->start;
+ }
+#endif
+
+ return ret;
+}
+
+static int smc_drv_remove(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct resource *res;
+
+ dev_set_drvdata(dev, NULL);
+
+ unregister_netdev(ndev);
+
+ free_irq(ndev->irq, ndev);
+
+#ifdef SMC_USE_PXA_DMA
+ if (ndev->dma != (unsigned char)-1)
+ pxa_free_dma(ndev->dma);
+#endif
+ iounmap((void *)ndev->base_addr);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res)
+ release_mem_region(res->start, ATTRIB_SIZE);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(res->start, SMC_IO_EXTENT);
+
+ free_netdev(ndev);
+
+ return 0;
+}
+
+static int smc_drv_suspend(struct device *dev, u32 state, u32 level)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+
+ if (ndev && level == SUSPEND_DISABLE) {
+ if (netif_running(ndev)) {
+ netif_device_detach(ndev);
+ smc_shutdown(ndev->base_addr);
+ }
+ }
+ return 0;
+}
+
+static int smc_drv_resume(struct device *dev, u32 level)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev = dev_get_drvdata(dev);
+
+ if (ndev && level == RESUME_ENABLE) {
+ struct smc_local *lp = netdev_priv(ndev);
+ unsigned long ioaddr = ndev->base_addr;
+
+ if (pdev->num_resources == 3)
+ smc_enable_device(pdev->resource[2].start);
+ if (netif_running(ndev)) {
+ smc_reset(ndev);
+ smc_enable(ndev);
+ SMC_SELECT_BANK(1);
+ SMC_SET_MAC_ADDR(ndev->dev_addr);
+ if (lp->phy_type != 0)
+ smc_phy_configure(ndev);
+ netif_device_attach(ndev);
+ }
+ }
+ return 0;
+}
+
+static struct device_driver smc_driver = {
+ .name = CARDNAME,
+ .bus = &platform_bus_type,
+ .probe = smc_drv_probe,
+ .remove = smc_drv_remove,
+ .suspend = smc_drv_suspend,
+ .resume = smc_drv_resume,
+};
+
+static int __init smc_init(void)
+{
+#ifdef MODULE
+ if (io == -1)
+ printk(KERN_WARNING
+ "%s: You shouldn't use auto-probing with insmod!\n",
+ CARDNAME);
+#endif
+
+ return driver_register(&smc_driver);
+}
+
+static void __exit smc_cleanup(void)
+{
+ driver_unregister(&smc_driver);
+}
+
+module_init(smc_init);
+module_exit(smc_cleanup);
--- /dev/null
+/*------------------------------------------------------------------------
+ . smc91x.h - macros for SMSC's 91C9x/91C1xx single-chip Ethernet device.
+ .
+ . Copyright (C) 1996 by Erik Stahlman
+ . Copyright (C) 2001 Standard Microsystems Corporation
+ . Developed by Simple Network Magic Corporation
+ . Copyright (C) 2003 Monta Vista Software, Inc.
+ . Unified SMC91x driver by Nicolas Pitre
+ .
+ . This program is free software; you can redistribute it and/or modify
+ . it under the terms of the GNU General Public License as published by
+ . the Free Software Foundation; either version 2 of the License, or
+ . (at your option) any later version.
+ .
+ . This program is distributed in the hope that it will be useful,
+ . but WITHOUT ANY WARRANTY; without even the implied warranty of
+ . MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ . GNU General Public License for more details.
+ .
+ . You should have received a copy of the GNU General Public License
+ . along with this program; if not, write to the Free Software
+ . Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ .
+ . Information contained in this file was obtained from the LAN91C111
+ . manual from SMC. To get a copy, if you really want one, you can find
+ . information under www.smsc.com.
+ .
+ . Authors
+ . Erik Stahlman <erik@vt.edu>
+ . Daris A Nevil <dnevil@snmc.com>
+ . Nicolas Pitre <nico@cam.org>
+ .
+ ---------------------------------------------------------------------------*/
+#ifndef _SMC91X_H_
+#define _SMC91X_H_
+
+
+/*
+ * Define your architecture specific bus configuration parameters here.
+ */
+
+#if defined(CONFIG_SA1100_GRAPHICSCLIENT) || \
+ defined(CONFIG_SA1100_PFS168) || \
+ defined(CONFIG_SA1100_FLEXANET) || \
+ defined(CONFIG_SA1100_GRAPHICSMASTER) || \
+ defined(CONFIG_ARCH_LUBBOCK)
+
+/* We can only do 16-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+/* The first two address lines aren't connected... */
+#define SMC_IO_SHIFT 2
+
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
+
+#elif defined(CONFIG_REDWOOD_5) || defined(CONFIG_REDWOOD_6)
+
+/* We can only do 16-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+#define SMC_IO_SHIFT 0
+
+#define SMC_inw(a, r) in_be16((volatile u16 *)((a) + (r)))
+#define SMC_outw(v, a, r) out_be16((volatile u16 *)((a) + (r)), v)
+#define SMC_insw(a, r, p, l) \
+ do { \
+ unsigned long __port = (a) + (r); \
+ u16 *__p = (u16 *)(p); \
+ int __l = (l); \
+ insw(__port, __p, __l); \
+ while (__l > 0) { \
+ *__p = swab16(*__p); \
+ __p++; \
+ __l--; \
+ } \
+ } while (0)
+#define SMC_outsw(a, r, p, l) \
+ do { \
+ unsigned long __port = (a) + (r); \
+ u16 *__p = (u16 *)(p); \
+ int __l = (l); \
+ while (__l > 0) { \
+ /* Believe it or not, the swab isn't needed. */ \
+ outw( /* swab16 */ (*__p++), __port); \
+ __l--; \
+ } \
+ } while (0)
+#define set_irq_type(irq, type)
+
+#elif defined(CONFIG_SA1100_ASSABET)
+
+#include <asm/arch/neponset.h>
+
+/* We can only do 8-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 0
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+/* The first two address lines aren't connected... */
+#define SMC_IO_SHIFT 2
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l))
+#define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l))
+
+#elif defined(CONFIG_ARCH_INNOKOM) || \
+ defined(CONFIG_MACH_MAINSTONE) || \
+ defined(CONFIG_ARCH_PXA_IDP) || \
+ defined(CONFIG_ARCH_RAMSES)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_IO_SHIFT 0
+#define SMC_NOWAIT 1
+#define SMC_USE_PXA_DMA 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+/* We actually can't write halfwords properly if not word aligned */
+static inline void
+SMC_outw(u16 val, unsigned long ioaddr, int reg)
+{
+ if (reg & 2) {
+ unsigned int v = val << 16;
+ v |= readl(ioaddr + (reg & ~2)) & 0xffff;
+ writel(v, ioaddr + (reg & ~2));
+ } else {
+ writew(val, ioaddr + reg);
+ }
+}
+
+#elif defined(CONFIG_ISA)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+
+#define SMC_inb(a, r) inb((a) + (r))
+#define SMC_inw(a, r) inw((a) + (r))
+#define SMC_outb(v, a, r) outb(v, (a) + (r))
+#define SMC_outw(v, a, r) outw(v, (a) + (r))
+#define SMC_insw(a, r, p, l) insw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) outsw((a) + (r), p, l)
+
+#else
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_NOWAIT 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+#define RPC_LSA_DEFAULT RPC_LED_100_10
+#define RPC_LSB_DEFAULT RPC_LED_TX_RX
+
+#endif
+
+
+#ifdef SMC_USE_PXA_DMA
+/*
+ * Let's use the DMA engine on the XScale PXA2xx for RX packets. This is
+ * always happening in irq context so no need to worry about races. TX is
+ * different and probably not worth it for that reason, and not as critical
+ * as RX which can overrun memory and lose packets.
+ */
+#include <linux/pci.h>
+#include <asm/dma.h>
+
+#ifdef SMC_insl
+#undef SMC_insl
+#define SMC_insl(a, r, p, l) \
+ smc_pxa_dma_insl(a, lp->physaddr, r, dev->dma, p, l)
+static inline void
+smc_pxa_dma_insl(u_long ioaddr, u_long physaddr, int reg, int dma,
+ u_char *buf, int len)
+{
+ dma_addr_t dmabuf;
+
+ /* fallback if no DMA available */
+ if (dma == (unsigned char)-1) {
+ readsl(ioaddr + reg, buf, len);
+ return;
+ }
+
+ /* 64 bit alignment is required for memory to memory DMA */
+ if ((long)buf & 4) {
+ *((u32 *)buf)++ = SMC_inl(ioaddr, reg);
+ len--;
+ }
+
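+ /* len counts 32-bit words; convert to a byte count for the DMA mapping and descriptor */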
+ len *= 4;
+ dmabuf = dma_map_single(NULL, buf, len, PCI_DMA_FROMDEVICE);
+ DCSR(dma) = DCSR_NODESC;
+ DTADR(dma) = dmabuf;
+ DSADR(dma) = physaddr + reg;
+ DCMD(dma) = (DCMD_INCTRGADDR | DCMD_BURST32 |
+ DCMD_WIDTH4 | (DCMD_LENGTH & len));
+ DCSR(dma) = DCSR_NODESC | DCSR_RUN;
+ while (!(DCSR(dma) & DCSR_STOPSTATE));
+ DCSR(dma) = 0;
+ dma_unmap_single(NULL, dmabuf, len, PCI_DMA_FROMDEVICE);
+}
+#endif
+
+#ifdef SMC_insw
+#undef SMC_insw
+#define SMC_insw(a, r, p, l) \
+ smc_pxa_dma_insw(a, lp->physaddr, r, dev->dma, p, l)
+static inline void
+smc_pxa_dma_insw(u_long ioaddr, u_long physaddr, int reg, int dma,
+ u_char *buf, int len)
+{
+ dma_addr_t dmabuf;
+
+ /* fallback if no DMA available */
+ if (dma == (unsigned char)-1) {
+ readsw(ioaddr + reg, buf, len);
+ return;
+ }
+
+ /* 64 bit alignment is required for memory to memory DMA */
+ while ((long)buf & 6) {
+ *((u16 *)buf)++ = SMC_inw(ioaddr, reg);
+ len--;
+ }
+
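+ /* len counts 16-bit words; convert to a byte count for the DMA mapping and descriptor */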
+ len *= 2;
+ dmabuf = dma_map_single(NULL, buf, len, PCI_DMA_FROMDEVICE);
+ DCSR(dma) = DCSR_NODESC;
+ DTADR(dma) = dmabuf;
+ DSADR(dma) = physaddr + reg;
+ DCMD(dma) = (DCMD_INCTRGADDR | DCMD_BURST32 |
+ DCMD_WIDTH2 | (DCMD_LENGTH & len));
+ DCSR(dma) = DCSR_NODESC | DCSR_RUN;
+ while (!(DCSR(dma) & DCSR_STOPSTATE));
+ DCSR(dma) = 0;
+ dma_unmap_single(NULL, dmabuf, len, PCI_DMA_FROMDEVICE);
+}
+#endif
+
+static void
+smc_pxa_dma_irq(int dma, void *dummy, struct pt_regs *regs)
+{
+ DCSR(dma) = 0;
+}
+#endif /* SMC_USE_PXA_DMA */
+
+
+/* Because of bank switching, the LAN91x uses only 16 I/O ports */
+#ifndef SMC_IO_SHIFT
+#define SMC_IO_SHIFT 0
+#endif
+#define SMC_IO_EXTENT (16 << SMC_IO_SHIFT)
+
+
+/*
+ . Bank Select Register:
+ .
+ . yyyy yyyy 0000 00xx
+ . xx = bank number
+ . yyyy yyyy = 0x33, for identification purposes.
+*/
+#define BANK_SELECT (14 << SMC_IO_SHIFT)
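+/* e.g. a BSR value of 0x3302 gives the 0x33 signature with bank 2 selected */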
+
+
+// Transmit Control Register
+/* BANK 0 */
+#define TCR_REG SMC_REG(0x0000, 0)
+#define TCR_ENABLE 0x0001 // When 1 we can transmit
+#define TCR_LOOP 0x0002 // Controls output pin LBK
+#define TCR_FORCOL 0x0004 // When 1 will force a collision
+#define TCR_PAD_EN 0x0080 // When 1 will pad tx frames < 64 bytes w/0
+#define TCR_NOCRC 0x0100 // When 1 will not append CRC to tx frames
+#define TCR_MON_CSN 0x0400 // When 1 tx monitors carrier
+#define TCR_FDUPLX 0x0800 // When 1 enables full duplex operation
+#define TCR_STP_SQET 0x1000 // When 1 stops tx if Signal Quality Error
+#define TCR_EPH_LOOP 0x2000 // When 1 enables EPH block loopback
+#define TCR_SWFDUP 0x8000 // When 1 enables Switched Full Duplex mode
+
+#define TCR_CLEAR 0 /* do NOTHING */
+/* the default settings for the TCR register : */
+#define TCR_DEFAULT (TCR_ENABLE | TCR_PAD_EN)
+
+
+// EPH Status Register
+/* BANK 0 */
+#define EPH_STATUS_REG SMC_REG(0x0002, 0)
+#define ES_TX_SUC 0x0001 // Last TX was successful
+#define ES_SNGL_COL 0x0002 // Single collision detected for last tx
+#define ES_MUL_COL 0x0004 // Multiple collisions detected for last tx
+#define ES_LTX_MULT 0x0008 // Last tx was a multicast
+#define ES_16COL 0x0010 // 16 Collisions Reached
+#define ES_SQET 0x0020 // Signal Quality Error Test
+#define ES_LTXBRD 0x0040 // Last tx was a broadcast
+#define ES_TXDEFR 0x0080 // Transmit Deferred
+#define ES_LATCOL 0x0200 // Late collision detected on last tx
+#define ES_LOSTCARR 0x0400 // Lost Carrier Sense
+#define ES_EXC_DEF 0x0800 // Excessive Deferral
+#define ES_CTR_ROL 0x1000 // Counter Roll Over indication
+#define ES_LINK_OK 0x4000 // Driven by inverted value of nLNK pin
+#define ES_TXUNRN 0x8000 // Tx Underrun
+
+
+// Receive Control Register
+/* BANK 0 */
+#define RCR_REG SMC_REG(0x0004, 0)
+#define RCR_RX_ABORT 0x0001 // Set if a rx frame was aborted
+#define RCR_PRMS 0x0002 // Enable promiscuous mode
+#define RCR_ALMUL 0x0004 // When set accepts all multicast frames
+#define RCR_RXEN 0x0100 // IFF this is set, we can receive packets
+#define RCR_STRIP_CRC 0x0200 // When set strips CRC from rx packets
+#define RCR_ABORT_ENB 0x0200 // When set will abort rx on collision
+#define RCR_FILT_CAR 0x0400 // When set filters leading 12 bits of carrier
+#define RCR_SOFTRST 0x8000 // resets the chip
+
+/* the normal settings for the RCR register : */
+#define RCR_DEFAULT (RCR_STRIP_CRC | RCR_RXEN)
+#define RCR_CLEAR 0x0 // set it to a base state
+
+
+// Counter Register
+/* BANK 0 */
+#define COUNTER_REG SMC_REG(0x0006, 0)
+
+
+// Memory Information Register
+/* BANK 0 */
+#define MIR_REG SMC_REG(0x0008, 0)
+
+
+// Receive/Phy Control Register
+/* BANK 0 */
+#define RPC_REG SMC_REG(0x000A, 0)
+#define RPC_SPEED 0x2000 // When 1 PHY is in 100Mbps mode.
+#define RPC_DPLX 0x1000 // When 1 PHY is in Full-Duplex Mode
+#define RPC_ANEG 0x0800 // When 1 PHY is in Auto-Negotiate Mode
+#define RPC_LSXA_SHFT 5 // Bits to shift LS2A,LS1A,LS0A to lsb
+#define RPC_LSXB_SHFT 2 // Bits to get LS2B,LS1B,LS0B to lsb
+#define RPC_LED_100_10 (0x00) // LED = 100Mbps OR'd with 10Mbps link detect
+#define RPC_LED_RES (0x01) // LED = Reserved
+#define RPC_LED_10 (0x02) // LED = 10Mbps link detect
+#define RPC_LED_FD (0x03) // LED = Full Duplex Mode
+#define RPC_LED_TX_RX (0x04) // LED = TX or RX packet occurred
+#define RPC_LED_100 (0x05) // LED = 100Mbps link detect
+#define RPC_LED_TX (0x06) // LED = TX packet occurred
+#define RPC_LED_RX (0x07) // LED = RX packet occurred
+
+#ifndef RPC_LSA_DEFAULT
+#define RPC_LSA_DEFAULT RPC_LED_100
+#endif
+#ifndef RPC_LSB_DEFAULT
+#define RPC_LSB_DEFAULT RPC_LED_FD
+#endif
+
+#define RPC_DEFAULT (RPC_ANEG | (RPC_LSA_DEFAULT << RPC_LSXA_SHFT) | (RPC_LSB_DEFAULT << RPC_LSXB_SHFT) | RPC_SPEED | RPC_DPLX)
+
+
+/* Bank 0 0x0C is reserved */
+
+// Bank Select Register
+/* All Banks */
+#define BSR_REG 0x000E
+
+
+// Configuration Reg
+/* BANK 1 */
+#define CONFIG_REG SMC_REG(0x0000, 1)
+#define CONFIG_EXT_PHY 0x0200 // 1=external MII, 0=internal Phy
+#define CONFIG_GPCNTRL 0x0400 // Inverse value drives pin nCNTRL
+#define CONFIG_NO_WAIT 0x1000 // When 1 no extra wait states on ISA bus
+#define CONFIG_EPH_POWER_EN 0x8000 // When 0 EPH is placed into low power mode.
+
+// Default is powered-up, Internal Phy, Wait States, and pin nCNTRL=low
+#define CONFIG_DEFAULT (CONFIG_EPH_POWER_EN)
+
+
+// Base Address Register
+/* BANK 1 */
+#define BASE_REG SMC_REG(0x0002, 1)
+
+
+// Individual Address Registers
+/* BANK 1 */
+#define ADDR0_REG SMC_REG(0x0004, 1)
+#define ADDR1_REG SMC_REG(0x0006, 1)
+#define ADDR2_REG SMC_REG(0x0008, 1)
+
+
+// General Purpose Register
+/* BANK 1 */
+#define GP_REG SMC_REG(0x000A, 1)
+
+
+// Control Register
+/* BANK 1 */
+#define CTL_REG SMC_REG(0x000C, 1)
+#define CTL_RCV_BAD 0x4000 // When 1 bad CRC packets are received
+#define CTL_AUTO_RELEASE 0x0800 // When 1 tx pages are released automatically
+#define CTL_LE_ENABLE 0x0080 // When 1 enables Link Error interrupt
+#define CTL_CR_ENABLE 0x0040 // When 1 enables Counter Rollover interrupt
+#define CTL_TE_ENABLE 0x0020 // When 1 enables Transmit Error interrupt
+#define CTL_EEPROM_SELECT 0x0004 // Controls EEPROM reload & store
+#define CTL_RELOAD 0x0002 // When set reads EEPROM into registers
+#define CTL_STORE 0x0001 // When set stores registers into EEPROM
+
+
+// MMU Command Register
+/* BANK 2 */
+#define MMU_CMD_REG SMC_REG(0x0000, 2)
+#define MC_BUSY 1 // When 1 the last release has not completed
+#define MC_NOP (0<<5) // No Op
+#define MC_ALLOC (1<<5) // OR with number of 256 byte packets
+#define MC_RESET (2<<5) // Reset MMU to initial state
+#define MC_REMOVE (3<<5) // Remove the current rx packet
+#define MC_RELEASE (4<<5) // Remove and release the current rx packet
+#define MC_FREEPKT (5<<5) // Release packet in PNR register
+#define MC_ENQUEUE (6<<5) // Enqueue the packet for transmit
+#define MC_RSTTXFIFO (7<<5) // Reset the TX FIFOs
+
+
+// Packet Number Register
+/* BANK 2 */
+#define PN_REG SMC_REG(0x0002, 2)
+
+
+// Allocation Result Register
+/* BANK 2 */
+#define AR_REG SMC_REG(0x0003, 2)
+#define AR_FAILED 0x80 // Allocation Failed
+
+
+// TX FIFO Ports Register
+/* BANK 2 */
+#define TXFIFO_REG SMC_REG(0x0004, 2)
+#define TXFIFO_TEMPTY 0x80 // TX FIFO Empty
+
+// RX FIFO Ports Register
+/* BANK 2 */
+#define RXFIFO_REG SMC_REG(0x0005, 2)
+#define RXFIFO_REMPTY 0x80 // RX FIFO Empty
+
+#define FIFO_REG SMC_REG(0x0004, 2)
+
+// Pointer Register
+/* BANK 2 */
+#define PTR_REG SMC_REG(0x0006, 2)
+#define PTR_RCV 0x8000 // 1=Receive area, 0=Transmit area
+#define PTR_AUTOINC 0x4000 // Auto increment the pointer on each access
+#define PTR_READ 0x2000 // When 1 the operation is a read
+
+
+// Data Register
+/* BANK 2 */
+#define DATA_REG SMC_REG(0x0008, 2)
+
+
+// Interrupt Status/Acknowledge Register
+/* BANK 2 */
+#define INT_REG SMC_REG(0x000C, 2)
+
+
+// Interrupt Mask Register
+/* BANK 2 */
+#define IM_REG SMC_REG(0x000D, 2)
+#define IM_MDINT 0x80 // PHY MI Register 18 Interrupt
+#define IM_ERCV_INT 0x40 // Early Receive Interrupt
+#define IM_EPH_INT 0x20 // Set by Ethernet Protocol Handler section
+#define IM_RX_OVRN_INT 0x10 // Set by Receiver Overruns
+#define IM_ALLOC_INT 0x08 // Set when allocation request is completed
+#define IM_TX_EMPTY_INT 0x04 // Set if the TX FIFO goes empty
+#define IM_TX_INT 0x02 // Transmit Interrupt
+#define IM_RCV_INT 0x01 // Receive Interrupt
+
+
+// Multicast Table Registers
+/* BANK 3 */
+#define MCAST_REG1 SMC_REG(0x0000, 3)
+#define MCAST_REG2 SMC_REG(0x0002, 3)
+#define MCAST_REG3 SMC_REG(0x0004, 3)
+#define MCAST_REG4 SMC_REG(0x0006, 3)
+
+
+// Management Interface Register (MII)
+/* BANK 3 */
+#define MII_REG SMC_REG(0x0008, 3)
+#define MII_MSK_CRS100 0x4000 // Disables CRS100 detection during half-duplex tx
+#define MII_MDOE 0x0008 // MII Output Enable
+#define MII_MCLK 0x0004 // MII Clock, pin MDCLK
+#define MII_MDI 0x0002 // MII Input, pin MDI
+#define MII_MDO 0x0001 // MII Output, pin MDO
+
+
+// Revision Register
+/* BANK 3 */
+/* ( hi: chip id low: rev # ) */
+#define REV_REG SMC_REG(0x000A, 3)
+
+
+// Early RCV Register
+/* BANK 3 */
+/* this is NOT on SMC9192 */
+#define ERCV_REG SMC_REG(0x000C, 3)
+#define ERCV_RCV_DISCRD 0x0080 // When 1 discards a packet being received
+#define ERCV_THRESHOLD 0x001F // ERCV Threshold Mask
+
+
+// External Register
+/* BANK 7 */
+#define EXT_REG SMC_REG(0x0000, 7)
+
+
+#define CHIP_9192 3
+#define CHIP_9194 4
+#define CHIP_9195 5
+#define CHIP_9196 6
+#define CHIP_91100 7
+#define CHIP_91100FD 8
+#define CHIP_91111FD 9
+
+static const char * chip_ids[ 16 ] = {
+ NULL, NULL, NULL,
+ /* 3 */ "SMC91C90/91C92",
+ /* 4 */ "SMC91C94",
+ /* 5 */ "SMC91C95",
+ /* 6 */ "SMC91C96",
+ /* 7 */ "SMC91C100",
+ /* 8 */ "SMC91C100FD",
+ /* 9 */ "SMC91C11xFD",
+ NULL, NULL, NULL,
+ NULL, NULL, NULL};
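+
+/*
+ * The chip id is the upper nibble of the revision register, so the
+ * lookup in smc_probe() is chip_ids[(revision_register >> 4) & 0xF].
+ */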
+
+
+/*
+ . Transmit status bits
+*/
+#define TS_SUCCESS 0x0001
+#define TS_LOSTCAR 0x0400
+#define TS_LATCOL 0x0200
+#define TS_16COL 0x0010
+
+/*
+ . Receive status bits
+*/
+#define RS_ALGNERR 0x8000
+#define RS_BRODCAST 0x4000
+#define RS_BADCRC 0x2000
+#define RS_ODDFRAME 0x1000
+#define RS_TOOLONG 0x0800
+#define RS_TOOSHORT 0x0400
+#define RS_MULTICAST 0x0001
+#define RS_ERRORS (RS_ALGNERR | RS_BADCRC | RS_TOOLONG | RS_TOOSHORT)
+
+
+/*
+ * PHY IDs
+ * LAN83C183 == LAN91C111 Internal PHY
+ */
+#define PHY_LAN83C183 0x0016f840
+#define PHY_LAN83C180 0x02821c50
+
+/*
+ * PHY Register Addresses (LAN91C111 Internal PHY)
+ *
+ * Generic PHY registers can be found in <linux/mii.h>
+ *
+ * These phy registers are specific to our on-board phy.
+ */
+
+// PHY Configuration Register 1
+#define PHY_CFG1_REG 0x10
+#define PHY_CFG1_LNKDIS 0x8000 // 1=Rx Link Detect Function disabled
+#define PHY_CFG1_XMTDIS 0x4000 // 1=TP Transmitter Disabled
+#define PHY_CFG1_XMTPDN 0x2000 // 1=TP Transmitter Powered Down
+#define PHY_CFG1_BYPSCR 0x0400 // 1=Bypass scrambler/descrambler
+#define PHY_CFG1_UNSCDS 0x0200 // 1=Unscramble Idle Reception Disable
+#define PHY_CFG1_EQLZR 0x0100 // 1=Rx Equalizer Disabled
+#define PHY_CFG1_CABLE 0x0080 // 1=STP(150ohm), 0=UTP(100ohm)
+#define PHY_CFG1_RLVL0 0x0040 // 1=Rx Squelch level reduced by 4.5db
+#define PHY_CFG1_TLVL_SHIFT 2 // Transmit Output Level Adjust
+#define PHY_CFG1_TLVL_MASK 0x003C
+#define PHY_CFG1_TRF_MASK 0x0003 // Transmitter Rise/Fall time
+
+
+// PHY Configuration Register 2
+#define PHY_CFG2_REG 0x11
+#define PHY_CFG2_APOLDIS 0x0020 // 1=Auto Polarity Correction disabled
+#define PHY_CFG2_JABDIS 0x0010 // 1=Jabber disabled
+#define PHY_CFG2_MREG 0x0008 // 1=Multiple register access (MII mgt)
+#define PHY_CFG2_INTMDIO 0x0004 // 1=Interrupt signaled with MDIO pulse
+
+// PHY Status Output (and Interrupt status) Register
+#define PHY_INT_REG 0x12 // Status Output (Interrupt Status)
+#define PHY_INT_INT 0x8000 // 1=bits have changed since last read
+#define PHY_INT_LNKFAIL 0x4000 // 1=Link Not detected
+#define PHY_INT_LOSSSYNC 0x2000 // 1=Descrambler has lost sync
+#define PHY_INT_CWRD 0x1000 // 1=Invalid 4B5B code detected on rx
+#define PHY_INT_SSD 0x0800 // 1=No Start Of Stream detected on rx
+#define PHY_INT_ESD 0x0400 // 1=No End Of Stream detected on rx
+#define PHY_INT_RPOL 0x0200 // 1=Reverse Polarity detected
+#define PHY_INT_JAB 0x0100 // 1=Jabber detected
+#define PHY_INT_SPDDET 0x0080 // 1=100Base-TX mode, 0=10Base-T mode
+#define PHY_INT_DPLXDET 0x0040 // 1=Device in Full Duplex
+
+// PHY Interrupt/Status Mask Register
+#define PHY_MASK_REG 0x13 // Interrupt Mask
+// Uses the same bit definitions as PHY_INT_REG
+
+
+/*
+ * SMC91C96 ethernet config and status registers.
+ * These are in the "attribute" space.
+ */
+#define ECOR 0x8000
+#define ECOR_RESET 0x80
+#define ECOR_LEVEL_IRQ 0x40
+#define ECOR_WR_ATTRIB 0x04
+#define ECOR_ENABLE 0x01
+
+#define ECSR 0x8002
+#define ECSR_IOIS8 0x20
+#define ECSR_PWRDWN 0x04
+#define ECSR_INT 0x02
+
+#define ATTRIB_SIZE ((64*1024) << SMC_IO_SHIFT)
+
+
+/*
+ * Macros to abstract register access according to the data bus
+ * capabilities. Please use those and not the in/out primitives.
+ * Note: the following macros do *not* select the bank -- this must
+ * be done separately as needed in the main code. The SMC_REG() macro
+ * only uses the bank argument for debugging purposes (when enabled).
+ */
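+/*
+ * A typical sequence, as in smc_probe() above:
+ *
+ *	SMC_SELECT_BANK(3);
+ *	revision_register = SMC_GET_REV();
+ */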
+
+#if SMC_DEBUG > 0
+#define SMC_REG(reg, bank) \
+ ({ \
+ int __b = SMC_CURRENT_BANK(); \
+ if (unlikely((__b & ~0xf0) != (0x3300 | bank))) { \
+ printk( "%s: bank reg screwed (0x%04x)\n", \
+ CARDNAME, __b ); \
+ BUG(); \
+ } \
+ reg<<SMC_IO_SHIFT; \
+ })
+#else
+#define SMC_REG(reg, bank) (reg<<SMC_IO_SHIFT)
+#endif
+
+#if SMC_CAN_USE_8BIT
+#define SMC_GET_PN() SMC_inb( ioaddr, PN_REG )
+#define SMC_SET_PN(x) SMC_outb( x, ioaddr, PN_REG )
+#define SMC_GET_AR() SMC_inb( ioaddr, AR_REG )
+#define SMC_GET_TXFIFO() SMC_inb( ioaddr, TXFIFO_REG )
+#define SMC_GET_RXFIFO() SMC_inb( ioaddr, RXFIFO_REG )
+#define SMC_GET_INT() SMC_inb( ioaddr, INT_REG )
+#define SMC_ACK_INT(x) SMC_outb( x, ioaddr, INT_REG )
+#define SMC_GET_INT_MASK() SMC_inb( ioaddr, IM_REG )
+#define SMC_SET_INT_MASK(x) SMC_outb( x, ioaddr, IM_REG )
+#else
+#define SMC_GET_PN() (SMC_inw( ioaddr, PN_REG ) & 0xFF)
+#define SMC_SET_PN(x) SMC_outw( x, ioaddr, PN_REG )
+#define SMC_GET_AR() (SMC_inw( ioaddr, PN_REG ) >> 8)
+#define SMC_GET_TXFIFO() (SMC_inw( ioaddr, TXFIFO_REG ) & 0xFF)
+#define SMC_GET_RXFIFO() (SMC_inw( ioaddr, TXFIFO_REG ) >> 8)
+#define SMC_GET_INT() (SMC_inw( ioaddr, INT_REG ) & 0xFF)
+#define SMC_ACK_INT(x) \
+ do { \
+ unsigned long __flags; \
+ int __mask; \
+ local_irq_save(__flags); \
+ __mask = SMC_inw( ioaddr, INT_REG ) & ~0xff; \
+ SMC_outw( __mask | (x), ioaddr, INT_REG ); \
+ local_irq_restore(__flags); \
+ } while (0)
+#define SMC_GET_INT_MASK() (SMC_inw( ioaddr, INT_REG ) >> 8)
+#define SMC_SET_INT_MASK(x) SMC_outw( (x) << 8, ioaddr, INT_REG )
+#endif
+
+#define SMC_CURRENT_BANK() SMC_inw( ioaddr, BANK_SELECT )
+#define SMC_SELECT_BANK(x) SMC_outw( x, ioaddr, BANK_SELECT )
+#define SMC_GET_BASE() SMC_inw( ioaddr, BASE_REG )
+#define SMC_SET_BASE(x) SMC_outw( x, ioaddr, BASE_REG )
+#define SMC_GET_CONFIG() SMC_inw( ioaddr, CONFIG_REG )
+#define SMC_SET_CONFIG(x) SMC_outw( x, ioaddr, CONFIG_REG )
+#define SMC_GET_COUNTER() SMC_inw( ioaddr, COUNTER_REG )
+#define SMC_GET_CTL() SMC_inw( ioaddr, CTL_REG )
+#define SMC_SET_CTL(x) SMC_outw( x, ioaddr, CTL_REG )
+#define SMC_GET_MII() SMC_inw( ioaddr, MII_REG )
+#define SMC_SET_MII(x) SMC_outw( x, ioaddr, MII_REG )
+#define SMC_GET_MIR() SMC_inw( ioaddr, MIR_REG )
+#define SMC_SET_MIR(x) SMC_outw( x, ioaddr, MIR_REG )
+#define SMC_GET_MMU_CMD() SMC_inw( ioaddr, MMU_CMD_REG )
+#define SMC_SET_MMU_CMD(x) SMC_outw( x, ioaddr, MMU_CMD_REG )
+#define SMC_GET_FIFO() SMC_inw( ioaddr, FIFO_REG )
+#define SMC_GET_PTR() SMC_inw( ioaddr, PTR_REG )
+#define SMC_SET_PTR(x) SMC_outw( x, ioaddr, PTR_REG )
+#define SMC_GET_RCR() SMC_inw( ioaddr, RCR_REG )
+#define SMC_SET_RCR(x) SMC_outw( x, ioaddr, RCR_REG )
+#define SMC_GET_REV() SMC_inw( ioaddr, REV_REG )
+#define SMC_GET_RPC() SMC_inw( ioaddr, RPC_REG )
+#define SMC_SET_RPC(x) SMC_outw( x, ioaddr, RPC_REG )
+#define SMC_GET_TCR() SMC_inw( ioaddr, TCR_REG )
+#define SMC_SET_TCR(x) SMC_outw( x, ioaddr, TCR_REG )
+
+#ifndef SMC_GET_MAC_ADDR
+#define SMC_GET_MAC_ADDR(addr) \
+ do { \
+ unsigned int __v; \
+ __v = SMC_inw( ioaddr, ADDR0_REG ); \
+ addr[0] = __v; addr[1] = __v >> 8; \
+ __v = SMC_inw( ioaddr, ADDR1_REG ); \
+ addr[2] = __v; addr[3] = __v >> 8; \
+ __v = SMC_inw( ioaddr, ADDR2_REG ); \
+ addr[4] = __v; addr[5] = __v >> 8; \
+ } while (0)
+#endif
+
+#define SMC_SET_MAC_ADDR(addr) \
+ do { \
+ SMC_outw( addr[0]|(addr[1] << 8), ioaddr, ADDR0_REG ); \
+ SMC_outw( addr[2]|(addr[3] << 8), ioaddr, ADDR1_REG ); \
+ SMC_outw( addr[4]|(addr[5] << 8), ioaddr, ADDR2_REG ); \
+ } while (0)
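+/* note: address bytes are packed two per 16-bit register, low byte first */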
+
+#define SMC_CLEAR_MCAST() \
+ do { \
+ SMC_outw( 0, ioaddr, MCAST_REG1 ); \
+ SMC_outw( 0, ioaddr, MCAST_REG2 ); \
+ SMC_outw( 0, ioaddr, MCAST_REG3 ); \
+ SMC_outw( 0, ioaddr, MCAST_REG4 ); \
+ } while (0)
+#define SMC_SET_MCAST(x) \
+ do { \
+ unsigned char *mt = (x); \
+ SMC_outw( mt[0] | (mt[1] << 8), ioaddr, MCAST_REG1 ); \
+ SMC_outw( mt[2] | (mt[3] << 8), ioaddr, MCAST_REG2 ); \
+ SMC_outw( mt[4] | (mt[5] << 8), ioaddr, MCAST_REG3 ); \
+ SMC_outw( mt[6] | (mt[7] << 8), ioaddr, MCAST_REG4 ); \
+ } while (0)
+
+#if SMC_CAN_USE_32BIT
+/*
+ * Some setups just can't write 8 or 16 bits reliably when not aligned
+ * to a 32 bit boundary. Believe me, such hardware exists!
+ * We re-do the ones here that can be easily worked around if they can have
+ * their low parts written to 0 without adverse effects.
+ */
+#undef SMC_SELECT_BANK
+#define SMC_SELECT_BANK(x) SMC_outl( (x)<<16, ioaddr, 12<<SMC_IO_SHIFT )
+#undef SMC_SET_RPC
+#define SMC_SET_RPC(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(8, 0) )
+#undef SMC_SET_PN
+#define SMC_SET_PN(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(0, 2) )
+#undef SMC_SET_PTR
+#define SMC_SET_PTR(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(4, 2) )
+#endif
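+/*
+ * This trick works because each register handled above occupies the
+ * upper half of an aligned 32-bit word: e.g. the bank select register
+ * sits at offset 14, so a 32-bit write of (x)<<16 at offset 12 reaches
+ * it while harmlessly writing 0 to the register in the lower half.
+ */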
+
+#if SMC_CAN_USE_32BIT
+#define SMC_PUT_PKT_HDR(status, length) \
+ SMC_outl( (status) | (length) << 16, ioaddr, DATA_REG )
+#define SMC_GET_PKT_HDR(status, length) \
+ do { \
+ unsigned int __val = SMC_inl( ioaddr, DATA_REG ); \
+ (status) = __val & 0xffff; \
+ (length) = __val >> 16; \
+ } while (0)
+#else
+#define SMC_PUT_PKT_HDR(status, length) \
+ do { \
+ SMC_outw( status, ioaddr, DATA_REG ); \
+ SMC_outw( length, ioaddr, DATA_REG ); \
+ } while (0)
+#define SMC_GET_PKT_HDR(status, length) \
+ do { \
+ (status) = SMC_inw( ioaddr, DATA_REG ); \
+ (length) = SMC_inw( ioaddr, DATA_REG ); \
+ } while (0)
+#endif
+
+#if SMC_CAN_USE_32BIT
+#define SMC_PUSH_DATA(p, l) \
+ do { \
+ char *__ptr = (p); \
+ int __len = (l); \
+ if (__len >= 2 && (long)__ptr & 2) { \
+ __len -= 2; \
+ SMC_outw( *((u16 *)__ptr)++, ioaddr, DATA_REG );\
+ } \
+ SMC_outsl( ioaddr, DATA_REG, __ptr, __len >> 2); \
+ if (__len & 2) { \
+ __ptr += (__len & ~3); \
+ SMC_outw( *((u16 *)__ptr), ioaddr, DATA_REG ); \
+ } \
+ } while (0)
+#define SMC_PULL_DATA(p, l) \
+ do { \
+ char *__ptr = (p); \
+ int __len = (l); \
+ if ((long)__ptr & 2) { \
+ /* \
+ * We want 32bit alignment here. \
+ * Since some buses perform a full 32bit \
+ * fetch even for 16bit data we can't use \
+ * SMC_inw() here. Back up both the source (on-chip) \
+ * and destination pointers by 2 bytes. \
+ */ \
+ (long)__ptr &= ~2; \
+ __len += 2; \
+ SMC_SET_PTR( 2|PTR_READ|PTR_RCV|PTR_AUTOINC ); \
+ } \
+ __len += 2; \
+ SMC_insl( ioaddr, DATA_REG, __ptr, __len >> 2); \
+ } while (0)
+#elif SMC_CAN_USE_16BIT
+#define SMC_PUSH_DATA(p, l) SMC_outsw( ioaddr, DATA_REG, p, (l) >> 1 )
+#define SMC_PULL_DATA(p, l) SMC_insw ( ioaddr, DATA_REG, p, (l) >> 1 )
+#elif SMC_CAN_USE_8BIT
+#define SMC_PUSH_DATA(p, l) SMC_outsb( ioaddr, DATA_REG, p, l )
+#define SMC_PULL_DATA(p, l) SMC_insb ( ioaddr, DATA_REG, p, l )
+#endif
+
+#if ! SMC_CAN_USE_16BIT
+#define SMC_outw(x, ioaddr, reg) \
+ do { \
+ unsigned int __val16 = (x); \
+ SMC_outb( __val16, ioaddr, reg ); \
+ SMC_outb( __val16 >> 8, ioaddr, reg + (1 << SMC_IO_SHIFT));\
+ } while (0)
+#define SMC_inw(ioaddr, reg) \
+ ({ \
+ unsigned int __val16; \
+ __val16 = SMC_inb( ioaddr, reg ); \
+ __val16 |= SMC_inb( ioaddr, reg + (1 << SMC_IO_SHIFT)) << 8; \
+ __val16; \
+ })
+#endif
+
+
+#endif /* _SMC91X_H_ */
--- /dev/null
+/*
+ * This code is derived from the VIA reference driver (copyright message
+ * below) provided to Red Hat by VIA Networking Technologies, Inc. for
+ * addition to the Linux kernel.
+ *
+ * The code has been merged into one source file, cleaned up to follow
+ * Linux coding style, ported to the Linux 2.6 kernel tree and cleaned
+ * for 64bit hardware platforms.
+ *
+ * TODO
+ * Big-endian support
+ * rx_copybreak/alignment
+ * Scatter gather
+ * More testing
+ *
+ * The changes are (c) Copyright 2004, Red Hat Inc. <alan@redhat.com>
+ * Additional fixes and clean up: Francois Romieu
+ *
+ * This source has not been verified for use in safety critical systems.
+ *
+ * Please direct queries about the revamped driver to the linux-kernel
+ * list not VIA.
+ *
+ * Original code:
+ *
+ * Copyright (c) 1996, 2003 VIA Networking Technologies, Inc.
+ * All rights reserved.
+ *
+ * This software may be redistributed and/or modified under
+ * the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * Author: Chuang Liang-Shing, AJ Jiang
+ *
+ * Date: Jan 24, 2003
+ *
+ * MODULE_LICENSE("GPL");
+ *
+ */
+
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/version.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+#include <asm/io.h>
+#include <linux/if.h>
+#include <asm/uaccess.h>
+#include <linux/proc_fs.h>
+#include <linux/inetdevice.h>
+#include <linux/reboot.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/in.h>
+#include <linux/if_arp.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+
+#include "via-velocity.h"
+
+
+static int velocity_nics = 0;
+static int msglevel = MSG_LEVEL_INFO;
+
+
+static int velocity_mii_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd);
+static struct ethtool_ops velocity_ethtool_ops;
+
+/*
+ Define module options
+*/
+
+MODULE_AUTHOR("VIA Networking Technologies, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("VIA Networking Velocity Family Gigabit Ethernet Adapter Driver");
+
+#define VELOCITY_PARAM(N,D) \
+ static const int N[MAX_UNITS]=OPTION_DEFAULT;\
+ MODULE_PARM(N, "1-" __MODULE_STRING(MAX_UNITS) "i");\
+ MODULE_PARM_DESC(N, D);
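+/*
+ * Each VELOCITY_PARAM(N, D) declares a per-unit array N[MAX_UNITS] and
+ * registers it as a module parameter, so a value may be supplied per NIC
+ * on the module command line, e.g. "TxDescriptors=128,64" for two cards
+ * (illustrative values).
+ */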
+
+#define RX_DESC_MIN 64
+#define RX_DESC_MAX 255
+#define RX_DESC_DEF 64
+VELOCITY_PARAM(RxDescriptors, "Number of receive descriptors");
+
+#define TX_DESC_MIN 16
+#define TX_DESC_MAX 256
+#define TX_DESC_DEF 64
+VELOCITY_PARAM(TxDescriptors, "Number of transmit descriptors");
+
+#define VLAN_ID_MIN 0
+#define VLAN_ID_MAX 4095
+#define VLAN_ID_DEF 0
+/* VID_setting[] is used for setting the VID of NIC.
+ 0: default VID.
+ 1-4094: other VIDs.
+*/
+VELOCITY_PARAM(VID_setting, "802.1Q VLAN ID");
+
+#define RX_THRESH_MIN 0
+#define RX_THRESH_MAX 3
+#define RX_THRESH_DEF 0
+/* rx_thresh[] is used for controlling the receive fifo threshold.
+ 0: rxfifo threshold is 128 bytes.
+ 1: rxfifo threshold is 512 bytes.
+ 2: rxfifo threshold is 1024 bytes.
+ 3: rxfifo threshold is store & forward.
+*/
+VELOCITY_PARAM(rx_thresh, "Receive fifo threshold");
+
+#define DMA_LENGTH_MIN 0
+#define DMA_LENGTH_MAX 7
+#define DMA_LENGTH_DEF 0
+
+/* DMA_length[] is used for controlling the DMA length
+ 0: 8 DWORDs
+ 1: 16 DWORDs
+ 2: 32 DWORDs
+ 3: 64 DWORDs
+ 4: 128 DWORDs
+ 5: 256 DWORDs
+ 6: SF (store & forward, flush till empty)
+ 7: SF (store & forward, flush till empty)
+*/
+VELOCITY_PARAM(DMA_length, "DMA length");
+
+#define TAGGING_DEF 0
+/* enable_tagging[] is used for enabling 802.1Q VID tagging.
+ 0: disable VID setting (default).
+ 1: enable VID setting.
+*/
+VELOCITY_PARAM(enable_tagging, "Enable 802.1Q tagging");
+
+#define IP_ALIG_DEF 0
+/* IP_byte_align[] is used for controlling whether the IP header is DWORD byte aligned.
+ 0: the IP header won't be DWORD byte aligned. (Default)
+ 1: the IP header will be DWORD byte aligned.
+ In some environments the IP header must be DWORD byte aligned,
+ or the packet will be dropped when we receive it. (e.g. IPVS)
+*/
+VELOCITY_PARAM(IP_byte_align, "Enable IP header dword aligned");
+
+#define TX_CSUM_DEF 1
+/* txcsum_offload[] is used for setting the checksum offload ability of NIC.
+ (We only support RX checksum offload now)
+ 0: disable checksum offload
+ 1: enable checksum offload. (Default)
+*/
+VELOCITY_PARAM(txcsum_offload, "Enable transmit packet checksum offload");
+
+#define FLOW_CNTL_DEF 1
+#define FLOW_CNTL_MIN 1
+#define FLOW_CNTL_MAX 5
+
+/* flow_control[] is used for setting the flow control ability of NIC.
+ 1: hardware default - AUTO (default). Use hardware default value in ANAR.
+ 2: enable TX flow control.
+ 3: enable RX flow control.
+ 4: enable RX/TX flow control.
+ 5: disable
+*/
+VELOCITY_PARAM(flow_control, "Enable flow control ability");
+
+#define MED_LNK_DEF 0
+#define MED_LNK_MIN 0
+#define MED_LNK_MAX 4
+/* speed_duplex[] is used for setting the speed and duplex mode of NIC.
+ 0: autonegotiation for both speed and duplex mode
+ 1: 100Mbps half duplex mode
+ 2: 100Mbps full duplex mode
+ 3: 10Mbps half duplex mode
+ 4: 10Mbps full duplex mode
+
+ Note:
+ if the EEPROM has been set to force mode, this option is
+ ignored by the driver.
+*/
+VELOCITY_PARAM(speed_duplex, "Setting the speed and duplex mode");
+
+#define VAL_PKT_LEN_DEF 0
+/* ValPktLen[] is used for controlling how frames with an invalid layer 2 length are handled.
+ 0: Receive frame with invalid layer 2 length (Default)
+ 1: Drop frame with invalid layer 2 length
+*/
+VELOCITY_PARAM(ValPktLen, "Receive or drop frames with invalid 802.3 length");
+
+#define WOL_OPT_DEF 0
+#define WOL_OPT_MIN 0
+#define WOL_OPT_MAX 7
+/* wol_opts[] is used for controlling wake on lan behavior.
+ 0: Wake up if received a magic packet. (Default)
+ 1: Wake up if link status is on/off.
+ 2: Wake up if received an arp packet.
+ 4: Wake up if received any unicast packet.
+ These values can be summed to enable more than one option,
+ e.g. 6 wakes on either an arp or a unicast packet.
+*/
+VELOCITY_PARAM(wol_opts, "Wake On Lan options");
+
+#define INT_WORKS_DEF 20
+#define INT_WORKS_MIN 10
+#define INT_WORKS_MAX 64
+
+VELOCITY_PARAM(int_works, "Number of packets handled per interrupt service");
+
+static int velocity_found1(struct pci_dev *pdev, const struct pci_device_id *ent);
+static void velocity_init_info(struct pci_dev *pdev, struct velocity_info *vptr, struct velocity_info_tbl *info);
+static int velocity_get_pci_info(struct velocity_info *, struct pci_dev *pdev);
+static void velocity_print_info(struct velocity_info *vptr);
+static int velocity_open(struct net_device *dev);
+static int velocity_change_mtu(struct net_device *dev, int mtu);
+static int velocity_xmit(struct sk_buff *skb, struct net_device *dev);
+static int velocity_intr(int irq, void *dev_instance, struct pt_regs *regs);
+static void velocity_set_multi(struct net_device *dev);
+static struct net_device_stats *velocity_get_stats(struct net_device *dev);
+static int velocity_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static int velocity_close(struct net_device *dev);
+static int velocity_rx_srv(struct velocity_info *vptr, int status);
+static int velocity_receive_frame(struct velocity_info *, int idx);
+static int velocity_alloc_rx_buf(struct velocity_info *, int idx);
+static void velocity_init_registers(struct velocity_info *vptr, enum velocity_init_type type);
+static void velocity_free_rd_ring(struct velocity_info *vptr);
+static void velocity_free_tx_buf(struct velocity_info *vptr, struct velocity_td_info *);
+static int velocity_soft_reset(struct velocity_info *vptr);
+static void mii_init(struct velocity_info *vptr, u32 mii_status);
+static u32 velocity_get_opt_media_mode(struct velocity_info *vptr);
+static void velocity_print_link_status(struct velocity_info *vptr);
+static void safe_disable_mii_autopoll(struct mac_regs * regs);
+static void velocity_shutdown(struct velocity_info *vptr);
+static void enable_flow_control_ability(struct velocity_info *vptr);
+static void enable_mii_autopoll(struct mac_regs * regs);
+static int velocity_mii_read(struct mac_regs *, u8 byIdx, u16 * pdata);
+static int velocity_mii_write(struct mac_regs *, u8 byMiiAddr, u16 data);
+static int velocity_set_wol(struct velocity_info *vptr);
+static void velocity_save_context(struct velocity_info *vptr, struct velocity_context *context);
+static void velocity_restore_context(struct velocity_info *vptr, struct velocity_context *context);
+static u32 mii_check_media_mode(struct mac_regs * regs);
+static u32 check_connection_type(struct mac_regs * regs);
+static void velocity_init_cam_filter(struct velocity_info *vptr);
+static int velocity_set_media_mode(struct velocity_info *vptr, u32 mii_status);
+
+#ifdef CONFIG_PM
+static int velocity_suspend(struct pci_dev *pdev, u32 state);
+static int velocity_resume(struct pci_dev *pdev);
+
+static int velocity_netdev_event(struct notifier_block *nb, unsigned long notification, void *ptr);
+
+static struct notifier_block velocity_inetaddr_notifier = {
+ .notifier_call = velocity_netdev_event,
+};
+
+#endif /* CONFIG_PM */
+
+/*
+ * Internal board variants. At the moment we have only one
+ */
+
+static struct velocity_info_tbl chip_info_table[] = {
+ {CHIP_TYPE_VT6110, "VIA Networking Velocity Family Gigabit Ethernet Adapter", 256, 1, 0x00FFFFFFUL},
+ {0, NULL}
+};
+
+/*
+ * Describe the PCI device identifiers that we support in this
+ * device driver. Used for hotplug autoloading.
+ */
+
+static struct pci_device_id velocity_id_table[] __devinitdata = {
+ {0x1106, 0x3119, PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) &chip_info_table[0]},
+ {0,}
+};
+
+MODULE_DEVICE_TABLE(pci, velocity_id_table);
+
+/**
+ * get_chip_name - identifier to name
+ * @id: chip identifier
+ *
+ * Given a chip identifier return a suitable description. Returns
+ * a pointer to a static string valid while the driver is loaded.
+ */
+
+static char __devinit *get_chip_name(enum chip_type chip_id)
+{
+ int i;
+ for (i = 0; chip_info_table[i].name != NULL; i++)
+ if (chip_info_table[i].chip_id == chip_id)
+ break;
+ return chip_info_table[i].name;
+}
+
+/**
+ * velocity_remove1 - device unplug
+ * @pdev: PCI device being removed
+ *
+ * Device unload callback. Called on an unplug or on module
+ * unload for each active device that is present. Disconnects
+ * the device from the network layer and frees all the resources
+ */
+
+static void __devexit velocity_remove1(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct velocity_info *vptr = dev->priv;
+
+ unregister_netdev(dev);
+ iounmap(vptr->mac_regs);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ free_netdev(dev);
+}
+
+/**
+ * velocity_set_int_opt - parser for integer options
+ * @opt: pointer to option value
+ * @val: value the user requested (or -1 for default)
+ * @min: lowest value allowed
+ * @max: highest value allowed
+ * @def: default value
+ * @name: property name
+ * @dev: device name
+ *
+ * Set an integer property in the module options. This function does
+ * all the verification and checking as well as reporting so that
+ * we don't duplicate code for each option.
+ */
+
+static void __devinit velocity_set_int_opt(int *opt, int val, int min, int max, int def, char *name, char *devname)
+{
+ if (val == -1)
+ *opt = def;
+ else if (val < min || val > max) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: the value of parameter %s is invalid, the valid range is (%d-%d)\n",
+ devname, name, min, max);
+ *opt = def;
+ } else {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_INFO "%s: set value of parameter %s to %d\n",
+ devname, name, val);
+ *opt = val;
+ }
+}
+
+/**
+ * velocity_set_bool_opt - parser for boolean options
+ * @opt: pointer to option value
+ * @val: value the user requested (or -1 for default)
+ * @def: default value (yes/no)
+ * @flag: numeric value to set for true.
+ * @name: property name
+ * @dev: device name
+ *
+ * Set a boolean property in the module options. This function does
+ * all the verification and checking as well as reporting so that
+ * we don't duplicate code for each option.
+ */
+
+static void __devinit velocity_set_bool_opt(u32 * opt, int val, int def, u32 flag, char *name, char *devname)
+{
+ (*opt) &= (~flag);
+ if (val == -1)
+ *opt |= (def ? flag : 0);
+ else if (val < 0 || val > 1) {
+ printk(KERN_NOTICE "%s: the value of parameter %s is invalid, the valid range is (0-1)\n",
+ devname, name);
+ *opt |= (def ? flag : 0);
+ } else {
+ printk(KERN_INFO "%s: set parameter %s to %s\n",
+ devname, name, val ? "TRUE" : "FALSE");
+ *opt |= (val ? flag : 0);
+ }
+}
+
+/**
+ * velocity_get_options - set options on device
+ * @opts: option structure for the device
+ * @index: index of option to use in module options array
+ * @devname: device name
+ *
+ * Turn the module and command options into a single structure
+ * for the current device
+ */
+
+static void __devinit velocity_get_options(struct velocity_opt *opts, int index, char *devname)
+{
+
+ velocity_set_int_opt(&opts->rx_thresh, rx_thresh[index], RX_THRESH_MIN, RX_THRESH_MAX, RX_THRESH_DEF, "rx_thresh", devname);
+ velocity_set_int_opt(&opts->DMA_length, DMA_length[index], DMA_LENGTH_MIN, DMA_LENGTH_MAX, DMA_LENGTH_DEF, "DMA_length", devname);
+ velocity_set_int_opt(&opts->numrx, RxDescriptors[index], RX_DESC_MIN, RX_DESC_MAX, RX_DESC_DEF, "RxDescriptors", devname);
+ velocity_set_int_opt(&opts->numtx, TxDescriptors[index], TX_DESC_MIN, TX_DESC_MAX, TX_DESC_DEF, "TxDescriptors", devname);
+ velocity_set_int_opt(&opts->vid, VID_setting[index], VLAN_ID_MIN, VLAN_ID_MAX, VLAN_ID_DEF, "VID_setting", devname);
+ velocity_set_bool_opt(&opts->flags, enable_tagging[index], TAGGING_DEF, VELOCITY_FLAGS_TAGGING, "enable_tagging", devname);
+ velocity_set_bool_opt(&opts->flags, txcsum_offload[index], TX_CSUM_DEF, VELOCITY_FLAGS_TX_CSUM, "txcsum_offload", devname);
+ velocity_set_int_opt(&opts->flow_cntl, flow_control[index], FLOW_CNTL_MIN, FLOW_CNTL_MAX, FLOW_CNTL_DEF, "flow_control", devname);
+ velocity_set_bool_opt(&opts->flags, IP_byte_align[index], IP_ALIG_DEF, VELOCITY_FLAGS_IP_ALIGN, "IP_byte_align", devname);
+ velocity_set_bool_opt(&opts->flags, ValPktLen[index], VAL_PKT_LEN_DEF, VELOCITY_FLAGS_VAL_PKT_LEN, "ValPktLen", devname);
+ velocity_set_int_opt((int *) &opts->spd_dpx, speed_duplex[index], MED_LNK_MIN, MED_LNK_MAX, MED_LNK_DEF, "Media link mode", devname);
+ velocity_set_int_opt((int *) &opts->wol_opts, wol_opts[index], WOL_OPT_MIN, WOL_OPT_MAX, WOL_OPT_DEF, "Wake On Lan options", devname);
+ velocity_set_int_opt((int *) &opts->int_works, int_works[index], INT_WORKS_MIN, INT_WORKS_MAX, INT_WORKS_DEF, "Interrupt service works", devname);
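+ /* the RX ring size must be a multiple of 4; round down */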
+ opts->numrx = (opts->numrx & ~3);
+}
+
+/**
+ * velocity_init_cam_filter - initialise CAM
+ * @vptr: velocity to program
+ *
+ * Initialize the content addressable memory used for filters. Load
+ * appropriately according to the presence of VLAN
+ */
+
+static void velocity_init_cam_filter(struct velocity_info *vptr)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+
+ /* Turn on MCFG_PQEN, turn off MCFG_RTGOPT */
+ WORD_REG_BITS_SET(MCFG_PQEN, MCFG_RTGOPT, &regs->MCFG);
+ WORD_REG_BITS_ON(MCFG_VIDFR, &regs->MCFG);
+
+ /* Disable all CAMs */
+ memset(vptr->vCAMmask, 0, sizeof(u8) * 8);
+ memset(vptr->mCAMmask, 0, sizeof(u8) * 8);
+ mac_set_cam_mask(regs, vptr->vCAMmask, VELOCITY_VLAN_ID_CAM);
+ mac_set_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+
+ /* Enable first VCAM */
+ if (vptr->flags & VELOCITY_FLAGS_TAGGING) {
+ /* If Tagging option is enabled and VLAN ID is not zero, then
+ turn on MCFG_RTGOPT also */
+ if (vptr->options.vid != 0)
+ WORD_REG_BITS_ON(MCFG_RTGOPT, &regs->MCFG);
+
+ mac_set_cam(regs, 0, (u8 *) & (vptr->options.vid), VELOCITY_VLAN_ID_CAM);
+ vptr->vCAMmask[0] |= 1;
+ mac_set_cam_mask(regs, vptr->vCAMmask, VELOCITY_VLAN_ID_CAM);
+ } else {
+ u16 temp = 0;
+ mac_set_cam(regs, 0, (u8 *) &temp, VELOCITY_VLAN_ID_CAM);
+ temp = 1;
+ mac_set_cam_mask(regs, (u8 *) &temp, VELOCITY_VLAN_ID_CAM);
+ }
+}
+
+/**
+ * velocity_rx_reset - handle a receive reset
+ * @vptr: velocity we are resetting
+ *
+ * Reset the ownership and status for the receive ring side.
+ * Hand all the receive queue to the NIC.
+ */
+
+static void velocity_rx_reset(struct velocity_info *vptr)
+{
+
+ struct mac_regs * regs = vptr->mac_regs;
+ int i;
+
+ vptr->rd_used = vptr->rd_curr = 0;
+
+ /*
+ * Init state, all RD entries belong to the NIC
+ */
+ for (i = 0; i < vptr->options.numrx; ++i)
+ vptr->rd_ring[i].rdesc0.owner = cpu_to_le32(OWNED_BY_NIC);
+
+ writew(vptr->options.numrx, &regs->RBRDU);
+ writel(vptr->rd_pool_dma, &regs->RDBaseLo);
+ writew(0, &regs->RDIdx);
+ writew(vptr->options.numrx - 1, &regs->RDCSize);
+}
+
+/**
+ * velocity_init_registers - initialise MAC registers
+ * @vptr: velocity to init
+ * @type: type of initialisation (hot or cold)
+ *
+ * Initialise the MAC on a reset or on first set up on the
+ * hardware.
+ */
+
+static void velocity_init_registers(struct velocity_info *vptr,
+ enum velocity_init_type type)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ int i, mii_status;
+
+ mac_wol_reset(regs);
+
+ switch (type) {
+ case VELOCITY_INIT_RESET:
+ case VELOCITY_INIT_WOL:
+
+ netif_stop_queue(vptr->dev);
+
+ /*
+ * Reset RX to prevent the RX pointer from being left off the 4X location
+ */
+ velocity_rx_reset(vptr);
+ mac_rx_queue_run(regs);
+ mac_rx_queue_wake(regs);
+
+ mii_status = velocity_get_opt_media_mode(vptr);
+ if (velocity_set_media_mode(vptr, mii_status) != VELOCITY_LINK_CHANGE) {
+ velocity_print_link_status(vptr);
+ if (!(vptr->mii_status & VELOCITY_LINK_FAIL))
+ netif_wake_queue(vptr->dev);
+ }
+
+ enable_flow_control_ability(vptr);
+
+ mac_clear_isr(regs);
+ writel(CR0_STOP, &regs->CR0Clr);
+ writel((CR0_DPOLL | CR0_TXON | CR0_RXON | CR0_STRT),
+ &regs->CR0Set);
+
+ break;
+
+ case VELOCITY_INIT_COLD:
+ default:
+ /*
+ * Do reset
+ */
+ velocity_soft_reset(vptr);
+ mdelay(5);
+
+ mac_eeprom_reload(regs);
+ for (i = 0; i < 6; i++) {
+ writeb(vptr->dev->dev_addr[i], &(regs->PAR[i]));
+ }
+ /*
+ * clear Pre_ACPI bit.
+ */
+ BYTE_REG_BITS_OFF(CFGA_PACPI, &(regs->CFGA));
+ mac_set_rx_thresh(regs, vptr->options.rx_thresh);
+ mac_set_dma_length(regs, vptr->options.DMA_length);
+
+ writeb(WOLCFG_SAM | WOLCFG_SAB, &regs->WOLCFGSet);
+ /*
+ * Back off algorithm: use the original IEEE standard
+ */
+ BYTE_REG_BITS_SET(CFGB_OFSET, (CFGB_CRANDOM | CFGB_CAP | CFGB_MBA | CFGB_BAKOPT), &regs->CFGB);
+
+ /*
+ * Set packet filter: Receive directed and broadcast address
+ */
+ velocity_set_multi(vptr->dev);
+
+ /*
+ * Enable MII auto-polling
+ */
+ enable_mii_autopoll(regs);
+
+ vptr->int_mask = INT_MASK_DEF;
+
+ writel(cpu_to_le32(vptr->rd_pool_dma), &regs->RDBaseLo);
+ writew(vptr->options.numrx - 1, &regs->RDCSize);
+ mac_rx_queue_run(regs);
+ mac_rx_queue_wake(regs);
+
+ writew(vptr->options.numtx - 1, &regs->TDCSize);
+
+ for (i = 0; i < vptr->num_txq; i++) {
+ writel(cpu_to_le32(vptr->td_pool_dma[i]), &(regs->TDBaseLo[i]));
+ mac_tx_queue_run(regs, i);
+ }
+
+ velocity_init_cam_filter(vptr);
+
+ init_flow_control_register(vptr);
+
+ writel(CR0_STOP, &regs->CR0Clr);
+ writel((CR0_DPOLL | CR0_TXON | CR0_RXON | CR0_STRT), &regs->CR0Set);
+
+ mii_status = velocity_get_opt_media_mode(vptr);
+ netif_stop_queue(vptr->dev);
+ mac_clear_isr(regs);
+
+ mii_init(vptr, mii_status);
+
+ if (velocity_set_media_mode(vptr, mii_status) != VELOCITY_LINK_CHANGE) {
+ velocity_print_link_status(vptr);
+ if (!(vptr->mii_status & VELOCITY_LINK_FAIL))
+ netif_wake_queue(vptr->dev);
+ }
+
+ enable_flow_control_ability(vptr);
+ mac_hw_mibs_init(regs);
+ mac_write_int_mask(vptr->int_mask, regs);
+ mac_clear_isr(regs);
+
+ }
+}
+
+/**
+ * velocity_soft_reset - soft reset
+ * @vptr: velocity to reset
+ *
+ * Kick off a soft reset of the velocity adapter and then poll
+ * until the reset sequence has completed before returning.
+ */
+
+static int velocity_soft_reset(struct velocity_info *vptr)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ int i = 0;
+
+ writel(CR0_SFRST, &regs->CR0Set);
+
+ for (i = 0; i < W_MAX_TIMEOUT; i++) {
+ udelay(5);
+ if (!DWORD_REG_BITS_IS_ON(CR0_SFRST, &regs->CR0Set))
+ break;
+ }
+
+ if (i == W_MAX_TIMEOUT) {
+ writel(CR0_FORSRST, &regs->CR0Set);
+ /* FIXME: PCI POSTING */
+ /* delay 2ms */
+ mdelay(2);
+ }
+ return 0;
+}
+
+/**
+ * velocity_found1 - set up discovered velocity card
+ * @pdev: PCI device
+ * @ent: PCI device table entry that matched
+ *
+ * Configure a discovered adapter from scratch. Return a negative
+ * errno error code on failure paths.
+ */
+
+static int __devinit velocity_found1(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int first = 1;
+ struct net_device *dev;
+ int i;
+ struct velocity_info_tbl *info = (struct velocity_info_tbl *) ent->driver_data;
+ struct velocity_info *vptr;
+ struct mac_regs * regs;
+ int ret = -ENOMEM;
+
+ if (velocity_nics++ >= MAX_UNITS) {
+ printk(KERN_NOTICE VELOCITY_NAME ": already found %d NICs.\n",
+ velocity_nics);
+ return -ENODEV;
+ }
+
+ dev = alloc_etherdev(sizeof(struct velocity_info));
+
+ if (dev == NULL) {
+		printk(KERN_ERR VELOCITY_NAME ": failed to allocate net device.\n");
+ goto out;
+ }
+
+ /* Chain it all together */
+
+ SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ vptr = dev->priv;
+
+
+ if (first) {
+ printk(KERN_INFO "%s Ver. %s\n",
+ VELOCITY_FULL_DRV_NAM, VELOCITY_VERSION);
+ printk(KERN_INFO "Copyright (c) 2002, 2003 VIA Networking Technologies, Inc.\n");
+ printk(KERN_INFO "Copyright (c) 2004 Red Hat Inc.\n");
+ first = 0;
+ }
+
+ velocity_init_info(pdev, vptr, info);
+
+ vptr->dev = dev;
+
+ dev->priv = vptr;
+ dev->irq = pdev->irq;
+
+ ret = pci_enable_device(pdev);
+ if (ret < 0)
+ goto err_free_dev;
+
+ ret = velocity_get_pci_info(vptr, pdev);
+ if (ret < 0) {
+ printk(KERN_ERR VELOCITY_NAME ": Failed to find PCI device.\n");
+ goto err_disable;
+ }
+
+ ret = pci_request_regions(pdev, VELOCITY_NAME);
+ if (ret < 0) {
+		printk(KERN_ERR VELOCITY_NAME ": Failed to request PCI regions.\n");
+ goto err_disable;
+ }
+
+ regs = ioremap(vptr->memaddr, vptr->io_size);
+ if (regs == NULL) {
+ ret = -EIO;
+ goto err_release_res;
+ }
+
+ vptr->mac_regs = regs;
+
+ mac_wol_reset(regs);
+
+ dev->base_addr = vptr->ioaddr;
+
+ for (i = 0; i < 6; i++)
+		dev->dev_addr[i] = readb(&regs->PAR[i]);
+
+
+ velocity_get_options(&vptr->options, velocity_nics - 1, dev->name);
+
+ /*
+	 * Mask out the options that cannot be set on this chip
+ */
+
+ vptr->options.flags &= info->flags;
+
+ /*
+	 * Enable the chip-specific capabilities
+ */
+
+ vptr->flags = vptr->options.flags | (info->flags & 0xFF000000UL);
+
+ vptr->wol_opts = vptr->options.wol_opts;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+
+ vptr->phy_id = MII_GET_PHY_ID(vptr->mac_regs);
+
+ dev->irq = pdev->irq;
+ dev->open = velocity_open;
+ dev->hard_start_xmit = velocity_xmit;
+ dev->stop = velocity_close;
+ dev->get_stats = velocity_get_stats;
+ dev->set_multicast_list = velocity_set_multi;
+ dev->do_ioctl = velocity_ioctl;
+ dev->ethtool_ops = &velocity_ethtool_ops;
+ dev->change_mtu = velocity_change_mtu;
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ dev->features |= NETIF_F_SG;
+#endif
+
+ if (vptr->flags & VELOCITY_FLAGS_TX_CSUM) {
+ dev->features |= NETIF_F_HW_CSUM;
+ }
+
+ ret = register_netdev(dev);
+ if (ret < 0)
+ goto err_iounmap;
+
+ velocity_print_info(vptr);
+ pci_set_drvdata(pdev, dev);
+
+ /* and leave the chip powered down */
+
+ pci_set_power_state(pdev, 3);
+out:
+ return ret;
+
+err_iounmap:
+ iounmap(regs);
+err_release_res:
+ pci_release_regions(pdev);
+err_disable:
+ pci_disable_device(pdev);
+err_free_dev:
+ free_netdev(dev);
+ goto out;
+}
+
+/**
+ *	velocity_print_info - per device data
+ *	@vptr: velocity
+ *
+ *	Print the per device data as the kernel driver discovers
+ *	Velocity hardware
+ */
+
+static void __devinit velocity_print_info(struct velocity_info *vptr)
+{
+ struct net_device *dev = vptr->dev;
+
+ printk(KERN_INFO "%s: %s\n", dev->name, get_chip_name(vptr->chip_id));
+ printk(KERN_INFO "%s: Ethernet Address: %2.2X:%2.2X:%2.2X:%2.2X:%2.2X:%2.2X\n",
+ dev->name,
+ dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
+ dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
+}
+
+/**
+ * velocity_init_info - init private data
+ * @pdev: PCI device
+ * @vptr: Velocity info
+ * @info: Board type
+ *
+ * Set up the initial velocity_info struct for the device that has been
+ * discovered.
+ */
+
+static void __devinit velocity_init_info(struct pci_dev *pdev, struct velocity_info *vptr, struct velocity_info_tbl *info)
+{
+ memset(vptr, 0, sizeof(struct velocity_info));
+
+ vptr->pdev = pdev;
+ vptr->chip_id = info->chip_id;
+ vptr->io_size = info->io_size;
+ vptr->num_txq = info->txqueue;
+ vptr->multicast_limit = MCAM_SIZE;
+
+ spin_lock_init(&vptr->lock);
+ spin_lock_init(&vptr->xmit_lock);
+}
+
+/**
+ * velocity_get_pci_info - retrieve PCI info for device
+ * @vptr: velocity device
+ * @pdev: PCI device it matches
+ *
+ * Retrieve the PCI configuration space data that interests us from
+ * the kernel PCI layer
+ */
+
+static int __devinit velocity_get_pci_info(struct velocity_info *vptr, struct pci_dev *pdev)
+{
+
+ if(pci_read_config_byte(pdev, PCI_REVISION_ID, &vptr->rev_id) < 0)
+ return -EIO;
+
+ pci_set_master(pdev);
+
+ vptr->ioaddr = pci_resource_start(pdev, 0);
+ vptr->memaddr = pci_resource_start(pdev, 1);
+
+ if(!(pci_resource_flags(pdev, 0) & IORESOURCE_IO))
+ {
+ printk(KERN_ERR "%s: region #0 is not an I/O resource, aborting.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+
+ if((pci_resource_flags(pdev, 1) & IORESOURCE_IO))
+ {
+ printk(KERN_ERR "%s: region #1 is an I/O resource, aborting.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+
+ if(pci_resource_len(pdev, 1) < 256)
+ {
+ printk(KERN_ERR "%s: region #1 is too small.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+ vptr->pdev = pdev;
+
+ return 0;
+}
+
+/**
+ * velocity_init_rings - set up DMA rings
+ * @vptr: Velocity to set up
+ *
+ * Allocate PCI mapped DMA rings for the receive and transmit layer
+ * to use.
+ */
+
+static int velocity_init_rings(struct velocity_info *vptr)
+{
+ int i;
+ unsigned int psize;
+ unsigned int tsize;
+ dma_addr_t pool_dma;
+ u8 *pool;
+
+ /*
+	 * Allocate all RD/TD rings as a single pool
+ */
+
+ psize = vptr->options.numrx * sizeof(struct rx_desc) +
+ vptr->options.numtx * sizeof(struct tx_desc) * vptr->num_txq;
+
+ /*
+	 * pci_alloc_consistent() fulfills the requirement for 64-byte
+	 * alignment
+ */
+ pool = pci_alloc_consistent(vptr->pdev, psize, &pool_dma);
+
+ if (pool == NULL) {
+		printk(KERN_ERR "%s: DMA memory allocation failed.\n",
+ vptr->dev->name);
+ return -ENOMEM;
+ }
+
+ memset(pool, 0, psize);
+
+ vptr->rd_ring = (struct rx_desc *) pool;
+
+ vptr->rd_pool_dma = pool_dma;
+
+ tsize = vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq;
+ vptr->tx_bufs = pci_alloc_consistent(vptr->pdev, tsize,
+ &vptr->tx_bufs_dma);
+
+ if (vptr->tx_bufs == NULL) {
+ printk(KERN_ERR "%s: DMA memory allocation failed.\n",
+ vptr->dev->name);
+ pci_free_consistent(vptr->pdev, psize, pool, pool_dma);
+ return -ENOMEM;
+ }
+
+ memset(vptr->tx_bufs, 0, vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq);
+
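+	/*
+	 * The RX ring sits at the front of the pool; each TX queue then
+	 * takes a numtx-descriptor slice of the remainder.
+	 */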
+ i = vptr->options.numrx * sizeof(struct rx_desc);
+ pool += i;
+ pool_dma += i;
+ for (i = 0; i < vptr->num_txq; i++) {
+ int offset = vptr->options.numtx * sizeof(struct tx_desc);
+
+ vptr->td_pool_dma[i] = pool_dma;
+ vptr->td_rings[i] = (struct tx_desc *) pool;
+ pool += offset;
+ pool_dma += offset;
+ }
+ return 0;
+}
+
+/**
+ * velocity_free_rings - free PCI ring pointers
+ * @vptr: Velocity to free from
+ *
+ * Clean up the PCI ring buffers allocated to this velocity.
+ */
+
+static void velocity_free_rings(struct velocity_info *vptr)
+{
+ int size;
+
+ size = vptr->options.numrx * sizeof(struct rx_desc) +
+ vptr->options.numtx * sizeof(struct tx_desc) * vptr->num_txq;
+
+ pci_free_consistent(vptr->pdev, size, vptr->rd_ring, vptr->rd_pool_dma);
+
+ size = vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq;
+
+ pci_free_consistent(vptr->pdev, size, vptr->tx_bufs, vptr->tx_bufs_dma);
+}
+
+/**
+ * velocity_init_rd_ring - set up receive ring
+ * @vptr: velocity to configure
+ *
+ * Allocate and set up the receive buffers for each ring slot and
+ * assign them to the network adapter.
+ */
+
+static int velocity_init_rd_ring(struct velocity_info *vptr)
+{
+ int i, ret = -ENOMEM;
+ struct rx_desc *rd;
+ struct velocity_rd_info *rd_info;
+ unsigned int rsize = sizeof(struct velocity_rd_info) *
+ vptr->options.numrx;
+
+ vptr->rd_info = kmalloc(rsize, GFP_KERNEL);
+ if(vptr->rd_info == NULL)
+ goto out;
+ memset(vptr->rd_info, 0, rsize);
+
+ /* Init the RD ring entries */
+ for (i = 0; i < vptr->options.numrx; i++) {
+ rd = &(vptr->rd_ring[i]);
+ rd_info = &(vptr->rd_info[i]);
+
+ ret = velocity_alloc_rx_buf(vptr, i);
+ if (ret < 0) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_ERR
+ "%s: failed to allocate RX buffer.\n",
+ vptr->dev->name);
+ velocity_free_rd_ring(vptr);
+ goto out;
+ }
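+		/* hand the freshly filled descriptor to the NIC */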
+ rd->rdesc0.owner = OWNED_BY_NIC;
+ }
+ vptr->rd_used = vptr->rd_curr = 0;
+out:
+ return ret;
+}
+
+/**
+ *	velocity_free_rd_ring - free receive ring
+ * @vptr: velocity to clean up
+ *
+ * Free the receive buffers for each ring slot and any
+ * attached socket buffers that need to go away.
+ */
+
+static void velocity_free_rd_ring(struct velocity_info *vptr)
+{
+ int i;
+
+ if (vptr->rd_info == NULL)
+ return;
+
+ for (i = 0; i < vptr->options.numrx; i++) {
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[i]);
+
+ if (!rd_info->skb_dma)
+ continue;
+ pci_unmap_single(vptr->pdev, rd_info->skb_dma, vptr->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ rd_info->skb_dma = (dma_addr_t) NULL;
+
+ dev_kfree_skb(rd_info->skb);
+ rd_info->skb = NULL;
+ }
+
+ kfree(vptr->rd_info);
+ vptr->rd_info = NULL;
+}
+
+/**
+ * velocity_init_td_ring - set up transmit ring
+ * @vptr: velocity
+ *
+ * Set up the transmit ring and chain the ring pointers together.
+ * Returns zero on success or a negative posix errno code for
+ * failure.
+ */
+
+static int velocity_init_td_ring(struct velocity_info *vptr)
+{
+ int i, j;
+ dma_addr_t curr;
+ struct tx_desc *td;
+ struct velocity_td_info *td_info;
+ unsigned int tsize = sizeof(struct velocity_td_info) *
+ vptr->options.numtx;
+
+ /* Init the TD ring entries */
+ for (j = 0; j < vptr->num_txq; j++) {
+ curr = vptr->td_pool_dma[j];
+
+ vptr->td_infos[j] = kmalloc(tsize, GFP_KERNEL);
+ if(vptr->td_infos[j] == NULL)
+ {
+ while(--j >= 0)
+ kfree(vptr->td_infos[j]);
+ return -ENOMEM;
+ }
+ memset(vptr->td_infos[j], 0, tsize);
+
+ for (i = 0; i < vptr->options.numtx; i++, curr += sizeof(struct tx_desc)) {
+ td = &(vptr->td_rings[j][i]);
+ td_info = &(vptr->td_infos[j][i]);
+			/* each TX queue owns its own slab of bounce buffers */
+			td_info->buf = vptr->tx_bufs + (j * vptr->options.numtx + i) * PKT_BUF_SZ;
+			td_info->buf_dma = vptr->tx_bufs_dma + (j * vptr->options.numtx + i) * PKT_BUF_SZ;
+ }
+ vptr->td_tail[j] = vptr->td_curr[j] = vptr->td_used[j] = 0;
+ }
+ return 0;
+}
+
+/*
+ * FIXME: could we merge this with velocity_free_tx_buf ?
+ */
+
+static void velocity_free_td_ring_entry(struct velocity_info *vptr,
+ int q, int n)
+{
+ struct velocity_td_info * td_info = &(vptr->td_infos[q][n]);
+ int i;
+
+ if (td_info == NULL)
+ return;
+
+ if (td_info->skb) {
+ for (i = 0; i < td_info->nskb_dma; i++)
+ {
+ if (td_info->skb_dma[i]) {
+ pci_unmap_single(vptr->pdev, td_info->skb_dma[i],
+ td_info->skb->len, PCI_DMA_TODEVICE);
+ td_info->skb_dma[i] = (dma_addr_t) NULL;
+ }
+ }
+ dev_kfree_skb(td_info->skb);
+ td_info->skb = NULL;
+ }
+}
+
+/**
+ * velocity_free_td_ring - free td ring
+ * @vptr: velocity
+ *
+ * Free up the transmit ring for this particular velocity adapter.
+ * We free the ring contents but not the ring itself.
+ */
+
+static void velocity_free_td_ring(struct velocity_info *vptr)
+{
+ int i, j;
+
+ for (j = 0; j < vptr->num_txq; j++) {
+ if (vptr->td_infos[j] == NULL)
+ continue;
+ for (i = 0; i < vptr->options.numtx; i++) {
+ velocity_free_td_ring_entry(vptr, j, i);
+
+ }
+ if (vptr->td_infos[j]) {
+ kfree(vptr->td_infos[j]);
+ vptr->td_infos[j] = NULL;
+ }
+ }
+}
+
+/**
+ * velocity_rx_srv - service RX interrupt
+ * @vptr: velocity
+ * @status: adapter status (unused)
+ *
+ * Walk the receive ring of the velocity adapter and remove
+ * any received packets from the receive queue. Hand the ring
+ * slots back to the adapter for reuse.
+ */
+
+static int velocity_rx_srv(struct velocity_info *vptr, int status)
+{
+ struct rx_desc *rd;
+ struct net_device_stats *stats = &vptr->stats;
+ struct mac_regs * regs = vptr->mac_regs;
+ int rd_curr = vptr->rd_curr;
+ int works = 0;
+
+ while (1) {
+
+ rd = &(vptr->rd_ring[rd_curr]);
+
+ if ((vptr->rd_info[rd_curr]).skb == NULL) {
+ if (velocity_alloc_rx_buf(vptr, rd_curr) < 0)
+ break;
+ }
+
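+		/*
+		 * Bound the descriptors processed per call so a busy
+		 * ring cannot monopolise the interrupt handler.
+		 */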
+ if (works++ > 15)
+ break;
+
+ if (rd->rdesc0.owner == OWNED_BY_NIC)
+ break;
+
+ /*
+ * Don't drop CE or RL error frame although RXOK is off
+ * FIXME: need to handle copybreak
+ */
+ if ((rd->rdesc0.RSR & RSR_RXOK) || (!(rd->rdesc0.RSR & RSR_RXOK) && (rd->rdesc0.RSR & (RSR_CE | RSR_RL)))) {
+ if (velocity_receive_frame(vptr, rd_curr) == 0) {
+ if (velocity_alloc_rx_buf(vptr, rd_curr) < 0) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_ERR "%s: can not allocate rx buf\n", vptr->dev->name);
+ break;
+ }
+ } else {
+ stats->rx_dropped++;
+ }
+ } else {
+ if (rd->rdesc0.RSR & RSR_CRC)
+ stats->rx_crc_errors++;
+ if (rd->rdesc0.RSR & RSR_FAE)
+ stats->rx_frame_errors++;
+
+ stats->rx_dropped++;
+ }
+
+ rd->inten = 1;
+
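+		/*
+		 * The chip wants RX descriptors handed back in blocks of
+		 * four: flip ownership on the last four slots and report
+		 * them through RBRDU.
+		 */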
+ if (++vptr->rd_used >= 4) {
+ int i, rd_prev = rd_curr;
+ for (i = 0; i < 4; i++) {
+ if (--rd_prev < 0)
+ rd_prev = vptr->options.numrx - 1;
+
+ rd = &(vptr->rd_ring[rd_prev]);
+ rd->rdesc0.owner = OWNED_BY_NIC;
+ }
+ writew(4, &(regs->RBRDU));
+ vptr->rd_used -= 4;
+ }
+
+ vptr->dev->last_rx = jiffies;
+
+ rd_curr++;
+ if (rd_curr >= vptr->options.numrx)
+ rd_curr = 0;
+ }
+ vptr->rd_curr = rd_curr;
+ VAR_USED(stats);
+ return works;
+}
+
+/**
+ * velocity_rx_csum - checksum process
+ * @rd: receive packet descriptor
+ * @skb: network layer packet buffer
+ *
+ * Process the status bits for the received packet and determine
+ * if the checksum was computed and verified by the hardware
+ */
+
+static inline void velocity_rx_csum(struct rx_desc *rd, struct sk_buff *skb)
+{
+ skb->ip_summed = CHECKSUM_NONE;
+
+ if (rd->rdesc1.CSM & CSM_IPKT) {
+ if (rd->rdesc1.CSM & CSM_IPOK) {
+ if ((rd->rdesc1.CSM & CSM_TCPKT) ||
+ (rd->rdesc1.CSM & CSM_UDPKT)) {
+ if (!(rd->rdesc1.CSM & CSM_TUPOK)) {
+ return;
+ }
+ }
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+ }
+}
+
+/**
+ * velocity_receive_frame - received packet processor
+ * @vptr: velocity we are handling
+ * @idx: ring index
+ *
+ * A packet has arrived. We process the packet and if appropriate
+ * pass the frame up the network stack
+ */
+
+static int velocity_receive_frame(struct velocity_info *vptr, int idx)
+{
+ struct net_device_stats *stats = &vptr->stats;
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[idx]);
+ struct rx_desc *rd = &(vptr->rd_ring[idx]);
+ struct sk_buff *skb;
+
+ if (rd->rdesc0.RSR & (RSR_STP | RSR_EDP)) {
+		VELOCITY_PRT(MSG_LEVEL_VERBOSE, KERN_ERR " %s : the received frame spans multiple RDs.\n", vptr->dev->name);
+ stats->rx_length_errors++;
+ return -EINVAL;
+ }
+
+ if (rd->rdesc0.RSR & RSR_MAR)
+ vptr->stats.multicast++;
+
+ skb = rd_info->skb;
+ skb->dev = vptr->dev;
+
+ pci_unmap_single(vptr->pdev, rd_info->skb_dma, vptr->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ rd_info->skb_dma = (dma_addr_t) NULL;
+ rd_info->skb = NULL;
+
+ /* FIXME - memmove ? */
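+	/*
+	 * With IP alignment enabled, shift the frame up two bytes so the
+	 * IP header that follows the 14 byte Ethernet header ends up
+	 * longword aligned.
+	 */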
+ if (vptr->flags & VELOCITY_FLAGS_IP_ALIGN) {
+ int i;
+ for (i = rd->rdesc0.len + 4; i >= 0; i--)
+ *(skb->data + i + 2) = *(skb->data + i);
+ skb->data += 2;
+ skb->tail += 2;
+ }
+
+ skb_put(skb, (rd->rdesc0.len - 4));
+ skb->protocol = eth_type_trans(skb, skb->dev);
+
+ /*
+	 * Drop frames that do not meet the IEEE 802.3 length rules
+ */
+
+ if (vptr->flags & VELOCITY_FLAGS_VAL_PKT_LEN) {
+ if (rd->rdesc0.RSR & RSR_RL) {
+ stats->rx_length_errors++;
+ return -EINVAL;
+ }
+ }
+
+ velocity_rx_csum(rd, skb);
+
+ /*
+ * FIXME: need rx_copybreak handling
+ */
+
+ stats->rx_bytes += skb->len;
+ netif_rx(skb);
+
+ return 0;
+}
+
+/**
+ * velocity_alloc_rx_buf - allocate aligned receive buffer
+ * @vptr: velocity
+ * @idx: ring index
+ *
+ * Allocate a new full sized buffer for the reception of a frame and
+ * map it into PCI space for the hardware to use. The hardware
+ * requires *64* byte alignment of the buffer which makes life
+ * less fun than would be ideal.
+ */
+
+static int velocity_alloc_rx_buf(struct velocity_info *vptr, int idx)
+{
+ struct rx_desc *rd = &(vptr->rd_ring[idx]);
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[idx]);
+
+ rd_info->skb = dev_alloc_skb(vptr->rx_buf_sz + 64);
+ if (rd_info->skb == NULL)
+ return -ENOMEM;
+
+ /*
+ * Do the gymnastics to get the buffer head for data at
+	 * 64-byte alignment.
+ */
+	skb_reserve(rd_info->skb, 64 - ((unsigned long) rd_info->skb->tail & 63));
+ rd_info->skb->dev = vptr->dev;
+ rd_info->skb_dma = pci_map_single(vptr->pdev, rd_info->skb->tail, vptr->rx_buf_sz, PCI_DMA_FROMDEVICE);
+
+ /*
+ * Fill in the descriptor to match
+ */
+
+ *((u32 *) & (rd->rdesc0)) = 0;
+ rd->len = cpu_to_le32(vptr->rx_buf_sz);
+ rd->inten = 1;
+ rd->pa_low = cpu_to_le32(rd_info->skb_dma);
+ rd->pa_high = 0;
+ return 0;
+}
+
+/**
+ *	velocity_tx_srv - transmit interrupt service
+ *	@vptr: velocity
+ *	@status: adapter status (unused)
+ *
+ *	Scan the queues looking for transmitted packets that
+ *	we can complete and clean up. Update any statistics as
+ *	necessary.
+ */
+
+static int velocity_tx_srv(struct velocity_info *vptr, u32 status)
+{
+ struct tx_desc *td;
+ int qnum;
+ int full = 0;
+ int idx;
+ int works = 0;
+ struct velocity_td_info *tdinfo;
+ struct net_device_stats *stats = &vptr->stats;
+
+ for (qnum = 0; qnum < vptr->num_txq; qnum++) {
+ for (idx = vptr->td_tail[qnum]; vptr->td_used[qnum] > 0;
+ idx = (idx + 1) % vptr->options.numtx) {
+
+ /*
+ * Get Tx Descriptor
+ */
+ td = &(vptr->td_rings[qnum][idx]);
+ tdinfo = &(vptr->td_infos[qnum][idx]);
+
+ if (td->tdesc0.owner == OWNED_BY_NIC)
+ break;
+
+ if ((works++ > 15))
+ break;
+
+ if (td->tdesc0.TSR & TSR0_TERR) {
+ stats->tx_errors++;
+ stats->tx_dropped++;
+ if (td->tdesc0.TSR & TSR0_CDH)
+ stats->tx_heartbeat_errors++;
+ if (td->tdesc0.TSR & TSR0_CRS)
+ stats->tx_carrier_errors++;
+ if (td->tdesc0.TSR & TSR0_ABT)
+ stats->tx_aborted_errors++;
+ if (td->tdesc0.TSR & TSR0_OWC)
+ stats->tx_window_errors++;
+ } else {
+ stats->tx_packets++;
+ stats->tx_bytes += tdinfo->skb->len;
+ }
+ velocity_free_tx_buf(vptr, tdinfo);
+ vptr->td_used[qnum]--;
+ }
+ vptr->td_tail[qnum] = idx;
+
+ if (AVAIL_TD(vptr, qnum) < 1) {
+ full = 1;
+ }
+ }
+ /*
+ * Look to see if we should kick the transmit network
+ * layer for more work.
+ */
+ if (netif_queue_stopped(vptr->dev) && (full == 0)
+ && (!(vptr->mii_status & VELOCITY_LINK_FAIL))) {
+ netif_wake_queue(vptr->dev);
+ }
+ return works;
+}
+
+/**
+ * velocity_print_link_status - link status reporting
+ * @vptr: velocity to report on
+ *
+ * Turn the link status of the velocity card into a kernel log
+ * description of the new link state, detailing speed and duplex
+ * status
+ */
+
+static void velocity_print_link_status(struct velocity_info *vptr)
+{
+
+ if (vptr->mii_status & VELOCITY_LINK_FAIL) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: failed to detect cable link\n", vptr->dev->name);
+ } else if (vptr->options.spd_dpx == SPD_DPX_AUTO) {
+		VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: Link auto-negotiation", vptr->dev->name);
+
+ if (vptr->mii_status & VELOCITY_SPEED_1000)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 1000M bps");
+ else if (vptr->mii_status & VELOCITY_SPEED_100)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps");
+ else
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps");
+
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " full duplex\n");
+ else
+ VELOCITY_PRT(MSG_LEVEL_INFO, " half duplex\n");
+ } else {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: Link forced", vptr->dev->name);
+ switch (vptr->options.spd_dpx) {
+ case SPD_DPX_100_HALF:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps half duplex\n");
+ break;
+ case SPD_DPX_100_FULL:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps full duplex\n");
+ break;
+ case SPD_DPX_10_HALF:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps half duplex\n");
+ break;
+ case SPD_DPX_10_FULL:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps full duplex\n");
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/**
+ * velocity_error - handle error from controller
+ * @vptr: velocity
+ * @status: card status
+ *
+ * Process an error report from the hardware and attempt to recover
+ * the card itself. At the moment we cannot recover from some
+ * theoretically impossible errors but this could be fixed using
+ * the pci_device_failed logic to bounce the hardware
+ *
+ */
+
+static void velocity_error(struct velocity_info *vptr, int status)
+{
+
+ if (status & ISR_TXSTLI) {
+ struct mac_regs * regs = vptr->mac_regs;
+
+		printk(KERN_ERR "TD structure error TDindex=%hx\n", readw(&regs->TDIdx[0]));
+		BYTE_REG_BITS_ON(TXESR_TDSTR, &regs->TXESR);
+		writew(TRDCSR_RUN, &regs->TDCSRClr);
+ netif_stop_queue(vptr->dev);
+
+ /* FIXME: port over the pci_device_failed code and use it
+ here */
+ }
+
+ if (status & ISR_SRCI) {
+ struct mac_regs * regs = vptr->mac_regs;
+ int linked;
+
+ if (vptr->options.spd_dpx == SPD_DPX_AUTO) {
+ vptr->mii_status = check_connection_type(regs);
+
+ /*
+ * If it is a 3119, disable frame bursting in
+ * halfduplex mode and enable it in fullduplex
+ * mode
+ */
+			if (vptr->rev_id < REV_ID_VT3216_A0) {
+				if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+					BYTE_REG_BITS_ON(TCR_TB2BDIS, &regs->TCR);
+				else
+					BYTE_REG_BITS_OFF(TCR_TB2BDIS, &regs->TCR);
+			}
+ /*
+ * Only enable CD heart beat counter in 10HD mode
+ */
+			if (!(vptr->mii_status & VELOCITY_DUPLEX_FULL) && (vptr->mii_status & VELOCITY_SPEED_10)) {
+				BYTE_REG_BITS_OFF(TESTCFG_HBDIS, &regs->TESTCFG);
+			} else {
+				BYTE_REG_BITS_ON(TESTCFG_HBDIS, &regs->TESTCFG);
+			}
+ }
+ /*
+ * Get link status from PHYSR0
+ */
+		linked = readb(&regs->PHYSR0) & PHYSR0_LINKGD;
+
+ if (linked) {
+ vptr->mii_status &= ~VELOCITY_LINK_FAIL;
+ } else {
+ vptr->mii_status |= VELOCITY_LINK_FAIL;
+ }
+
+ velocity_print_link_status(vptr);
+ enable_flow_control_ability(vptr);
+
+ /*
+ * Re-enable auto-polling because SRCI will disable
+ * auto-polling
+ */
+
+ enable_mii_autopoll(regs);
+
+ if (vptr->mii_status & VELOCITY_LINK_FAIL)
+ netif_stop_queue(vptr->dev);
+ else
+ netif_wake_queue(vptr->dev);
+
+	}
+ if (status & ISR_MIBFI)
+ velocity_update_hw_mibs(vptr);
+ if (status & ISR_LSTEI)
+ mac_rx_queue_wake(vptr->mac_regs);
+}
+
+/**
+ * velocity_free_tx_buf - free transmit buffer
+ * @vptr: velocity
+ * @tdinfo: buffer
+ *
+ *	Release a transmit buffer. If the buffer was preallocated then
+ * recycle it, if not then unmap the buffer.
+ */
+
+static void velocity_free_tx_buf(struct velocity_info *vptr, struct velocity_td_info *tdinfo)
+{
+ struct sk_buff *skb = tdinfo->skb;
+ int i;
+
+ /*
+ * Don't unmap the pre-allocated tx_bufs
+ */
+ if (tdinfo->skb_dma && (tdinfo->skb_dma[0] != tdinfo->buf_dma)) {
+
+ for (i = 0; i < tdinfo->nskb_dma; i++) {
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ pci_unmap_single(vptr->pdev, tdinfo->skb_dma[i], td->tdesc1.len, PCI_DMA_TODEVICE);
+#else
+ pci_unmap_single(vptr->pdev, tdinfo->skb_dma[i], skb->len, PCI_DMA_TODEVICE);
+#endif
+ tdinfo->skb_dma[i] = 0;
+ }
+ }
+ dev_kfree_skb_irq(skb);
+ tdinfo->skb = NULL;
+}
+
+/**
+ * velocity_open - interface activation callback
+ * @dev: network layer device to open
+ *
+ * Called when the network layer brings the interface up. Returns
+ * a negative posix error code on failure, or zero on success.
+ *
+ * All the ring allocation and set up is done on open for this
+ * adapter to minimise memory usage when inactive
+ */
+
+static int velocity_open(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ int ret;
+
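+	/*
+	 * Standard MTUs fit the preallocated PKT_BUF_SZ receive buffers;
+	 * larger MTUs get a buffer sized to the MTU plus header slack.
+	 */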
+ vptr->rx_buf_sz = (dev->mtu <= 1504 ? PKT_BUF_SZ : dev->mtu + 32);
+
+ ret = velocity_init_rings(vptr);
+ if (ret < 0)
+ goto out;
+
+ ret = velocity_init_rd_ring(vptr);
+ if (ret < 0)
+ goto err_free_desc_rings;
+
+ ret = velocity_init_td_ring(vptr);
+ if (ret < 0)
+ goto err_free_rd_ring;
+
+ /* Ensure chip is running */
+ pci_set_power_state(vptr->pdev, 0);
+
+ velocity_init_registers(vptr, VELOCITY_INIT_COLD);
+
+ ret = request_irq(vptr->pdev->irq, &velocity_intr, SA_SHIRQ,
+ dev->name, dev);
+ if (ret < 0) {
+ /* Power down the chip */
+ pci_set_power_state(vptr->pdev, 3);
+ goto err_free_td_ring;
+ }
+
+ mac_enable_int(vptr->mac_regs);
+ netif_start_queue(dev);
+ vptr->flags |= VELOCITY_FLAGS_OPENED;
+out:
+ return ret;
+
+err_free_td_ring:
+ velocity_free_td_ring(vptr);
+err_free_rd_ring:
+ velocity_free_rd_ring(vptr);
+err_free_desc_rings:
+ velocity_free_rings(vptr);
+ goto out;
+}
+
+/**
+ * velocity_change_mtu - MTU change callback
+ * @dev: network device
+ * @new_mtu: desired MTU
+ *
+ * Handle requests from the networking layer for MTU change on
+ * this interface. It gets called on a change by the network layer.
+ * Return zero for success or negative posix error code.
+ */
+
+static int velocity_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct velocity_info *vptr = dev->priv;
+ unsigned long flags;
+ int oldmtu = dev->mtu;
+ int ret = 0;
+
+ if ((new_mtu < VELOCITY_MIN_MTU) || new_mtu > (VELOCITY_MAX_MTU)) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_NOTICE "%s: Invalid MTU.\n",
+ vptr->dev->name);
+ return -EINVAL;
+ }
+
+ if (new_mtu != oldmtu) {
+ spin_lock_irqsave(&vptr->lock, flags);
+
+ netif_stop_queue(dev);
+ velocity_shutdown(vptr);
+
+ velocity_free_td_ring(vptr);
+ velocity_free_rd_ring(vptr);
+
+ dev->mtu = new_mtu;
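+		/* Pick the smallest RX buffer bucket that fits the new MTU */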
+ if (new_mtu > 8192)
+ vptr->rx_buf_sz = 9 * 1024;
+ else if (new_mtu > 4096)
+ vptr->rx_buf_sz = 8192;
+ else
+ vptr->rx_buf_sz = 4 * 1024;
+
+ ret = velocity_init_rd_ring(vptr);
+ if (ret < 0)
+ goto out_unlock;
+
+ ret = velocity_init_td_ring(vptr);
+ if (ret < 0)
+ goto out_unlock;
+
+ velocity_init_registers(vptr, VELOCITY_INIT_COLD);
+
+ mac_enable_int(vptr->mac_regs);
+ netif_start_queue(dev);
+out_unlock:
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ }
+
+ return ret;
+}
+
+/**
+ * velocity_shutdown - shut down the chip
+ * @vptr: velocity to deactivate
+ *
+ * Shuts down the internal operations of the velocity and
+ * disables interrupts, autopolling, transmit and receive
+ */
+
+static void velocity_shutdown(struct velocity_info *vptr)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ mac_disable_int(regs);
+	writel(CR0_STOP, &regs->CR0Set);
+	writew(0xFFFF, &regs->TDCSRClr);
+	writeb(0xFF, &regs->RDCSRClr);
+ safe_disable_mii_autopoll(regs);
+ mac_clear_isr(regs);
+}
+
+/**
+ * velocity_close - close adapter callback
+ * @dev: network device
+ *
+ * Callback from the network layer when the velocity is being
+ * deactivated by the network layer
+ */
+
+static int velocity_close(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+
+ netif_stop_queue(dev);
+ velocity_shutdown(vptr);
+
+ if (vptr->flags & VELOCITY_FLAGS_WOL_ENABLED)
+ velocity_get_ip(vptr);
+ if (dev->irq != 0)
+ free_irq(dev->irq, dev);
+
+ /* Power down the chip */
+ pci_set_power_state(vptr->pdev, 3);
+
+ /* Free the resources */
+ velocity_free_td_ring(vptr);
+ velocity_free_rd_ring(vptr);
+ velocity_free_rings(vptr);
+
+ vptr->flags &= (~VELOCITY_FLAGS_OPENED);
+ return 0;
+}
+
+/**
+ * velocity_xmit - transmit packet callback
+ * @skb: buffer to transmit
+ * @dev: network device
+ *
+ *	Called by the network layer to request that a packet be queued
+ *	to the velocity. Returns zero on success.
+ */
+
+static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ int qnum = 0;
+ struct tx_desc *td_ptr;
+ struct velocity_td_info *tdinfo;
+ unsigned long flags;
+ int index;
+
+ int pktlen = skb->len;
+
+ spin_lock_irqsave(&vptr->lock, flags);
+
+ index = vptr->td_curr[qnum];
+ td_ptr = &(vptr->td_rings[qnum][index]);
+ tdinfo = &(vptr->td_infos[qnum][index]);
+
+ td_ptr->tdesc1.TCPLS = TCPLS_NORMAL;
+ td_ptr->tdesc1.TCR = TCR0_TIC;
+ td_ptr->td_buf[0].queue = 0;
+
+ /*
+ * Pad short frames.
+ */
+ if (pktlen < ETH_ZLEN) {
+ /* Cannot occur until ZC support */
+		if (skb_linearize(skb, GFP_ATOMIC)) {
+			spin_unlock_irqrestore(&vptr->lock, flags);
+			dev_kfree_skb_any(skb);
+			return 0;
+		}
+ pktlen = ETH_ZLEN;
+ memcpy(tdinfo->buf, skb->data, skb->len);
+ memset(tdinfo->buf + skb->len, 0, ETH_ZLEN - skb->len);
+ tdinfo->skb = skb;
+ tdinfo->skb_dma[0] = tdinfo->buf_dma;
+ td_ptr->tdesc0.pktsize = pktlen;
+ td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ } else
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ if (skb_shinfo(skb)->nr_frags > 0) {
+ int nfrags = skb_shinfo(skb)->nr_frags;
+ tdinfo->skb = skb;
+ if (nfrags > 6) {
+ skb_linearize(skb, GFP_ATOMIC);
+ memcpy(tdinfo->buf, skb->data, skb->len);
+ tdinfo->skb_dma[0] = tdinfo->buf_dma;
+			td_ptr->tdesc0.pktsize = pktlen;
+			td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ } else {
+ int i = 0;
+ tdinfo->nskb_dma = 0;
+ tdinfo->skb_dma[i] = pci_map_single(vptr->pdev, skb->data, skb->len - skb->data_len, PCI_DMA_TODEVICE);
+
+ td_ptr->tdesc0.pktsize = pktlen;
+
+ /* FIXME: support 48bit DMA later */
+			td_ptr->td_buf[i].pa_low = cpu_to_le32(tdinfo->skb_dma[i]);
+			td_ptr->td_buf[i].pa_high = 0;
+			td_ptr->td_buf[i].bufsize = skb->len - skb->data_len;
+
+ for (i = 0; i < nfrags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+				void *addr = ((void *) page_address(frag->page)) + frag->page_offset;
+
+ tdinfo->skb_dma[i + 1] = pci_map_single(vptr->pdev, addr, frag->size, PCI_DMA_TODEVICE);
+
+ td_ptr->td_buf[i + 1].pa_low = cpu_to_le32(tdinfo->skb_dma[i + 1]);
+ td_ptr->td_buf[i + 1].pa_high = 0;
+ td_ptr->td_buf[i + 1].bufsize = frag->size;
+ }
+			/* one mapping for the linear part plus one per fragment */
+			tdinfo->nskb_dma = i + 1;
+			td_ptr->tdesc1.CMDZ = i + 2;
+ }
+
+ } else
+#endif
+ {
+ /*
+ * Map the linear network buffer into PCI space and
+ * add it to the transmit ring.
+ */
+ tdinfo->skb = skb;
+ tdinfo->skb_dma[0] = pci_map_single(vptr->pdev, skb->data, pktlen, PCI_DMA_TODEVICE);
+ td_ptr->tdesc0.pktsize = pktlen;
+ td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ }
+
+ if (vptr->flags & VELOCITY_FLAGS_TAGGING) {
+ td_ptr->tdesc1.pqinf.VID = (vptr->options.vid & 0xfff);
+ td_ptr->tdesc1.pqinf.priority = 0;
+ td_ptr->tdesc1.pqinf.CFI = 0;
+ td_ptr->tdesc1.TCR |= TCR0_VETAG;
+ }
+
+ /*
+ * Handle hardware checksum
+ */
+ if ((vptr->flags & VELOCITY_FLAGS_TX_CSUM)
+ && (skb->ip_summed == CHECKSUM_HW)) {
+ struct iphdr *ip = skb->nh.iph;
+ if (ip->protocol == IPPROTO_TCP)
+ td_ptr->tdesc1.TCR |= TCR0_TCPCK;
+ else if (ip->protocol == IPPROTO_UDP)
+ td_ptr->tdesc1.TCR |= (TCR0_UDPCK);
+ td_ptr->tdesc1.TCR |= TCR0_IPCK;
+ }
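+	/*
+	 * Publish the descriptor: hand it to the NIC, advance the ring,
+	 * then set the queue bit on the previous descriptor and wake the
+	 * TX queue so the chip picks up the new tail.
+	 */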
+ {
+
+ int prev = index - 1;
+
+ if (prev < 0)
+ prev = vptr->options.numtx - 1;
+ td_ptr->tdesc0.owner = OWNED_BY_NIC;
+ vptr->td_used[qnum]++;
+ vptr->td_curr[qnum] = (index + 1) % vptr->options.numtx;
+
+ if (AVAIL_TD(vptr, qnum) < 1)
+ netif_stop_queue(dev);
+
+ td_ptr = &(vptr->td_rings[qnum][prev]);
+ td_ptr->td_buf[0].queue = 1;
+ mac_tx_queue_wake(vptr->mac_regs, qnum);
+ }
+ dev->trans_start = jiffies;
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ return 0;
+}
+
+/**
+ * velocity_intr - interrupt callback
+ * @irq: interrupt number
+ * @dev_instance: interrupting device
+ * @pt_regs: CPU register state at interrupt
+ *
+ * Called whenever an interrupt is generated by the velocity
+ * adapter IRQ line. We may not be the source of the interrupt
+ * and need to identify initially if we are, and if not exit as
+ * efficiently as possible.
+ */
+
+static int velocity_intr(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_instance;
+ struct velocity_info *vptr = dev->priv;
+ u32 isr_status;
+ int max_count = 0;
+
+
+ spin_lock(&vptr->lock);
+ isr_status = mac_read_isr(vptr->mac_regs);
+
+ /* Not us ? */
+ if (isr_status == 0) {
+ spin_unlock(&vptr->lock);
+ return IRQ_NONE;
+ }
+
+ mac_disable_int(vptr->mac_regs);
+
+ /*
+ * Keep processing the ISR until we have completed
+ * processing and the isr_status becomes zero
+ */
+
+ while (isr_status != 0) {
+ mac_write_isr(vptr->mac_regs, isr_status);
+ if (isr_status & (~(ISR_PRXI | ISR_PPRXI | ISR_PTXI | ISR_PPTXI)))
+ velocity_error(vptr, isr_status);
+ if (isr_status & (ISR_PRXI | ISR_PPRXI))
+ max_count += velocity_rx_srv(vptr, isr_status);
+ if (isr_status & (ISR_PTXI | ISR_PPTXI))
+ max_count += velocity_tx_srv(vptr, isr_status);
+ isr_status = mac_read_isr(vptr->mac_regs);
+ if (max_count > vptr->options.int_works)
+ {
+ printk(KERN_WARNING "%s: excessive work at interrupt.\n",
+ dev->name);
+ max_count = 0;
+ }
+ }
+ spin_unlock(&vptr->lock);
+ mac_enable_int(vptr->mac_regs);
+ return IRQ_HANDLED;
+
+}
+
+
+/**
+ * ether_crc - ethernet CRC function
+ *
+ * Compute an ethernet CRC hash of the data block provided. This
+ * is not performance optimised but is not needed in performance
+ * critical code paths.
+ *
+ * FIXME: could we use shared code here ?
+ */
+
+static inline u32 ether_crc(int length, unsigned char *data)
+{
+ static unsigned const ethernet_polynomial = 0x04c11db7U;
+
+ int crc = -1;
+
+ while (--length >= 0) {
+ unsigned char current_octet = *data++;
+ int bit;
+ for (bit = 0; bit < 8; bit++, current_octet >>= 1) {
+ crc = (crc << 1) ^ ((crc < 0) ^ (current_octet & 1) ? ethernet_polynomial : 0);
+ }
+ }
+ return crc;
+}
+
+/**
+ * velocity_set_multi - filter list change callback
+ * @dev: network device
+ *
+ * Called by the network layer when the filter lists need to change
+ * for a velocity adapter. Reload the CAMs with the new address
+ * filter ruleset.
+ */
+
+static void velocity_set_multi(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs * regs = vptr->mac_regs;
+ u8 rx_mode;
+ int i;
+ struct dev_mc_list *mclist;
+
+ if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
+ /* Unconditionally log net taps. */
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
+		writel(0xffffffff, &regs->MARCAM[0]);
+		writel(0xffffffff, &regs->MARCAM[4]);
+ rx_mode = (RCR_AM | RCR_AB | RCR_PROM);
+ } else if ((dev->mc_count > vptr->multicast_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+		writel(0xffffffff, &regs->MARCAM[0]);
+		writel(0xffffffff, &regs->MARCAM[4]);
+ rx_mode = (RCR_AM | RCR_AB);
+ } else {
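+		/*
+		 * Load the multicast addresses into the tail of the CAM,
+		 * marking each loaded slot in mCAMmask so that only valid
+		 * entries take part in filtering.
+		 */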
+ int offset = MCAM_SIZE - vptr->multicast_limit;
+ mac_get_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count; i++, mclist = mclist->next) {
+ mac_set_cam(regs, i + offset, mclist->dmi_addr, VELOCITY_MULTICAST_CAM);
+ vptr->mCAMmask[(offset + i) / 8] |= 1 << ((offset + i) & 7);
+ }
+
+ mac_set_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+ rx_mode = (RCR_AM | RCR_AB);
+ }
+ if (dev->mtu > 1500)
+ rx_mode |= RCR_AL;
+
+	BYTE_REG_BITS_ON(rx_mode, &regs->RCR);
+
+}
+
+/**
+ *	velocity_get_stats - statistics callback
+ * @dev: network device
+ *
+ * Callback from the network layer to allow driver statistics
+ * to be resynchronized with hardware collected state. In the
+ * case of the velocity we need to pull the MIB counters from
+ * the hardware into the counters before letting the network
+ * layer display them.
+ */
+
+static struct net_device_stats *velocity_get_stats(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+
+	/* If the hardware is down, don't touch the hardware counters */
+ if(!netif_running(dev))
+ return &vptr->stats;
+
+ spin_lock_irq(&vptr->lock);
+ velocity_update_hw_mibs(vptr);
+ spin_unlock_irq(&vptr->lock);
+
+ vptr->stats.rx_packets = vptr->mib_counter[HW_MIB_ifRxAllPkts];
+ vptr->stats.rx_errors = vptr->mib_counter[HW_MIB_ifRxErrorPkts];
+ vptr->stats.rx_length_errors = vptr->mib_counter[HW_MIB_ifInRangeLengthErrors];
+
+	vptr->stats.collisions = vptr->mib_counter[HW_MIB_ifTxEtherCollisions];
+	vptr->stats.rx_crc_errors = vptr->mib_counter[HW_MIB_ifRxPktCRCE];
+
+	/*
+	 * The remaining net_device_stats fields (rx_dropped, rx_over_errors,
+	 * rx_frame_errors, rx_fifo_errors, rx_missed_errors and the detailed
+	 * tx errors such as tx_fifo_errors) have no hardware MIB counterpart
+	 * and are left to the software counters.
+	 */
+
+ return &vptr->stats;
+}
+
+
+/**
+ * velocity_ioctl - ioctl entry point
+ * @dev: network device
+ * @rq: interface request ioctl
+ * @cmd: command code
+ *
+ * Called when the user issues an ioctl request to the network
+ * device in question. The velocity interface supports MII.
+ */
+
+static int velocity_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ int ret;
+
+ /* If we are asked for information and the device is power
+ saving then we need to bring the device back up to talk to it */
+
+ if(!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 0);
+
+ switch (cmd) {
+ case SIOCGMIIPHY: /* Get address of MII PHY in use. */
+ case SIOCGMIIREG: /* Read MII PHY register. */
+ case SIOCSMIIREG: /* Write to MII PHY register. */
+ ret = velocity_mii_ioctl(dev, rq, cmd);
+ break;
+
+ default:
+ ret = -EOPNOTSUPP;
+ }
+ if(!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 3);
+
+
+ return ret;
+}
+
+/*
+ * Definition for our device driver. The PCI layer interface
+ *	uses this to handle all our card discovery and plugging
+ */
+
+static struct pci_driver velocity_driver = {
+	.name		= VELOCITY_NAME,
+	.id_table	= velocity_id_table,
+	.probe		= velocity_found1,
+	.remove		= velocity_remove1,
+#ifdef CONFIG_PM
+	.suspend	= velocity_suspend,
+	.resume		= velocity_resume,
+#endif
+};
+
+/**
+ * velocity_init_module - load time function
+ *
+ * Called when the velocity module is loaded. The PCI driver
+ * is registered with the PCI layer, and in turn will call
+ * the probe functions for each velocity adapter installed
+ * in the system.
+ */
+
+static int __init velocity_init_module(void)
+{
+ int ret;
+ ret = pci_module_init(&velocity_driver);
+
+#ifdef CONFIG_PM
+ register_inetaddr_notifier(&velocity_inetaddr_notifier);
+#endif
+ return ret;
+}
+
+/**
+ * velocity_cleanup - module unload
+ *
+ *	Called when the velocity module is unloaded. It cleans up the
+ *	notifiers and unregisters the PCI driver interface for this
+ *	hardware. This in turn cleans up all discovered interfaces
+ *	before returning from the function
+ */
+
+static void __exit velocity_cleanup_module(void)
+{
+#ifdef CONFIG_PM
+ unregister_inetaddr_notifier(&velocity_inetaddr_notifier);
+#endif
+ pci_unregister_driver(&velocity_driver);
+}
+
+module_init(velocity_init_module);
+module_exit(velocity_cleanup_module);
+
+
+/*
+ * MII access , media link mode setting functions
+ */
+
+
+/**
+ * mii_init - set up MII
+ * @vptr: velocity adapter
+ *	@mii_status: link status
+ *
+ * Set up the PHY for the current link state.
+ */
+
+static void mii_init(struct velocity_info *vptr, u32 mii_status)
+{
+ u16 BMCR;
+
+ switch (PHYID_GET_PHY_ID(vptr->phy_id)) {
+ case PHYID_CICADA_CS8201:
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_OFF((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ /*
+		 * Turn on the ECHODIS bit in NWay-forced full mode and turn
+		 * it off in NWay-forced half mode for the NWay-forced vs.
+		 * legacy-forced issue.
+ */
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ MII_REG_BITS_ON(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ else
+ MII_REG_BITS_OFF(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ /*
+ * Turn on Link/Activity LED enable bit for CIS8201
+ */
+ MII_REG_BITS_ON(PLED_LALBE, MII_REG_PLED, vptr->mac_regs);
+ break;
+ case PHYID_VT3216_32BIT:
+ case PHYID_VT3216_64BIT:
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_ON((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ /*
+		 * Turn on the ECHODIS bit in NWay-forced full mode and turn
+		 * it off in NWay-forced half mode for the NWay-forced vs.
+		 * legacy-forced issue
+ */
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ MII_REG_BITS_ON(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ else
+ MII_REG_BITS_OFF(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ break;
+
+ case PHYID_MARVELL_1000:
+ case PHYID_MARVELL_1000S:
+ /*
+ * Assert CRS on Transmit
+ */
+ MII_REG_BITS_ON(PSCR_ACRSTX, MII_REG_PSCR, vptr->mac_regs);
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_ON((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ break;
+ default:
+ ;
+ }
+ velocity_mii_read(vptr->mac_regs, MII_REG_BMCR, &BMCR);
+ if (BMCR & BMCR_ISO) {
+ BMCR &= ~BMCR_ISO;
+ velocity_mii_write(vptr->mac_regs, MII_REG_BMCR, BMCR);
+ }
+}
+
+/**
+ * safe_disable_mii_autopoll - autopoll off
+ * @regs: velocity registers
+ *
+ * Turn off the autopoll and wait for it to disable on the chip
+ */
+
+static void safe_disable_mii_autopoll(struct mac_regs * regs)
+{
+ u16 ww;
+
+ /* turn off MAUTO */
+	writeb(0, &regs->MIICR);
+	for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+		udelay(1);
+		if (BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+ break;
+ }
+}
+
+/**
+ * enable_mii_autopoll - turn on autopolling
+ * @regs: velocity registers
+ *
+ * Enable the MII link status autopoll feature on the Velocity
+ * hardware. Wait for it to enable.
+ */
+
+static void enable_mii_autopoll(struct mac_regs * regs)
+{
+ int ii;
+
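+	/*
+	 * Restart the autopoll engine: quiesce MIICR, point MIIADR at
+	 * the poll target (MIIADR_SWMPL), wait for the MII interface to
+	 * go idle, then switch MAUTO on and wait for polling to start.
+	 */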
+ writeb(0, &(regs->MIICR));
+	writeb(MIIADR_SWMPL, &regs->MIIADR);
+
+	for (ii = 0; ii < W_MAX_TIMEOUT; ii++) {
+		udelay(1);
+		if (BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+			break;
+	}
+
+	writeb(MIICR_MAUTO, &regs->MIICR);
+
+	for (ii = 0; ii < W_MAX_TIMEOUT; ii++) {
+		udelay(1);
+		if (!BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+ break;
+ }
+
+}
+
+/**
+ * velocity_mii_read - read MII data
+ * @regs: velocity registers
+ * @index: MII register index
+ * @data: buffer for received data
+ *
+ * Perform a single read of an MII 16bit register. Returns zero
+ * on success or -ETIMEDOUT if the PHY did not respond.
+ */
+
+static int velocity_mii_read(struct mac_regs * regs, u8 index, u16 *data)
+{
+ u16 ww;
+
+ /*
+ * Disable MIICR_MAUTO, so that mii addr can be set normally
+ */
+ safe_disable_mii_autopoll(regs);
+
+	writeb(index, &regs->MIIADR);
+
+	BYTE_REG_BITS_ON(MIICR_RCMD, &regs->MIICR);
+
+	for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+		if (!(readb(&regs->MIICR) & MIICR_RCMD))
+			break;
+	}
+
+	*data = readw(&regs->MIIDATA);
+
+ enable_mii_autopoll(regs);
+ if (ww == W_MAX_TIMEOUT)
+ return -ETIMEDOUT;
+ return 0;
+}
+
+/**
+ * velocity_mii_write - write MII data
+ * @regs: velocity registers
+ * @index: MII register index
+ * @data: 16bit data for the MII register
+ *
+ * Perform a single write to an MII 16bit register. Returns zero
+ * on success or -ETIMEDOUT if the PHY did not respond.
+ */
+
+static int velocity_mii_write(struct mac_regs * regs, u8 mii_addr, u16 data)
+{
+ u16 ww;
+
+ /*
+ * Disable MIICR_MAUTO, so that mii addr can be set normally
+ */
+ safe_disable_mii_autopoll(regs);
+
+ /* MII reg offset */
+	writeb(mii_addr, &regs->MIIADR);
+	/* set MII data */
+	writew(data, &regs->MIIDATA);
+
+	/* turn on MIICR_WCMD */
+	BYTE_REG_BITS_ON(MIICR_WCMD, &regs->MIICR);
+
+	/* W_MAX_TIMEOUT is the timeout period */
+	for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+		udelay(5);
+		if (!(readb(&regs->MIICR) & MIICR_WCMD))
+ break;
+ }
+ enable_mii_autopoll(regs);
+
+ if (ww == W_MAX_TIMEOUT)
+ return -ETIMEDOUT;
+ return 0;
+}
+
+/**
+ * velocity_get_opt_media_mode - get media selection
+ * @vptr: velocity adapter
+ *
+ * Get the media mode stored in EEPROM or module options and load
+ * mii_status accordingly. The requested link state information
+ * is also returned.
+ */
+
+static u32 velocity_get_opt_media_mode(struct velocity_info *vptr)
+{
+ u32 status = 0;
+
+ switch (vptr->options.spd_dpx) {
+ case SPD_DPX_AUTO:
+ status = VELOCITY_AUTONEG_ENABLE;
+ break;
+ case SPD_DPX_100_FULL:
+ status = VELOCITY_SPEED_100 | VELOCITY_DUPLEX_FULL;
+ break;
+ case SPD_DPX_10_FULL:
+ status = VELOCITY_SPEED_10 | VELOCITY_DUPLEX_FULL;
+ break;
+ case SPD_DPX_100_HALF:
+ status = VELOCITY_SPEED_100;
+ break;
+ case SPD_DPX_10_HALF:
+ status = VELOCITY_SPEED_10;
+ break;
+ }
+ vptr->mii_status = status;
+ return status;
+}
+
+/**
+ * mii_set_auto_on - autonegotiate on
+ * @vptr: velocity
+ *
+ *	Enable autonegotiation on this interface
+ */
+
+static void mii_set_auto_on(struct velocity_info *vptr)
+{
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs))
+ MII_REG_BITS_ON(BMCR_REAUTO, MII_REG_BMCR, vptr->mac_regs);
+ else
+ MII_REG_BITS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs);
+}
+
+
+/*
+static void mii_set_auto_off(struct velocity_info * vptr)
+{
+ MII_REG_BITS_OFF(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs);
+}
+*/
+
+/**
+ * set_mii_flow_control - flow control setup
+ * @vptr: velocity interface
+ *
+ * Set up the flow control on this interface according to
+ * the supplied user/eeprom options.
+ */
+
+static void set_mii_flow_control(struct velocity_info *vptr)
+{
+	/* Enable or disable PAUSE in ANAR */
+ switch (vptr->options.flow_cntl) {
+ case FLOW_CNTL_TX:
+ MII_REG_BITS_OFF(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_RX:
+ MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_TX_RX:
+ MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_DISABLE:
+ MII_REG_BITS_OFF(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_OFF(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ *	velocity_set_media_mode - set media mode
+ *	@vptr: velocity adapter
+ *	@mii_status: old MII link state
+ *
+ *	Check the media link state and configure the flow control on
+ *	the PHY and the velocity hardware accordingly. In particular
+ *	we need to set up CD polling and frame bursting.
+ */
+
+static int velocity_set_media_mode(struct velocity_info *vptr, u32 mii_status)
+{
+ u32 curr_status;
+ struct mac_regs * regs = vptr->mac_regs;
+
+ vptr->mii_status = mii_check_media_mode(vptr->mac_regs);
+ curr_status = vptr->mii_status & (~VELOCITY_LINK_FAIL);
+
+ /* Set mii link status */
+ set_mii_flow_control(vptr);
+
+ /*
+	   Check if new status is consistent with current status
+ if (((mii_status & curr_status) & VELOCITY_AUTONEG_ENABLE)
+ || (mii_status==curr_status)) {
+ vptr->mii_status=mii_check_media_mode(vptr->mac_regs);
+ vptr->mii_status=check_connection_type(vptr->mac_regs);
+ VELOCITY_PRT(MSG_LEVEL_INFO, "Velocity link no change\n");
+ return 0;
+ }
+ */
+
+ if (PHYID_GET_PHY_ID(vptr->phy_id) == PHYID_CICADA_CS8201) {
+ MII_REG_BITS_ON(AUXCR_MDPPS, MII_REG_AUXCR, vptr->mac_regs);
+ }
+
+ /*
+ * If connection type is AUTO
+ */
+ if (mii_status & VELOCITY_AUTONEG_ENABLE) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, "Velocity is AUTO mode\n");
+ /* clear force MAC mode bit */
+		BYTE_REG_BITS_OFF(CHIPGCR_FCMODE, &regs->CHIPGCR);
+ /* set duplex mode of MAC according to duplex mode of MII */
+ MII_REG_BITS_ON(ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+ MII_REG_BITS_ON(BMCR_SPEED1G, MII_REG_BMCR, vptr->mac_regs);
+
+ /* enable AUTO-NEGO mode */
+ mii_set_auto_on(vptr);
+ } else {
+ u16 ANAR;
+ u8 CHIPGCR;
+
+ /*
+ * 1. if it's 3119, disable frame bursting in halfduplex mode
+ * and enable it in fullduplex mode
+ * 2. set correct MII/GMII and half/full duplex mode in CHIPGCR
+ * 3. only enable CD heart beat counter in 10HD mode
+ */
+
+ /* set force MAC mode bit */
+		BYTE_REG_BITS_ON(CHIPGCR_FCMODE, &regs->CHIPGCR);
+
+		CHIPGCR = readb(&regs->CHIPGCR);
+		CHIPGCR &= ~CHIPGCR_FCGMII;
+
+		if (mii_status & VELOCITY_DUPLEX_FULL) {
+			CHIPGCR |= CHIPGCR_FCFDX;
+			writeb(CHIPGCR, &regs->CHIPGCR);
+			VELOCITY_PRT(MSG_LEVEL_INFO, "set Velocity to forced full mode\n");
+			if (vptr->rev_id < REV_ID_VT3216_A0)
+				BYTE_REG_BITS_OFF(TCR_TB2BDIS, &regs->TCR);
+		} else {
+			CHIPGCR &= ~CHIPGCR_FCFDX;
+			VELOCITY_PRT(MSG_LEVEL_INFO, "set Velocity to forced half mode\n");
+			writeb(CHIPGCR, &regs->CHIPGCR);
+			if (vptr->rev_id < REV_ID_VT3216_A0)
+				BYTE_REG_BITS_ON(TCR_TB2BDIS, &regs->TCR);
+		}
+
+		MII_REG_BITS_OFF(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+
+		if (!(mii_status & VELOCITY_DUPLEX_FULL) && (mii_status & VELOCITY_SPEED_10)) {
+			BYTE_REG_BITS_OFF(TESTCFG_HBDIS, &regs->TESTCFG);
+		} else {
+			BYTE_REG_BITS_ON(TESTCFG_HBDIS, &regs->TESTCFG);
+ }
+ /* MII_REG_BITS_OFF(BMCR_SPEED1G, MII_REG_BMCR, vptr->mac_regs); */
+ velocity_mii_read(vptr->mac_regs, MII_REG_ANAR, &ANAR);
+ ANAR &= (~(ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10));
+ if (mii_status & VELOCITY_SPEED_100) {
+ if (mii_status & VELOCITY_DUPLEX_FULL)
+ ANAR |= ANAR_TXFD;
+ else
+ ANAR |= ANAR_TX;
+ } else {
+ if (mii_status & VELOCITY_DUPLEX_FULL)
+ ANAR |= ANAR_10FD;
+ else
+ ANAR |= ANAR_10;
+ }
+ velocity_mii_write(vptr->mac_regs, MII_REG_ANAR, ANAR);
+ /* enable AUTO-NEGO mode */
+ mii_set_auto_on(vptr);
+ /* MII_REG_BITS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs); */
+ }
+ /* vptr->mii_status=mii_check_media_mode(vptr->mac_regs); */
+ /* vptr->mii_status=check_connection_type(vptr->mac_regs); */
+ return VELOCITY_LINK_CHANGE;
+}
+
+/**
+ * mii_check_media_mode - check media state
+ * @regs: velocity registers
+ *
+ * Check the current MII status and determine the link status
+ * accordingly
+ */
+
+static u32 mii_check_media_mode(struct mac_regs * regs)
+{
+ u32 status = 0;
+ u16 ANAR;
+
+ if (!MII_REG_BITS_IS_ON(BMSR_LNK, MII_REG_BMSR, regs))
+ status |= VELOCITY_LINK_FAIL;
+
+ if (MII_REG_BITS_IS_ON(G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_SPEED_1000 | VELOCITY_DUPLEX_FULL;
+ else if (MII_REG_BITS_IS_ON(G1000CR_1000, MII_REG_G1000CR, regs))
+ status |= (VELOCITY_SPEED_1000);
+ else {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if (ANAR & ANAR_TXFD)
+ status |= (VELOCITY_SPEED_100 | VELOCITY_DUPLEX_FULL);
+ else if (ANAR & ANAR_TX)
+ status |= VELOCITY_SPEED_100;
+ else if (ANAR & ANAR_10FD)
+ status |= (VELOCITY_SPEED_10 | VELOCITY_DUPLEX_FULL);
+ else
+ status |= (VELOCITY_SPEED_10);
+ }
+
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, regs)) {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if ((ANAR & (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10))
+ == (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10)) {
+ if (MII_REG_BITS_IS_ON(G1000CR_1000 | G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_AUTONEG_ENABLE;
+ }
+ }
+
+ return status;
+}
+
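+/**
+ *	check_connection_type - check connection type
+ *	@regs: velocity registers
+ *
+ *	Read the PHY status register and derive the current speed,
+ *	duplex and autonegotiation state of the link.
+ */
+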
+static u32 check_connection_type(struct mac_regs * regs)
+{
+ u32 status = 0;
+ u8 PHYSR0;
+ u16 ANAR;
+	PHYSR0 = readb(&regs->PHYSR0);
+
+ /*
+ if (!(PHYSR0 & PHYSR0_LINKGD))
+ status|=VELOCITY_LINK_FAIL;
+ */
+
+ if (PHYSR0 & PHYSR0_FDPX)
+ status |= VELOCITY_DUPLEX_FULL;
+
+ if (PHYSR0 & PHYSR0_SPDG)
+ status |= VELOCITY_SPEED_1000;
+ if (PHYSR0 & PHYSR0_SPD10)
+ status |= VELOCITY_SPEED_10;
+ else
+ status |= VELOCITY_SPEED_100;
+
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, regs)) {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if ((ANAR & (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10))
+ == (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10)) {
+ if (MII_REG_BITS_IS_ON(G1000CR_1000 | G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_AUTONEG_ENABLE;
+ }
+ }
+
+ return status;
+}
+
+/**
+ * enable_flow_control_ability - flow control
+ *	@vptr: velocity to configure
+ *
+ * Set up flow control according to the flow control options
+ * determined by the eeprom/configuration.
+ */
+
+static void enable_flow_control_ability(struct velocity_info *vptr)
+{
+
+ struct mac_regs * regs = vptr->mac_regs;
+
+ switch (vptr->options.flow_cntl) {
+
+ case FLOW_CNTL_DEFAULT:
+		if (BYTE_REG_BITS_IS_ON(PHYSR0_RXFLC, &regs->PHYSR0))
+			writel(CR0_FDXRFCEN, &regs->CR0Set);
+		else
+			writel(CR0_FDXRFCEN, &regs->CR0Clr);
+
+		if (BYTE_REG_BITS_IS_ON(PHYSR0_TXFLC, &regs->PHYSR0))
+			writel(CR0_FDXTFCEN, &regs->CR0Set);
+		else
+			writel(CR0_FDXTFCEN, &regs->CR0Clr);
+		break;
+
+	case FLOW_CNTL_TX:
+		writel(CR0_FDXTFCEN, &regs->CR0Set);
+		writel(CR0_FDXRFCEN, &regs->CR0Clr);
+		break;
+
+	case FLOW_CNTL_RX:
+		writel(CR0_FDXRFCEN, &regs->CR0Set);
+		writel(CR0_FDXTFCEN, &regs->CR0Clr);
+		break;
+
+	case FLOW_CNTL_TX_RX:
+		writel(CR0_FDXTFCEN, &regs->CR0Set);
+		writel(CR0_FDXRFCEN, &regs->CR0Set);
+		break;
+
+	case FLOW_CNTL_DISABLE:
+		writel(CR0_FDXRFCEN, &regs->CR0Clr);
+		writel(CR0_FDXTFCEN, &regs->CR0Clr);
+ break;
+
+ default:
+ break;
+ }
+
+}
+
+
+/**
+ * velocity_ethtool_up - pre hook for ethtool
+ * @dev: network device
+ *
+ * Called before an ethtool operation. We need to make sure the
+ * chip is out of D3 state before we poke at it.
+ */
+
+static int velocity_ethtool_up(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ if(!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 0);
+ return 0;
+}
+
+/**
+ * velocity_ethtool_down - post hook for ethtool
+ * @dev: network device
+ *
+ *	Called after an ethtool operation. Put the chip back into D3
+ *	state if it isn't running.
+ */
+
+static void velocity_ethtool_down(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ if(!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 3);
+}
+
+static int velocity_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs * regs = vptr->mac_regs;
+ u32 status;
+ status = check_connection_type(vptr->mac_regs);
+
+ cmd->supported = SUPPORTED_TP | SUPPORTED_Autoneg | SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full;
+	if (status & VELOCITY_SPEED_1000)
+		cmd->speed = SPEED_1000;
+	else if (status & VELOCITY_SPEED_100)
+		cmd->speed = SPEED_100;
+	else
+		cmd->speed = SPEED_10;
+ cmd->autoneg = (status & VELOCITY_AUTONEG_ENABLE) ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+ cmd->port = PORT_TP;
+ cmd->transceiver = XCVR_INTERNAL;
+	cmd->phy_address = readb(&regs->MIIADR) & 0x1F;
+
+ if (status & VELOCITY_DUPLEX_FULL)
+ cmd->duplex = DUPLEX_FULL;
+ else
+ cmd->duplex = DUPLEX_HALF;
+
+ return 0;
+}
+
+static int velocity_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ u32 curr_status;
+ u32 new_status = 0;
+ int ret = 0;
+
+ curr_status = check_connection_type(vptr->mac_regs);
+ curr_status &= (~VELOCITY_LINK_FAIL);
+
+ new_status |= ((cmd->autoneg) ? VELOCITY_AUTONEG_ENABLE : 0);
+ new_status |= ((cmd->speed == SPEED_100) ? VELOCITY_SPEED_100 : 0);
+ new_status |= ((cmd->speed == SPEED_10) ? VELOCITY_SPEED_10 : 0);
+ new_status |= ((cmd->duplex == DUPLEX_FULL) ? VELOCITY_DUPLEX_FULL : 0);
+
+ if ((new_status & VELOCITY_AUTONEG_ENABLE) && (new_status != (curr_status | VELOCITY_AUTONEG_ENABLE)))
+ ret = -EINVAL;
+ else
+ velocity_set_media_mode(vptr, new_status);
+
+ return ret;
+}
+
+static u32 velocity_get_link(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs * regs = vptr->mac_regs;
+	return BYTE_REG_BITS_IS_ON(PHYSR0_LINKGD, &regs->PHYSR0) ? 1 : 0;
+}
+
+static void velocity_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ struct velocity_info *vptr = dev->priv;
+ strcpy(info->driver, VELOCITY_NAME);
+ strcpy(info->version, VELOCITY_VERSION);
+ strcpy(info->bus_info, vptr->pdev->slot_name);
+}
+
+static void velocity_ethtool_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct velocity_info *vptr = dev->priv;
+ wol->supported = WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_ARP;
+ wol->wolopts |= WAKE_MAGIC;
+ /*
+ if (vptr->wol_opts & VELOCITY_WOL_PHY)
+ wol.wolopts|=WAKE_PHY;
+ */
+ if (vptr->wol_opts & VELOCITY_WOL_UCAST)
+ wol->wolopts |= WAKE_UCAST;
+ if (vptr->wol_opts & VELOCITY_WOL_ARP)
+ wol->wolopts |= WAKE_ARP;
+ memcpy(&wol->sopass, vptr->wol_passwd, 6);
+}
+
+static int velocity_ethtool_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct velocity_info *vptr = dev->priv;
+
+ if (!(wol->wolopts & (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_ARP)))
+ return -EFAULT;
+ vptr->wol_opts = VELOCITY_WOL_MAGIC;
+
+ /*
+ if (wol.wolopts & WAKE_PHY) {
+ vptr->wol_opts|=VELOCITY_WOL_PHY;
+ vptr->flags |=VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ */
+
+ if (wol->wolopts & WAKE_MAGIC) {
+ vptr->wol_opts |= VELOCITY_WOL_MAGIC;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ if (wol->wolopts & WAKE_UCAST) {
+ vptr->wol_opts |= VELOCITY_WOL_UCAST;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ if (wol->wolopts & WAKE_ARP) {
+ vptr->wol_opts |= VELOCITY_WOL_ARP;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ memcpy(vptr->wol_passwd, wol->sopass, 6);
+ return 0;
+}
+
+static u32 velocity_get_msglevel(struct net_device *dev)
+{
+ return msglevel;
+}
+
+static void velocity_set_msglevel(struct net_device *dev, u32 value)
+{
+ msglevel = value;
+}
+
+static struct ethtool_ops velocity_ethtool_ops = {
+ .get_settings = velocity_get_settings,
+ .set_settings = velocity_set_settings,
+ .get_drvinfo = velocity_get_drvinfo,
+ .get_wol = velocity_ethtool_get_wol,
+ .set_wol = velocity_ethtool_set_wol,
+ .get_msglevel = velocity_get_msglevel,
+ .set_msglevel = velocity_set_msglevel,
+ .get_link = velocity_get_link,
+ .begin = velocity_ethtool_up,
+ .complete = velocity_ethtool_down
+};
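+
+/*
+ * Illustrative note (an assumption, not shown in this hunk): the ops
+ * table above takes effect once it is attached to the net_device
+ * during probe, e.g.
+ *
+ * dev->ethtool_ops = &velocity_ethtool_ops;
+ *
+ * after which the ethtool core dispatches ETHTOOL_* requests to these
+ * handlers, bracketing each call with .begin/.complete so the
+ * interface is powered up for the duration of the operation.
+ */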
+
+/**
+ * velocity_mii_ioctl - MII ioctl handler
+ * @dev: network device
+ * @ifr: the ifreq block for the ioctl
+ * @cmd: the command
+ *
+ * Process MII requests made via ioctl from the network layer. These
+ * are used by tools like kudzu to interrogate the link state of the
+ * hardware
+ */
+
+static int velocity_mii_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs * regs = vptr->mac_regs;
+ unsigned long flags;
+ struct mii_ioctl_data *miidata = (struct mii_ioctl_data *) &(ifr->ifr_data);
+ int err;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ miidata->phy_id = readb(&regs->MIIADR) & 0x1f;
+ break;
+ case SIOCGMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if(velocity_mii_read(vptr->mac_regs, miidata->reg_num & 0x1f, &(miidata->val_out)) < 0)
+ return -ETIMEDOUT;
+ break;
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ spin_lock_irqsave(&vptr->lock, flags);
+ err = velocity_mii_write(vptr->mac_regs, miidata->reg_num & 0x1f, miidata->val_in);
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ check_connection_type(vptr->mac_regs);
+ if(err)
+ return err;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ return 0;
+}
+
+#ifdef CONFIG_PM
+
+/**
+ * velocity_save_context - save registers
+ * @vptr: velocity
+ * @context: buffer for stored context
+ *
+ * Retrieve the current configuration from the velocity hardware
+ * and stash it in the context structure, for use by the context
+ * restore functions. This allows us to save things we need across
+ * power down states
+ */
+
+static void velocity_save_context(struct velocity_info *vptr, struct velocity_context * context)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ u16 i;
+ u8 *ptr = (u8 *)regs;
+
+ for (i = MAC_REG_PAR; i < MAC_REG_CR0_CLR; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+ for (i = MAC_REG_MAR; i < MAC_REG_TDCSR_CLR; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+ for (i = MAC_REG_RDBASE_LO; i < MAC_REG_FIFO_TEST0; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+}
+
+/**
+ * velocity_restore_context - restore registers
+ * @vptr: velocity
+ * @context: buffer for stored context
+ *
+ * Reload the register configuration from the velocity context
+ * created by velocity_save_context.
+ */
+
+static void velocity_restore_context(struct velocity_info *vptr, struct velocity_context *context)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ int i;
+ u8 *ptr = (u8 *)regs;
+
+ for (i = MAC_REG_PAR; i < MAC_REG_CR0_SET; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ /* Just skip cr0 */
+ for (i = MAC_REG_CR1_SET; i < MAC_REG_CR0_CLR; i++) {
+ /* Clear */
+ writeb(~(*((u8 *) (context->mac_reg + i))), ptr + i + 4);
+ /* Set */
+ writeb(*((u8 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_MAR; i < MAC_REG_IMR; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_RDBASE_LO; i < MAC_REG_FIFO_TEST0; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_TDCSR_SET; i <= MAC_REG_RDCSR_SET; i++) {
+ writeb(*((u8 *) (context->mac_reg + i)), ptr + i);
+ }
+
+}
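+
+/*
+ * Note on the CR0..CR3 handling above: the MAC exposes each control
+ * register as a set port with a matching clear port 4 bytes later
+ * (see the MAC_REG_CRx_SET/MAC_REG_CRx_CLR offsets), so a saved byte
+ * v is restored by first writing the complement to the clear port and
+ * then v to the set port, e.g. for CR1:
+ *
+ * writeb(~v, ptr + MAC_REG_CR1_CLR);
+ * writeb(v, ptr + MAC_REG_CR1_SET);
+ */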
+
+static int velocity_suspend(struct pci_dev *pdev, u32 state)
+{
+ struct velocity_info *vptr = pci_get_drvdata(pdev);
+ unsigned long flags;
+
+ if(!netif_running(vptr->dev))
+ return 0;
+
+ netif_device_detach(vptr->dev);
+
+ spin_lock_irqsave(&vptr->lock, flags);
+ pci_save_state(pdev, vptr->pci_state);
+#ifdef ETHTOOL_GWOL
+ if (vptr->flags & VELOCITY_FLAGS_WOL_ENABLED) {
+ velocity_get_ip(vptr);
+ velocity_save_context(vptr, &vptr->context);
+ velocity_shutdown(vptr);
+ velocity_set_wol(vptr);
+ pci_enable_wake(pdev, 3, 1);
+ pci_set_power_state(pdev, 3);
+ } else {
+ velocity_save_context(vptr, &vptr->context);
+ velocity_shutdown(vptr);
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, state);
+ }
+#else
+ pci_set_power_state(pdev, state);
+#endif
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ return 0;
+}
+
+static int velocity_resume(struct pci_dev *pdev)
+{
+ struct velocity_info *vptr = pci_get_drvdata(pdev);
+ unsigned long flags;
+ int i;
+
+ if(!netif_running(vptr->dev))
+ return 0;
+
+ pci_set_power_state(pdev, 0);
+ pci_enable_wake(pdev, 0, 0);
+ pci_restore_state(pdev, vptr->pci_state);
+
+ mac_wol_reset(vptr->mac_regs);
+
+ spin_lock_irqsave(&vptr->lock, flags);
+ velocity_restore_context(vptr, &vptr->context);
+ velocity_init_registers(vptr, VELOCITY_INIT_WOL);
+ mac_disable_int(vptr->mac_regs);
+
+ velocity_tx_srv(vptr, 0);
+
+ for (i = 0; i < vptr->num_txq; i++) {
+ if (vptr->td_used[i]) {
+ mac_tx_queue_wake(vptr->mac_regs, i);
+ }
+ }
+
+ mac_enable_int(vptr->mac_regs);
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ netif_device_attach(vptr->dev);
+
+ return 0;
+}
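+
+/*
+ * Sketch (assumed wiring, outside this hunk): the suspend/resume pair
+ * above would be hooked into the PCI core through the driver's
+ * struct pci_driver, roughly:
+ *
+ * static struct pci_driver velocity_driver = {
+ * .name = VELOCITY_NAME,
+ * .suspend = velocity_suspend,
+ * .resume = velocity_resume,
+ * ...
+ * };
+ */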
+
+static int velocity_netdev_event(struct notifier_block *nb, unsigned long notification, void *ptr)
+{
+ struct in_ifaddr *ifa = (struct in_ifaddr *) ptr;
+ struct net_device *dev;
+ struct velocity_info *vptr;
+
+ if (ifa) {
+ dev = ifa->ifa_dev->dev;
+ vptr = dev->priv;
+ velocity_get_ip(vptr);
+ }
+ return NOTIFY_DONE;
+}
+#endif
+
+/*
+ * Purpose: Functions to set WOL.
+ */
+
+static const unsigned short crc16_tab[256] = {
+ 0x0000, 0x1189, 0x2312, 0x329b, 0x4624, 0x57ad, 0x6536, 0x74bf,
+ 0x8c48, 0x9dc1, 0xaf5a, 0xbed3, 0xca6c, 0xdbe5, 0xe97e, 0xf8f7,
+ 0x1081, 0x0108, 0x3393, 0x221a, 0x56a5, 0x472c, 0x75b7, 0x643e,
+ 0x9cc9, 0x8d40, 0xbfdb, 0xae52, 0xdaed, 0xcb64, 0xf9ff, 0xe876,
+ 0x2102, 0x308b, 0x0210, 0x1399, 0x6726, 0x76af, 0x4434, 0x55bd,
+ 0xad4a, 0xbcc3, 0x8e58, 0x9fd1, 0xeb6e, 0xfae7, 0xc87c, 0xd9f5,
+ 0x3183, 0x200a, 0x1291, 0x0318, 0x77a7, 0x662e, 0x54b5, 0x453c,
+ 0xbdcb, 0xac42, 0x9ed9, 0x8f50, 0xfbef, 0xea66, 0xd8fd, 0xc974,
+ 0x4204, 0x538d, 0x6116, 0x709f, 0x0420, 0x15a9, 0x2732, 0x36bb,
+ 0xce4c, 0xdfc5, 0xed5e, 0xfcd7, 0x8868, 0x99e1, 0xab7a, 0xbaf3,
+ 0x5285, 0x430c, 0x7197, 0x601e, 0x14a1, 0x0528, 0x37b3, 0x263a,
+ 0xdecd, 0xcf44, 0xfddf, 0xec56, 0x98e9, 0x8960, 0xbbfb, 0xaa72,
+ 0x6306, 0x728f, 0x4014, 0x519d, 0x2522, 0x34ab, 0x0630, 0x17b9,
+ 0xef4e, 0xfec7, 0xcc5c, 0xddd5, 0xa96a, 0xb8e3, 0x8a78, 0x9bf1,
+ 0x7387, 0x620e, 0x5095, 0x411c, 0x35a3, 0x242a, 0x16b1, 0x0738,
+ 0xffcf, 0xee46, 0xdcdd, 0xcd54, 0xb9eb, 0xa862, 0x9af9, 0x8b70,
+ 0x8408, 0x9581, 0xa71a, 0xb693, 0xc22c, 0xd3a5, 0xe13e, 0xf0b7,
+ 0x0840, 0x19c9, 0x2b52, 0x3adb, 0x4e64, 0x5fed, 0x6d76, 0x7cff,
+ 0x9489, 0x8500, 0xb79b, 0xa612, 0xd2ad, 0xc324, 0xf1bf, 0xe036,
+ 0x18c1, 0x0948, 0x3bd3, 0x2a5a, 0x5ee5, 0x4f6c, 0x7df7, 0x6c7e,
+ 0xa50a, 0xb483, 0x8618, 0x9791, 0xe32e, 0xf2a7, 0xc03c, 0xd1b5,
+ 0x2942, 0x38cb, 0x0a50, 0x1bd9, 0x6f66, 0x7eef, 0x4c74, 0x5dfd,
+ 0xb58b, 0xa402, 0x9699, 0x8710, 0xf3af, 0xe226, 0xd0bd, 0xc134,
+ 0x39c3, 0x284a, 0x1ad1, 0x0b58, 0x7fe7, 0x6e6e, 0x5cf5, 0x4d7c,
+ 0xc60c, 0xd785, 0xe51e, 0xf497, 0x8028, 0x91a1, 0xa33a, 0xb2b3,
+ 0x4a44, 0x5bcd, 0x6956, 0x78df, 0x0c60, 0x1de9, 0x2f72, 0x3efb,
+ 0xd68d, 0xc704, 0xf59f, 0xe416, 0x90a9, 0x8120, 0xb3bb, 0xa232,
+ 0x5ac5, 0x4b4c, 0x79d7, 0x685e, 0x1ce1, 0x0d68, 0x3ff3, 0x2e7a,
+ 0xe70e, 0xf687, 0xc41c, 0xd595, 0xa12a, 0xb0a3, 0x8238, 0x93b1,
+ 0x6b46, 0x7acf, 0x4854, 0x59dd, 0x2d62, 0x3ceb, 0x0e70, 0x1ff9,
+ 0xf78f, 0xe606, 0xd49d, 0xc514, 0xb1ab, 0xa022, 0x92b9, 0x8330,
+ 0x7bc7, 0x6a4e, 0x58d5, 0x495c, 0x3de3, 0x2c6a, 0x1ef1, 0x0f78
+};
+
+
+static u32 mask_pattern[2][4] = {
+ {0x00203000, 0x000003C0, 0x00000000, 0x00000000}, /* ARP */
+ {0xfffff000, 0xffffffff, 0xffffffff, 0x0000ffff} /* Magic Packet */
+};
+
+/**
+ * ether_crc16 - compute ethernet CRC
+ * @len: buffer length
+ * @cp: buffer
+ * @crc16: initial CRC
+ *
+ * Compute a CRC value for a block of data.
+ * FIXME: can we use generic functions ?
+ */
+
+static u16 ether_crc16(int len, u8 * cp, u16 crc16)
+{
+ while (len--)
+ crc16 = (crc16 >> 8) ^ crc16_tab[(crc16 ^ *cp++) & 0xff];
+ return (crc16);
+}
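+
+/*
+ * Usage sketch (illustrative only): hashing a 6 byte Ethernet address
+ * with the reflected, table-driven CRC-16 above:
+ *
+ * u16 crc = ether_crc16(ETH_ALEN, addr, 0xFFFF);
+ *
+ * Each loop iteration folds one input byte into the running CRC with
+ * a single crc16_tab lookup.
+ */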
+
+/**
+ * bit_reverse - 16bit reverse
+ * @data: 16bit data to reverse
+ *
+ * Reverse the order of a 16bit value and return the reversed bits
+ */
+
+static u16 bit_reverse(u16 data)
+{
+ u32 new = 0x00000000;
+ int ii;
+
+
+ for (ii = 0; ii < 16; ii++) {
+ new |= ((u32) (data & 1) << (31 - ii));
+ data >>= 1;
+ }
+
+ return (u16) (new >> 16);
+}
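+
+/*
+ * Example: bit_reverse(0x0001) == 0x8000 and bit_reverse(0x8000) ==
+ * 0x0001. The loop mirrors bit k into bit (15 - k) by accumulating
+ * into the top half of a 32-bit scratch word before shifting down.
+ */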
+
+/**
+ * wol_calc_crc - WOL CRC
+ * @size: size of the mask in bytes
+ * @pattern: data pattern
+ * @mask_pattern: mask
+ *
+ * Compute the wake on lan crc hashes for the packet header
+ * we are interested in.
+ */
+
+u16 wol_calc_crc(int size, u8 * pattern, u8 *mask_pattern)
+{
+ u16 crc = 0xFFFF;
+ u8 mask;
+ int i, j;
+
+ for (i = 0; i < size; i++) {
+ mask = mask_pattern[i];
+
+ /* Skip this loop if the mask equals to zero */
+ if (mask == 0x00)
+ continue;
+
+ for (j = 0; j < 8; j++) {
+ if ((mask & 0x01) == 0) {
+ mask >>= 1;
+ continue;
+ }
+ mask >>= 1;
+ crc = ether_crc16(1, &(pattern[i * 8 + j]), crc);
+ }
+ }
+ /* Finally, invert the result once to get the correct data */
+ crc = ~crc;
+ return bit_reverse(crc);
+}
+
+/**
+ * velocity_set_wol - set up for wake on lan
+ * @vptr: velocity to set WOL status on
+ *
+ * Set a card up for wake on lan either by unicast or by
+ * ARP packet.
+ *
+ * FIXME: check static buffer is safe here
+ */
+
+static int velocity_set_wol(struct velocity_info *vptr)
+{
+ struct mac_regs * regs = vptr->mac_regs;
+ static u8 buf[256];
+ int i;
+
+ writew(0xFFFF, &regs->WOLCRClr);
+ writeb(WOLCFG_SAB | WOLCFG_SAM, &regs->WOLCFGSet);
+ writew(WOLCR_MAGIC_EN, &regs->WOLCRSet);
+
+ /*
+ if (vptr->wol_opts & VELOCITY_WOL_PHY)
+ writew((WOLCR_LINKON_EN|WOLCR_LINKOFF_EN), &regs->WOLCRSet);
+ */
+
+ if (vptr->wol_opts & VELOCITY_WOL_UCAST) {
+ writew(WOLCR_UNICAST_EN, &regs->WOLCRSet);
+ }
+
+ if (vptr->wol_opts & VELOCITY_WOL_ARP) {
+ struct arp_packet *arp = (struct arp_packet *) buf;
+ u16 crc;
+ memset(buf, 0, sizeof(struct arp_packet) + 7);
+
+ for (i = 0; i < 4; i++)
+ writel(mask_pattern[0][i], &regs->ByteMask[0][i]);
+
+ arp->type = htons(ETH_P_ARP);
+ arp->ar_op = htons(1);
+
+ memcpy(arp->ar_tip, vptr->ip_addr, 4);
+
+ crc = wol_calc_crc((sizeof(struct arp_packet) + 7) / 8, buf, (u8 *) &mask_pattern[0][0]);
+
+ writew(crc, &regs->PatternCRC[0]);
+ writew(WOLCR_ARP_EN, &regs->WOLCRSet);
+ }
+
+ BYTE_REG_BITS_ON(PWCFG_WOLTYPE, &regs->PWCFGSet);
+ BYTE_REG_BITS_ON(PWCFG_LEGACY_WOLEN, &regs->PWCFGSet);
+
+ writew(0x0FFF, &regs->WOLSRClr);
+
+ if (vptr->mii_status & VELOCITY_AUTONEG_ENABLE) {
+ if (PHYID_GET_PHY_ID(vptr->phy_id) == PHYID_CICADA_CS8201)
+ MII_REG_BITS_ON(AUXCR_MDPPS, MII_REG_AUXCR, vptr->mac_regs);
+
+ MII_REG_BITS_OFF(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+ }
+
+ if (vptr->mii_status & VELOCITY_SPEED_1000)
+ MII_REG_BITS_ON(BMCR_REAUTO, MII_REG_BMCR, vptr->mac_regs);
+
+ BYTE_REG_BITS_ON(CHIPGCR_FCMODE, &regs->CHIPGCR);
+
+ {
+ u8 GCR;
+ GCR = readb(&regs->CHIPGCR);
+ GCR = (GCR & ~CHIPGCR_FCGMII) | CHIPGCR_FCFDX;
+ writeb(GCR, &regs->CHIPGCR);
+ }
+
+ BYTE_REG_BITS_OFF(ISR_PWEI, &regs->ISR);
+ /* Turn on SWPTAG just before entering power mode */
+ BYTE_REG_BITS_ON(STICKHW_SWPTAG, &regs->STICKHW);
+ /* Go to bed ..... */
+ BYTE_REG_BITS_ON((STICKHW_DS1 | STICKHW_DS0), &regs->STICKHW);
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * Copyright (c) 1996, 2003 VIA Networking Technologies, Inc.
+ * All rights reserved.
+ *
+ * This software may be redistributed and/or modified under
+ * the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * File: via-velocity.h
+ *
+ * Purpose: Header file to define driver's private structures.
+ *
+ * Author: Chuang Liang-Shing, AJ Jiang
+ *
+ * Date: Jan 24, 2003
+ */
+
+
+#ifndef VELOCITY_H
+#define VELOCITY_H
+
+#define VELOCITY_TX_CSUM_SUPPORT
+
+#define VELOCITY_NAME "via-velocity"
+#define VELOCITY_FULL_DRV_NAM "VIA Networking Velocity Family Gigabit Ethernet Adapter Driver"
+#define VELOCITY_VERSION "1.13"
+
+#define PKT_BUF_SZ 1540
+
+#define MAX_UNITS 8
+#define OPTION_DEFAULT { [0 ... MAX_UNITS-1] = -1}
+
+#define REV_ID_VT6110 (0)
+#define DEVICE_ID (0x3119)
+
+#define BYTE_REG_BITS_ON(x,p) do { writeb(readb((p))|(x),(p));} while (0)
+#define WORD_REG_BITS_ON(x,p) do { writew(readw((p))|(x),(p));} while (0)
+#define DWORD_REG_BITS_ON(x,p) do { writel(readl((p))|(x),(p));} while (0)
+
+#define BYTE_REG_BITS_IS_ON(x,p) (readb((p)) & (x))
+#define WORD_REG_BITS_IS_ON(x,p) (readw((p)) & (x))
+#define DWORD_REG_BITS_IS_ON(x,p) (readl((p)) & (x))
+
+#define BYTE_REG_BITS_OFF(x,p) do { writeb(readb((p)) & (~(x)),(p));} while (0)
+#define WORD_REG_BITS_OFF(x,p) do { writew(readw((p)) & (~(x)),(p));} while (0)
+#define DWORD_REG_BITS_OFF(x,p) do { writel(readl((p)) & (~(x)),(p));} while (0)
+
+#define BYTE_REG_BITS_SET(x,m,p) do { writeb( (readb((p)) & (~(m))) |(x),(p));} while (0)
+#define WORD_REG_BITS_SET(x,m,p) do { writew( (readw((p)) & (~(m))) |(x),(p));} while (0)
+#define DWORD_REG_BITS_SET(x,m,p) do { writel( (readl((p)) & (~(m)))|(x),(p));} while (0)
+
+#define VAR_USED(p) do {(p)=(p);} while (0)
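+
+/*
+ * Usage sketch for the helpers above (illustrative): a read-modify-write
+ * of a register field clears the mask bits, then ORs in the new value,
+ * e.g. programming the RX FIFO threshold as mac_set_rx_thresh() does
+ * later in this header (threshold is the new field value):
+ *
+ * BYTE_REG_BITS_SET(threshold, MCFG_RFT0 | MCFG_RFT1, &regs->MCFG);
+ */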
+
+/*
+ * Purpose: Structures for MAX RX/TX descriptors.
+ */
+
+
+#define B_OWNED_BY_CHIP 1
+#define B_OWNED_BY_HOST 0
+
+/*
+ * Bits in the RSR0 register
+ */
+
+#define RSR_DETAG 0x0080
+#define RSR_SNTAG 0x0040
+#define RSR_RXER 0x0020
+#define RSR_RL 0x0010
+#define RSR_CE 0x0008
+#define RSR_FAE 0x0004
+#define RSR_CRC 0x0002
+#define RSR_VIDM 0x0001
+
+/*
+ * Bits in the RSR1 register (as the high byte of the 16-bit RSR word)
+ */
+
+#define RSR_RXOK 0x8000 // rx OK
+#define RSR_PFT 0x4000 // Perfect filtering address match
+#define RSR_MAR 0x2000 // MAC accept multicast address packet
+#define RSR_BAR 0x1000 // MAC accept broadcast address packet
+#define RSR_PHY 0x0800 // MAC accept physical address packet
+#define RSR_VTAG 0x0400 // 802.1p/1q tagging packet indicator
+#define RSR_STP 0x0200 // start of packet
+#define RSR_EDP 0x0100 // end of packet
+
+/*
+ * Bits in the RSR1 register (byte access)
+ */
+
+#define RSR1_RXOK 0x80 // rx OK
+#define RSR1_PFT 0x40 // Perfect filtering address match
+#define RSR1_MAR 0x20 // MAC accept multicast address packet
+#define RSR1_BAR 0x10 // MAC accept broadcast address packet
+#define RSR1_PHY 0x08 // MAC accept physical address packet
+#define RSR1_VTAG 0x04 // 802.1p/1q tagging packet indicator
+#define RSR1_STP 0x02 // start of packet
+#define RSR1_EDP 0x01 // end of packet
+
+/*
+ * Bits in the CSM register
+ */
+
+#define CSM_IPOK 0x40 //IP checksum validation ok
+#define CSM_TUPOK 0x20 //TCP/UDP checksum validation ok
+#define CSM_FRAG 0x10 //Fragment IP datagram
+#define CSM_IPKT 0x04 //Received an IP packet
+#define CSM_TCPKT 0x02 //Received a TCP packet
+#define CSM_UDPKT 0x01 //Received a UDP packet
+
+/*
+ * Bits in the TSR0 register
+ */
+
+#define TSR0_ABT 0x0080 // Tx abort because of excessive collision
+#define TSR0_OWT 0x0040 // Jumbo frame Tx abort
+#define TSR0_OWC 0x0020 // Out of window collision
+#define TSR0_COLS 0x0010 // experience collision in this transmit event
+#define TSR0_NCR3 0x0008 // collision retry counter[3]
+#define TSR0_NCR2 0x0004 // collision retry counter[2]
+#define TSR0_NCR1 0x0002 // collision retry counter[1]
+#define TSR0_NCR0 0x0001 // collision retry counter[0]
+#define TSR0_TERR 0x8000 //
+#define TSR0_FDX 0x4000 // current transaction is serviced by full duplex mode
+#define TSR0_GMII 0x2000 // current transaction is serviced by GMII mode
+#define TSR0_LNKFL 0x1000 // packet serviced during link down
+#define TSR0_SHDN 0x0400 // shutdown case
+#define TSR0_CRS 0x0200 // carrier sense lost
+#define TSR0_CDH 0x0100 // AQE test fail (CD heartbeat)
+
+/*
+ * Bits in the TSR1 register
+ */
+
+#define TSR1_TERR 0x80 //
+#define TSR1_FDX 0x40 // current transaction is serviced by full duplex mode
+#define TSR1_GMII 0x20 // current transaction is serviced by GMII mode
+#define TSR1_LNKFL 0x10 // packet serviced during link down
+#define TSR1_SHDN 0x04 // shutdown case
+#define TSR1_CRS 0x02 // carrier sense lost
+#define TSR1_CDH 0x01 // AQE test fail (CD heartbeat)
+
+//
+// Bits in the TCR0 register
+//
+#define TCR0_TIC 0x80 // assert interrupt immediately once the descriptor has been sent
+#define TCR0_PIC 0x40 // priority interrupt request, INA# is issued over adaptive interrupt scheme
+#define TCR0_VETAG 0x20 // enable VLAN tag
+#define TCR0_IPCK 0x10 // request IP checksum calculation.
+#define TCR0_UDPCK 0x08 // request UDP checksum calculation.
+#define TCR0_TCPCK 0x04 // request TCP checksum calculation.
+#define TCR0_JMBO 0x02 // indicate a jumbo packet in GMAC side
+#define TCR0_CRC 0x01 // disable CRC generation
+
+#define TCPLS_NORMAL 3
+#define TCPLS_START 2
+#define TCPLS_END 1
+#define TCPLS_MED 0
+
+
+// max transmit or receive buffer size
+#define CB_RX_BUF_SIZE 2048UL // max buffer size
+ // NOTE: must be multiple of 4
+
+#define CB_MAX_RD_NUM 512 // MAX # of RD
+#define CB_MAX_TD_NUM 256 // MAX # of TD
+
+#define CB_INIT_RD_NUM_3119 128 // init # of RD, for setup VT3119
+#define CB_INIT_TD_NUM_3119 64 // init # of TD, for setup VT3119
+
+#define CB_INIT_RD_NUM 128 // init # of RD, for setup default
+#define CB_INIT_TD_NUM 64 // init # of TD, for setup default
+
+// for 3119
+#define CB_TD_RING_NUM 4 // # of TD rings.
+#define CB_MAX_SEG_PER_PKT 7 // max data seg per packet (Tx)
+
+
+/*
+ * If collisions exceed 15 times, tx will abort, and
+ * if the tx fifo underflows, tx will fail;
+ * we should try to resend the packet
+ */
+
+#define CB_MAX_TX_ABORT_RETRY 3
+
+/*
+ * Receive descriptor
+ */
+
+struct rdesc0 {
+ u16 RSR; /* Receive status */
+ u16 len:14; /* Received packet length */
+ u16 reserved:1;
+ u16 owner:1; /* Who owns this buffer? */
+};
+
+struct rdesc1 {
+ u16 PQTAG;
+ u8 CSM;
+ u8 IPKT;
+};
+
+struct rx_desc {
+ struct rdesc0 rdesc0;
+ struct rdesc1 rdesc1;
+ u32 pa_low; /* Low 32 bit PCI address */
+ u16 pa_high; /* Next 16 bit PCI address (48 total) */
+ u16 len:15; /* Frame size */
+ u16 inten:1; /* Enable interrupt */
+} __attribute__ ((__packed__));
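+
+/*
+ * Illustrative note: a descriptor is handed to the NIC by setting
+ * rdesc0.owner when (re)filling the ring, e.g.
+ *
+ * rd->rdesc0.owner = OWNED_BY_NIC;
+ *
+ * and the chip flips it back to host ownership once the buffer holds
+ * a received frame. The bitfield layout assumes little-endian packing
+ * to match the hardware descriptor format.
+ */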
+
+/*
+ * Transmit descriptor
+ */
+
+struct tdesc0 {
+ u16 TSR; /* Transmit status register */
+ u16 pktsize:14; /* Size of frame */
+ u16 reserved:1;
+ u16 owner:1; /* Who owns the buffer */
+};
+
+struct pqinf { /* Priority queue info */
+ u16 VID:12;
+ u16 CFI:1;
+ u16 priority:3;
+} __attribute__ ((__packed__));
+
+struct tdesc1 {
+ struct pqinf pqinf;
+ u8 TCR;
+ u8 TCPLS:2;
+ u8 reserved:2;
+ u8 CMDZ:4;
+} __attribute__ ((__packed__));
+
+struct td_buf {
+ u32 pa_low;
+ u16 pa_high;
+ u16 bufsize:14;
+ u16 reserved:1;
+ u16 queue:1;
+} __attribute__ ((__packed__));
+
+struct tx_desc {
+ struct tdesc0 tdesc0;
+ struct tdesc1 tdesc1;
+ struct td_buf td_buf[7];
+};
+
+struct velocity_rd_info {
+ struct sk_buff *skb;
+ dma_addr_t skb_dma;
+};
+
+/**
+ * alloc_rd_info - allocate an rd info block
+ *
+ * Allocate and initialize a receive info structure used for keeping
+ * track of kernel side information related to each receive
+ * descriptor we are using
+ */
+
+static inline struct velocity_rd_info *alloc_rd_info(void)
+{
+ struct velocity_rd_info *ptr;
+
+ ptr = kmalloc(sizeof(struct velocity_rd_info), GFP_ATOMIC);
+ if (ptr != NULL)
+ memset(ptr, 0, sizeof(struct velocity_rd_info));
+ return ptr;
+}
+
+/*
+ * Used to track transmit side buffers.
+ */
+
+struct velocity_td_info {
+ struct sk_buff *skb;
+ u8 *buf;
+ int nskb_dma;
+ dma_addr_t skb_dma[7];
+ dma_addr_t buf_dma;
+};
+
+enum velocity_owner {
+ OWNED_BY_HOST = 0,
+ OWNED_BY_NIC = 1
+};
+
+
+/*
+ * MAC registers and macros.
+ */
+
+
+#define MCAM_SIZE 64
+#define VCAM_SIZE 64
+#define TX_QUEUE_NO 4
+
+#define MAX_HW_MIB_COUNTER 32
+#define VELOCITY_MIN_MTU (1514-14)
+#define VELOCITY_MAX_MTU (9000)
+
+/*
+ * Registers in the MAC
+ */
+
+#define MAC_REG_PAR 0x00 // physical address
+#define MAC_REG_RCR 0x06
+#define MAC_REG_TCR 0x07
+#define MAC_REG_CR0_SET 0x08
+#define MAC_REG_CR1_SET 0x09
+#define MAC_REG_CR2_SET 0x0A
+#define MAC_REG_CR3_SET 0x0B
+#define MAC_REG_CR0_CLR 0x0C
+#define MAC_REG_CR1_CLR 0x0D
+#define MAC_REG_CR2_CLR 0x0E
+#define MAC_REG_CR3_CLR 0x0F
+#define MAC_REG_MAR 0x10
+#define MAC_REG_CAM 0x10
+#define MAC_REG_DEC_BASE_HI 0x18
+#define MAC_REG_DBF_BASE_HI 0x1C
+#define MAC_REG_ISR_CTL 0x20
+#define MAC_REG_ISR_HOTMR 0x20
+#define MAC_REG_ISR_TSUPTHR 0x20
+#define MAC_REG_ISR_RSUPTHR 0x20
+#define MAC_REG_ISR_CTL1 0x21
+#define MAC_REG_TXE_SR 0x22
+#define MAC_REG_RXE_SR 0x23
+#define MAC_REG_ISR 0x24
+#define MAC_REG_ISR0 0x24
+#define MAC_REG_ISR1 0x25
+#define MAC_REG_ISR2 0x26
+#define MAC_REG_ISR3 0x27
+#define MAC_REG_IMR 0x28
+#define MAC_REG_IMR0 0x28
+#define MAC_REG_IMR1 0x29
+#define MAC_REG_IMR2 0x2A
+#define MAC_REG_IMR3 0x2B
+#define MAC_REG_TDCSR_SET 0x30
+#define MAC_REG_RDCSR_SET 0x32
+#define MAC_REG_TDCSR_CLR 0x34
+#define MAC_REG_RDCSR_CLR 0x36
+#define MAC_REG_RDBASE_LO 0x38
+#define MAC_REG_RDINDX 0x3C
+#define MAC_REG_TDBASE_LO 0x40
+#define MAC_REG_RDCSIZE 0x50
+#define MAC_REG_TDCSIZE 0x52
+#define MAC_REG_TDINDX 0x54
+#define MAC_REG_TDIDX0 0x54
+#define MAC_REG_TDIDX1 0x56
+#define MAC_REG_TDIDX2 0x58
+#define MAC_REG_TDIDX3 0x5A
+#define MAC_REG_PAUSE_TIMER 0x5C
+#define MAC_REG_RBRDU 0x5E
+#define MAC_REG_FIFO_TEST0 0x60
+#define MAC_REG_FIFO_TEST1 0x64
+#define MAC_REG_CAMADDR 0x68
+#define MAC_REG_CAMCR 0x69
+#define MAC_REG_GFTEST 0x6A
+#define MAC_REG_FTSTCMD 0x6B
+#define MAC_REG_MIICFG 0x6C
+#define MAC_REG_MIISR 0x6D
+#define MAC_REG_PHYSR0 0x6E
+#define MAC_REG_PHYSR1 0x6F
+#define MAC_REG_MIICR 0x70
+#define MAC_REG_MIIADR 0x71
+#define MAC_REG_MIIDATA 0x72
+#define MAC_REG_SOFT_TIMER0 0x74
+#define MAC_REG_SOFT_TIMER1 0x76
+#define MAC_REG_CFGA 0x78
+#define MAC_REG_CFGB 0x79
+#define MAC_REG_CFGC 0x7A
+#define MAC_REG_CFGD 0x7B
+#define MAC_REG_DCFG0 0x7C
+#define MAC_REG_DCFG1 0x7D
+#define MAC_REG_MCFG0 0x7E
+#define MAC_REG_MCFG1 0x7F
+
+#define MAC_REG_TBIST 0x80
+#define MAC_REG_RBIST 0x81
+#define MAC_REG_PMCC 0x82
+#define MAC_REG_STICKHW 0x83
+#define MAC_REG_MIBCR 0x84
+#define MAC_REG_EERSV 0x85
+#define MAC_REG_REVID 0x86
+#define MAC_REG_MIBREAD 0x88
+#define MAC_REG_BPMA 0x8C
+#define MAC_REG_EEWR_DATA 0x8C
+#define MAC_REG_BPMD_WR 0x8F
+#define MAC_REG_BPCMD 0x90
+#define MAC_REG_BPMD_RD 0x91
+#define MAC_REG_EECHKSUM 0x92
+#define MAC_REG_EECSR 0x93
+#define MAC_REG_EERD_DATA 0x94
+#define MAC_REG_EADDR 0x96
+#define MAC_REG_EMBCMD 0x97
+#define MAC_REG_JMPSR0 0x98
+#define MAC_REG_JMPSR1 0x99
+#define MAC_REG_JMPSR2 0x9A
+#define MAC_REG_JMPSR3 0x9B
+#define MAC_REG_CHIPGSR 0x9C
+#define MAC_REG_TESTCFG 0x9D
+#define MAC_REG_DEBUG 0x9E
+#define MAC_REG_CHIPGCR 0x9F
+#define MAC_REG_WOLCR0_SET 0xA0
+#define MAC_REG_WOLCR1_SET 0xA1
+#define MAC_REG_PWCFG_SET 0xA2
+#define MAC_REG_WOLCFG_SET 0xA3
+#define MAC_REG_WOLCR0_CLR 0xA4
+#define MAC_REG_WOLCR1_CLR 0xA5
+#define MAC_REG_PWCFG_CLR 0xA6
+#define MAC_REG_WOLCFG_CLR 0xA7
+#define MAC_REG_WOLSR0_SET 0xA8
+#define MAC_REG_WOLSR1_SET 0xA9
+#define MAC_REG_WOLSR0_CLR 0xAC
+#define MAC_REG_WOLSR1_CLR 0xAD
+#define MAC_REG_PATRN_CRC0 0xB0
+#define MAC_REG_PATRN_CRC1 0xB2
+#define MAC_REG_PATRN_CRC2 0xB4
+#define MAC_REG_PATRN_CRC3 0xB6
+#define MAC_REG_PATRN_CRC4 0xB8
+#define MAC_REG_PATRN_CRC5 0xBA
+#define MAC_REG_PATRN_CRC6 0xBC
+#define MAC_REG_PATRN_CRC7 0xBE
+#define MAC_REG_BYTEMSK0_0 0xC0
+#define MAC_REG_BYTEMSK0_1 0xC4
+#define MAC_REG_BYTEMSK0_2 0xC8
+#define MAC_REG_BYTEMSK0_3 0xCC
+#define MAC_REG_BYTEMSK1_0 0xD0
+#define MAC_REG_BYTEMSK1_1 0xD4
+#define MAC_REG_BYTEMSK1_2 0xD8
+#define MAC_REG_BYTEMSK1_3 0xDC
+#define MAC_REG_BYTEMSK2_0 0xE0
+#define MAC_REG_BYTEMSK2_1 0xE4
+#define MAC_REG_BYTEMSK2_2 0xE8
+#define MAC_REG_BYTEMSK2_3 0xEC
+#define MAC_REG_BYTEMSK3_0 0xF0
+#define MAC_REG_BYTEMSK3_1 0xF4
+#define MAC_REG_BYTEMSK3_2 0xF8
+#define MAC_REG_BYTEMSK3_3 0xFC
+
+/*
+ * Bits in the RCR register
+ */
+
+#define RCR_AS 0x80
+#define RCR_AP 0x40
+#define RCR_AL 0x20
+#define RCR_PROM 0x10
+#define RCR_AB 0x08
+#define RCR_AM 0x04
+#define RCR_AR 0x02
+#define RCR_SEP 0x01
+
+/*
+ * Bits in the TCR register
+ */
+
+#define TCR_TB2BDIS 0x80
+#define TCR_COLTMC1 0x08
+#define TCR_COLTMC0 0x04
+#define TCR_LB1 0x02 /* loopback[1] */
+#define TCR_LB0 0x01 /* loopback[0] */
+
+/*
+ * Bits in the CR0 register
+ */
+
+#define CR0_TXON 0x00000008UL
+#define CR0_RXON 0x00000004UL
+#define CR0_STOP 0x00000002UL /* stop MAC, default = 1 */
+#define CR0_STRT 0x00000001UL /* start MAC */
+#define CR0_SFRST 0x00008000UL /* software reset */
+#define CR0_TM1EN 0x00004000UL
+#define CR0_TM0EN 0x00002000UL
+#define CR0_DPOLL 0x00000800UL /* disable rx/tx auto polling */
+#define CR0_DISAU 0x00000100UL
+#define CR0_XONEN 0x00800000UL
+#define CR0_FDXTFCEN 0x00400000UL /* full-duplex TX flow control enable */
+#define CR0_FDXRFCEN 0x00200000UL /* full-duplex RX flow control enable */
+#define CR0_HDXFCEN 0x00100000UL /* half-duplex flow control enable */
+#define CR0_XHITH1 0x00080000UL /* TX XON high threshold 1 */
+#define CR0_XHITH0 0x00040000UL /* TX XON high threshold 0 */
+#define CR0_XLTH1 0x00020000UL /* TX pause frame low threshold 1 */
+#define CR0_XLTH0 0x00010000UL /* TX pause frame low threshold 0 */
+#define CR0_GSPRST 0x80000000UL
+#define CR0_FORSRST 0x40000000UL
+#define CR0_FPHYRST 0x20000000UL
+#define CR0_DIAG 0x10000000UL
+#define CR0_INTPCTL 0x04000000UL
+#define CR0_GINTMSK1 0x02000000UL
+#define CR0_GINTMSK0 0x01000000UL
+
+/*
+ * Bits in the CR1 register
+ */
+
+#define CR1_SFRST 0x80 /* software reset */
+#define CR1_TM1EN 0x40
+#define CR1_TM0EN 0x20
+#define CR1_DPOLL 0x08 /* disable rx/tx auto polling */
+#define CR1_DISAU 0x01
+
+/*
+ * Bits in the CR2 register
+ */
+
+#define CR2_XONEN 0x80
+#define CR2_FDXTFCEN 0x40 /* full-duplex TX flow control enable */
+#define CR2_FDXRFCEN 0x20 /* full-duplex RX flow control enable */
+#define CR2_HDXFCEN 0x10 /* half-duplex flow control enable */
+#define CR2_XHITH1 0x08 /* TX XON high threshold 1 */
+#define CR2_XHITH0 0x04 /* TX XON high threshold 0 */
+#define CR2_XLTH1 0x02 /* TX pause frame low threshold 1 */
+#define CR2_XLTH0 0x01 /* TX pause frame low threshold 0 */
+
+/*
+ * Bits in the CR3 register
+ */
+
+#define CR3_GSPRST 0x80
+#define CR3_FORSRST 0x40
+#define CR3_FPHYRST 0x20
+#define CR3_DIAG 0x10
+#define CR3_INTPCTL 0x04
+#define CR3_GINTMSK1 0x02
+#define CR3_GINTMSK0 0x01
+
+#define ISRCTL_UDPINT 0x8000
+#define ISRCTL_TSUPDIS 0x4000
+#define ISRCTL_RSUPDIS 0x2000
+#define ISRCTL_PMSK1 0x1000
+#define ISRCTL_PMSK0 0x0800
+#define ISRCTL_INTPD 0x0400
+#define ISRCTL_HCRLD 0x0200
+#define ISRCTL_SCRLD 0x0100
+
+/*
+ * Bits in the ISR_CTL1 register
+ */
+
+#define ISRCTL1_UDPINT 0x80
+#define ISRCTL1_TSUPDIS 0x40
+#define ISRCTL1_RSUPDIS 0x20
+#define ISRCTL1_PMSK1 0x10
+#define ISRCTL1_PMSK0 0x08
+#define ISRCTL1_INTPD 0x04
+#define ISRCTL1_HCRLD 0x02
+#define ISRCTL1_SCRLD 0x01
+
+/*
+ * Bits in the TXE_SR register
+ */
+
+#define TXESR_TFDBS 0x08
+#define TXESR_TDWBS 0x04
+#define TXESR_TDRBS 0x02
+#define TXESR_TDSTR 0x01
+
+/*
+ * Bits in the RXE_SR register
+ */
+
+#define RXESR_RFDBS 0x08
+#define RXESR_RDWBS 0x04
+#define RXESR_RDRBS 0x02
+#define RXESR_RDSTR 0x01
+
+/*
+ * Bits in the ISR register
+ */
+
+#define ISR_ISR3 0x80000000UL
+#define ISR_ISR2 0x40000000UL
+#define ISR_ISR1 0x20000000UL
+#define ISR_ISR0 0x10000000UL
+#define ISR_TXSTLI 0x02000000UL
+#define ISR_RXSTLI 0x01000000UL
+#define ISR_HFLD 0x00800000UL
+#define ISR_UDPI 0x00400000UL
+#define ISR_MIBFI 0x00200000UL
+#define ISR_SHDNI 0x00100000UL
+#define ISR_PHYI 0x00080000UL
+#define ISR_PWEI 0x00040000UL
+#define ISR_TMR1I 0x00020000UL
+#define ISR_TMR0I 0x00010000UL
+#define ISR_SRCI 0x00008000UL
+#define ISR_LSTPEI 0x00004000UL
+#define ISR_LSTEI 0x00002000UL
+#define ISR_OVFI 0x00001000UL
+#define ISR_FLONI 0x00000800UL
+#define ISR_RACEI 0x00000400UL
+#define ISR_TXWB1I 0x00000200UL
+#define ISR_TXWB0I 0x00000100UL
+#define ISR_PTX3I 0x00000080UL
+#define ISR_PTX2I 0x00000040UL
+#define ISR_PTX1I 0x00000020UL
+#define ISR_PTX0I 0x00000010UL
+#define ISR_PTXI 0x00000008UL
+#define ISR_PRXI 0x00000004UL
+#define ISR_PPTXI 0x00000002UL
+#define ISR_PPRXI 0x00000001UL
+
+/*
+ * Bits in the IMR register
+ */
+
+#define IMR_TXSTLM 0x02000000UL
+#define IMR_UDPIM 0x00400000UL
+#define IMR_MIBFIM 0x00200000UL
+#define IMR_SHDNIM 0x00100000UL
+#define IMR_PHYIM 0x00080000UL
+#define IMR_PWEIM 0x00040000UL
+#define IMR_TMR1IM 0x00020000UL
+#define IMR_TMR0IM 0x00010000UL
+
+#define IMR_SRCIM 0x00008000UL
+#define IMR_LSTPEIM 0x00004000UL
+#define IMR_LSTEIM 0x00002000UL
+#define IMR_OVFIM 0x00001000UL
+#define IMR_FLONIM 0x00000800UL
+#define IMR_RACEIM 0x00000400UL
+#define IMR_TXWB1IM 0x00000200UL
+#define IMR_TXWB0IM 0x00000100UL
+
+#define IMR_PTX3IM 0x00000080UL
+#define IMR_PTX2IM 0x00000040UL
+#define IMR_PTX1IM 0x00000020UL
+#define IMR_PTX0IM 0x00000010UL
+#define IMR_PTXIM 0x00000008UL
+#define IMR_PRXIM 0x00000004UL
+#define IMR_PPTXIM 0x00000002UL
+#define IMR_PPRXIM 0x00000001UL
+
+/* 0x0013FB0FUL = initial value of IMR */
+
+#define INT_MASK_DEF (IMR_PPTXIM|IMR_PPRXIM|IMR_PTXIM|IMR_PRXIM|\
+ IMR_PWEIM|IMR_TXWB0IM|IMR_TXWB1IM|IMR_FLONIM|\
+ IMR_OVFIM|IMR_LSTEIM|IMR_LSTPEIM|IMR_SRCIM|IMR_MIBFIM|\
+ IMR_SHDNIM|IMR_TMR1IM|IMR_TMR0IM|IMR_TXSTLM)
+
+/*
+ * Bits in the TDCSR0/1, RDCSR0 register
+ */
+
+#define TRDCSR_DEAD 0x0008
+#define TRDCSR_WAK 0x0004
+#define TRDCSR_ACT 0x0002
+#define TRDCSR_RUN 0x0001
+
+/*
+ * Bits in the CAMADDR register
+ */
+
+#define CAMADDR_CAMEN 0x80
+#define CAMADDR_VCAMSL 0x40
+
+/*
+ * Bits in the CAMCR register
+ */
+
+#define CAMCR_PS1 0x80
+#define CAMCR_PS0 0x40
+#define CAMCR_AITRPKT 0x20
+#define CAMCR_AITR16 0x10
+#define CAMCR_CAMRD 0x08
+#define CAMCR_CAMWR 0x04
+#define CAMCR_PS_CAM_MASK 0x40
+#define CAMCR_PS_CAM_DATA 0x80
+#define CAMCR_PS_MAR 0x00
+
+/*
+ * Bits in the MIICFG register
+ */
+
+#define MIICFG_MPO1 0x80
+#define MIICFG_MPO0 0x40
+#define MIICFG_MFDC 0x20
+
+/*
+ * Bits in the MIISR register
+ */
+
+#define MIISR_MIDLE 0x80
+
+/*
+ * Bits in the PHYSR0 register
+ */
+
+#define PHYSR0_PHYRST 0x80
+#define PHYSR0_LINKGD 0x40
+#define PHYSR0_FDPX 0x10
+#define PHYSR0_SPDG 0x08
+#define PHYSR0_SPD10 0x04
+#define PHYSR0_RXFLC 0x02
+#define PHYSR0_TXFLC 0x01
+
+/*
+ * Bits in the PHYSR1 register
+ */
+
+#define PHYSR1_PHYTBI 0x01
+
+/*
+ * Bits in the MIICR register
+ */
+
+#define MIICR_MAUTO 0x80
+#define MIICR_RCMD 0x40
+#define MIICR_WCMD 0x20
+#define MIICR_MDPM 0x10
+#define MIICR_MOUT 0x08
+#define MIICR_MDO 0x04
+#define MIICR_MDI 0x02
+#define MIICR_MDC 0x01
+
+/*
+ * Bits in the MIIADR register
+ */
+
+#define MIIADR_SWMPL 0x80
+
+/*
+ * Bits in the CFGA register
+ */
+
+#define CFGA_PMHCTG 0x08
+#define CFGA_GPIO1PD 0x04
+#define CFGA_ABSHDN 0x02
+#define CFGA_PACPI 0x01
+
+/*
+ * Bits in the CFGB register
+ */
+
+#define CFGB_GTCKOPT 0x80
+#define CFGB_MIIOPT 0x40
+#define CFGB_CRSEOPT 0x20
+#define CFGB_OFSET 0x10
+#define CFGB_CRANDOM 0x08
+#define CFGB_CAP 0x04
+#define CFGB_MBA 0x02
+#define CFGB_BAKOPT 0x01
+
+/*
+ * Bits in the CFGC register
+ */
+
+#define CFGC_EELOAD 0x80
+#define CFGC_BROPT 0x40
+#define CFGC_DLYEN 0x20
+#define CFGC_DTSEL 0x10
+#define CFGC_BTSEL 0x08
+#define CFGC_BPS2 0x04 /* bootrom select[2] */
+#define CFGC_BPS1 0x02 /* bootrom select[1] */
+#define CFGC_BPS0 0x01 /* bootrom select[0] */
+
+/*
+ * Bits in the CFGD register
+ */
+
+#define CFGD_IODIS 0x80
+#define CFGD_MSLVDACEN 0x40
+#define CFGD_CFGDACEN 0x20
+#define CFGD_PCI64EN 0x10
+#define CFGD_HTMRL4 0x08
+
+/*
+ * Bits in the DCFG1 register
+ */
+
+#define DCFG_XMWI 0x8000
+#define DCFG_XMRM 0x4000
+#define DCFG_XMRL 0x2000
+#define DCFG_PERDIS 0x1000
+#define DCFG_MRWAIT 0x0400
+#define DCFG_MWWAIT 0x0200
+#define DCFG_LATMEN 0x0100
+
+/*
+ * Bits in the MCFG0 register
+ */
+
+#define MCFG_RXARB 0x0080
+#define MCFG_RFT1 0x0020
+#define MCFG_RFT0 0x0010
+#define MCFG_LOWTHOPT 0x0008
+#define MCFG_PQEN 0x0004
+#define MCFG_RTGOPT 0x0002
+#define MCFG_VIDFR 0x0001
+
+/*
+ * Bits in the MCFG1 register
+ */
+
+#define MCFG_TXARB 0x8000
+#define MCFG_TXQBK1 0x0800
+#define MCFG_TXQBK0 0x0400
+#define MCFG_TXQNOBK 0x0200
+#define MCFG_SNAPOPT 0x0100
+
+/*
+ * Bits in the PMCC register
+ */
+
+#define PMCC_DSI 0x80
+#define PMCC_D2_DIS 0x40
+#define PMCC_D1_DIS 0x20
+#define PMCC_D3C_EN 0x10
+#define PMCC_D3H_EN 0x08
+#define PMCC_D2_EN 0x04
+#define PMCC_D1_EN 0x02
+#define PMCC_D0_EN 0x01
+
+/*
+ * Bits in STICKHW
+ */
+
+#define STICKHW_SWPTAG 0x10
+#define STICKHW_WOLSR 0x08
+#define STICKHW_WOLEN 0x04
+#define STICKHW_DS1 0x02 /* R/W by software/cfg cycle */
+#define STICKHW_DS0 0x01 /* suspend well DS write port */
+
+/*
+ * Bits in the MIBCR register
+ */
+
+#define MIBCR_MIBISTOK 0x80
+#define MIBCR_MIBISTGO 0x40
+#define MIBCR_MIBINC 0x20
+#define MIBCR_MIBHI 0x10
+#define MIBCR_MIBFRZ 0x08
+#define MIBCR_MIBFLSH 0x04
+#define MIBCR_MPTRINI 0x02
+#define MIBCR_MIBCLR 0x01
+
+/*
+ * Bits in the EERSV register
+ */
+
+#define EERSV_BOOT_RPL ((u8) 0x01) /* Boot method selection for VT6110 */
+
+#define EERSV_BOOT_MASK ((u8) 0x06)
+#define EERSV_BOOT_INT19 ((u8) 0x00)
+#define EERSV_BOOT_INT18 ((u8) 0x02)
+#define EERSV_BOOT_LOCAL ((u8) 0x04)
+#define EERSV_BOOT_BEV ((u8) 0x06)
+
+
+/*
+ * Bits in BPCMD
+ */
+
+#define BPCMD_BPDNE 0x80
+#define BPCMD_EBPWR 0x02
+#define BPCMD_EBPRD 0x01
+
+/*
+ * Bits in the EECSR register
+ */
+
+#define EECSR_EMBP 0x40 /* eeprom embedded programming */
+#define EECSR_RELOAD 0x20 /* eeprom content reload */
+#define EECSR_DPM 0x10 /* eeprom direct programming */
+#define EECSR_ECS 0x08 /* eeprom CS pin */
+#define EECSR_ECK 0x04 /* eeprom CK pin */
+#define EECSR_EDI 0x02 /* eeprom DI pin */
+#define EECSR_EDO 0x01 /* eeprom DO pin */
+
+/*
+ * Bits in the EMBCMD register
+ */
+
+#define EMBCMD_EDONE 0x80
+#define EMBCMD_EWDIS 0x08
+#define EMBCMD_EWEN 0x04
+#define EMBCMD_EWR 0x02
+#define EMBCMD_ERD 0x01
+
+/*
+ * Bits in TESTCFG register
+ */
+
+#define TESTCFG_HBDIS 0x80
+
+/*
+ * Bits in CHIPGCR register
+ */
+
+#define CHIPGCR_FCGMII 0x80
+#define CHIPGCR_FCFDX 0x40
+#define CHIPGCR_FCRESV 0x20
+#define CHIPGCR_FCMODE 0x10
+#define CHIPGCR_LPSOPT 0x08
+#define CHIPGCR_TM1US 0x04
+#define CHIPGCR_TM0US 0x02
+#define CHIPGCR_PHYINTEN 0x01
+
+/*
+ * Bits in WOLCR0
+ */
+
+#define WOLCR_MSWOLEN7 0x0080 /* enable pattern match filtering */
+#define WOLCR_MSWOLEN6 0x0040
+#define WOLCR_MSWOLEN5 0x0020
+#define WOLCR_MSWOLEN4 0x0010
+#define WOLCR_MSWOLEN3 0x0008
+#define WOLCR_MSWOLEN2 0x0004
+#define WOLCR_MSWOLEN1 0x0002
+#define WOLCR_MSWOLEN0 0x0001
+#define WOLCR_ARP_EN 0x0001
+
+/*
+ * Bits in WOLCR1
+ */
+
+#define WOLCR_LINKOFF_EN 0x0800 /* link off detected enable */
+#define WOLCR_LINKON_EN 0x0400 /* link on detected enable */
+#define WOLCR_MAGIC_EN 0x0200 /* magic packet filter enable */
+#define WOLCR_UNICAST_EN 0x0100 /* unicast filter enable */
+
+
+/*
+ * Bits in PWCFG
+ */
+
+#define PWCFG_PHYPWOPT 0x80 /* internal MII I/F timing */
+#define PWCFG_PCISTICK 0x40 /* PCI sticky R/W enable */
+#define PWCFG_WOLTYPE 0x20 /* pulse(1) or button (0) */
+#define PWCFG_LEGCY_WOL 0x10
+#define PWCFG_PMCSR_PME_SR 0x08
+#define PWCFG_PMCSR_PME_EN 0x04 /* control by PCISTICK */
+#define PWCFG_LEGACY_WOLSR 0x02 /* Legacy WOL_SR shadow */
+#define PWCFG_LEGACY_WOLEN 0x01 /* Legacy WOL_EN shadow */
+
+/*
+ * Bits in WOLCFG
+ */
+
+#define WOLCFG_PMEOVR 0x80 /* for legacy use, force PMEEN always */
+#define WOLCFG_SAM 0x20 /* accept multicast case reset, default=0 */
+#define WOLCFG_SAB 0x10 /* accept broadcast case reset, default=0 */
+#define WOLCFG_SMIIACC 0x08 /* ?? */
+#define WOLCFG_SGENWH 0x02
+#define WOLCFG_PHYINTEN 0x01 /* 0:PHYINT trigger enable, 1:use internal MII
+ to report status change */
+/*
+ * Bits in WOLSR1
+ */
+
+#define WOLSR_LINKOFF_INT 0x0800
+#define WOLSR_LINKON_INT 0x0400
+#define WOLSR_MAGIC_INT 0x0200
+#define WOLSR_UNICAST_INT 0x0100
+
+/*
+ * Ethernet address filter type
+ */
+
+#define PKT_TYPE_NONE 0x0000 /* Turn off receiver */
+#define PKT_TYPE_DIRECTED 0x0001 /* obsolete, directed address is always accepted */
+#define PKT_TYPE_MULTICAST 0x0002
+#define PKT_TYPE_ALL_MULTICAST 0x0004
+#define PKT_TYPE_BROADCAST 0x0008
+#define PKT_TYPE_PROMISCUOUS 0x0020
+#define PKT_TYPE_LONG 0x2000 /* NOTE.... the definition of LONG is >2048 bytes in our chip */
+#define PKT_TYPE_RUNT 0x4000
+#define PKT_TYPE_ERROR 0x8000 /* Accept error packets, e.g. CRC error */
+
+/*
+ * Loopback mode
+ */
+
+#define MAC_LB_NONE 0x00
+#define MAC_LB_INTERNAL 0x01
+#define MAC_LB_EXTERNAL 0x02
+
+/*
+ * Enabled mask value of irq
+ */
+
+#if defined(_SIM)
+#define IMR_MASK_VALUE 0x0033FF0FUL /* initial value of IMR
+ set IMR0 to 0x0F according to spec */
+
+#else
+#define IMR_MASK_VALUE 0x0013FB0FUL /* initial value of IMR
+ ignore MIBFI,RACEI to
+ reduce intr. frequency
+ NOTE.... do not enable NoBuf int mask in the driver
+ when (1) NoBuf -> RxThreshold = SF
+ (2) OK -> RxThreshold = original value
+ */
+#endif
+
+/*
+ * Revision id
+ */
+
+#define REV_ID_VT3119_A0 0x00
+#define REV_ID_VT3119_A1 0x01
+#define REV_ID_VT3216_A0 0x10
+
+/*
+ * Max time out delay time
+ */
+
+#define W_MAX_TIMEOUT 0x0FFFU
+
+
+/*
+ * MAC registers as a structure. Cannot be directly accessed this
+ * way but generates offsets for readl/writel() calls
+ */
+
+struct mac_regs {
+ volatile u8 PAR[6]; /* 0x00 */
+ volatile u8 RCR;
+ volatile u8 TCR;
+
+ volatile u32 CR0Set; /* 0x08 */
+ volatile u32 CR0Clr; /* 0x0C */
+
+ volatile u8 MARCAM[8]; /* 0x10 */
+
+ volatile u32 DecBaseHi; /* 0x18 */
+ volatile u16 DbfBaseHi; /* 0x1C */
+ volatile u16 reserved_1E;
+
+ volatile u16 ISRCTL; /* 0x20 */
+ volatile u8 TXESR;
+ volatile u8 RXESR;
+
+ volatile u32 ISR; /* 0x24 */
+ volatile u32 IMR;
+
+ volatile u32 TDStatusPort; /* 0x2C */
+
+ volatile u16 TDCSRSet; /* 0x30 */
+ volatile u8 RDCSRSet;
+ volatile u8 reserved_33;
+ volatile u16 TDCSRClr;
+ volatile u8 RDCSRClr;
+ volatile u8 reserved_37;
+
+ volatile u32 RDBaseLo; /* 0x38 */
+ volatile u16 RDIdx; /* 0x3C */
+ volatile u16 reserved_3E;
+
+ volatile u32 TDBaseLo[4]; /* 0x40 */
+
+ volatile u16 RDCSize; /* 0x50 */
+ volatile u16 TDCSize; /* 0x52 */
+ volatile u16 TDIdx[4]; /* 0x54 */
+ volatile u16 tx_pause_timer; /* 0x5C */
+ volatile u16 RBRDU; /* 0x5E */
+
+ volatile u32 FIFOTest0; /* 0x60 */
+ volatile u32 FIFOTest1; /* 0x64 */
+
+ volatile u8 CAMADDR; /* 0x68 */
+ volatile u8 CAMCR; /* 0x69 */
+ volatile u8 GFTEST; /* 0x6A */
+ volatile u8 FTSTCMD; /* 0x6B */
+
+ volatile u8 MIICFG; /* 0x6C */
+ volatile u8 MIISR;
+ volatile u8 PHYSR0;
+ volatile u8 PHYSR1;
+ volatile u8 MIICR;
+ volatile u8 MIIADR;
+ volatile u16 MIIDATA;
+
+ volatile u16 SoftTimer0; /* 0x74 */
+ volatile u16 SoftTimer1;
+
+ volatile u8 CFGA; /* 0x78 */
+ volatile u8 CFGB;
+ volatile u8 CFGC;
+ volatile u8 CFGD;
+
+ volatile u16 DCFG; /* 0x7C */
+ volatile u16 MCFG;
+
+ volatile u8 TBIST; /* 0x80 */
+ volatile u8 RBIST;
+ volatile u8 PMCPORT;
+ volatile u8 STICKHW;
+
+ volatile u8 MIBCR; /* 0x84 */
+ volatile u8 reserved_85;
+ volatile u8 rev_id;
+ volatile u8 PORSTS;
+
+ volatile u32 MIBData; /* 0x88 */
+
+ volatile u16 EEWrData;
+
+ volatile u8 reserved_8E;
+ volatile u8 BPMDWr;
+ volatile u8 BPCMD;
+ volatile u8 BPMDRd;
+
+ volatile u8 EECHKSUM; /* 0x92 */
+ volatile u8 EECSR;
+
+ volatile u16 EERdData; /* 0x94 */
+ volatile u8 EADDR;
+ volatile u8 EMBCMD;
+
+
+ volatile u8 JMPSR0; /* 0x98 */
+ volatile u8 JMPSR1;
+ volatile u8 JMPSR2;
+ volatile u8 JMPSR3;
+ volatile u8 CHIPGSR; /* 0x9C */
+ volatile u8 TESTCFG;
+ volatile u8 DEBUG;
+ volatile u8 CHIPGCR;
+
+ volatile u16 WOLCRSet; /* 0xA0 */
+ volatile u8 PWCFGSet;
+ volatile u8 WOLCFGSet;
+
+ volatile u16 WOLCRClr; /* 0xA4 */
+ volatile u8 PWCFGCLR;
+ volatile u8 WOLCFGClr;
+
+ volatile u16 WOLSRSet; /* 0xA8 */
+ volatile u16 reserved_AA;
+
+ volatile u16 WOLSRClr; /* 0xAC */
+ volatile u16 reserved_AE;
+
+ volatile u16 PatternCRC[8]; /* 0xB0 */
+ volatile u32 ByteMask[4][4]; /* 0xC0 */
+} __attribute__ ((__packed__));
+
+
+enum hw_mib {
+ HW_MIB_ifRxAllPkts = 0,
+ HW_MIB_ifRxOkPkts,
+ HW_MIB_ifTxOkPkts,
+ HW_MIB_ifRxErrorPkts,
+ HW_MIB_ifRxRuntOkPkt,
+ HW_MIB_ifRxRuntErrPkt,
+ HW_MIB_ifRx64Pkts,
+ HW_MIB_ifTx64Pkts,
+ HW_MIB_ifRx65To127Pkts,
+ HW_MIB_ifTx65To127Pkts,
+ HW_MIB_ifRx128To255Pkts,
+ HW_MIB_ifTx128To255Pkts,
+ HW_MIB_ifRx256To511Pkts,
+ HW_MIB_ifTx256To511Pkts,
+ HW_MIB_ifRx512To1023Pkts,
+ HW_MIB_ifTx512To1023Pkts,
+ HW_MIB_ifRx1024To1518Pkts,
+ HW_MIB_ifTx1024To1518Pkts,
+ HW_MIB_ifTxEtherCollisions,
+ HW_MIB_ifRxPktCRCE,
+ HW_MIB_ifRxJumboPkts,
+ HW_MIB_ifTxJumboPkts,
+ HW_MIB_ifRxMacControlFrames,
+ HW_MIB_ifTxMacControlFrames,
+ HW_MIB_ifRxPktFAE,
+ HW_MIB_ifRxLongOkPkt,
+ HW_MIB_ifRxLongPktErrPkt,
+ HW_MIB_ifTXSQEErrors,
+ HW_MIB_ifRxNobuf,
+ HW_MIB_ifRxSymbolErrors,
+ HW_MIB_ifInRangeLengthErrors,
+ HW_MIB_ifLateCollisions,
+ HW_MIB_SIZE
+};
+
+enum chip_type {
+ CHIP_TYPE_VT6110 = 1,
+};
+
+struct velocity_info_tbl {
+ enum chip_type chip_id;
+ char *name;
+ int io_size;
+ int txqueue;
+ u32 flags;
+};
+
+#define mac_hw_mibs_init(regs) {\
+ BYTE_REG_BITS_ON(MIBCR_MIBFRZ,&((regs)->MIBCR));\
+ BYTE_REG_BITS_ON(MIBCR_MIBCLR,&((regs)->MIBCR));\
+ do {}\
+ while (BYTE_REG_BITS_IS_ON(MIBCR_MIBCLR,&((regs)->MIBCR)));\
+ BYTE_REG_BITS_OFF(MIBCR_MIBFRZ,&((regs)->MIBCR));\
+}
+
+#define mac_read_isr(regs) readl(&((regs)->ISR))
+#define mac_write_isr(regs, x) writel((x),&((regs)->ISR))
+#define mac_clear_isr(regs) writel(0xffffffffL,&((regs)->ISR))
+
+#define mac_write_int_mask(mask, regs) writel((mask),&((regs)->IMR))
+#define mac_disable_int(regs) writel(CR0_GINTMSK1,&((regs)->CR0Clr))
+#define mac_enable_int(regs) writel(CR0_GINTMSK1,&((regs)->CR0Set))
+
+#define mac_hw_mibs_read(regs, MIBs) {\
+ int i;\
+ BYTE_REG_BITS_ON(MIBCR_MPTRINI,&((regs)->MIBCR));\
+ for (i=0;i<HW_MIB_SIZE;i++) {\
+ (MIBs)[i]=readl(&((regs)->MIBData));\
+ }\
+}
+
+#define mac_set_dma_length(regs, n) {\
+ BYTE_REG_BITS_SET((n),0x07,&((regs)->DCFG));\
+}
+
+#define mac_set_rx_thresh(regs, n) {\
+ BYTE_REG_BITS_SET((n),(MCFG_RFT0|MCFG_RFT1),&((regs)->MCFG));\
+}
+
+#define mac_rx_queue_run(regs) {\
+ writeb(TRDCSR_RUN, &((regs)->RDCSRSet));\
+}
+
+#define mac_rx_queue_wake(regs) {\
+ writeb(TRDCSR_WAK, &((regs)->RDCSRSet));\
+}
+
+#define mac_tx_queue_run(regs, n) {\
+ writew(TRDCSR_RUN<<((n)*4),&((regs)->TDCSRSet));\
+}
+
+#define mac_tx_queue_wake(regs, n) {\
+ writew(TRDCSR_WAK<<(n*4),&((regs)->TDCSRSet));\
+}
+
+#define mac_eeprom_reload(regs) {\
+ int i=0;\
+ BYTE_REG_BITS_ON(EECSR_RELOAD,&((regs)->EECSR));\
+ do {\
+ udelay(10);\
+ if (i++>0x1000) {\
+ break;\
+ }\
+ }while (BYTE_REG_BITS_IS_ON(EECSR_RELOAD,&((regs)->EECSR)));\
+}
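+
+/*
+ * Note: the reload loop above polls every 10us and gives up after
+ * 0x1000 (4096) iterations, so the wait is bounded at roughly 41 ms
+ * even if the EEPROM never deasserts EECSR_RELOAD.
+ */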
+
+enum velocity_cam_type {
+ VELOCITY_VLAN_ID_CAM = 0,
+ VELOCITY_MULTICAST_CAM
+};
+
+/**
+ * mac_get_cam_mask - Read a CAM mask
+ * @regs: register block for this velocity
+ * @mask: buffer to store mask
+ * @cam_type: CAM to fetch
+ *
+ * Fetch the mask bits of the selected CAM and store them into the
+ * provided mask buffer.
+ */
+
+static inline void mac_get_cam_mask(struct mac_regs * regs, u8 * mask, enum velocity_cam_type cam_type)
+{
+ int i;
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_MASK, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_VCAMSL, &regs->CAMADDR);
+ else
+ writeb(0, &regs->CAMADDR);
+
+ /* read mask */
+ for (i = 0; i < 8; i++)
+ *mask++ = readb(&(regs->MARCAM[i]));
+
+ /* disable CAMEN */
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+}
+
+/**
+ * mac_set_cam_mask - Set a CAM mask
+ * @regs: register block for this velocity
+ * @mask: CAM mask to load
+ * @cam_type: CAM to store
+ *
+ * Store a new mask into a CAM
+ */
+
+static inline void mac_set_cam_mask(struct mac_regs * regs, u8 * mask, enum velocity_cam_type cam_type)
+{
+ int i;
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_MASK, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN, &regs->CAMADDR);
+
+ for (i = 0; i < 8; i++) {
+ writeb(*mask++, &(regs->MARCAM[i]));
+ }
+ /* disable CAMEN */
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
+
+/**
+ * mac_set_cam - set CAM data
+ * @regs: register block of this velocity
+ * @idx: Cam index
+ * @addr: 2 or 6 bytes of CAM data
+ * @cam_type: CAM to load
+ *
+ * Load an address or vlan tag into a CAM
+ */
+
+static inline void mac_set_cam(struct mac_regs * regs, int idx, u8 *addr, enum velocity_cam_type cam_type)
+{
+ int i;
+
+ /* Select CAM data */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_DATA, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ idx &= (64 - 1);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL | idx, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN | idx, &regs->CAMADDR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writew(*((u16 *) addr), &regs->MARCAM[0]);
+ else {
+ for (i = 0; i < 6; i++) {
+ writeb(*addr++, &(regs->MARCAM[i]));
+ }
+ }
+ BYTE_REG_BITS_ON(CAMCR_CAMWR, &regs->CAMCR);
+
+ udelay(10);
+
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
+
+/**
+ * mac_get_cam - fetch CAM data
+ * @regs: register block of this velocity
+ * @idx: Cam index
+ * @addr: buffer to hold up to 6 bytes of CAM data
+ * @cam_type: CAM to load
+ *
+ * Load an address or vlan tag from a CAM into the buffer provided by
+ * the caller. VLAN tags are 2 bytes the address cam entries are 6.
+ */
+
+static inline void mac_get_cam(struct mac_regs * regs, int idx, u8 *addr, enum velocity_cam_type cam_type)
+{
+ int i;
+
+ /* Select CAM data */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_DATA, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ idx &= (64 - 1);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL | idx, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN | idx, &regs->CAMADDR);
+
+ BYTE_REG_BITS_ON(CAMCR_CAMRD, &regs->CAMCR);
+
+ udelay(10);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ *((u16 *) addr) = readw(&(regs->MARCAM[0]));
+ else
+ for (i = 0; i < 6; i++, addr++)
+ *((u8 *) addr) = readb(&(regs->MARCAM[i]));
+
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
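+
+/*
+ * Usage sketch (illustrative; addr is a caller-provided MAC address):
+ * programming multicast CAM entry 0 and enabling it in the mask:
+ *
+ * u8 cam_mask[8];
+ * mac_set_cam(regs, 0, addr, VELOCITY_MULTICAST_CAM);
+ * mac_get_cam_mask(regs, cam_mask, VELOCITY_MULTICAST_CAM);
+ * cam_mask[0] |= 1; // enable entry 0
+ * mac_set_cam_mask(regs, cam_mask, VELOCITY_MULTICAST_CAM);
+ */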
+
+/**
+ * mac_wol_reset - reset WOL after exiting low power
+ * @regs: register block of this velocity
+ *
+ * Called after we drop out of wake on lan mode in order to
+ * reset the Wake on lan features. This function does not restore
+ * the rest of the chip state lost across the sleep/wakeup cycle.
+ */
+
+static inline void mac_wol_reset(struct mac_regs * regs)
+{
+
+ /* Turn off SWPTAG right after leaving power mode */
+ BYTE_REG_BITS_OFF(STICKHW_SWPTAG, &regs->STICKHW);
+ /* clear sticky bits */
+ BYTE_REG_BITS_OFF((STICKHW_DS1 | STICKHW_DS0), &regs->STICKHW);
+
+ BYTE_REG_BITS_OFF(CHIPGCR_FCGMII, &regs->CHIPGCR);
+ BYTE_REG_BITS_OFF(CHIPGCR_FCMODE, &regs->CHIPGCR);
+ /* disable force PME-enable */
+ writeb(WOLCFG_PMEOVR, &regs->WOLCFGClr);
+ /* disable power-event config bit */
+ writew(0xFFFF, &regs->WOLCRClr);
+ /* clear power status */
+ writew(0xFFFF, &regs->WOLSRClr);
+}
+
+
+/*
+ * Header for WOL definitions. Used to compute hashes
+ */
+
+typedef u8 MCAM_ADDR[ETH_ALEN];
+
+struct arp_packet {
+ u8 dest_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
+ u16 type;
+ u16 ar_hrd;
+ u16 ar_pro;
+ u8 ar_hln;
+ u8 ar_pln;
+ u16 ar_op;
+ u8 ar_sha[ETH_ALEN];
+ u8 ar_sip[4];
+ u8 ar_tha[ETH_ALEN];
+ u8 ar_tip[4];
+} __attribute__ ((__packed__));
+
+struct _magic_packet {
+ u8 dest_mac[6];
+ u8 src_mac[6];
+ u16 type;
+ u8 MAC[16][6];
+ u8 password[6];
+} __attribute__ ((__packed__));
+
+/*
+ * Store for chip context when saving and restoring status. Not
+ * all fields are saved/restored currently.
+ */
+
+struct velocity_context {
+ u8 mac_reg[256];
+ MCAM_ADDR cam_addr[MCAM_SIZE];
+ u16 vcam[VCAM_SIZE];
+ u32 cammask[2];
+ u32 patcrc[2];
+ u32 pattern[8];
+};
+
+
+/*
+ * MII registers.
+ */
+
+
+/*
+ * Registers in the MII (offset unit is WORD)
+ */
+
+#define MII_REG_BMCR 0x00 // basic mode control register
+#define MII_REG_BMSR 0x01 //
+#define MII_REG_PHYID1 0x02 // OUI
+#define MII_REG_PHYID2 0x03 // OUI + Module ID + REV ID
+#define MII_REG_ANAR 0x04 //
+#define MII_REG_ANLPAR 0x05 //
+#define MII_REG_G1000CR 0x09 //
+#define MII_REG_G1000SR 0x0A //
+#define MII_REG_MODCFG 0x10 //
+#define MII_REG_TCSR 0x16 //
+#define MII_REG_PLED 0x1B //
+// NS, MYSON only
+#define MII_REG_PCR 0x17 //
+// ESI only
+#define MII_REG_PCSR 0x17 //
+#define MII_REG_AUXCR 0x1C //
+
+// Marvell 88E1000/88E1000S
+#define MII_REG_PSCR 0x10 // PHY specific control register
+
+//
+// Bits in the BMCR register
+//
+#define BMCR_RESET 0x8000 //
+#define BMCR_LBK 0x4000 //
+#define BMCR_SPEED100 0x2000 //
+#define BMCR_AUTO 0x1000 //
+#define BMCR_PD 0x0800 //
+#define BMCR_ISO 0x0400 //
+#define BMCR_REAUTO 0x0200 //
+#define BMCR_FDX 0x0100 //
+#define BMCR_SPEED1G 0x0040 //
+//
+// Bits in the BMSR register
+//
+#define BMSR_AUTOCM 0x0020 //
+#define BMSR_LNK 0x0004 //
+
+//
+// Bits in the ANAR register
+//
+#define ANAR_ASMDIR 0x0800 // Asymmetric PAUSE support
+#define ANAR_PAUSE 0x0400 // Symmetric PAUSE Support
+#define ANAR_T4 0x0200 //
+#define ANAR_TXFD 0x0100 //
+#define ANAR_TX 0x0080 //
+#define ANAR_10FD 0x0040 //
+#define ANAR_10 0x0020 //
+//
+// Bits in the ANLPAR register
+//
+#define ANLPAR_ASMDIR 0x0800 // Asymmetric PAUSE support
+#define ANLPAR_PAUSE 0x0400 // Symmetric PAUSE Support
+#define ANLPAR_T4 0x0200 //
+#define ANLPAR_TXFD 0x0100 //
+#define ANLPAR_TX 0x0080 //
+#define ANLPAR_10FD 0x0040 //
+#define ANLPAR_10 0x0020 //
+
+//
+// Bits in the G1000CR register
+//
+#define G1000CR_1000FD 0x0200 // PHY is 1000-T Full-duplex capable
+#define G1000CR_1000 0x0100 // PHY is 1000-T Half-duplex capable
+
+//
+// Bits in the G1000SR register
+//
+#define G1000SR_1000FD 0x0800 // LP PHY is 1000-T Full-duplex capable
+#define G1000SR_1000 0x0400 // LP PHY is 1000-T Half-duplex capable
+
+#define TCSR_ECHODIS 0x2000 //
+#define AUXCR_MDPPS 0x0004 //
+
+// Bits in the PLED register
+#define PLED_LALBE 0x0004 //
+
+// Marvell 88E1000/88E1000S Bits in the PHY specific control register (10h)
+#define PSCR_ACRSTX 0x0800 // Assert CRS on Transmit
+
+#define PHYID_CICADA_CS8201 0x000FC410UL
+#define PHYID_VT3216_32BIT 0x000FC610UL
+#define PHYID_VT3216_64BIT 0x000FC600UL
+#define PHYID_MARVELL_1000 0x01410C50UL
+#define PHYID_MARVELL_1000S 0x01410C40UL
+
+#define PHYID_REV_ID_MASK 0x0000000FUL
+
+#define PHYID_GET_PHY_REV_ID(i) ((i) & PHYID_REV_ID_MASK)
+#define PHYID_GET_PHY_ID(i) ((i) & ~PHYID_REV_ID_MASK)
+
+#define MII_REG_BITS_ON(x,i,p) do {\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ (w)|=(x);\
+ velocity_mii_write((p),(i),(w));\
+} while (0)
+
+#define MII_REG_BITS_OFF(x,i,p) do {\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ (w)&=(~(x));\
+ velocity_mii_write((p),(i),(w));\
+} while (0)
+
+#define MII_REG_BITS_IS_ON(x,i,p) ({\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ ((int) ((w) & (x)));})
+
+#define MII_GET_PHY_ID(p) ({\
+ u32 id;\
+ velocity_mii_read((p),MII_REG_PHYID2,(u16 *) &id);\
+ velocity_mii_read((p),MII_REG_PHYID1,((u16 *) &id)+1);\
+ (id);})
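+
+/*
+ * Illustrative use: the 32-bit identifier assembled above is compared
+ * against the PHYID_* constants after masking off the revision bits,
+ * e.g.
+ *
+ * if (PHYID_GET_PHY_ID(MII_GET_PHY_ID(vptr->mac_regs)) ==
+ * PHYID_CICADA_CS8201)
+ * ... apply Cicada-specific setup ...
+ */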
+
+/*
+ * Inline debug routine
+ */
+
+
+enum velocity_msg_level {
+ MSG_LEVEL_ERR = 0, //Errors that will cause abnormal operation.
+ MSG_LEVEL_NOTICE = 1, //Some errors need users to be notified.
+ MSG_LEVEL_INFO = 2, //Normal message.
+ MSG_LEVEL_VERBOSE = 3, //Will report all trivial errors.
+ MSG_LEVEL_DEBUG = 4 //Only for debug purpose.
+};
+
+#ifdef VELOCITY_DEBUG
+#define ASSERT(x) { \
+ if (!(x)) { \
+ printk(KERN_ERR "assertion %s failed: function %s line %d\n", #x,\
+ __FUNCTION__, __LINE__);\
+ BUG(); \
+ }\
+}
+#define VELOCITY_DBG(p,args...) printk(p, ##args)
+#else
+#define ASSERT(x)
+#define VELOCITY_DBG(x)
+#endif
+
+#define VELOCITY_PRT(l, p, args...) do {if ((l)<=msglevel) printk(p, ##args);} while (0)
+
+#define VELOCITY_PRT_CAMMASK(p,t) {\
+ int i;\
+ if ((t)==VELOCITY_MULTICAST_CAM) {\
+ for (i=0;i<(MCAM_SIZE/8);i++)\
+ printk("%02X",(p)->mCAMmask[i]);\
+ }\
+ else {\
+ for (i=0;i<(VCAM_SIZE/8);i++)\
+ printk("%02X",(p)->vCAMmask[i]);\
+ }\
+ printk("\n");\
+}
+
+
+
+#define VELOCITY_WOL_MAGIC 0x00000000UL
+#define VELOCITY_WOL_PHY 0x00000001UL
+#define VELOCITY_WOL_ARP 0x00000002UL
+#define VELOCITY_WOL_UCAST 0x00000004UL
+#define VELOCITY_WOL_BCAST 0x00000010UL
+#define VELOCITY_WOL_MCAST 0x00000020UL
+#define VELOCITY_WOL_MAGIC_SEC 0x00000040UL
+
+/*
+ * Flags for options
+ */
+
+#define VELOCITY_FLAGS_TAGGING 0x00000001UL
+#define VELOCITY_FLAGS_TX_CSUM 0x00000002UL
+#define VELOCITY_FLAGS_RX_CSUM 0x00000004UL
+#define VELOCITY_FLAGS_IP_ALIGN 0x00000008UL
+#define VELOCITY_FLAGS_VAL_PKT_LEN 0x00000010UL
+
+#define VELOCITY_FLAGS_FLOW_CTRL 0x01000000UL
+
+/*
+ * Flags for driver status
+ */
+
+#define VELOCITY_FLAGS_OPENED 0x00010000UL
+#define VELOCITY_FLAGS_VMNS_CONNECTED 0x00020000UL
+#define VELOCITY_FLAGS_VMNS_COMMITTED 0x00040000UL
+#define VELOCITY_FLAGS_WOL_ENABLED 0x00080000UL
+
+/*
+ * Flags for MII status
+ */
+
+#define VELOCITY_LINK_FAIL 0x00000001UL
+#define VELOCITY_SPEED_10 0x00000002UL
+#define VELOCITY_SPEED_100 0x00000004UL
+#define VELOCITY_SPEED_1000 0x00000008UL
+#define VELOCITY_DUPLEX_FULL 0x00000010UL
+#define VELOCITY_AUTONEG_ENABLE 0x00000020UL
+#define VELOCITY_FORCED_BY_EEPROM 0x00000040UL
+
+/*
+ * For velocity_set_media_duplex
+ */
+
+#define VELOCITY_LINK_CHANGE 0x00000001UL
+
+enum speed_opt {
+ SPD_DPX_AUTO = 0,
+ SPD_DPX_100_HALF = 1,
+ SPD_DPX_100_FULL = 2,
+ SPD_DPX_10_HALF = 3,
+ SPD_DPX_10_FULL = 4
+};
+
+enum velocity_init_type {
+ VELOCITY_INIT_COLD = 0,
+ VELOCITY_INIT_RESET,
+ VELOCITY_INIT_WOL
+};
+
+enum velocity_flow_cntl_type {
+ FLOW_CNTL_DEFAULT = 1,
+ FLOW_CNTL_TX,
+ FLOW_CNTL_RX,
+ FLOW_CNTL_TX_RX,
+ FLOW_CNTL_DISABLE,
+};
+
+struct velocity_opt {
+ int numrx; /* Number of RX descriptors */
+ int numtx; /* Number of TX descriptors */
+ enum speed_opt spd_dpx; /* Media link mode */
+ int vid; /* vlan id */
+ int DMA_length; /* DMA length */
+ int rx_thresh; /* RX_THRESH */
+ int flow_cntl;
+ int wol_opts; /* Wake on lan options */
+ int td_int_count;
+ int int_works;
+ int rx_bandwidth_hi;
+ int rx_bandwidth_lo;
+ int rx_bandwidth_en;
+ u32 flags;
+};
+
+struct velocity_info {
+ struct velocity_info *next;
+ struct velocity_info *prev;
+
+ struct pci_dev *pdev;
+ struct net_device *dev;
+ struct net_device_stats stats;
+
+#ifdef CONFIG_PM
+ u32 pci_state[16];
+#endif
+
+ dma_addr_t rd_pool_dma;
+ dma_addr_t td_pool_dma[TX_QUEUE_NO];
+
+ dma_addr_t tx_bufs_dma;
+ u8 *tx_bufs;
+
+ u8 ip_addr[4];
+ enum chip_type chip_id;
+
+ struct mac_regs * mac_regs;
+ unsigned long memaddr;
+ unsigned long ioaddr;
+ u32 io_size;
+
+ u8 rev_id;
+
+#define AVAIL_TD(p,q) ((p)->options.numtx-((p)->td_used[(q)]))
+
+ int num_txq;
+
+ volatile int td_used[TX_QUEUE_NO];
+ int td_curr[TX_QUEUE_NO];
+ int td_tail[TX_QUEUE_NO];
+ struct tx_desc *td_rings[TX_QUEUE_NO];
+ struct velocity_td_info *td_infos[TX_QUEUE_NO];
+
+ int rd_curr;
+ int rd_used;
+ struct rx_desc *rd_ring;
+ struct velocity_rd_info *rd_info; /* It's an array */
+
+#define GET_RD_BY_IDX(vptr, idx) (vptr->rd_ring[idx])
+ u32 mib_counter[MAX_HW_MIB_COUNTER];
+ struct velocity_opt options;
+
+ u32 int_mask;
+
+ u32 flags;
+
+ int rx_buf_sz;
+ u32 mii_status;
+ u32 phy_id;
+ int multicast_limit;
+
+ u8 vCAMmask[(VCAM_SIZE / 8)];
+ u8 mCAMmask[(MCAM_SIZE / 8)];
+
+ spinlock_t lock;
+ spinlock_t xmit_lock;
+
+ int wol_opts;
+ u8 wol_passwd[6];
+
+ struct velocity_context context;
+
+ u32 ticks;
+ u32 rx_bytes;
+
+};
+
+/**
+ * velocity_get_ip - find an IP address for the device
+ * @vptr: Velocity to query
+ *
+ * Dig out an IP address for this interface so that we can
+ * configure wakeup with WOL for ARP. If there are multiple IP
+ * addresses on this chain then we use the first - multi-IP WOL is not
+ * supported.
+ *
+ * CHECK ME: locking
+ */
+
+static inline int velocity_get_ip(struct velocity_info *vptr)
+{
+ struct in_device *in_dev = (struct in_device *) vptr->dev->ip_ptr;
+ struct in_ifaddr *ifa;
+
+ if (in_dev != NULL) {
+ ifa = (struct in_ifaddr *) in_dev->ifa_list;
+ if (ifa != NULL) {
+ memcpy(vptr->ip_addr, &ifa->ifa_address, 4);
+ return 0;
+ }
+ }
+ return -ENOENT;
+}
+
+/**
+ * velocity_update_hw_mibs - fetch MIB counters from chip
+ * @vptr: velocity to update
+ *
+ * The velocity hardware maintains certain counters on the chip
+ * itself. We need to read these when the user asks for statistics
+ * or when they overflow (causing an interrupt). Reading a
+ * statistic clears it, so we keep running master counters in
+ * the driver.
+ */
+
+static inline void velocity_update_hw_mibs(struct velocity_info *vptr)
+{
+ u32 tmp;
+ int i;
+ BYTE_REG_BITS_ON(MIBCR_MIBFLSH, &(vptr->mac_regs->MIBCR));
+
+ while (BYTE_REG_BITS_IS_ON(MIBCR_MIBFLSH, &(vptr->mac_regs->MIBCR)));
+
+ BYTE_REG_BITS_ON(MIBCR_MPTRINI, &(vptr->mac_regs->MIBCR));
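+ /* The MIB data port presents one 24-bit counter per read, hence
+    the 0x00FFFFFF mask below; the running totals accumulate in
+    vptr->mib_counter[]. */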
+ for (i = 0; i < HW_MIB_SIZE; i++) {
+ tmp = readl(&(vptr->mac_regs->MIBData)) & 0x00FFFFFFUL;
+ vptr->mib_counter[i] += tmp;
+ }
+}
+
+/**
+ * init_flow_control_register - set up flow control
+ * @vptr: velocity to configure
+ *
+ * Configure the flow control registers for this velocity device.
+ */
+
+static inline void init_flow_control_register(struct velocity_info *vptr)
+{
+ struct mac_regs *regs = vptr->mac_regs;
+
+ /* Set {XHITH1, XHITH0, XLTH1, XLTH0} in FlowCR1 to {1, 0, 1, 1},
+    which assumes RD=64, and turn on XONEN in FlowCR1 */
+ writel((CR0_XONEN | CR0_XHITH1 | CR0_XLTH1 | CR0_XLTH0), &regs->CR0Set);
+ writel((CR0_FDXTFCEN | CR0_FDXRFCEN | CR0_HDXFCEN | CR0_XHITH0), &regs->CR0Clr);
+
+ /* Set TxPauseTimer to 0xFFFF */
+ writew(0xFFFF, &regs->tx_pause_timer);
+
+ /* Initialize RBRDU to the Rx buffer count. */
+ writew(vptr->options.numrx, &regs->RBRDU);
+}
+
+
+#endif
--- /dev/null
+/*
+ * (C) 2004 Margit Schubert-While <margitsw@t-online.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/*
+ * Compatibility header file to aid support of different kernel versions
+ */
+
+#ifdef PRISM54_COMPAT24
+#include "prismcompat24.h"
+#else /* PRISM54_COMPAT24 */
+
+#ifndef _PRISM_COMPAT_H
+#define _PRISM_COMPAT_H
+
+#include <linux/device.h>
+#include <linux/firmware.h>
+#include <linux/config.h>
+#include <linux/moduleparam.h>
+#include <linux/workqueue.h>
+#include <linux/compiler.h>
+
+#if !defined(CONFIG_FW_LOADER) && !defined(CONFIG_FW_LOADER_MODULE)
+#error Firmware loading is not configured in the kernel!
+#endif
+
+#define prism54_synchronize_irq(irq) synchronize_irq(irq)
+
+#define PRISM_FW_PDEV &priv->pdev->dev
+
+#endif /* _PRISM_COMPAT_H */
+#endif /* PRISM54_COMPAT24 */
--- /dev/null
+/*
+ * Driver for the Cirrus PD6729 PCI-PCMCIA bridge.
+ *
+ * Based on the i82092.c driver.
+ *
+ * This software may be used and distributed according to the terms of
+ * the GNU General Public License, incorporated herein by reference.
+ */
+
+#include <linux/kernel.h>
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/workqueue.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/cs.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+
+#include "pd6729.h"
+#include "i82365.h"
+#include "cirrus.h"
+
+MODULE_LICENSE("GPL");
+
+#define MAX_SOCKETS 2
+
+/* simple helper functions */
+/* External clock time, in nanoseconds. 120 ns = 8.33 MHz */
+#define to_cycles(ns) ((ns)/120)
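+/* e.g. to_cycles(300) == 2: a 300 ns access time costs two ticks of
+   the 8.33 MHz (120 ns) external clock, rounded down */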
+
+static spinlock_t port_lock = SPIN_LOCK_UNLOCKED;
+
+/* basic value read/write functions */
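+/* The PD6729 exposes its per-socket registers through an index/data
+   port pair: write the register index to io_base, then read or write
+   the value at io_base + 1. Socket 1's register set sits at a 0x40
+   offset from socket 0's, hence the "number * 0x40" adjustment below. */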
+
+static unsigned char indirect_read(struct pd6729_socket *socket, unsigned short reg)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg += socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+
+ return val;
+}
+
+static unsigned short indirect_read16(struct pd6729_socket *socket, unsigned short reg)
+{
+ unsigned long port;
+ unsigned short tmp;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ tmp = inb(port + 1);
+ reg++;
+ outb(reg, port);
+ tmp = tmp | (inb(port + 1) << 8);
+ spin_unlock_irqrestore(&port_lock, flags);
+
+ return tmp;
+}
+
+static void indirect_write(struct pd6729_socket *socket, unsigned short reg, unsigned char value)
+{
+ unsigned long port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ outb(value, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_setbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ val |= mask;
+ outb(reg, port);
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_resetbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ val &= ~mask;
+ outb(reg, port);
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_write16(struct pd6729_socket *socket, unsigned short reg, unsigned short value)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+
+ outb(reg, port);
+ val = value & 255;
+ outb(val, port + 1);
+
+ reg++;
+
+ outb(reg, port);
+ val = value >> 8;
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+/* Interrupt handler functionality */
+
+static irqreturn_t pd6729_interrupt(int irq, void *dev, struct pt_regs *regs)
+{
+ struct pd6729_socket *socket = (struct pd6729_socket *)dev;
+ int i;
+ int loopcount = 0;
+ int handled = 0;
+ unsigned int events, active = 0;
+
+ while (1) {
+ loopcount++;
+ if (loopcount > 20) {
+ printk(KERN_ERR "pd6729: infinite event loop in interrupt\n");
+ break;
+ }
+
+ active = 0;
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ unsigned int csc;
+
+ /* card status change register */
+ csc = indirect_read(&socket[i], I365_CSC);
+ if (csc == 0) /* no events on this socket */
+ continue;
+
+ handled = 1;
+ events = 0;
+
+ if (csc & I365_CSC_DETECT) {
+ events |= SS_DETECT;
+ dprintk("Card detected in socket %i!\n", i);
+ }
+
+ if (indirect_read(&socket[i], I365_INTCTL) & I365_PC_IOCARD) {
+ /* For I/O cards, bit 0 means "read the card" */
+ events |= (csc & I365_CSC_STSCHG) ? SS_STSCHG : 0;
+ } else {
+ /* Check for battery/ready events */
+ events |= (csc & I365_CSC_BVD1) ? SS_BATDEAD : 0;
+ events |= (csc & I365_CSC_BVD2) ? SS_BATWARN : 0;
+ events |= (csc & I365_CSC_READY) ? SS_READY : 0;
+ }
+
+ if (events) {
+ pcmcia_parse_events(&socket[i].socket, events);
+ }
+ active |= events;
+ }
+
+ if (active == 0) /* no more events to handle */
+ break;
+ }
+ return IRQ_RETVAL(handled);
+}
+
+/* socket functions */
+
+static void set_bridge_state(struct pd6729_socket *socket)
+{
+ indirect_write(socket, I365_GBLCTL, 0x00);
+ indirect_write(socket, I365_GENCTL, 0x00);
+
+ indirect_setbit(socket, I365_INTCTL, 0x08);
+}
+
+static int pd6729_get_status(struct pcmcia_socket *sock, u_int *value)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned int status;
+ unsigned int data;
+ struct pd6729_socket *t;
+
+ /* Interface Status Register */
+ status = indirect_read(socket, I365_STATUS);
+ *value = 0;
+
+ if ((status & I365_CS_DETECT) == I365_CS_DETECT) {
+ *value |= SS_DETECT;
+ }
+
+ /* I/O cards interpret bits 0 and 1 differently */
+ /* Also note the inverted logic on these bits */
+ if (indirect_read(socket, I365_INTCTL) & I365_PC_IOCARD) {
+ /* IO card */
+ if (!(status & I365_CS_STSCHG))
+ *value |= SS_STSCHG;
+ } else {
+ /* non I/O card */
+ if (!(status & I365_CS_BVD1))
+ *value |= SS_BATDEAD;
+ if (!(status & I365_CS_BVD2))
+ *value |= SS_BATWARN;
+ }
+
+ if (status & I365_CS_WRPROT)
+ *value |= SS_WRPROT; /* card is write protected */
+
+ if (status & I365_CS_READY)
+ *value |= SS_READY; /* card is not busy */
+
+ if (status & I365_CS_POWERON)
+ *value |= SS_POWERON; /* power is applied to the card */
+
+ t = (socket->number) ? socket : socket + 1;
+ indirect_write(t, PD67_EXT_INDEX, PD67_EXTERN_DATA);
+ data = indirect_read16(t, PD67_EXT_DATA);
+ *value |= (data & PD67_EXD_VS1(socket->number)) ? 0 : SS_3VCARD;
+
+ return 0;
+}
+
+
+static int pd6729_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char reg, vcc, vpp;
+
+ state->flags = 0;
+ state->Vcc = 0;
+ state->Vpp = 0;
+ state->io_irq = 0;
+ state->csc_mask = 0;
+
+ /* First the power status of the socket */
+ /* PCTRL - Power Control Register */
+ reg = indirect_read(socket, I365_POWER);
+
+ if (reg & I365_PWR_AUTO)
+ state->flags |= SS_PWR_AUTO; /* Automatic Power Switch */
+
+ if (reg & I365_PWR_OUT)
+ state->flags |= SS_OUTPUT_ENA; /* Output signals are enabled */
+
+ vcc = reg & I365_VCC_MASK; vpp = reg & I365_VPP1_MASK;
+
+ if (reg & I365_VCC_5V) {
+ state->Vcc = (indirect_read(socket, PD67_MISC_CTL_1) &
+ PD67_MC1_VCC_3V) ? 33 : 50;
+
+ if (vpp == I365_VPP1_5V) {
+ if (state->Vcc == 50)
+ state->Vpp = 50;
+ else
+ state->Vpp = 33;
+ }
+ if (vpp == I365_VPP1_12V)
+ state->Vpp = 120;
+ }
+
+ /* Now the IO card, RESET flags and IO interrupt */
+ /* IGENC, Interrupt and General Control */
+ reg = indirect_read(socket, I365_INTCTL);
+
+ if ((reg & I365_PC_RESET) == 0)
+ state->flags |= SS_RESET;
+ if (reg & I365_PC_IOCARD)
+ state->flags |= SS_IOCARD; /* This is an IO card */
+
+ /* Set the IRQ number */
+ state->io_irq = socket->socket.pci_irq;
+
+ /* Card status change */
+ /* CSCICR, Card Status Change Interrupt Configuration */
+ reg = indirect_read(socket, I365_CSCINT);
+
+ if (reg & I365_CSC_DETECT)
+ state->csc_mask |= SS_DETECT; /* Card detect is enabled */
+
+ if (state->flags & SS_IOCARD) { /* I/O cards behave differently */
+ if (reg & I365_CSC_STSCHG)
+ state->csc_mask |= SS_STSCHG;
+ } else {
+ if (reg & I365_CSC_BVD1)
+ state->csc_mask |= SS_BATDEAD;
+ if (reg & I365_CSC_BVD2)
+ state->csc_mask |= SS_BATWARN;
+ if (reg & I365_CSC_READY)
+ state->csc_mask |= SS_READY;
+ }
+
+ return 0;
+}
+
+static int pd6729_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char reg;
+
+ /* First, set the global controller options */
+
+ set_bridge_state(socket);
+
+ /* Values for the IGENC register */
+
+ reg = 0;
+ /* The reset bit has "inverse" logic */
+ if (!(state->flags & SS_RESET))
+ reg = reg | I365_PC_RESET;
+ if (state->flags & SS_IOCARD)
+ reg = reg | I365_PC_IOCARD;
+
+ /* IGENC, Interrupt and General Control Register */
+ indirect_write(socket, I365_INTCTL, reg);
+
+ /* Power registers */
+
+ reg = I365_PWR_NORESET; /* default: disable resetdrv on resume */
+
+ if (state->flags & SS_PWR_AUTO) {
+ dprintk("Auto power\n");
+ reg |= I365_PWR_AUTO; /* automatic power mngmnt */
+ }
+ if (state->flags & SS_OUTPUT_ENA) {
+ dprintk("Power Enabled\n");
+ reg |= I365_PWR_OUT; /* enable power */
+ }
+
+ switch (state->Vcc) {
+ case 0:
+ break;
+ case 33:
+ dprintk("setting voltage to Vcc to 3.3V on socket %i\n",
+ socket->number);
+ reg |= I365_VCC_5V;
+ indirect_setbit(socket, PD67_MISC_CTL_1, PD67_MC1_VCC_3V);
+ break;
+ case 50:
+ dprintk("setting voltage to Vcc to 5V on socket %i\n",
+ socket->number);
+ reg |= I365_VCC_5V;
+ indirect_resetbit(socket, PD67_MISC_CTL_1, PD67_MC1_VCC_3V);
+ break;
+ default:
+ dprintk("pd6729: pd6729_set_socket called with invalid VCC power value: %i\n",
+ state->Vcc);
+ return -EINVAL;
+ }
+
+ switch (state->Vpp) {
+ case 0:
+ dprintk("not setting Vpp on socket %i\n", socket->number);
+ break;
+ case 33:
+ case 50:
+ dprintk("setting Vpp to Vcc for socket %i\n", socket->number);
+ reg |= I365_VPP1_5V;
+ break;
+ case 120:
+ dprintk("setting Vpp to 12.0\n");
+ reg |= I365_VPP1_12V;
+ break;
+ default:
+ dprintk("pd6729: pd6729_set_socket called with invalid VPP power value: %i\n",
+ state->Vpp);
+ return -EINVAL;
+ }
+
+ /* only write if changed */
+ if (reg != indirect_read(socket, I365_POWER))
+ indirect_write(socket, I365_POWER, reg);
+
+ /* Now specify that all interrupts are to be delivered as PCI interrupts */
+ indirect_write(socket, PD67_EXT_INDEX, PD67_EXT_CTL_1);
+ indirect_write(socket, PD67_EXT_DATA, PD67_EC1_INV_MGMT_IRQ | PD67_EC1_INV_CARD_IRQ);
+
+ /* Enable specific interrupt events */
+
+ reg = 0x00;
+ if (state->csc_mask & SS_DETECT) {
+ reg |= I365_CSC_DETECT;
+ }
+ if (state->flags & SS_IOCARD) {
+ if (state->csc_mask & SS_STSCHG)
+ reg |= I365_CSC_STSCHG;
+ } else {
+ if (state->csc_mask & SS_BATDEAD)
+ reg |= I365_CSC_BVD1;
+ if (state->csc_mask & SS_BATWARN)
+ reg |= I365_CSC_BVD2;
+ if (state->csc_mask & SS_READY)
+ reg |= I365_CSC_READY;
+ }
+ reg |= 0x30; /* management IRQ: PCI INTA# = "irq 3" */
+ indirect_write(socket, I365_CSCINT, reg);
+
+ reg = indirect_read(socket, I365_INTCTL);
+ reg |= 0x03; /* card IRQ: PCI INTA# = "irq 3" */
+ indirect_write(socket, I365_INTCTL, reg);
+
+ /* now clear the (probably bogus) pending stuff by doing a dummy read */
+ (void)indirect_read(socket, I365_CSC);
+
+ return 0;
+}
+
+static int pd6729_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *io)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char map, ioctl;
+
+ map = io->map;
+
+ /* Check error conditions */
+ if (map > 1) {
+ dprintk("pd6729_set_io_map with invalid map");
+ return -EINVAL;
+ }
+
+ /* Turn off the window before changing anything */
+ if (indirect_read(socket, I365_ADDRWIN) & I365_ENA_IO(map))
+ indirect_resetbit(socket, I365_ADDRWIN, I365_ENA_IO(map));
+
+/* dprintk("set_io_map: Setting range to %x - %x\n", io->start, io->stop);*/
+
+ /* write the new values */
+ indirect_write16(socket, I365_IO(map)+I365_W_START, io->start);
+ indirect_write16(socket, I365_IO(map)+I365_W_STOP, io->stop);
+
+ ioctl = indirect_read(socket, I365_IOCTL) & ~I365_IOCTL_MASK(map);
+
+ if (io->flags & MAP_0WS) ioctl |= I365_IOCTL_0WS(map);
+ if (io->flags & MAP_16BIT) ioctl |= I365_IOCTL_16BIT(map);
+ if (io->flags & MAP_AUTOSZ) ioctl |= I365_IOCTL_IOCS16(map);
+
+ indirect_write(socket, I365_IOCTL, ioctl);
+
+ /* Turn the window back on if needed */
+ if (io->flags & MAP_ACTIVE)
+ indirect_setbit(socket, I365_ADDRWIN, I365_ENA_IO(map));
+
+ return 0;
+}
+
+static int pd6729_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *mem)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned short base, i;
+ unsigned char map;
+
+ map = mem->map;
+ if (map > 4) {
+ printk("pd6729_set_mem_map: invalid map");
+ return -EINVAL;
+ }
+
+ if ((mem->sys_start > mem->sys_stop) || (mem->speed > 1000)) {
+ printk("pd6729_set_mem_map: invalid address / speed");
+ /* printk("invalid mem map for socket %i : %lx to %lx with a start of %x\n",
+ sock, mem->sys_start, mem->sys_stop, mem->card_start); */
+ return -EINVAL;
+ }
+
+ /* Turn off the window before changing anything */
+ if (indirect_read(socket, I365_ADDRWIN) & I365_ENA_MEM(map))
+ indirect_resetbit(socket, I365_ADDRWIN, I365_ENA_MEM(map));
+
+ /* write the start address */
+ base = I365_MEM(map);
+ i = (mem->sys_start >> 12) & 0x0fff;
+ if (mem->flags & MAP_16BIT)
+ i |= I365_MEM_16BIT;
+ if (mem->flags & MAP_0WS)
+ i |= I365_MEM_0WS;
+ indirect_write16(socket, base + I365_W_START, i);
+
+ /* write the stop address */
+
+ i = (mem->sys_stop >> 12) & 0x0fff;
+ switch (to_cycles(mem->speed)) {
+ case 0:
+ break;
+ case 1:
+ i |= I365_MEM_WS0;
+ break;
+ case 2:
+ i |= I365_MEM_WS1;
+ break;
+ default:
+ i |= I365_MEM_WS1 | I365_MEM_WS0;
+ break;
+ }
+
+ indirect_write16(socket, base + I365_W_STOP, i);
+
+ /* Take care of high byte */
+ indirect_write(socket, PD67_EXT_INDEX, PD67_MEM_PAGE(map));
+ indirect_write(socket, PD67_EXT_DATA, mem->sys_start >> 24);
+
+ /* card start */
+
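+ /* The offset register holds the difference between card and host
+    window addresses in 4 KB pages; the 14-bit mask wraps negative
+    differences into the register's range. */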
+ i = ((mem->card_start - mem->sys_start) >> 12) & 0x3fff;
+ if (mem->flags & MAP_WRPROT)
+ i |= I365_MEM_WRPROT;
+ if (mem->flags & MAP_ATTRIB) {
+/* dprintk("requesting attribute memory for socket %i\n",
+ socket->number);*/
+ i |= I365_MEM_REG;
+ } else {
+/* dprintk("requesting normal memory for socket %i\n",
+ socket->number);*/
+ }
+ indirect_write16(socket, base + I365_W_OFF, i);
+
+ /* Enable the window if necessary */
+ if (mem->flags & MAP_ACTIVE)
+ indirect_setbit(socket, I365_ADDRWIN, I365_ENA_MEM(map));
+
+ return 0;
+}
+
+static int pd6729_suspend(struct pcmcia_socket *sock)
+{
+ return pd6729_set_socket(sock, &dead_socket);
+}
+
+static int pd6729_init(struct pcmcia_socket *sock)
+{
+ int i;
+ struct resource res = { .end = 0x0fff };
+ pccard_io_map io = { 0, 0, 0, 0, 1 };
+ pccard_mem_map mem = { .res = &res, .sys_stop = 0x0fff };
+
+ pd6729_set_socket(sock, &dead_socket);
+ for (i = 0; i < 2; i++) {
+ io.map = i;
+ pd6729_set_io_map(sock, &io);
+ }
+ for (i = 0; i < 5; i++) {
+ mem.map = i;
+ pd6729_set_mem_map(sock, &mem);
+ }
+
+ return 0;
+}
+
+
+/* the pccard structure and its functions */
+static struct pccard_operations pd6729_operations = {
+ .init = pd6729_init,
+ .suspend = pd6729_suspend,
+ .get_status = pd6729_get_status,
+ .get_socket = pd6729_get_socket,
+ .set_socket = pd6729_set_socket,
+ .set_io_map = pd6729_set_io_map,
+ .set_mem_map = pd6729_set_mem_map,
+};
+
+static int __devinit pd6729_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+ int i, j, ret;
+ u8 configbyte;
+ struct pd6729_socket *socket;
+
+ socket = kmalloc(sizeof(struct pd6729_socket) * MAX_SOCKETS, GFP_KERNEL);
+ if (!socket)
+ return -ENOMEM;
+
+ memset(socket, 0, sizeof(struct pd6729_socket) * MAX_SOCKETS);
+
+ if ((ret = pci_enable_device(dev)))
+ goto err_out_free_mem;
+
+ printk(KERN_INFO "pd6729: Cirrus PD6729 PCI to PCMCIA Bridge at 0x%lx on irq %d\n",
+ pci_resource_start(dev, 0), dev->irq);
+ printk(KERN_INFO "pd6729: configured as a %d socket device.\n", MAX_SOCKETS);
+ /* Since we have no memory BARs, some firmware may not have
+    enabled PCI_COMMAND_MEMORY, yet the device needs it. */
+ pci_read_config_byte(dev, PCI_COMMAND, &configbyte);
+ if (!(configbyte & PCI_COMMAND_MEMORY)) {
+ printk(KERN_DEBUG "pd6729: Enabling PCI_COMMAND_MEMORY.\n");
+ configbyte |= PCI_COMMAND_MEMORY;
+ pci_write_config_byte(dev, PCI_COMMAND, configbyte);
+ }
+
+ ret = pci_request_regions(dev, "pd6729");
+ if (ret) {
+ printk(KERN_INFO "pd6729: pci request region failed.\n");
+ goto err_out_disable;
+ }
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ socket[i].io_base = pci_resource_start(dev, 0);
+ socket[i].socket.features |= SS_CAP_PCCARD;
+ socket[i].socket.map_size = 0x1000;
+ socket[i].socket.irq_mask = 0;
+ socket[i].socket.pci_irq = dev->irq;
+ socket[i].socket.owner = THIS_MODULE;
+
+ socket[i].number = i;
+
+ socket[i].socket.ops = &pd6729_operations;
+ socket[i].socket.dev.dev = &dev->dev;
+ socket[i].socket.driver_data = &socket[i];
+ }
+
+ pci_set_drvdata(dev, socket);
+
+ /* Register the interrupt handler */
+ if ((ret = request_irq(dev->irq, pd6729_interrupt, SA_SHIRQ, "pd6729", socket))) {
+ printk(KERN_ERR "pd6729: Failed to register irq %d, aborting\n", dev->irq);
+ goto err_out_free_res;
+ }
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ ret = pcmcia_register_socket(&socket[i].socket);
+ if (ret) {
+ printk(KERN_INFO "pd6729: pcmcia_register_socket failed.\n");
+ for (j = 0; j < i ; j++)
+ pcmcia_unregister_socket(&socket[j].socket);
+ goto err_out_free_res2;
+ }
+ }
+
+ return 0;
+
+ err_out_free_res2:
+ free_irq(dev->irq, socket);
+ err_out_free_res:
+ pci_release_regions(dev);
+ err_out_disable:
+ pci_disable_device(dev);
+
+ err_out_free_mem:
+ kfree(socket);
+ return ret;
+}
+
+static void __devexit pd6729_pci_remove(struct pci_dev *dev)
+{
+ int i;
+ struct pd6729_socket *socket = pci_get_drvdata(dev);
+
+ for (i = 0; i < MAX_SOCKETS; i++)
+ pcmcia_unregister_socket(&socket[i].socket);
+
+ free_irq(dev->irq, socket);
+ pci_release_regions(dev);
+ pci_disable_device(dev);
+
+ kfree(socket);
+}
+
+static int pd6729_socket_suspend(struct pci_dev *dev, u32 state)
+{
+ return pcmcia_socket_dev_suspend(&dev->dev, state);
+}
+
+static int pd6729_socket_resume(struct pci_dev *dev)
+{
+ return pcmcia_socket_dev_resume(&dev->dev);
+}
+
+static struct pci_device_id pd6729_pci_ids[] = {
+ {
+ .vendor = PCI_VENDOR_ID_CIRRUS,
+ .device = PCI_DEVICE_ID_CIRRUS_6729,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, pd6729_pci_ids);
+
+static struct pci_driver pd6729_pci_drv = {
+ .name = "pd6729",
+ .id_table = pd6729_pci_ids,
+ .probe = pd6729_pci_probe,
+ .remove = __devexit_p(pd6729_pci_remove),
+ .suspend = pd6729_socket_suspend,
+ .resume = pd6729_socket_resume,
+};
+
+static int pd6729_module_init(void)
+{
+ return pci_module_init(&pd6729_pci_drv);
+}
+
+static void pd6729_module_exit(void)
+{
+ pci_unregister_driver(&pd6729_pci_drv);
+}
+
+module_init(pd6729_module_init);
+module_exit(pd6729_module_exit);
--- /dev/null
+#ifndef _INCLUDE_GUARD_PD6729_H_
+#define _INCLUDE_GUARD_PD6729_H_
+
+/* Debugging defines: define NOTRACE to turn the dprintk() trace
+   messages on (despite what the name suggests) */
+#ifdef NOTRACE
+#define dprintk(fmt, args...) printk(fmt , ## args)
+#else
+#define dprintk(fmt, args...) do {} while (0)
+#endif
+
+/* Flags for I365_GENCTL */
+#define I365_DF_VS1 0x40 /* DF-step Voltage Sense */
+#define I365_DF_VS2 0x80
+
+/* Fields in PD67_EXTERN_DATA */
+#define PD67_EXD_VS1(s) (0x01 << ((s) << 1))
+#define PD67_EXD_VS2(s) (0x02 << ((s) << 1))
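+/* e.g. PD67_EXD_VS1(0) == 0x01 and PD67_EXD_VS1(1) == 0x04: each
+   socket's voltage-sense bits occupy their own two-bit field */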
+
+struct pd6729_socket {
+ int number;
+ unsigned long io_base; /* base io address of the socket */
+ struct pcmcia_socket socket;
+};
+
+#endif
--- /dev/null
+/* temporary measure */
+extern int pxa2xx_drv_pcmcia_probe(struct device *);
+
--- /dev/null
+/*
+ * socket_sysfs.c -- most of socket-related sysfs output
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * (C) 2003 - 2004 Dominik Brodowski
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/config.h>
+#include <linux/string.h>
+#include <linux/major.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/timer.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/pm.h>
+#include <linux/pci.h>
+#include <linux/device.h>
+#include <linux/suspend.h>
+#include <asm/system.h>
+#include <asm/irq.h>
+
+#define IN_CARD_SERVICES
+#include <pcmcia/version.h>
+#include <pcmcia/cs_types.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/bulkmem.h>
+#include <pcmcia/cistpl.h>
+#include <pcmcia/cisreg.h>
+#include <pcmcia/ds.h>
+#include "cs_internal.h"
+
+#define to_socket(_dev) container_of(_dev, struct pcmcia_socket, dev)
+
+static ssize_t pccard_show_type(struct class_device *dev, char *buf)
+{
+ int val;
+ struct pcmcia_socket *s = to_socket(dev);
+
+ if (!(s->state & SOCKET_PRESENT))
+ return -ENODEV;
+ s->ops->get_status(s, &val);
+ if (val & SS_CARDBUS)
+ return sprintf(buf, "32-bit\n");
+ if (val & SS_DETECT)
+ return sprintf(buf, "16-bit\n");
+ return sprintf(buf, "invalid\n");
+}
+static CLASS_DEVICE_ATTR(card_type, 0400, pccard_show_type, NULL);
+
+static ssize_t pccard_show_voltage(struct class_device *dev, char *buf)
+{
+ int val;
+ struct pcmcia_socket *s = to_socket(dev);
+
+ if (!(s->state & SOCKET_PRESENT))
+ return -ENODEV;
+ s->ops->get_status(s, &val);
+ if (val & SS_3VCARD)
+ return sprintf(buf, "3.3V\n");
+ if (val & SS_XVCARD)
+ return sprintf(buf, "X.XV\n");
+ return sprintf(buf, "5.0V\n");
+}
+static CLASS_DEVICE_ATTR(card_voltage, 0400, pccard_show_voltage, NULL);
+
+static ssize_t pccard_show_vpp(struct class_device *dev, char *buf)
+{
+ struct pcmcia_socket *s = to_socket(dev);
+ if (!(s->state & SOCKET_PRESENT))
+ return -ENODEV;
+ return sprintf(buf, "%d.%dV\n", s->socket.Vpp / 10, s->socket.Vpp % 10);
+}
+static CLASS_DEVICE_ATTR(card_vpp, 0400, pccard_show_vpp, NULL);
+
+static ssize_t pccard_show_vcc(struct class_device *dev, char *buf)
+{
+ struct pcmcia_socket *s = to_socket(dev);
+ if (!(s->state & SOCKET_PRESENT))
+ return -ENODEV;
+ return sprintf(buf, "%d.%dV\n", s->socket.Vcc / 10, s->socket.Vcc % 10);
+}
+static CLASS_DEVICE_ATTR(card_vcc, 0400, pccard_show_vcc, NULL);
+
+
+static ssize_t pccard_store_insert(struct class_device *dev, const char *buf, size_t count)
+{
+ ssize_t ret;
+ struct pcmcia_socket *s = to_socket(dev);
+
+ if (!count)
+ return -EINVAL;
+
+ ret = pcmcia_insert_card(s);
+
+ return ret ? ret : count;
+}
+static CLASS_DEVICE_ATTR(card_insert, 0200, NULL, pccard_store_insert);
+
+static ssize_t pccard_store_eject(struct class_device *dev, const char *buf, size_t count)
+{
+ ssize_t ret;
+ struct pcmcia_socket *s = to_socket(dev);
+
+ if (!count)
+ return -EINVAL;
+
+ ret = pcmcia_eject_card(s);
+
+ return ret ? ret : count;
+}
+static CLASS_DEVICE_ATTR(card_eject, 0200, NULL, pccard_store_eject);
+
+
+static struct class_device_attribute *pccard_socket_attributes[] = {
+ &class_device_attr_card_type,
+ &class_device_attr_card_voltage,
+ &class_device_attr_card_vpp,
+ &class_device_attr_card_vcc,
+ &class_device_attr_card_insert,
+ &class_device_attr_card_eject,
+ NULL,
+};
+
+static int __devinit pccard_sysfs_add_socket(struct class_device *class_dev)
+{
+ struct class_device_attribute **attr;
+ int ret = 0;
+
+ for (attr = pccard_socket_attributes; *attr; attr++) {
+ ret = class_device_create_file(class_dev, *attr);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
+static void __devexit pccard_sysfs_remove_socket(struct class_device *class_dev)
+{
+ struct class_device_attribute **attr;
+
+ for (attr = pccard_socket_attributes; *attr; attr++)
+ class_device_remove_file(class_dev, *attr);
+}
+
+struct class_interface pccard_sysfs_interface = {
+ .class = &pcmcia_socket_class,
+ .add = &pccard_sysfs_add_socket,
+ .remove = __devexit_p(&pccard_sysfs_remove_socket),
+};
--- /dev/null
+/*
+ *
+ * linux/drivers/s390/net/ctcdbug.c ($Revision: 1.1 $)
+ *
+ * Linux on zSeries OSA Express and HiperSockets support
+ *
+ * Copyright 2000,2003 IBM Corporation
+ *
+ * Author(s): Original Code written by
+ * Peter Tiedemann (ptiedem@de.ibm.com)
+ *
+ * $Revision: 1.1 $ $Date: 2004/07/02 16:31:22 $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include "ctcdbug.h"
+
+/**
+ * Debug Facility Stuff
+ */
+debug_info_t *dbf_setup = NULL;
+debug_info_t *dbf_data = NULL;
+debug_info_t *dbf_trace = NULL;
+
+DEFINE_PER_CPU(char[256], dbf_txt_buf);
+
+void
+unregister_dbf_views(void)
+{
+ if (dbf_setup)
+ debug_unregister(dbf_setup);
+ if (dbf_data)
+ debug_unregister(dbf_data);
+ if (dbf_trace)
+ debug_unregister(dbf_trace);
+}
+
+int
+register_dbf_views(void)
+{
+ dbf_setup = debug_register(CTC_DBF_SETUP_NAME,
+ CTC_DBF_SETUP_INDEX,
+ CTC_DBF_SETUP_NR_AREAS,
+ CTC_DBF_SETUP_LEN);
+ dbf_data = debug_register(CTC_DBF_DATA_NAME,
+ CTC_DBF_DATA_INDEX,
+ CTC_DBF_DATA_NR_AREAS,
+ CTC_DBF_DATA_LEN);
+ dbf_trace = debug_register(CTC_DBF_TRACE_NAME,
+ CTC_DBF_TRACE_INDEX,
+ CTC_DBF_TRACE_NR_AREAS,
+ CTC_DBF_TRACE_LEN);
+
+ if ((dbf_setup == NULL) || (dbf_data == NULL) ||
+ (dbf_trace == NULL)) {
+ unregister_dbf_views();
+ return -ENOMEM;
+ }
+ debug_register_view(dbf_setup, &debug_hex_ascii_view);
+ debug_set_level(dbf_setup, CTC_DBF_SETUP_LEVEL);
+
+ debug_register_view(dbf_data, &debug_hex_ascii_view);
+ debug_set_level(dbf_data, CTC_DBF_DATA_LEVEL);
+
+ debug_register_view(dbf_trace, &debug_hex_ascii_view);
+ debug_set_level(dbf_trace, CTC_DBF_TRACE_LEVEL);
+
+ return 0;
+}
+
+
--- /dev/null
+/*
+ *
+ * linux/drivers/s390/net/ctcdbug.h ($Revision: 1.1 $)
+ *
+ * Linux on zSeries OSA Express and HiperSockets support
+ *
+ * Copyright 2000,2003 IBM Corporation
+ *
+ * Author(s): Original Code written by
+ * Peter Tiedemann (ptiedem@de.ibm.com)
+ *
+ * $Revision: 1.1 $ $Date: 2004/07/02 16:31:22 $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+#include <asm/debug.h>
+/**
+ * Debug Facility stuff
+ */
+#define CTC_DBF_SETUP_NAME "ctc_setup"
+#define CTC_DBF_SETUP_LEN 16
+#define CTC_DBF_SETUP_INDEX 3
+#define CTC_DBF_SETUP_NR_AREAS 1
+#define CTC_DBF_SETUP_LEVEL 3
+
+#define CTC_DBF_DATA_NAME "ctc_data"
+#define CTC_DBF_DATA_LEN 128
+#define CTC_DBF_DATA_INDEX 3
+#define CTC_DBF_DATA_NR_AREAS 1
+#define CTC_DBF_DATA_LEVEL 2
+
+#define CTC_DBF_TRACE_NAME "ctc_trace"
+#define CTC_DBF_TRACE_LEN 16
+#define CTC_DBF_TRACE_INDEX 2
+#define CTC_DBF_TRACE_NR_AREAS 2
+#define CTC_DBF_TRACE_LEVEL 3
+
+#define DBF_TEXT(name,level,text) \
+ do { \
+ debug_text_event(dbf_##name,level,text); \
+ } while (0)
+
+#define DBF_HEX(name,level,addr,len) \
+ do { \
+ debug_event(dbf_##name,level,(void*)(addr),len); \
+ } while (0)
+
+extern DEFINE_PER_CPU(char[256], dbf_txt_buf);
+extern debug_info_t *dbf_setup;
+extern debug_info_t *dbf_data;
+extern debug_info_t *dbf_trace;
+
+
+#define DBF_TEXT_(name,level,text...) \
+ do { \
+ char* dbf_txt_buf = get_cpu_var(dbf_txt_buf); \
+ sprintf(dbf_txt_buf, text); \
+ debug_text_event(dbf_##name,level,dbf_txt_buf); \
+ put_cpu_var(dbf_txt_buf); \
+ } while (0)
+
+/* note: always logs to dbf_trace, regardless of the name argument */
+#define DBF_SPRINTF(name,level,text...) \
+ do { \
+ debug_sprintf_event(dbf_trace, level, text ); \
+ } while (0)
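+/* Illustrative (hypothetical) usage of the macros above:
+ *
+ *   DBF_TEXT(trace, 2, "chsetup");
+ *   DBF_TEXT_(setup, 3, "dev=%s", dev->name);
+ *   DBF_HEX(data, 4, skb->data, 32);
+ *
+ * "trace", "setup" and "data" select dbf_trace, dbf_setup and
+ * dbf_data respectively; dev and skb here are placeholder variables. */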
+
+
+int register_dbf_views(void);
+
+void unregister_dbf_views(void);
+
+/**
+ * some more debug stuff
+ */
+
+#define HEXDUMP16(importance,header,ptr) \
+PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \
+ "%02x %02x %02x %02x %02x %02x %02x %02x\n", \
+ *(((char*)ptr)),*(((char*)ptr)+1),*(((char*)ptr)+2), \
+ *(((char*)ptr)+3),*(((char*)ptr)+4),*(((char*)ptr)+5), \
+ *(((char*)ptr)+6),*(((char*)ptr)+7),*(((char*)ptr)+8), \
+ *(((char*)ptr)+9),*(((char*)ptr)+10),*(((char*)ptr)+11), \
+ *(((char*)ptr)+12),*(((char*)ptr)+13), \
+ *(((char*)ptr)+14),*(((char*)ptr)+15)); \
+PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \
+ "%02x %02x %02x %02x %02x %02x %02x %02x\n", \
+ *(((char*)ptr)+16),*(((char*)ptr)+17), \
+ *(((char*)ptr)+18),*(((char*)ptr)+19), \
+ *(((char*)ptr)+20),*(((char*)ptr)+21), \
+ *(((char*)ptr)+22),*(((char*)ptr)+23), \
+ *(((char*)ptr)+24),*(((char*)ptr)+25), \
+ *(((char*)ptr)+26),*(((char*)ptr)+27), \
+ *(((char*)ptr)+28),*(((char*)ptr)+29), \
+ *(((char*)ptr)+30),*(((char*)ptr)+31));
+
+static inline void
+hex_dump(unsigned char *buf, size_t len)
+{
+ size_t i;
+
+ for (i = 0; i < len; i++) {
+ if (i && !(i % 16))
+ printk("\n");
+ printk("%02x ", *(buf + i));
+ }
+ printk("\n");
+}
+
--- /dev/null
+/*
+ 3w-9xxx.c -- 3ware 9000 Storage Controller device driver for Linux.
+
+ Written By: Adam Radford <linuxraid@amcc.com>
+
+ Copyright (C) 2004 Applied Micro Circuits Corporation.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; version 2 of the License.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ NO WARRANTY
+ THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ solely responsible for determining the appropriateness of using and
+ distributing the Program and assumes all risks associated with its
+ exercise of rights under this Agreement, including but not limited to
+ the risks and costs of program errors, damage to or loss of data,
+ programs or equipment, and unavailability or interruption of operations.
+
+ DISCLAIMER OF LIABILITY
+ NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ Bugs/Comments/Suggestions should be mailed to:
+ linuxraid@amcc.com
+
+ For more information, go to:
+ http://www.amcc.com
+
+ Note: This version of the driver does not contain a bundled firmware
+ image.
+
+ History
+ -------
+ 2.26.02.000 - Driver cleanup for kernel submission.
+ 2.26.02.001 - Replace schedule_timeout() calls with msleep().
+*/
+
+#include <linux/module.h>
+#include <linux/reboot.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/time.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_cmnd.h>
+#include "3w-9xxx.h"
+
+/* Globals */
+static const char *twa_driver_version = "2.26.02.001";
+static TW_Device_Extension *twa_device_extension_list[TW_MAX_SLOT];
+static unsigned int twa_device_extension_count;
+static int twa_major = -1;
+extern struct timezone sys_tz;
+
+/* Module parameters */
+MODULE_AUTHOR ("AMCC");
+MODULE_DESCRIPTION ("3ware 9000 Storage Controller Linux Driver");
+MODULE_LICENSE("GPL");
+
+/* Function prototypes */
+static void twa_aen_queue_event(TW_Device_Extension *tw_dev, TW_Command_Apache_Header *header);
+static int twa_aen_read_queue(TW_Device_Extension *tw_dev, int request_id);
+static char *twa_aen_severity_lookup(unsigned char severity_code);
+static void twa_aen_sync_time(TW_Device_Extension *tw_dev, int request_id);
+static int twa_chrdev_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg);
+static int twa_chrdev_open(struct inode *inode, struct file *file);
+static int twa_fill_sense(TW_Device_Extension *tw_dev, int request_id, int copy_sense, int print_host);
+static void twa_free_request_id(TW_Device_Extension *tw_dev,int request_id);
+static void twa_get_request_id(TW_Device_Extension *tw_dev, int *request_id);
+static int twa_initconnection(TW_Device_Extension *tw_dev, int message_credits,
+ u32 set_features, unsigned short current_fw_srl,
+ unsigned short current_fw_arch_id,
+ unsigned short current_fw_branch,
+ unsigned short current_fw_build,
+ unsigned short *fw_on_ctlr_srl,
+ unsigned short *fw_on_ctlr_arch_id,
+ unsigned short *fw_on_ctlr_branch,
+ unsigned short *fw_on_ctlr_build,
+ u32 *init_connect_result);
+static void twa_load_sgl(TW_Command_Full *full_command_packet, int request_id, dma_addr_t dma_handle, int length);
+static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds);
+static int twa_poll_status_gone(TW_Device_Extension *tw_dev, u32 flag, int seconds);
+static int twa_post_command_packet(TW_Device_Extension *tw_dev, int request_id, char internal);
+static int twa_reset_device_extension(TW_Device_Extension *tw_dev);
+static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset);
+static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Apache *sglistarg);
+static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id);
+static char *twa_string_lookup(twa_message_type *table, unsigned int aen_code);
+static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id);
+
+/* Functions */
+
+/* Show some statistics about the card */
+static ssize_t twa_show_stats(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *host = class_to_shost(class_dev);
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+ unsigned long flags = 0;
+ ssize_t len;
+
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ len = snprintf(buf, PAGE_SIZE, "Driver version: %s\n"
+ "Current commands posted: %4d\n"
+ "Max commands posted: %4d\n"
+ "Current pending commands: %4d\n"
+ "Max pending commands: %4d\n"
+ "Last sgl length: %4d\n"
+ "Max sgl length: %4d\n"
+ "Last sector count: %4d\n"
+ "Max sector count: %4d\n"
+ "SCSI Host Resets: %4d\n"
+ "SCSI Aborts/Timeouts: %4d\n"
+ "AEN's: %4d\n",
+ twa_driver_version,
+ tw_dev->posted_request_count,
+ tw_dev->max_posted_request_count,
+ tw_dev->pending_request_count,
+ tw_dev->max_pending_request_count,
+ tw_dev->sgl_entries,
+ tw_dev->max_sgl_entries,
+ tw_dev->sector_count,
+ tw_dev->max_sector_count,
+ tw_dev->num_resets,
+ tw_dev->num_aborts,
+ tw_dev->aen_count);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ return len;
+} /* End twa_show_stats() */
+
+/* This function will set a device's queue depth */
+static ssize_t twa_store_queue_depth(struct device *dev, const char *buf, size_t count)
+{
+ int queue_depth;
+ struct scsi_device *sdev = to_scsi_device(dev);
+
+ queue_depth = simple_strtoul(buf, NULL, 0);
+ if (queue_depth > TW_Q_LENGTH-2)
+ return -EINVAL;
+ scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, queue_depth);
+
+ return count;
+} /* End twa_store_queue_depth() */
+
+/* Create sysfs 'queue_depth' entry */
+static struct device_attribute twa_queue_depth_attr = {
+ .attr = {
+ .name = "queue_depth",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = twa_store_queue_depth
+};
+
+/* Device attributes initializer */
+static struct device_attribute *twa_dev_attrs[] = {
+ &twa_queue_depth_attr,
+ NULL,
+};
+
+/* Create sysfs 'stats' entry */
+static struct class_device_attribute twa_host_stats_attr = {
+ .attr = {
+ .name = "stats",
+ .mode = S_IRUGO,
+ },
+ .show = twa_show_stats
+};
+
+/* Host attributes initializer */
+static struct class_device_attribute *twa_host_attrs[] = {
+ &twa_host_stats_attr,
+ NULL,
+};
+
+/* File operations struct for character device */
+static struct file_operations twa_fops = {
+ .owner = THIS_MODULE,
+ .ioctl = twa_chrdev_ioctl,
+ .open = twa_chrdev_open,
+ .release = NULL
+};
+
+/* This function will complete an aen request from the isr */
+static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Command_Apache_Header *header;
+ unsigned short aen;
+ int retval = 1;
+
+ header = (TW_Command_Apache_Header *)tw_dev->generic_buffer_virt[request_id];
+ tw_dev->posted_request_count--;
+ aen = header->status_block.error;
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ command_packet = &full_command_packet->command.oldcommand;
+
+ /* First check for internal completion of set param for time sync */
+ if (TW_OP_OUT(command_packet->opcode__sgloffset) == TW_OP_SET_PARAM) {
+ /* Keep reading the queue in case there are more aen's */
+ if (twa_aen_read_queue(tw_dev, request_id))
+ goto out2;
+ else {
+ retval = 0;
+ goto out;
+ }
+ }
+
+ switch (aen) {
+ case TW_AEN_QUEUE_EMPTY:
+ /* Quit reading the queue if this is the last one */
+ break;
+ case TW_AEN_SYNC_TIME_WITH_HOST:
+ twa_aen_sync_time(tw_dev, request_id);
+ retval = 0;
+ goto out;
+ default:
+ twa_aen_queue_event(tw_dev, header);
+
+ /* If there are more aen's, keep reading the queue */
+ if (twa_aen_read_queue(tw_dev, request_id))
+ goto out2;
+ else {
+ retval = 0;
+ goto out;
+ }
+ }
+ retval = 0;
+out2:
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ clear_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags);
+out:
+ return retval;
+} /* End twa_aen_complete() */
+
+/* This function will drain the aen queue */
+static int twa_aen_drain_queue(TW_Device_Extension *tw_dev, int no_check_reset)
+{
+ int request_id = 0;
+ char cdb[TW_MAX_CDB_LEN];
+ TW_SG_Apache sglist[1];
+ int finished = 0, count = 0;
+ TW_Command_Full *full_command_packet;
+ TW_Command_Apache_Header *header;
+ unsigned short aen;
+ int first_reset = 0, queue = 0, retval = 1;
+
+ if (no_check_reset)
+ first_reset = 0;
+ else
+ first_reset = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+
+ /* Initialize cdb */
+ memset(&cdb, 0, TW_MAX_CDB_LEN);
+ cdb[0] = REQUEST_SENSE; /* opcode */
+ cdb[4] = TW_ALLOCATION_LENGTH; /* allocation length */
+
+ /* Initialize sglist */
+ memset(&sglist, 0, sizeof(TW_SG_Apache));
+ sglist[0].length = TW_SECTOR_SIZE;
+ sglist[0].address = tw_dev->generic_buffer_phys[request_id];
+
+ if (sglist[0].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1, "Found unaligned address during AEN drain");
+ goto out;
+ }
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ do {
+ /* Send command to the board */
+ if (twa_scsiop_execute_scsi(tw_dev, request_id, cdb, 1, sglist)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2, "Error posting request sense");
+ goto out;
+ }
+
+ /* Now poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x3, "No valid response while draining AEN queue");
+ tw_dev->posted_request_count--;
+ goto out;
+ }
+
+ tw_dev->posted_request_count--;
+ header = (TW_Command_Apache_Header *)tw_dev->generic_buffer_virt[request_id];
+ aen = header->status_block.error;
+ queue = 0;
+ count++;
+
+ switch (aen) {
+ case TW_AEN_QUEUE_EMPTY:
+ if (first_reset != 1)
+ goto out;
+ else
+ finished = 1;
+ break;
+ case TW_AEN_SOFT_RESET:
+ if (first_reset == 0)
+ first_reset = 1;
+ else
+ queue = 1;
+ break;
+ case TW_AEN_SYNC_TIME_WITH_HOST:
+ break;
+ default:
+ queue = 1;
+ }
+
+ /* Now queue an event info */
+ if (queue)
+ twa_aen_queue_event(tw_dev, header);
+ } while ((finished == 0) && (count < TW_MAX_AEN_DRAIN));
+
+ if (count == TW_MAX_AEN_DRAIN)
+ goto out;
+
+ retval = 0;
+out:
+ tw_dev->state[request_id] = TW_S_INITIAL;
+ return retval;
+} /* End twa_aen_drain_queue() */
+
+/* This function will queue an event */
+static void twa_aen_queue_event(TW_Device_Extension *tw_dev, TW_Command_Apache_Header *header)
+{
+ u32 local_time;
+ struct timeval time;
+ TW_Event *event;
+ unsigned short aen;
+ char host[16];
+
+ tw_dev->aen_count++;
+
+ /* Fill out event info */
+ event = tw_dev->event_queue[tw_dev->error_index];
+
+ /* Check for clobber */
+ host[0] = '\0';
+ if (tw_dev->host) {
+ sprintf(host, " scsi%d:", tw_dev->host->host_no);
+ if (event->retrieved == TW_AEN_NOT_RETRIEVED)
+ tw_dev->aen_clobber = 1;
+ }
+
+ aen = header->status_block.error;
+ memset(event, 0, sizeof(TW_Event));
+
+ event->severity = TW_SEV_OUT(header->status_block.severity__reserved);
+ do_gettimeofday(&time);
+ local_time = (u32)(time.tv_sec - (sys_tz.tz_minuteswest * 60));
+ event->time_stamp_sec = local_time;
+ event->aen_code = aen;
+ event->retrieved = TW_AEN_NOT_RETRIEVED;
+ event->sequence_id = tw_dev->error_sequence_id;
+ tw_dev->error_sequence_id++;
+
+ header->err_specific_desc[sizeof(header->err_specific_desc) - 1] = '\0';
+ event->parameter_len = strlen(header->err_specific_desc);
+ memcpy(event->parameter_data, header->err_specific_desc, event->parameter_len);
+ if (event->severity != TW_AEN_SEVERITY_DEBUG)
+ printk(KERN_WARNING "3w-9xxx:%s AEN: %s (0x%02X:0x%04X): %s:%s.\n",
+ host,
+ twa_aen_severity_lookup(TW_SEV_OUT(header->status_block.severity__reserved)),
+ TW_MESSAGE_SOURCE_CONTROLLER_EVENT, aen,
+ twa_string_lookup(twa_aen_table, aen),
+ header->err_specific_desc);
+ else
+ tw_dev->aen_count--;
+
+ if ((tw_dev->error_index + 1) == TW_Q_LENGTH)
+ tw_dev->event_queue_wrapped = 1;
+ tw_dev->error_index = (tw_dev->error_index + 1) % TW_Q_LENGTH;
+} /* End twa_aen_queue_event() */
+
+/* This function will read the aen queue from the isr */
+static int twa_aen_read_queue(TW_Device_Extension *tw_dev, int request_id)
+{
+ char cdb[TW_MAX_CDB_LEN];
+ TW_SG_Apache sglist[1];
+ TW_Command_Full *full_command_packet;
+ int retval = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+
+ /* Initialize cdb */
+ memset(&cdb, 0, TW_MAX_CDB_LEN);
+ cdb[0] = REQUEST_SENSE; /* opcode */
+ cdb[4] = TW_ALLOCATION_LENGTH; /* allocation length */
+
+ /* Initialize sglist */
+ memset(&sglist, 0, sizeof(TW_SG_Apache));
+ sglist[0].length = TW_SECTOR_SIZE;
+ sglist[0].address = tw_dev->generic_buffer_phys[request_id];
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ /* Now post the command packet */
+ if (twa_scsiop_execute_scsi(tw_dev, request_id, cdb, 1, sglist)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x4, "Post failed while reading AEN queue");
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_aen_read_queue() */
+
+/* This function will look up an AEN severity string */
+static char *twa_aen_severity_lookup(unsigned char severity_code)
+{
+ char *retval = NULL;
+
+ if ((severity_code < (unsigned char) TW_AEN_SEVERITY_ERROR) ||
+ (severity_code > (unsigned char) TW_AEN_SEVERITY_DEBUG))
+ goto out;
+
+ retval = twa_aen_severity_table[severity_code];
+out:
+ return retval;
+} /* End twa_aen_severity_lookup() */
+
+/* This function will sync firmware time with the host time */
+static void twa_aen_sync_time(TW_Device_Extension *tw_dev, int request_id)
+{
+ u32 schedulertime;
+ struct timeval utc;
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Param_Apache *param;
+ u32 local_time;
+
+ /* Fill out the command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ command_packet = &full_command_packet->command.oldcommand;
+ command_packet->opcode__sgloffset = TW_OPSGL_IN(2, TW_OP_SET_PARAM);
+ command_packet->request_id = request_id;
+ command_packet->byte8_offset.param.sgl[0].address = tw_dev->generic_buffer_phys[request_id];
+ command_packet->byte8_offset.param.sgl[0].length = TW_SECTOR_SIZE;
+ command_packet->size = TW_COMMAND_SIZE;
+ command_packet->byte6_offset.parameter_count = 1;
+
+ /* Setup the param */
+ param = (TW_Param_Apache *)tw_dev->generic_buffer_virt[request_id];
+ memset(param, 0, TW_SECTOR_SIZE);
+ param->table_id = TW_TIMEKEEP_TABLE | 0x8000; /* Controller time keep table */
+ param->parameter_id = 0x3; /* SchedulerTime */
+ param->parameter_size_bytes = 4;
+
+ /* Convert UTC system time to local time, expressed as seconds
+    since the previous Sunday 12:00 AM */
+ do_gettimeofday(&utc);
+ local_time = (u32)(utc.tv_sec - (sys_tz.tz_minuteswest * 60));
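+ /* The Unix epoch fell on a Thursday, so the first Sunday 12:00 AM
+    came three days in; subtracting three days and reducing modulo
+    604800 (seconds per week) gives seconds since the most recent
+    Sunday midnight. */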
+ schedulertime = local_time - (3 * 86400);
+ schedulertime = schedulertime % 604800;
+
+ memcpy(param->data, &schedulertime, sizeof(u32));
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ /* Now post the command */
+ twa_post_command_packet(tw_dev, request_id, 1);
+} /* End twa_aen_sync_time() */
+
+/* This function will allocate memory and check if it is correctly aligned */
+static int twa_allocate_memory(TW_Device_Extension *tw_dev, int size, int which)
+{
+ int i;
+ dma_addr_t dma_handle;
+ unsigned long *cpu_addr;
+ int retval = 1;
+
+ cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, &dma_handle);
+ if (!cpu_addr) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x5, "Memory allocation failed");
+ goto out;
+ }
+
+ if ((unsigned long)cpu_addr % (TW_ALIGNMENT_9000)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x6, "Failed to allocate correctly aligned memory");
+ pci_free_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, cpu_addr, dma_handle);
+ goto out;
+ }
+
+ memset(cpu_addr, 0, size*TW_Q_LENGTH);
+
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ switch(which) {
+ case 0:
+ tw_dev->command_packet_phys[i] = dma_handle+(i*size);
+ tw_dev->command_packet_virt[i] = (TW_Command_Full *)((unsigned char *)cpu_addr + (i*size));
+ break;
+ case 1:
+ tw_dev->generic_buffer_phys[i] = dma_handle+(i*size);
+ tw_dev->generic_buffer_virt[i] = (unsigned long *)((unsigned char *)cpu_addr + (i*size));
+ break;
+ }
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_allocate_memory() */
+
+/* This function will check the status register for unexpected bits */
+static int twa_check_bits(u32 status_reg_value)
+{
+ int retval = 1;
+
+ if ((status_reg_value & TW_STATUS_EXPECTED_BITS) != TW_STATUS_EXPECTED_BITS)
+ goto out;
+ if ((status_reg_value & TW_STATUS_UNEXPECTED_BITS) != 0)
+ goto out;
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_check_bits() */
+
+/* This function will check the srl and decide if we are compatible */
+static int twa_check_srl(TW_Device_Extension *tw_dev, int *flashed)
+{
+ int retval = 1;
+ unsigned short fw_on_ctlr_srl = 0, fw_on_ctlr_arch_id = 0;
+ unsigned short fw_on_ctlr_branch = 0, fw_on_ctlr_build = 0;
+ u32 init_connect_result = 0;
+
+ if (twa_initconnection(tw_dev, TW_INIT_MESSAGE_CREDITS,
+ TW_EXTENDED_INIT_CONNECT, TW_CURRENT_FW_SRL,
+ TW_9000_ARCH_ID, TW_CURRENT_FW_BRANCH,
+ TW_CURRENT_FW_BUILD, &fw_on_ctlr_srl,
+ &fw_on_ctlr_arch_id, &fw_on_ctlr_branch,
+ &fw_on_ctlr_build, &init_connect_result)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x7, "Initconnection failed while checking SRL");
+ goto out;
+ }
+
+ tw_dev->working_srl = TW_CURRENT_FW_SRL;
+ tw_dev->working_branch = TW_CURRENT_FW_BRANCH;
+ tw_dev->working_build = TW_CURRENT_FW_BUILD;
+
+ /* Try base mode compatibility */
+ if (!(init_connect_result & TW_CTLR_FW_COMPATIBLE)) {
+ if (twa_initconnection(tw_dev, TW_INIT_MESSAGE_CREDITS,
+ TW_EXTENDED_INIT_CONNECT,
+ TW_BASE_FW_SRL, TW_9000_ARCH_ID,
+ TW_BASE_FW_BRANCH, TW_BASE_FW_BUILD,
+ &fw_on_ctlr_srl, &fw_on_ctlr_arch_id,
+ &fw_on_ctlr_branch, &fw_on_ctlr_build,
+ &init_connect_result)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xa, "Initconnection (base mode) failed while checking SRL");
+ goto out;
+ }
+ if (!(init_connect_result & TW_CTLR_FW_COMPATIBLE)) {
+ if (TW_CURRENT_FW_SRL > fw_on_ctlr_srl) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x32, "Firmware and driver incompatibility: please upgrade firmware");
+ } else {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x33, "Firmware and driver incompatibility: please upgrade driver");
+ }
+ goto out;
+ }
+ tw_dev->working_srl = TW_BASE_FW_SRL;
+ tw_dev->working_branch = TW_BASE_FW_BRANCH;
+ tw_dev->working_build = TW_BASE_FW_BUILD;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_check_srl() */
+
+/* This function handles ioctl for the character device */
+static int twa_chrdev_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+{
+ long timeout;
+ unsigned long *cpu_addr, data_buffer_length_adjusted = 0, flags = 0;
+ dma_addr_t dma_handle;
+ int request_id = 0;
+ unsigned int sequence_id = 0;
+ unsigned char event_index, start_index;
+ TW_Ioctl_Driver_Command driver_command;
+ TW_Ioctl_Buf_Apache *tw_ioctl;
+ TW_Lock *tw_lock;
+ TW_Command_Full *full_command_packet;
+ TW_Compatibility_Info *tw_compat_info;
+ TW_Event *event;
+ struct timeval current_time;
+ u32 current_time_ms;
+ TW_Device_Extension *tw_dev = twa_device_extension_list[iminor(inode)];
+ int retval = TW_IOCTL_ERROR_OS_EFAULT;
+
+ /* Only let one of these through at a time */
+ if (down_interruptible(&tw_dev->ioctl_sem)) {
+ retval = TW_IOCTL_ERROR_OS_EINTR;
+ goto out;
+ }
+
+ /* First copy down the driver command */
+ if (copy_from_user(&driver_command, (void *)arg, sizeof(TW_Ioctl_Driver_Command)))
+ goto out2;
+
+ /* Check data buffer size */
+ if (driver_command.buffer_length > TW_MAX_SECTORS * 512) {
+ retval = TW_IOCTL_ERROR_OS_EINVAL;
+ goto out2;
+ }
+
+ /* Hardware can only do transfers in multiples of 512 bytes */
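+ /* e.g. a 100-byte request is padded to 512 bytes, a 513-byte
+    request to 1024 */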
+ data_buffer_length_adjusted = (driver_command.buffer_length + 511) & ~511;
+
+ /* Now allocate ioctl buf memory */
+ cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, data_buffer_length_adjusted+sizeof(TW_Ioctl_Buf_Apache) - 1, &dma_handle);
+ if (!cpu_addr) {
+ retval = TW_IOCTL_ERROR_OS_ENOMEM;
+ goto out2;
+ }
+
+ tw_ioctl = (TW_Ioctl_Buf_Apache *)cpu_addr;
+
+ /* Now copy down the entire ioctl */
+ if (copy_from_user(tw_ioctl, (void *)arg, driver_command.buffer_length + sizeof(TW_Ioctl_Buf_Apache) - 1))
+ goto out3;
+
+ /* See which ioctl we are doing */
+ switch (cmd) {
+ case TW_IOCTL_FIRMWARE_PASS_THROUGH:
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ twa_get_request_id(tw_dev, &request_id);
+
+ /* Flag internal command */
+ tw_dev->srb[request_id] = 0;
+
+ /* Flag chrdev ioctl */
+ tw_dev->chrdev_request_id = request_id;
+
+ full_command_packet = &tw_ioctl->firmware_command;
+
+ /* Load request id and sglist for both command types */
+ twa_load_sgl(full_command_packet, request_id, dma_handle, data_buffer_length_adjusted);
+
+ memcpy(tw_dev->command_packet_virt[request_id], &(tw_ioctl->firmware_command), sizeof(TW_Command_Full));
+
+ /* Now post the command packet to the controller */
+ twa_post_command_packet(tw_dev, request_id, 1);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+
+ timeout = TW_IOCTL_CHRDEV_TIMEOUT*HZ;
+
+ /* Now wait for command to complete */
+ timeout = wait_event_interruptible_timeout(tw_dev->ioctl_wqueue, tw_dev->chrdev_request_id == TW_IOCTL_CHRDEV_FREE, timeout);
+
+ /* Check if we timed out, got a signal, or didn't get
+ an interrupt */
+ if ((timeout <= 0) && (tw_dev->chrdev_request_id != TW_IOCTL_CHRDEV_FREE)) {
+ /* Now we need to reset the board */
+ if (timeout == TW_IOCTL_ERROR_OS_ERESTARTSYS) {
+ retval = timeout;
+ } else {
+ printk(KERN_WARNING "3w-9xxx: scsi%d: WARNING: (0x%02X:0x%04X): Character ioctl (0x%x) timed out, resetting card.\n",
+ tw_dev->host->host_no, TW_DRIVER, 0xc,
+ cmd);
+ retval = TW_IOCTL_ERROR_OS_EIO;
+ }
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ tw_dev->posted_request_count--;
+ twa_reset_device_extension(tw_dev);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ goto out3;
+ }
+
+ /* Now copy in the command packet response */
+ memcpy(&(tw_ioctl->firmware_command), tw_dev->command_packet_virt[request_id], sizeof(TW_Command_Full));
+
+ /* Now complete the io */
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ break;
+ case TW_IOCTL_GET_COMPATIBILITY_INFO:
+ tw_ioctl->driver_command.status = 0;
+ /* Copy compatibility struct into ioctl data buffer */
+ tw_compat_info = (TW_Compatibility_Info *)tw_ioctl->data_buffer;
+ strncpy(tw_compat_info->driver_version, twa_driver_version, strlen(twa_driver_version));
+ tw_compat_info->working_srl = tw_dev->working_srl;
+ tw_compat_info->working_branch = tw_dev->working_branch;
+ tw_compat_info->working_build = tw_dev->working_build;
+ break;
+ case TW_IOCTL_GET_LAST_EVENT:
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ } else
+ tw_ioctl->driver_command.status = 0;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ tw_ioctl->driver_command.status = 0;
+ }
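+ /* error_index points at the next slot to be written, so the most recent
+    event is one slot back; adding TW_Q_LENGTH before the modulo keeps the
+    result non-negative when error_index is 0 */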
+ event_index = (tw_dev->error_index - 1 + TW_Q_LENGTH) % TW_Q_LENGTH;
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_FIRST_EVENT:
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ } else
+ tw_ioctl->driver_command.status = 0;
+ event_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ tw_ioctl->driver_command.status = 0;
+ event_index = 0;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_NEXT_EVENT:
+ event = (TW_Event *)tw_ioctl->data_buffer;
+ sequence_id = event->sequence_id;
+ tw_ioctl->driver_command.status = 0;
+
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ }
+ start_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ start_index = 0;
+ }
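+ /* Map the caller's sequence_id to a ring slot: events are stored with
+    consecutive sequence ids starting at start_index, so the offset from
+    the oldest event's id (+1 for "next") gives the slot, modulo ring size */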
+ event_index = (start_index + sequence_id - tw_dev->event_queue[start_index]->sequence_id + 1) % TW_Q_LENGTH;
+
+ if (!(tw_dev->event_queue[event_index]->sequence_id > sequence_id)) {
+ if (tw_ioctl->driver_command.status == TW_IOCTL_ERROR_STATUS_AEN_CLOBBER)
+ tw_dev->aen_clobber = 1;
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_PREVIOUS_EVENT:
+ event = (TW_Event *)tw_ioctl->data_buffer;
+ sequence_id = event->sequence_id;
+ tw_ioctl->driver_command.status = 0;
+
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ }
+ start_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ start_index = 0;
+ }
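+ /* Same slot mapping as TW_IOCTL_GET_NEXT_EVENT, but offset by -1 to
+    step backwards from the caller's sequence_id */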
+ event_index = (start_index + sequence_id - tw_dev->event_queue[start_index]->sequence_id - 1) % TW_Q_LENGTH;
+
+ if (!(tw_dev->event_queue[event_index]->sequence_id < sequence_id)) {
+ if (tw_ioctl->driver_command.status == TW_IOCTL_ERROR_STATUS_AEN_CLOBBER)
+ tw_dev->aen_clobber = 1;
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_LOCK:
+ tw_lock = (TW_Lock *)tw_ioctl->data_buffer;
+ do_gettimeofday(&current_time);
+ current_time_ms = (current_time.tv_sec * 1000) + (current_time.tv_usec / 1000);
+
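+ /* The ioctl "lock" is purely cooperative: it is granted if the caller
+    forces it, nobody holds it, or the previous holder's timeout expired */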
+ if ((tw_lock->force_flag == 1) || (tw_dev->ioctl_sem_lock == 0) || (current_time_ms >= tw_dev->ioctl_msec)) {
+ tw_dev->ioctl_sem_lock = 1;
+ tw_dev->ioctl_msec = current_time_ms + tw_lock->timeout_msec;
+ tw_ioctl->driver_command.status = 0;
+ tw_lock->time_remaining_msec = tw_lock->timeout_msec;
+ } else {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_LOCKED;
+ tw_lock->time_remaining_msec = tw_dev->ioctl_msec - current_time_ms;
+ }
+ break;
+ case TW_IOCTL_RELEASE_LOCK:
+ if (tw_dev->ioctl_sem_lock == 1) {
+ tw_dev->ioctl_sem_lock = 0;
+ tw_ioctl->driver_command.status = 0;
+ } else {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NOT_LOCKED;
+ }
+ break;
+ default:
+ retval = TW_IOCTL_ERROR_OS_ENOTTY;
+ goto out3;
+ }
+
+ /* Now copy the entire response to userspace */
+ if (copy_to_user((void *)arg, tw_ioctl, sizeof(TW_Ioctl_Buf_Apache) + driver_command.buffer_length - 1) == 0)
+ retval = 0;
+out3:
+ /* Now free ioctl buf memory */
+ pci_free_consistent(tw_dev->tw_pci_dev, data_buffer_length_adjusted+sizeof(TW_Ioctl_Buf_Apache) - 1, cpu_addr, dma_handle);
+out2:
+ up(&tw_dev->ioctl_sem);
+out:
+ return retval;
+} /* End twa_chrdev_ioctl() */
+
+/* This function handles open for the character device */
+static int twa_chrdev_open(struct inode *inode, struct file *file)
+{
+ unsigned int minor_number;
+ int retval = TW_IOCTL_ERROR_OS_ENODEV;
+
+ minor_number = iminor(inode);
+ if (minor_number >= twa_device_extension_count)
+ goto out;
+ retval = 0;
+out:
+ return retval;
+} /* End twa_chrdev_open() */
+
+/* This function will print readable messages from status register errors */
+static int twa_decode_bits(TW_Device_Extension *tw_dev, u32 status_reg_value)
+{
+ int retval = 1;
+
+ /* Check for various error conditions and handle them appropriately */
+ if (status_reg_value & TW_STATUS_PCI_PARITY_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xc, "PCI Parity Error: clearing");
+ writel(TW_CONTROL_CLEAR_PARITY_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_PCI_ABORT) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xd, "PCI Abort: clearing");
+ writel(TW_CONTROL_CLEAR_PCI_ABORT, TW_CONTROL_REG_ADDR(tw_dev));
+ pci_write_config_word(tw_dev->tw_pci_dev, PCI_STATUS, TW_PCI_CLEAR_PCI_ABORT);
+ }
+
+ if (status_reg_value & TW_STATUS_QUEUE_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xe, "Controller Queue Error: clearing");
+ writel(TW_CONTROL_CLEAR_QUEUE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_SBUF_WRITE_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xf, "SBUF Write Error: clearing");
+ writel(TW_CONTROL_CLEAR_SBUF_WRITE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_MICROCONTROLLER_ERROR) {
+ if (tw_dev->reset_print == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x10, "Microcontroller Error: clearing");
+ tw_dev->reset_print = 1;
+ }
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_decode_bits() */
+
+/* This function will empty the response queue */
+static int twa_empty_response_queue(TW_Device_Extension *tw_dev)
+{
+ u32 status_reg_value, response_que_value;
+ int count = 0, retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ while (((status_reg_value & TW_STATUS_RESPONSE_QUEUE_EMPTY) == 0) && (count < TW_MAX_RESPONSE_DRAIN)) {
+ response_que_value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ count++;
+ }
+ if (count == TW_MAX_RESPONSE_DRAIN)
+ goto out;
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_empty_response_queue() */
+
+/* This function passes sense keys from firmware to scsi layer */
+static int twa_fill_sense(TW_Device_Extension *tw_dev, int request_id, int copy_sense, int print_host)
+{
+ TW_Command_Full *full_command_packet;
+ unsigned short error;
+ int retval = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ /* Don't print error for Logical unit not supported during rollcall */
+ error = full_command_packet->header.status_block.error;
+ if ((error != TW_ERROR_LOGICAL_UNIT_NOT_SUPPORTED) && (error != TW_ERROR_UNIT_OFFLINE)) {
+ if (print_host)
+ printk(KERN_WARNING "3w-9xxx: scsi%d: ERROR: (0x%02X:0x%04X): %s:%s.\n",
+ tw_dev->host->host_no,
+ TW_MESSAGE_SOURCE_CONTROLLER_ERROR,
+ full_command_packet->header.status_block.error,
+ twa_string_lookup(twa_error_table,
+ full_command_packet->header.status_block.error),
+ full_command_packet->header.err_specific_desc);
+ else
+ printk(KERN_WARNING "3w-9xxx: ERROR: (0x%02X:0x%04X): %s:%s.\n",
+ TW_MESSAGE_SOURCE_CONTROLLER_ERROR,
+ full_command_packet->header.status_block.error,
+ twa_string_lookup(twa_error_table,
+ full_command_packet->header.status_block.error),
+ full_command_packet->header.err_specific_desc);
+ }
+
+ if (copy_sense) {
+ memcpy(tw_dev->srb[request_id]->sense_buffer, full_command_packet->header.sense_data, TW_SENSE_DATA_LENGTH);
+ tw_dev->srb[request_id]->result = (full_command_packet->command.newcommand.status << 1);
+ retval = TW_ISR_DONT_RESULT;
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_fill_sense() */
+
+/* This function will free up device extension resources */
+static void twa_free_device_extension(TW_Device_Extension *tw_dev)
+{
+ if (tw_dev->command_packet_virt[0])
+ pci_free_consistent(tw_dev->tw_pci_dev,
+ sizeof(TW_Command_Full)*TW_Q_LENGTH,
+ tw_dev->command_packet_virt[0],
+ tw_dev->command_packet_phys[0]);
+
+ if (tw_dev->generic_buffer_virt[0])
+ pci_free_consistent(tw_dev->tw_pci_dev,
+ TW_SECTOR_SIZE*TW_Q_LENGTH,
+ tw_dev->generic_buffer_virt[0],
+ tw_dev->generic_buffer_phys[0]);
+
+ if (tw_dev->event_queue[0])
+ kfree(tw_dev->event_queue[0]);
+} /* End twa_free_device_extension() */
+
+/* This function will free a request id */
+static void twa_free_request_id(TW_Device_Extension *tw_dev, int request_id)
+{
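+ /* The free list is a circular queue of request ids: freeing appends at
+    free_tail, while twa_get_request_id() consumes from free_head */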
+ tw_dev->free_queue[tw_dev->free_tail] = request_id;
+ tw_dev->state[request_id] = TW_S_FINISHED;
+ tw_dev->free_tail = (tw_dev->free_tail + 1) % TW_Q_LENGTH;
+} /* End twa_free_request_id() */
+
+/* This function will get parameter table entries from the firmware */
+static void *twa_get_param(TW_Device_Extension *tw_dev, int request_id, int table_id, int parameter_id, int parameter_size_bytes)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Param_Apache *param;
+ unsigned long param_value;
+ void *retval = NULL;
+
+ /* Setup the command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ command_packet = &full_command_packet->command.oldcommand;
+
+ command_packet->opcode__sgloffset = TW_OPSGL_IN(2, TW_OP_GET_PARAM);
+ command_packet->size = TW_COMMAND_SIZE;
+ command_packet->request_id = request_id;
+ command_packet->byte6_offset.block_count = 1;
+
+ /* Now setup the param */
+ param = (TW_Param_Apache *)tw_dev->generic_buffer_virt[request_id];
+ memset(param, 0, TW_SECTOR_SIZE);
+ param->table_id = table_id | 0x8000;
+ param->parameter_id = parameter_id;
+ param->parameter_size_bytes = parameter_size_bytes;
+ param_value = tw_dev->generic_buffer_phys[request_id];
+
+ command_packet->byte8_offset.param.sgl[0].address = param_value;
+ command_packet->byte8_offset.param.sgl[0].length = TW_SECTOR_SIZE;
+
+ /* Post the command packet to the board */
+ twa_post_command_packet(tw_dev, request_id, 1);
+
+ /* Poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30))
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x13, "No valid response during get param")
+ else
+ retval = (void *)&(param->data[0]);
+
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_INITIAL;
+
+ return retval;
+} /* End twa_get_param() */
+
+/* This function will assign an available request id */
+static void twa_get_request_id(TW_Device_Extension *tw_dev, int *request_id)
+{
+ *request_id = tw_dev->free_queue[tw_dev->free_head];
+ tw_dev->free_head = (tw_dev->free_head + 1) % TW_Q_LENGTH;
+ tw_dev->state[*request_id] = TW_S_STARTED;
+} /* End twa_get_request_id() */
+
+/* This function will send an initconnection command to controller */
+static int twa_initconnection(TW_Device_Extension *tw_dev, int message_credits,
+ u32 set_features, unsigned short current_fw_srl,
+ unsigned short current_fw_arch_id,
+ unsigned short current_fw_branch,
+ unsigned short current_fw_build,
+ unsigned short *fw_on_ctlr_srl,
+ unsigned short *fw_on_ctlr_arch_id,
+ unsigned short *fw_on_ctlr_branch,
+ unsigned short *fw_on_ctlr_build,
+ u32 *init_connect_result)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Initconnect *tw_initconnect;
+ int request_id = 0, retval = 1;
+
+ /* Initialize InitConnection command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ full_command_packet->header.header_desc.size_header = 128;
+
+ tw_initconnect = (TW_Initconnect *)&full_command_packet->command.oldcommand;
+ tw_initconnect->opcode__reserved = TW_OPRES_IN(0, TW_OP_INIT_CONNECTION);
+ tw_initconnect->request_id = request_id;
+ tw_initconnect->message_credits = message_credits;
+ tw_initconnect->features = set_features;
+#if BITS_PER_LONG > 32
+ /* Turn on 64-bit sgl support */
+ tw_initconnect->features |= 1;
+#endif
+
+ if (set_features & TW_EXTENDED_INIT_CONNECT) {
+ tw_initconnect->size = TW_INIT_COMMAND_PACKET_SIZE_EXTENDED;
+ tw_initconnect->fw_srl = current_fw_srl;
+ tw_initconnect->fw_arch_id = current_fw_arch_id;
+ tw_initconnect->fw_branch = current_fw_branch;
+ tw_initconnect->fw_build = current_fw_build;
+ } else
+ tw_initconnect->size = TW_INIT_COMMAND_PACKET_SIZE;
+
+ /* Send command packet to the board */
+ twa_post_command_packet(tw_dev, request_id, 1);
+
+ /* Poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x15, "No valid response during init connection");
+ } else {
+ if (set_features & TW_EXTENDED_INIT_CONNECT) {
+ *fw_on_ctlr_srl = tw_initconnect->fw_srl;
+ *fw_on_ctlr_arch_id = tw_initconnect->fw_arch_id;
+ *fw_on_ctlr_branch = tw_initconnect->fw_branch;
+ *fw_on_ctlr_build = tw_initconnect->fw_build;
+ *init_connect_result = tw_initconnect->result;
+ }
+ retval = 0;
+ }
+
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_INITIAL;
+
+ return retval;
+} /* End twa_initconnection() */
+
+/* This function will initialize the fields of a device extension */
+static int twa_initialize_device_extension(TW_Device_Extension *tw_dev)
+{
+ int i, retval = 1;
+
+ /* Initialize command packet buffers */
+ if (twa_allocate_memory(tw_dev, sizeof(TW_Command_Full), 0)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x16, "Command packet memory allocation failed");
+ goto out;
+ }
+
+ /* Initialize generic buffer */
+ if (twa_allocate_memory(tw_dev, TW_SECTOR_SIZE, 1)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x17, "Generic memory allocation failed");
+ goto out;
+ }
+
+ /* Allocate event info space */
+ tw_dev->event_queue[0] = kmalloc(sizeof(TW_Event) * TW_Q_LENGTH, GFP_KERNEL);
+ if (!tw_dev->event_queue[0]) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x18, "Event info memory allocation failed");
+ goto out;
+ }
+
+ memset(tw_dev->event_queue[0], 0, sizeof(TW_Event) * TW_Q_LENGTH);
+
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ tw_dev->event_queue[i] = (TW_Event *)((unsigned char *)tw_dev->event_queue[0] + (i * sizeof(TW_Event)));
+ tw_dev->free_queue[i] = i;
+ tw_dev->state[i] = TW_S_INITIAL;
+ }
+
+ tw_dev->pending_head = TW_Q_START;
+ tw_dev->pending_tail = TW_Q_START;
+ tw_dev->free_head = TW_Q_START;
+ tw_dev->free_tail = TW_Q_START;
+ tw_dev->error_sequence_id = 1;
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+
+ init_MUTEX(&tw_dev->ioctl_sem);
+ init_waitqueue_head(&tw_dev->ioctl_wqueue);
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_initialize_device_extension() */
+
+/* This function is the interrupt service routine */
+static irqreturn_t twa_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ int request_id, error = 0;
+ u32 status_reg_value;
+ TW_Response_Queue response_que;
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)dev_instance;
+ int handled = 0;
+
+ /* Get the per adapter lock */
+ spin_lock(tw_dev->host->host_lock);
+
+ /* See if the interrupt matches this instance */
+ if (tw_dev->tw_pci_dev->irq == (unsigned int)irq) {
+
+ handled = 1;
+
+ /* Read the registers */
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ /* Check if this is our interrupt, otherwise bail */
+ if (!(status_reg_value & TW_STATUS_VALID_INTERRUPT))
+ goto twa_interrupt_bail;
+
+ /* Check controller for errors */
+ if (twa_check_bits(status_reg_value)) {
+ if (twa_decode_bits(tw_dev, status_reg_value)) {
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+
+ /* Handle host interrupt */
+ if (status_reg_value & TW_STATUS_HOST_INTERRUPT)
+ TW_CLEAR_HOST_INTERRUPT(tw_dev);
+
+ /* Handle attention interrupt */
+ if (status_reg_value & TW_STATUS_ATTENTION_INTERRUPT) {
+ TW_CLEAR_ATTENTION_INTERRUPT(tw_dev);
+ if (!(test_and_set_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags))) {
+ twa_get_request_id(tw_dev, &request_id);
+
+ error = twa_aen_read_queue(tw_dev, request_id);
+ if (error) {
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ clear_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags);
+ }
+ }
+ }
+
+ /* Handle command interrupt */
+ if (status_reg_value & TW_STATUS_COMMAND_INTERRUPT) {
+ TW_MASK_COMMAND_INTERRUPT(tw_dev);
+ /* Drain as many pending commands as we can */
+ while (tw_dev->pending_request_count > 0) {
+ request_id = tw_dev->pending_queue[tw_dev->pending_head];
+ if (tw_dev->state[request_id] != TW_S_PENDING) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x19, "Found request id that wasn't pending");
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ if (twa_post_command_packet(tw_dev, request_id, 1)==0) {
+ tw_dev->pending_head = (tw_dev->pending_head + 1) % TW_Q_LENGTH;
+ tw_dev->pending_request_count--;
+ } else {
+ /* If we get here, we will continue re-posting on the next command interrupt */
+ break;
+ }
+ }
+ }
+
+ /* Handle response interrupt */
+ if (status_reg_value & TW_STATUS_RESPONSE_INTERRUPT) {
+
+ /* Drain the response queue from the board */
+ while ((status_reg_value & TW_STATUS_RESPONSE_QUEUE_EMPTY) == 0) {
+ /* Complete the response */
+ response_que.value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ request_id = TW_RESID_OUT(response_que.response_id);
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ error = 0;
+ command_packet = &full_command_packet->command.oldcommand;
+ /* Check for command packet errors */
+ if (full_command_packet->command.newcommand.status != 0) {
+ if (tw_dev->srb[request_id] != 0) {
+ error = twa_fill_sense(tw_dev, request_id, 1, 1);
+ } else {
+ /* Skip ioctl error prints */
+ if (request_id != tw_dev->chrdev_request_id) {
+ error = twa_fill_sense(tw_dev, request_id, 0, 1);
+ }
+ }
+ }
+
+ /* Check for correct state */
+ if (tw_dev->state[request_id] != TW_S_POSTED) {
+ if (tw_dev->srb[request_id] != 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1a, "Received a request id that wasn't posted");
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+
+ /* Check for internal command completion */
+ if (tw_dev->srb[request_id] == 0) {
+ if (request_id != tw_dev->chrdev_request_id) {
+ if (twa_aen_complete(tw_dev, request_id))
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1b, "Error completing AEN during attention interrupt");
+ } else {
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+ wake_up(&tw_dev->ioctl_wqueue);
+ }
+ } else {
+ twa_scsiop_execute_scsi_complete(tw_dev, request_id);
+ /* If there was no error, the command was a success */
+ if (error == 0) {
+ tw_dev->srb[request_id]->result = (DID_OK << 16);
+ }
+
+ /* If error, command failed */
+ if (error == 1) {
+ /* Ask for a host reset */
+ tw_dev->srb[request_id]->result = (DID_OK << 16) | (CHECK_CONDITION << 1);
+ }
+
+ /* Now complete the io */
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ tw_dev->posted_request_count--;
+ tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+ twa_unmap_scsi_data(tw_dev, request_id);
+ }
+
+ /* Check for valid status after each drain */
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ if (twa_check_bits(status_reg_value)) {
+ if (twa_decode_bits(tw_dev, status_reg_value)) {
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+ }
+ }
+ }
+twa_interrupt_bail:
+ spin_unlock(tw_dev->host->host_lock);
+ return IRQ_RETVAL(handled);
+} /* End twa_interrupt() */
+
+/* This function will load the request id and various sgls for ioctls */
+static void twa_load_sgl(TW_Command_Full *full_command_packet, int request_id, dma_addr_t dma_handle, int length)
+{
+ TW_Command *oldcommand;
+ TW_Command_Apache *newcommand;
+ TW_SG_Entry *sgl;
+
+ if (TW_OP_OUT(full_command_packet->command.newcommand.opcode__reserved) == TW_OP_EXECUTE_SCSI) {
+ newcommand = &full_command_packet->command.newcommand;
+ newcommand->request_id = request_id;
+ newcommand->sg_list[0].address = dma_handle + sizeof(TW_Ioctl_Buf_Apache) - 1;
+ newcommand->sg_list[0].length = length;
+ } else {
+ oldcommand = &full_command_packet->command.oldcommand;
+ oldcommand->request_id = request_id;
+
+ if (TW_SGL_OUT(oldcommand->opcode__sgloffset)) {
+ /* Load the sg list */
+ sgl = (TW_SG_Entry *)((u32 *)oldcommand+TW_SGL_OUT(oldcommand->opcode__sgloffset));
+ sgl->address = dma_handle + sizeof(TW_Ioctl_Buf_Apache) - 1;
+ sgl->length = length;
+ }
+ }
+} /* End twa_load_sgl() */
+
+/* This function will perform a pci-dma mapping for a scatter gather list */
+static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ int use_sg;
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+ int retval = 0;
+
+ if (cmd->use_sg == 0)
+ goto out;
+
+ use_sg = pci_map_sg(pdev, cmd->buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
+
+ if (use_sg == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list");
+ goto out;
+ }
+
+ cmd->SCp.phase = TW_PHASE_SGLIST;
+ cmd->SCp.have_data_in = use_sg;
+ retval = use_sg;
+out:
+ return retval;
+} /* End twa_map_scsi_sg_data() */
+
+/* This function will perform a pci-dma map for a single buffer */
+static dma_addr_t twa_map_scsi_single_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ dma_addr_t mapping;
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+ dma_addr_t retval = 0;
+
+ if (cmd->request_bufflen == 0) {
+ retval = 0;
+ goto out;
+ }
+
+ mapping = pci_map_single(pdev, cmd->request_buffer, cmd->request_bufflen, DMA_BIDIRECTIONAL);
+
+ if (mapping == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Failed to map page");
+ goto out;
+ }
+
+ cmd->SCp.phase = TW_PHASE_SINGLE;
+ cmd->SCp.have_data_in = mapping;
+ retval = mapping;
+out:
+ return retval;
+} /* End twa_map_scsi_single_data() */
+
+/* This function will poll for a response interrupt of a request */
+static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds)
+{
+ int retval = 1, found = 0, response_request_id;
+ TW_Response_Queue response_queue;
+ TW_Command_Full *full_command_packet = tw_dev->command_packet_virt[request_id];
+
+ if (twa_poll_status_gone(tw_dev, TW_STATUS_RESPONSE_QUEUE_EMPTY, seconds) == 0) {
+ response_queue.value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ response_request_id = TW_RESID_OUT(response_queue.response_id);
+ if (request_id != response_request_id) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1e, "Found unexpected request id while polling for response");
+ goto out;
+ }
+ if (TW_OP_OUT(full_command_packet->command.newcommand.opcode__reserved) == TW_OP_EXECUTE_SCSI) {
+ if (full_command_packet->command.newcommand.status != 0) {
+ /* bad response */
+ twa_fill_sense(tw_dev, request_id, 0, 0);
+ goto out;
+ }
+ found = 1;
+ } else {
+ if (full_command_packet->command.oldcommand.status != 0) {
+ /* bad response */
+ twa_fill_sense(tw_dev, request_id, 0, 0);
+ goto out;
+ }
+ found = 1;
+ }
+ }
+
+ if (found)
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_response() */
+
+/* This function will poll the status register for a flag */
+static int twa_poll_status(TW_Device_Extension *tw_dev, u32 flag, int seconds)
+{
+ u32 status_reg_value;
+ unsigned long before;
+ int retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ before = jiffies;
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ while ((status_reg_value & flag) != flag) {
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ if (time_after(jiffies, before + HZ * seconds))
+ goto out;
+
+ msleep(50);
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_status() */
+
+/* This function will poll the status register for disappearance of a flag */
+static int twa_poll_status_gone(TW_Device_Extension *tw_dev, u32 flag, int seconds)
+{
+ u32 status_reg_value;
+ unsigned long before;
+ int retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ before = jiffies;
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ while ((status_reg_value & flag) != 0) {
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ if (time_after(jiffies, before + HZ * seconds))
+ goto out;
+
+ msleep(50);
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_status_gone() */
+
+/* This function will attempt to post a command packet to the board */
+static int twa_post_command_packet(TW_Device_Extension *tw_dev, int request_id, char internal)
+{
+ u32 status_reg_value;
+ unsigned long command_que_value;
+ int retval = 1;
+
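+ /* If the controller's command queue is full, or older requests are
+    already pending, external commands are bounced back to the midlayer
+    with SCSI_MLQUEUE_HOST_BUSY, while internal commands are queued
+    locally and re-posted from the command interrupt handler */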
+ command_que_value = tw_dev->command_packet_phys[request_id];
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ if (((tw_dev->pending_request_count > 0) && (tw_dev->state[request_id] != TW_S_PENDING)) || (status_reg_value & TW_STATUS_COMMAND_QUEUE_FULL)) {
+
+ /* Only pend internal driver commands */
+ if (!internal) {
+ retval = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ }
+
+ /* Couldn't post the command packet, so we do it later */
+ if (tw_dev->state[request_id] != TW_S_PENDING) {
+ tw_dev->state[request_id] = TW_S_PENDING;
+ tw_dev->pending_request_count++;
+ if (tw_dev->pending_request_count > tw_dev->max_pending_request_count) {
+ tw_dev->max_pending_request_count = tw_dev->pending_request_count;
+ }
+ tw_dev->pending_queue[tw_dev->pending_tail] = request_id;
+ tw_dev->pending_tail = (tw_dev->pending_tail + 1) % TW_Q_LENGTH;
+ }
+ TW_UNMASK_COMMAND_INTERRUPT(tw_dev);
+ goto out;
+ } else {
+ /* We successfully posted the command packet */
+#if BITS_PER_LONG > 32
+ writeq(TW_COMMAND_OFFSET + command_que_value, TW_COMMAND_QUEUE_REG_ADDR(tw_dev));
+#else
+ writel(TW_COMMAND_OFFSET + command_que_value, TW_COMMAND_QUEUE_REG_ADDR(tw_dev));
+#endif
+ tw_dev->state[request_id] = TW_S_POSTED;
+ tw_dev->posted_request_count++;
+ if (tw_dev->posted_request_count > tw_dev->max_posted_request_count) {
+ tw_dev->max_posted_request_count = tw_dev->posted_request_count;
+ }
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_post_command_packet() */
+
+/* This function will reset a device extension */
+static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
+{
+ int i = 0;
+ int retval = 1;
+
+ /* Abort all requests that are in progress */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ if ((tw_dev->state[i] != TW_S_FINISHED) &&
+ (tw_dev->state[i] != TW_S_INITIAL) &&
+ (tw_dev->state[i] != TW_S_COMPLETED)) {
+ if (tw_dev->srb[i]) {
+ tw_dev->srb[i]->result = (DID_RESET << 16);
+ tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+ twa_unmap_scsi_data(tw_dev, i);
+ }
+ }
+ }
+
+ /* Reset queues and counts */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ tw_dev->free_queue[i] = i;
+ tw_dev->state[i] = TW_S_INITIAL;
+ }
+ tw_dev->free_head = TW_Q_START;
+ tw_dev->free_tail = TW_Q_START;
+ tw_dev->posted_request_count = 0;
+ tw_dev->pending_request_count = 0;
+ tw_dev->pending_head = TW_Q_START;
+ tw_dev->pending_tail = TW_Q_START;
+ tw_dev->reset_print = 0;
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+ tw_dev->flags = 0;
+
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ if (twa_reset_sequence(tw_dev, 1))
+ goto out;
+
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_reset_device_extension() */
+
+/* This function will reset a controller */
+static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset)
+{
+ int tries = 0, retval = 1, flashed = 0, do_soft_reset = soft_reset;
+
+ while (tries < TW_MAX_RESET_TRIES) {
+ if (do_soft_reset)
+ TW_SOFT_RESET(tw_dev);
+
+ /* Make sure controller is in a good state */
+ if (twa_poll_status(tw_dev, TW_STATUS_MICROCONTROLLER_READY | (do_soft_reset == 1 ? TW_STATUS_ATTENTION_INTERRUPT : 0), 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1f, "Microcontroller not ready during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ /* Empty response queue */
+ if (twa_empty_response_queue(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x20, "Response queue empty failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ flashed = 0;
+
+ /* Check for compatibility/flash */
+ if (twa_check_srl(tw_dev, &flashed)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x21, "Compatibility check failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ } else {
+ if (flashed) {
+ tries++;
+ continue;
+ }
+ }
+
+ /* Drain the AEN queue */
+ if (twa_aen_drain_queue(tw_dev, soft_reset)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x22, "AEN drain failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ /* If we got here, controller is in a good state */
+ retval = 0;
+ goto out;
+ }
+out:
+ return retval;
+} /* End twa_reset_sequence() */
+
+/* This function returns unit geometry in cylinders/heads/sectors */
+static int twa_scsi_biosparam(struct scsi_device *sdev, struct block_device *bdev, sector_t capacity, int geom[])
+{
+ int heads, sectors, cylinders;
+ TW_Device_Extension *tw_dev;
+
+ tw_dev = (TW_Device_Extension *)sdev->host->hostdata;
+
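+ /* Heuristic geometry: units of 1GB or more (0x200000 512-byte sectors)
+    report 255 heads/63 sectors per track, smaller units 64/32 */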
+ if (capacity >= 0x200000) {
+ heads = 255;
+ sectors = 63;
+ cylinders = sector_div(capacity, heads * sectors);
+ } else {
+ heads = 64;
+ sectors = 32;
+ cylinders = sector_div(capacity, heads * sectors);
+ }
+
+ geom[0] = heads;
+ geom[1] = sectors;
+ geom[2] = cylinders;
+
+ return 0;
+} /* End twa_scsi_biosparam() */
+
+/* This is the new scsi eh abort function */
+static int twa_scsi_eh_abort(struct scsi_cmnd *SCpnt)
+{
+ int i;
+ TW_Device_Extension *tw_dev = NULL;
+ int retval = FAILED;
+
+ tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ spin_unlock_irq(tw_dev->host->host_lock);
+
+ tw_dev->num_aborts++;
+
+ /* If we find any IO's in process, we have to reset the card */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ if ((tw_dev->state[i] != TW_S_FINISHED) && (tw_dev->state[i] != TW_S_INITIAL)) {
+ printk(KERN_WARNING "3w-9xxx: scsi%d: WARNING: (0x%02X:0x%04X): Unit #%d: Command (0x%x) timed out, resetting card.\n",
+ tw_dev->host->host_no, TW_DRIVER, 0x2c,
+ SCpnt->device->id, SCpnt->cmnd[0]);
+ if (twa_reset_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2a, "Controller reset failed during scsi abort");
+ goto out;
+ }
+ break;
+ }
+ }
+ retval = SUCCESS;
+out:
+ spin_lock_irq(tw_dev->host->host_lock);
+ return retval;
+} /* End twa_scsi_eh_abort() */
+
+/* This is the new scsi eh reset function */
+static int twa_scsi_eh_reset(struct scsi_cmnd *SCpnt)
+{
+ TW_Device_Extension *tw_dev = NULL;
+ int retval = FAILED;
+
+ tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ spin_unlock_irq(tw_dev->host->host_lock);
+
+ tw_dev->num_resets++;
+
+ printk(KERN_WARNING "3w-9xxx: scsi%d: SCSI host reset started.\n", tw_dev->host->host_no);
+
+ /* Now reset the card and some of the device extension data */
+ if (twa_reset_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2b, "Controller reset failed during scsi host reset");
+ goto out;
+ }
+ printk(KERN_WARNING "3w-9xxx: scsi%d: SCSI host reset succeeded.\n", tw_dev->host->host_no);
+ retval = SUCCESS;
+out:
+ spin_lock_irq(tw_dev->host->host_lock);
+ return retval;
+} /* End twa_scsi_eh_reset() */
+
+/* This is the main scsi queue function to handle scsi opcodes */
+static int twa_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+{
+ int request_id, retval;
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ /* Save done function into scsi_cmnd struct */
+ SCpnt->scsi_done = done;
+
+ /* Get a free request id */
+ twa_get_request_id(tw_dev, &request_id);
+
+ /* Save the scsi command for use by the ISR */
+ tw_dev->srb[request_id] = SCpnt;
+
+ /* Initialize phase to zero */
+ SCpnt->SCp.phase = TW_PHASE_INITIAL;
+
+ retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ switch (retval) {
+ case SCSI_MLQUEUE_HOST_BUSY:
+ twa_free_request_id(tw_dev, request_id);
+ break;
+ case 1:
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ SCpnt->result = (DID_ERROR << 16);
+ done(SCpnt);
+ }
+
+ return retval;
+} /* End twa_scsi_queue() */
+
+/* This function hands scsi cdb's to the firmware */
+static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Apache *sglistarg)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command_Apache *command_packet;
+ u32 num_sectors = 0x0;
+ int i, sg_count;
+ struct scsi_cmnd *srb = NULL;
+ struct scatterlist *sglist = NULL;
+ u32 buffaddr = 0x0;
+ int retval = 1;
+
+ if (tw_dev->srb[request_id]) {
+ if (tw_dev->srb[request_id]->request_buffer) {
+ sglist = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
+ }
+ srb = tw_dev->srb[request_id];
+ }
+
+ /* Initialize command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ full_command_packet->header.header_desc.size_header = 128;
+ full_command_packet->header.status_block.error = 0;
+ full_command_packet->header.status_block.severity__reserved = 0;
+
+ command_packet = &full_command_packet->command.newcommand;
+ command_packet->status = 0;
+ command_packet->opcode__reserved = TW_OPRES_IN(0, TW_OP_EXECUTE_SCSI);
+
+ /* We forced 16-byte cdb use earlier */
+ if (!cdb)
+ memcpy(command_packet->cdb, srb->cmnd, TW_MAX_CDB_LEN);
+ else
+ memcpy(command_packet->cdb, cdb, TW_MAX_CDB_LEN);
+
+ if (srb)
+ command_packet->unit = srb->device->id;
+ else
+ command_packet->unit = 0;
+
+ command_packet->request_id = request_id;
+ command_packet->sgl_offset = 16;
+
+ if (!sglistarg) {
+ /* Map sglist from scsi layer to cmd packet */
+ if (tw_dev->srb[request_id]->use_sg == 0) {
+ if (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH) {
+ command_packet->sg_list[0].address = tw_dev->generic_buffer_phys[request_id];
+ command_packet->sg_list[0].length = TW_MIN_SGL_LENGTH;
+ } else {
+ buffaddr = twa_map_scsi_single_data(tw_dev, request_id);
+ if (buffaddr == 0)
+ goto out;
+
+ command_packet->sg_list[0].address = buffaddr;
+ command_packet->sg_list[0].length = tw_dev->srb[request_id]->request_bufflen;
+ }
+ command_packet->sgl_entries = 1;
+
+ if (command_packet->sg_list[0].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2d, "Found unaligned address during execute scsi");
+ goto out;
+ }
+ }
+
+ if (tw_dev->srb[request_id]->use_sg > 0) {
+ sg_count = twa_map_scsi_sg_data(tw_dev, request_id);
+ if (sg_count == 0)
+ goto out;
+
+ for (i = 0; i < sg_count; i++) {
+ command_packet->sg_list[i].address = sg_dma_address(&sglist[i]);
+ command_packet->sg_list[i].length = sg_dma_len(&sglist[i]);
+ if (command_packet->sg_list[i].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2e, "Found unaligned sgl address during execute scsi");
+ goto out;
+ }
+ }
+ command_packet->sgl_entries = tw_dev->srb[request_id]->use_sg;
+ }
+ } else {
+ /* Internal cdb post */
+ for (i = 0; i < use_sg; i++) {
+ command_packet->sg_list[i].address = sglistarg[i].address;
+ command_packet->sg_list[i].length = sglistarg[i].length;
+ if (command_packet->sg_list[i].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2f, "Found unaligned sgl address during internal post");
+ goto out;
+ }
+ }
+ command_packet->sgl_entries = use_sg;
+ }
+
+ if (srb) {
+ if (srb->cmnd[0] == READ_6 || srb->cmnd[0] == WRITE_6)
+ num_sectors = (u32)srb->cmnd[4];
+
+ if (srb->cmnd[0] == READ_10 || srb->cmnd[0] == WRITE_10)
+ num_sectors = (u32)srb->cmnd[8] | ((u32)srb->cmnd[7] << 8);
+ }
+
+ /* Update sector statistic */
+ tw_dev->sector_count = num_sectors;
+ if (tw_dev->sector_count > tw_dev->max_sector_count)
+ tw_dev->max_sector_count = tw_dev->sector_count;
+
+ /* Update SG statistics */
+ if (srb) {
+ tw_dev->sgl_entries = tw_dev->srb[request_id]->use_sg;
+ if (tw_dev->sgl_entries > tw_dev->max_sgl_entries)
+ tw_dev->max_sgl_entries = tw_dev->sgl_entries;
+ }
+
+ /* Now post the command to the board */
+ if (srb) {
+ retval = twa_post_command_packet(tw_dev, request_id, 0);
+ } else {
+ twa_post_command_packet(tw_dev, request_id, 1);
+ retval = 0;
+ }
+out:
+ return retval;
+} /* End twa_scsiop_execute_scsi() */
+
+/* This function completes an execute scsi operation */
+static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id)
+{
+ /* Copy the response if too small */
+ if ((tw_dev->srb[request_id]->request_buffer) && (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH)) {
+ memcpy(tw_dev->srb[request_id]->request_buffer,
+ tw_dev->generic_buffer_virt[request_id],
+ tw_dev->srb[request_id]->request_bufflen);
+ }
+} /* End twa_scsiop_execute_scsi_complete() */
+
+/* This function tells the controller to shut down */
+static void __twa_shutdown(TW_Device_Extension *tw_dev)
+{
+ /* Disable interrupts */
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ printk(KERN_WARNING "3w-9xxx: Shutting down host %d.\n", tw_dev->host->host_no);
+
+ /* Tell the card we are shutting down */
+ if (twa_initconnection(tw_dev, 1, 0, 0, 0, 0, 0, NULL, NULL, NULL, NULL, NULL)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x31, "Connection shutdown failed");
+ } else {
+ printk(KERN_WARNING "3w-9xxx: Shutdown complete.\n");
+ }
+
+ /* Clear all interrupts just before exit */
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+} /* End __twa_shutdown() */
+
+/* Wrapper for __twa_shutdown */
+static void twa_shutdown(struct device *dev)
+{
+ struct Scsi_Host *host = pci_get_drvdata(to_pci_dev(dev));
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ __twa_shutdown(tw_dev);
+} /* End twa_shutdown() */
+
+/* This function will look up a string */
+static char *twa_string_lookup(twa_message_type *table, unsigned int code)
+{
+ int index;
+
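+ /* Linear scan; relies on each table ending in a sentinel entry whose
+    text pointer is NULL, which is what an unknown code returns */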
+ for (index = 0; ((code != table[index].code) &&
+ (table[index].text != (char *)0)); index++);
+ return(table[index].text);
+} /* End twa_string_lookup() */
+
+/* This function will perform a pci-dma unmap */
+static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+
+ switch(cmd->SCp.phase) {
+ case TW_PHASE_SINGLE:
+ pci_unmap_single(pdev, cmd->SCp.have_data_in, cmd->request_bufflen, DMA_BIDIRECTIONAL);
+ break;
+ case TW_PHASE_SGLIST:
+ pci_unmap_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
+ break;
+ }
+} /* End twa_unmap_scsi_data() */
+
+/* scsi_host_template initializer */
+static struct scsi_host_template driver_template = {
+ .module = THIS_MODULE,
+ .name = "3ware 9000 Storage Controller",
+ .queuecommand = twa_scsi_queue,
+ .eh_abort_handler = twa_scsi_eh_abort,
+ .eh_host_reset_handler = twa_scsi_eh_reset,
+ .bios_param = twa_scsi_biosparam,
+ .can_queue = TW_Q_LENGTH-2,
+ .this_id = -1,
+ .sg_tablesize = TW_APACHE_MAX_SGL_LENGTH,
+ .max_sectors = TW_MAX_SECTORS,
+ .cmd_per_lun = TW_MAX_CMDS_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = twa_host_attrs,
+ .sdev_attrs = twa_dev_attrs,
+ .emulated = 1
+};
+
+/* This function will probe and initialize a card */
+static int __devinit twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+{
+ struct Scsi_Host *host = NULL;
+ TW_Device_Extension *tw_dev;
+ u32 mem_addr;
+ int retval = -ENODEV;
+
+ retval = pci_enable_device(pdev);
+ if (retval) {
+ TW_PRINTK(host, TW_DRIVER, 0x34, "Failed to enable pci device");
+ goto out_disable_device;
+ }
+
+ pci_set_master(pdev);
+
+ retval = pci_set_dma_mask(pdev, TW_DMA_MASK);
+ if (retval) {
+ TW_PRINTK(host, TW_DRIVER, 0x23, "Failed to set dma mask");
+ goto out_disable_device;
+ }
+
+ host = scsi_host_alloc(&driver_template, sizeof(TW_Device_Extension));
+ if (!host) {
+ TW_PRINTK(host, TW_DRIVER, 0x24, "Failed to allocate memory for device extension");
+ retval = -ENOMEM;
+ goto out_disable_device;
+ }
+ tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ memset(tw_dev, 0, sizeof(TW_Device_Extension));
+
+ /* Save values to device extension */
+ tw_dev->host = host;
+ tw_dev->tw_pci_dev = pdev;
+
+ if (twa_initialize_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x25, "Failed to initialize device extension");
+ goto out_free_device_extension;
+ }
+
+ /* Request IO regions */
+ retval = pci_request_regions(pdev, "3w-9xxx");
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x26, "Failed to get mem region");
+ goto out_free_device_extension;
+ }
+
+ mem_addr = pci_resource_start(pdev, 1);
+
+ /* Save base address */
+ tw_dev->base_addr = ioremap(mem_addr, PAGE_SIZE);
+ if (!tw_dev->base_addr) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x35, "Failed to ioremap");
+ goto out_release_mem_region;
+ }
+
+ /* Disable interrupts on the card */
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ /* Initialize the card */
+ if (twa_reset_sequence(tw_dev, 0))
+ goto out_release_mem_region;
+
+ /* Set host specific parameters */
+ host->max_id = TW_MAX_UNITS;
+ host->max_cmd_len = TW_MAX_CDB_LEN;
+
+ /* Luns and channels aren't supported by adapter */
+ host->max_lun = 0;
+ host->max_channel = 0;
+
+ /* Register the card with the kernel SCSI layer */
+ retval = scsi_add_host(host, &pdev->dev);
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x27, "scsi add host failed");
+ goto out_release_mem_region;
+ }
+
+ pci_set_drvdata(pdev, host);
+
+ printk(KERN_WARNING "3w-9xxx: scsi%d: Found a 3ware 9000 Storage Controller at 0x%x, IRQ: %d.\n",
+ host->host_no, mem_addr, pdev->irq);
+ printk(KERN_WARNING "3w-9xxx: scsi%d: Firmware %s, BIOS %s, Ports: %d.\n",
+ host->host_no,
+ (char *)twa_get_param(tw_dev, 0, TW_VERSION_TABLE,
+ TW_PARAM_FWVER, TW_PARAM_FWVER_LENGTH),
+ (char *)twa_get_param(tw_dev, 1, TW_VERSION_TABLE,
+ TW_PARAM_BIOSVER, TW_PARAM_BIOSVER_LENGTH),
+ *(int *)twa_get_param(tw_dev, 2, TW_INFORMATION_TABLE,
+ TW_PARAM_PORTCOUNT, TW_PARAM_PORTCOUNT_LENGTH));
+
+ /* Now setup the interrupt handler */
+ retval = request_irq(pdev->irq, twa_interrupt, SA_SHIRQ, "3w-9xxx", tw_dev);
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x30, "Error requesting IRQ");
+ goto out_remove_host;
+ }
+
+ twa_device_extension_list[twa_device_extension_count] = tw_dev;
+ twa_device_extension_count++;
+
+ /* Re-enable interrupts on the card */
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+
+ /* Finally, scan the host */
+ scsi_scan_host(host);
+
+ if (twa_major == -1) {
+ if ((twa_major = register_chrdev (0, "twa", &twa_fops)) < 0)
+ TW_PRINTK(host, TW_DRIVER, 0x29, "Failed to register character device");
+ }
+ return 0;
+
+out_remove_host:
+ scsi_remove_host(host);
+out_release_mem_region:
+ pci_release_regions(pdev);
+out_free_device_extension:
+ twa_free_device_extension(tw_dev);
+ scsi_host_put(host);
+out_disable_device:
+ pci_disable_device(pdev);
+
+ return retval;
+} /* End twa_probe() */
+
+/* This function is called to remove a device */
+static void twa_remove(struct pci_dev *pdev)
+{
+ struct Scsi_Host *host = pci_get_drvdata(pdev);
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ scsi_remove_host(tw_dev->host);
+
+ __twa_shutdown(tw_dev);
+
+ /* Free up the IRQ */
+ free_irq(tw_dev->tw_pci_dev->irq, tw_dev);
+
+ /* Free up the mem region */
+ pci_release_regions(pdev);
+
+ /* Free up device extension resources */
+ twa_free_device_extension(tw_dev);
+
+ /* Unregister character device */
+ if (twa_major >= 0) {
+ unregister_chrdev(twa_major, "twa");
+ twa_major = -1;
+ }
+
+ scsi_host_put(tw_dev->host);
+ pci_disable_device(pdev);
+ twa_device_extension_count--;
+} /* End twa_remove() */
+
+/* PCI Devices supported by this driver */
+static struct pci_device_id twa_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_3WARE, PCI_DEVICE_ID_3WARE_9000,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { }
+};
+MODULE_DEVICE_TABLE(pci, twa_pci_tbl);
+
+/* pci_driver initializer */
+static struct pci_driver twa_driver = {
+ .name = "3w-9xxx",
+ .id_table = twa_pci_tbl,
+ .probe = twa_probe,
+ .remove = twa_remove,
+ .driver = {
+ .shutdown = twa_shutdown
+ }
+};
+
+/* This function is called on driver initialization */
+static int __init twa_init(void)
+{
+ printk(KERN_WARNING "3ware 9000 Storage Controller device driver for Linux v%s.\n", twa_driver_version);
+
+ return pci_module_init(&twa_driver);
+} /* End twa_init() */
+
+/* This function is called on driver exit */
+static void __exit twa_exit(void)
+{
+ pci_unregister_driver(&twa_driver);
+} /* End twa_exit() */
+
+module_init(twa_init);
+module_exit(twa_exit);
+
--- /dev/null
+/*
+ 3w-9xxx.h -- 3ware 9000 Storage Controller device driver for Linux.
+
+ Written By: Adam Radford <linuxraid@amcc.com>
+
+ Copyright (C) 2004 Applied Micro Circuits Corporation.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; version 2 of the License.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ NO WARRANTY
+ THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ solely responsible for determining the appropriateness of using and
+ distributing the Program and assumes all risks associated with its
+ exercise of rights under this Agreement, including but not limited to
+ the risks and costs of program errors, damage to or loss of data,
+ programs or equipment, and unavailability or interruption of operations.
+
+ DISCLAIMER OF LIABILITY
+ NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ Bugs/Comments/Suggestions should be mailed to:
+ linuxraid@amcc.com
+
+ For more information, go to:
+ http://www.amcc.com
+*/
+
+#ifndef _3W_9XXX_H
+#define _3W_9XXX_H
+
+/* AEN string type */
+typedef struct TAG_twa_message_type {
+ unsigned int code;
+ char* text;
+} twa_message_type;
+
+/* AEN strings */
+static twa_message_type twa_aen_table[] = {
+ {0x0000, "AEN queue empty"},
+ {0x0001, "Controller reset occurred"},
+ {0x0002, "Degraded unit detected"},
+ {0x0003, "Controller error occurred"},
+ {0x0004, "Background rebuild failed"},
+ {0x0005, "Background rebuild done"},
+ {0x0006, "Incomplete unit detected"},
+ {0x0007, "Background initialize done"},
+ {0x0008, "Unclean shutdown detected"},
+ {0x0009, "Drive timeout detected"},
+ {0x000A, "Drive error detected"},
+ {0x000B, "Rebuild started"},
+ {0x000C, "Background initialize started"},
+ {0x000D, "Entire logical unit was deleted"},
+ {0x000E, "Background initialize failed"},
+ {0x000F, "SMART attribute exceeded threshold"},
+ {0x0010, "Power supply reported AC under range"},
+ {0x0011, "Power supply reported DC out of range"},
+ {0x0012, "Power supply reported a malfunction"},
+ {0x0013, "Power supply predicted malfunction"},
+ {0x0014, "Battery charge is below threshold"},
+ {0x0015, "Fan speed is below threshold"},
+ {0x0016, "Temperature sensor is above threshold"},
+ {0x0017, "Power supply was removed"},
+ {0x0018, "Power supply was inserted"},
+ {0x0019, "Drive was removed from a bay"},
+ {0x001A, "Drive was inserted into a bay"},
+ {0x001B, "Drive bay cover door was opened"},
+ {0x001C, "Drive bay cover door was closed"},
+ {0x001D, "Product case was opened"},
+ {0x0020, "Prepare for shutdown (power-off)"},
+ {0x0021, "Downgrade UDMA mode to lower speed"},
+ {0x0022, "Upgrade UDMA mode to higher speed"},
+ {0x0023, "Sector repair completed"},
+ {0x0024, "Sbuf memory test failed"},
+ {0x0025, "Error flushing cached write data to array"},
+ {0x0026, "Drive reported data ECC error"},
+ {0x0027, "DCB has checksum error"},
+ {0x0028, "DCB version is unsupported"},
+ {0x0029, "Background verify started"},
+ {0x002A, "Background verify failed"},
+ {0x002B, "Background verify done"},
+ {0x002C, "Bad sector overwritten during rebuild"},
+ {0x002D, "Background rebuild error on source drive"},
+ {0x002E, "Replace failed because replacement drive too small"},
+ {0x002F, "Verify failed because array was never initialized"},
+ {0x0030, "Unsupported ATA drive"},
+ {0x0031, "Synchronize host/controller time"},
+ {0x0032, "Spare capacity is inadequate for some units"},
+ {0x0033, "Background migration started"},
+ {0x0034, "Background migration failed"},
+ {0x0035, "Background migration done"},
+ {0x0036, "Verify detected and fixed data/parity mismatch"},
+ {0x0037, "SO-DIMM incompatible"},
+ {0x0038, "SO-DIMM not detected"},
+ {0x0039, "Corrected Sbuf ECC error"},
+ {0x003A, "Drive power on reset detected"},
+ {0x003B, "Background rebuild paused"},
+ {0x003C, "Background initialize paused"},
+ {0x003D, "Background verify paused"},
+ {0x003E, "Background migration paused"},
+ {0x003F, "Corrupt flash file system detected"},
+ {0x0040, "Flash file system repaired"},
+ {0x0041, "Unit number assignments were lost"},
+ {0x0042, "Error during read of primary DCB"},
+ {0x0043, "Latent error found in backup DCB"},
+ {0x00FC, "Recovered/finished array membership update"},
+ {0x00FD, "Handler lockup"},
+ {0x00FE, "Retrying PCI transfer"},
+ {0x00FF, "AEN queue is full"},
+ {0xFFFFFFFF, (char*) 0}
+};
+
+/* AEN severity table */
+static char *twa_aen_severity_table[] =
+{
+ "None", "ERROR", "WARNING", "INFO", "DEBUG", (char*) 0
+};
+
+/* Error strings */
+static twa_message_type twa_error_table[] = {
+ {0x0100, "SGL entry contains zero data"},
+ {0x0101, "Invalid command opcode"},
+ {0x0102, "SGL entry has unaligned address"},
+ {0x0103, "SGL size does not match command"},
+ {0x0104, "SGL entry has illegal length"},
+ {0x0105, "Command packet is not aligned"},
+ {0x0106, "Invalid request ID"},
+ {0x0107, "Duplicate request ID"},
+ {0x0108, "ID not locked"},
+ {0x0109, "LBA out of range"},
+ {0x010A, "Logical unit not supported"},
+ {0x010B, "Parameter table does not exist"},
+ {0x010C, "Parameter index does not exist"},
+ {0x010D, "Invalid field in CDB"},
+ {0x010E, "Specified port has invalid drive"},
+ {0x010F, "Parameter item size mismatch"},
+ {0x0110, "Failed memory allocation"},
+ {0x0111, "Memory request too large"},
+ {0x0112, "Out of memory segments"},
+ {0x0113, "Invalid address to deallocate"},
+ {0x0114, "Out of memory"},
+ {0x0115, "Out of heap"},
+ {0x0120, "Double degrade"},
+ {0x0121, "Drive not degraded"},
+ {0x0122, "Reconstruct error"},
+ {0x0123, "Replace not accepted"},
+ {0x0124, "Replace drive capacity too small"},
+ {0x0125, "Sector count not allowed"},
+ {0x0126, "No spares left"},
+ {0x0127, "Reconstruct error"},
+ {0x0128, "Unit is offline"},
+ {0x0129, "Cannot update status to DCB"},
+ {0x0130, "Invalid stripe handle"},
+ {0x0131, "Handle that was not locked"},
+ {0x0132, "Handle that was not empty"},
+ {0x0133, "Handle has different owner"},
+ {0x0140, "IPR has parent"},
+ {0x0150, "Illegal Pbuf address alignment"},
+ {0x0151, "Illegal Pbuf transfer length"},
+ {0x0152, "Illegal Sbuf address alignment"},
+ {0x0153, "Illegal Sbuf transfer length"},
+ {0x0160, "Command packet too large"},
+ {0x0161, "SGL exceeds maximum length"},
+ {0x0162, "SGL has too many entries"},
+ {0x0170, "Insufficient resources for rebuilder"},
+ {0x0171, "Verify error (data != parity)"},
+ {0x0180, "Requested segment not in directory of this DCB"},
+ {0x0181, "DCB segment has unsupported version"},
+ {0x0182, "DCB segment has checksum error"},
+ {0x0183, "DCB support (settings) segment invalid"},
+ {0x0184, "DCB UDB (unit descriptor block) segment invalid"},
+ {0x0185, "DCB GUID (globally unique identifier) segment invalid"},
+ {0x01A0, "Could not clear Sbuf"},
+ {0x01C0, "Flash identify failed"},
+ {0x01C1, "Flash out of bounds"},
+ {0x01C2, "Flash verify error"},
+ {0x01C3, "Flash file object not found"},
+ {0x01C4, "Flash file already present"},
+ {0x01C5, "Flash file system full"},
+ {0x01C6, "Flash file not present"},
+ {0x01C7, "Flash file size error"},
+ {0x01C8, "Bad flash file checksum"},
+ {0x01CA, "Corrupt flash file system detected"},
+ {0x01D0, "Invalid field in parameter list"},
+ {0x01D1, "Parameter list length error"},
+ {0x01D2, "Parameter item is not changeable"},
+ {0x01D3, "Parameter item is not saveable"},
+ {0x0200, "UDMA CRC error"},
+ {0x0201, "Internal CRC error"},
+ {0x0202, "Data ECC error"},
+ {0x0203, "ADP level 1 error"},
+ {0x0204, "Port timeout"},
+ {0x0205, "Drive power on reset"},
+ {0x0206, "ADP level 2 error"},
+ {0x0207, "Soft reset failed"},
+ {0x0208, "Drive not ready"},
+ {0x0209, "Unclassified port error"},
+ {0x020A, "Drive aborted command"},
+ {0x0210, "Internal CRC error"},
+ {0x0211, "PCI abort error"},
+ {0x0212, "PCI parity error"},
+ {0x0213, "Port handler error"},
+ {0x0214, "Token interrupt count error"},
+ {0x0215, "Timeout waiting for PCI transfer"},
+ {0x0216, "Corrected buffer ECC"},
+ {0x0217, "Uncorrected buffer ECC"},
+ {0x0230, "Unsupported command during flash recovery"},
+ {0x0231, "Next image buffer expected"},
+ {0x0232, "Binary image architecture incompatible"},
+ {0x0233, "Binary image has no signature"},
+ {0x0234, "Binary image has bad checksum"},
+ {0x0235, "Image downloaded overflowed buffer"},
+ {0x0240, "I2C device not found"},
+ {0x0241, "I2C transaction aborted"},
+ {0x0242, "SO-DIMM parameter(s) incompatible using defaults"},
+ {0x0243, "SO-DIMM unsupported"},
+ {0x0248, "SPI transfer status error"},
+ {0x0249, "SPI transfer timeout error"},
+ {0x0250, "Invalid unit descriptor size in CreateUnit"},
+ {0x0251, "Unit descriptor size exceeds data buffer in CreateUnit"},
+ {0x0252, "Invalid value in CreateUnit descriptor"},
+ {0x0253, "Inadequate disk space to support descriptor in CreateUnit"},
+ {0x0254, "Unable to create data channel for this unit descriptor"},
+ {0x0255, "CreateUnit descriptor specifies a drive already in use"},
+ {0x0256, "Unable to write configuration to all disks during CreateUnit"},
+ {0x0257, "CreateUnit does not support this descriptor version"},
+ {0x0258, "Invalid subunit for RAID 0 or 5 in CreateUnit"},
+ {0x0259, "Too many descriptors in CreateUnit"},
+ {0x025A, "Invalid configuration specified in CreateUnit descriptor"},
+ {0x025B, "Invalid LBA offset specified in CreateUnit descriptor"},
+ {0x025C, "Invalid stripelet size specified in CreateUnit descriptor"},
+ {0x0260, "SMART attribute exceeded threshold"},
+ {0xFFFFFFFF, (char*) 0}
+};
+
+/* Control register bit definitions */
+#define TW_CONTROL_CLEAR_HOST_INTERRUPT 0x00080000
+#define TW_CONTROL_CLEAR_ATTENTION_INTERRUPT 0x00040000
+#define TW_CONTROL_MASK_COMMAND_INTERRUPT 0x00020000
+#define TW_CONTROL_MASK_RESPONSE_INTERRUPT 0x00010000
+#define TW_CONTROL_UNMASK_COMMAND_INTERRUPT 0x00008000
+#define TW_CONTROL_UNMASK_RESPONSE_INTERRUPT 0x00004000
+#define TW_CONTROL_CLEAR_ERROR_STATUS 0x00000200
+#define TW_CONTROL_ISSUE_SOFT_RESET 0x00000100
+#define TW_CONTROL_ENABLE_INTERRUPTS 0x00000080
+#define TW_CONTROL_DISABLE_INTERRUPTS 0x00000040
+#define TW_CONTROL_ISSUE_HOST_INTERRUPT 0x00000020
+#define TW_CONTROL_CLEAR_PARITY_ERROR 0x00800000
+#define TW_CONTROL_CLEAR_QUEUE_ERROR 0x00400000
+#define TW_CONTROL_CLEAR_PCI_ABORT 0x00100000
+#define TW_CONTROL_CLEAR_SBUF_WRITE_ERROR 0x00000008
+
+/* Status register bit definitions */
+#define TW_STATUS_MAJOR_VERSION_MASK 0xF0000000
+#define TW_STATUS_MINOR_VERSION_MASK 0x0F000000
+#define TW_STATUS_PCI_PARITY_ERROR 0x00800000
+#define TW_STATUS_QUEUE_ERROR 0x00400000
+#define TW_STATUS_MICROCONTROLLER_ERROR 0x00200000
+#define TW_STATUS_PCI_ABORT 0x00100000
+#define TW_STATUS_HOST_INTERRUPT 0x00080000
+#define TW_STATUS_ATTENTION_INTERRUPT 0x00040000
+#define TW_STATUS_COMMAND_INTERRUPT 0x00020000
+#define TW_STATUS_RESPONSE_INTERRUPT 0x00010000
+#define TW_STATUS_COMMAND_QUEUE_FULL 0x00008000
+#define TW_STATUS_RESPONSE_QUEUE_EMPTY 0x00004000
+#define TW_STATUS_MICROCONTROLLER_READY 0x00002000
+#define TW_STATUS_COMMAND_QUEUE_EMPTY 0x00001000
+#define TW_STATUS_EXPECTED_BITS 0x00002000
+#define TW_STATUS_UNEXPECTED_BITS 0x00F00008
+#define TW_STATUS_SBUF_WRITE_ERROR 0x00000008
+#define TW_STATUS_VALID_INTERRUPT 0x00DF0008
+
+/* Response queue bit definitions */
+#define TW_RESPONSE_ID_MASK 0x00000FF0
+
+/* PCI related defines */
+#define TW_DEVICE_NAME "3w-9xxx"
+#define TW_NUMDEVICES 1
+#define TW_PCI_CLEAR_PARITY_ERRORS 0xc100
+#define TW_PCI_CLEAR_PCI_ABORT 0x2000
+
+/* Command packet opcodes used by the driver */
+#define TW_OP_INIT_CONNECTION 0x1
+#define TW_OP_GET_PARAM 0x12
+#define TW_OP_SET_PARAM 0x13
+#define TW_OP_EXECUTE_SCSI 0x10
+#define TW_OP_DOWNLOAD_FIRMWARE 0x16
+#define TW_OP_RESET 0x1C
+
+/* Asynchronous Event Notification (AEN) codes used by the driver */
+#define TW_AEN_QUEUE_EMPTY 0x0000
+#define TW_AEN_SOFT_RESET 0x0001
+#define TW_AEN_SYNC_TIME_WITH_HOST 0x031
+#define TW_AEN_SEVERITY_ERROR 0x1
+#define TW_AEN_SEVERITY_DEBUG 0x4
+#define TW_AEN_NOT_RETRIEVED 0x1
+#define TW_AEN_RETRIEVED 0x2
+
+/* Command state defines */
+#define TW_S_INITIAL 0x1 /* Initial state */
+#define TW_S_STARTED 0x2 /* Id in use */
+#define TW_S_POSTED 0x4 /* Posted to the controller */
+#define TW_S_PENDING 0x8 /* Waiting to be posted in isr */
+#define TW_S_COMPLETED 0x10 /* Completed by isr */
+#define TW_S_FINISHED 0x20 /* I/O completely done */
+
+/* Compatibility defines */
+#define TW_9000_ARCH_ID 0x5
+#define TW_CURRENT_FW_SRL 24
+#define TW_CURRENT_FW_BUILD 5
+#define TW_CURRENT_FW_BRANCH 1
+
+/* Phase defines */
+#define TW_PHASE_INITIAL 0
+#define TW_PHASE_SINGLE 1
+#define TW_PHASE_SGLIST 2
+
+/* Misc defines */
+#define TW_SECTOR_SIZE 512
+#define TW_ALIGNMENT_9000 4 /* 4 bytes */
+#define TW_ALIGNMENT_9000_SGL 0x3
+#define TW_MAX_UNITS 16
+#define TW_INIT_MESSAGE_CREDITS 0x100
+#define TW_INIT_COMMAND_PACKET_SIZE 0x3
+#define TW_INIT_COMMAND_PACKET_SIZE_EXTENDED 0x6
+#define TW_EXTENDED_INIT_CONNECT 0x2
+#define TW_BUNDLED_FW_SAFE_TO_FLASH 0x4
+#define TW_CTLR_FW_RECOMMENDS_FLASH 0x8
+#define TW_CTLR_FW_COMPATIBLE 0x2
+#define TW_BASE_FW_SRL 0x17
+#define TW_BASE_FW_BRANCH 0
+#define TW_BASE_FW_BUILD 1
+#if BITS_PER_LONG > 32
+#define TW_APACHE_MAX_SGL_LENGTH 72
+#define TW_ESCALADE_MAX_SGL_LENGTH 41
+#define TW_APACHE_CMD_PKT_SIZE 5
+#else
+#define TW_APACHE_MAX_SGL_LENGTH 109
+#define TW_ESCALADE_MAX_SGL_LENGTH 62
+#define TW_APACHE_CMD_PKT_SIZE 4
+#endif
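+/*
+ * Note: the SGL length limits above differ by word size because each
+ * scatter/gather entry carries an unsigned long address (8 bytes on
+ * 64-bit, 4 bytes on 32-bit). For Apache, 72 x 12-byte entries and
+ * 109 x 8-byte entries both total 872 bytes, so the overall command
+ * packet size works out the same either way.
+ */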
+#define TW_ATA_PASS_SGL_MAX 60
+#define TW_Q_LENGTH 256
+#define TW_Q_START 0
+#define TW_MAX_SLOT 32
+#define TW_MAX_RESET_TRIES 2
+#define TW_MAX_CMDS_PER_LUN 254
+#define TW_MAX_RESPONSE_DRAIN 256
+#define TW_MAX_AEN_DRAIN 40
+#define TW_IN_IOCTL 2
+#define TW_IN_CHRDEV_IOCTL 3
+#define TW_IN_ATTENTION_LOOP 4
+#define TW_MAX_SECTORS 256
+#define TW_AEN_WAIT_TIME 1000
+#define TW_IOCTL_WAIT_TIME (1 * HZ) /* 1 second */
+#define TW_MAX_CDB_LEN 16
+#define TW_ISR_DONT_COMPLETE 2
+#define TW_ISR_DONT_RESULT 3
+#define TW_IOCTL_CHRDEV_TIMEOUT 60 /* 60 seconds */
+#define TW_IOCTL_CHRDEV_FREE -1
+#define TW_COMMAND_OFFSET 128 /* 128 bytes */
+#define TW_VERSION_TABLE 0x0402
+#define TW_TIMEKEEP_TABLE 0x040A
+#define TW_INFORMATION_TABLE 0x0403
+#define TW_PARAM_FWVER 3
+#define TW_PARAM_FWVER_LENGTH 16
+#define TW_PARAM_BIOSVER 4
+#define TW_PARAM_BIOSVER_LENGTH 16
+#define TW_PARAM_PORTCOUNT 3
+#define TW_PARAM_PORTCOUNT_LENGTH 1
+#define TW_MIN_SGL_LENGTH 0x200 /* 512 bytes */
+#define TW_MAX_SENSE_LENGTH 256
+#define TW_EVENT_SOURCE_AEN 0x1000
+#define TW_EVENT_SOURCE_COMMAND 0x1001
+#define TW_EVENT_SOURCE_PCHIP 0x1002
+#define TW_EVENT_SOURCE_DRIVER 0x1003
+#define TW_IOCTL_GET_COMPATIBILITY_INFO 0x101
+#define TW_IOCTL_GET_LAST_EVENT 0x102
+#define TW_IOCTL_GET_FIRST_EVENT 0x103
+#define TW_IOCTL_GET_NEXT_EVENT 0x104
+#define TW_IOCTL_GET_PREVIOUS_EVENT 0x105
+#define TW_IOCTL_GET_LOCK 0x106
+#define TW_IOCTL_RELEASE_LOCK 0x107
+#define TW_IOCTL_FIRMWARE_PASS_THROUGH 0x108
+#define TW_IOCTL_ERROR_STATUS_NOT_LOCKED 0x1001 // Not locked
+#define TW_IOCTL_ERROR_STATUS_LOCKED 0x1002 // Already locked
+#define TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS 0x1003 // No more events
+#define TW_IOCTL_ERROR_STATUS_AEN_CLOBBER 0x1004 // AEN clobber occurred
+#define TW_IOCTL_ERROR_OS_EFAULT -EFAULT // Bad address
+#define TW_IOCTL_ERROR_OS_EINTR -EINTR // Interrupted system call
+#define TW_IOCTL_ERROR_OS_EINVAL -EINVAL // Invalid argument
+#define TW_IOCTL_ERROR_OS_ENOMEM -ENOMEM // Out of memory
+#define TW_IOCTL_ERROR_OS_ERESTARTSYS -ERESTARTSYS // Restart system call
+#define TW_IOCTL_ERROR_OS_EIO -EIO // I/O error
+#define TW_IOCTL_ERROR_OS_ENOTTY -ENOTTY // Not a typewriter
+#define TW_IOCTL_ERROR_OS_ENODEV -ENODEV // No such device
+#define TW_ALLOCATION_LENGTH 128
+#define TW_SENSE_DATA_LENGTH 18
+#define TW_STATUS_CHECK_CONDITION 2
+#define TW_ERROR_LOGICAL_UNIT_NOT_SUPPORTED 0x10a
+#define TW_ERROR_UNIT_OFFLINE 0x128
+#define TW_MESSAGE_SOURCE_CONTROLLER_ERROR 3
+#define TW_MESSAGE_SOURCE_CONTROLLER_EVENT 4
+#define TW_MESSAGE_SOURCE_LINUX_DRIVER 6
+#define TW_DRIVER TW_MESSAGE_SOURCE_LINUX_DRIVER
+#define TW_MESSAGE_SOURCE_LINUX_OS 9
+#define TW_OS TW_MESSAGE_SOURCE_LINUX_OS
+#if BITS_PER_LONG > 32
+#define TW_COMMAND_SIZE 5
+#define TW_DMA_MASK DMA_64BIT_MASK
+#else
+#define TW_COMMAND_SIZE 4
+#define TW_DMA_MASK DMA_32BIT_MASK
+#endif
+#ifndef PCI_DEVICE_ID_3WARE_9000
+#define PCI_DEVICE_ID_3WARE_9000 0x1002
+#endif
+
+/* Bitmask macros to eliminate bitfields */
+
+/* opcode: 5, reserved: 3 */
+#define TW_OPRES_IN(x,y) ((x << 5) | (y & 0x1f))
+#define TW_OP_OUT(x) (x & 0x1f)
+
+/* opcode: 5, sgloffset: 3 */
+#define TW_OPSGL_IN(x,y) ((x << 5) | (y & 0x1f))
+#define TW_SGL_OUT(x) ((x >> 5) & 0x7)
+
+/* severity: 3, reserved: 5 */
+#define TW_SEV_OUT(x) (x & 0x7)
+
+/* reserved_1: 4, response_id: 8, reserved_2: 20 */
+#define TW_RESID_OUT(x) ((x >> 4) & 0xff)
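+/*
+ * Worked example (illustrative only, not used by the driver): for a
+ * response queue value of 0x00000150, TW_RESID_OUT(0x00000150) yields
+ * (0x150 >> 4) & 0xff == 0x15, i.e. request id 21, matching
+ * TW_RESPONSE_ID_MASK (0x00000FF0). Packing goes the other way:
+ * TW_OPRES_IN(0, TW_OP_EXECUTE_SCSI) == (0 << 5) | (0x10 & 0x1f) == 0x10,
+ * and TW_OP_OUT(0x10) recovers the opcode.
+ */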
+
+/* Macros */
+#define TW_CONTROL_REG_ADDR(x) (x->base_addr)
+#define TW_STATUS_REG_ADDR(x) ((unsigned char *)x->base_addr + 0x4)
+#if BITS_PER_LONG > 32
+#define TW_COMMAND_QUEUE_REG_ADDR(x) ((unsigned char *)x->base_addr + 0x20)
+#else
+#define TW_COMMAND_QUEUE_REG_ADDR(x) ((unsigned char *)x->base_addr + 0x8)
+#endif
+#define TW_RESPONSE_QUEUE_REG_ADDR(x) ((unsigned char *)x->base_addr + 0xC)
+#define TW_CLEAR_ALL_INTERRUPTS(x) (writel(TW_STATUS_VALID_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_CLEAR_ATTENTION_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_ATTENTION_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_CLEAR_HOST_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_HOST_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_DISABLE_INTERRUPTS(x) (writel(TW_CONTROL_DISABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
+#define TW_ENABLE_AND_CLEAR_INTERRUPTS(x) (writel(TW_CONTROL_CLEAR_ATTENTION_INTERRUPT | TW_CONTROL_UNMASK_RESPONSE_INTERRUPT | TW_CONTROL_ENABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
+#define TW_MASK_COMMAND_INTERRUPT(x) (writel(TW_CONTROL_MASK_COMMAND_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_UNMASK_COMMAND_INTERRUPT(x) (writel(TW_CONTROL_UNMASK_COMMAND_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_SOFT_RESET(x) (writel(TW_CONTROL_ISSUE_SOFT_RESET | \
+ TW_CONTROL_CLEAR_HOST_INTERRUPT | \
+ TW_CONTROL_CLEAR_ATTENTION_INTERRUPT | \
+ TW_CONTROL_MASK_COMMAND_INTERRUPT | \
+ TW_CONTROL_MASK_RESPONSE_INTERRUPT | \
+ TW_CONTROL_CLEAR_ERROR_STATUS | \
+ TW_CONTROL_DISABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
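+/*
+ * Illustrative use of the register macros (a sketch, not part of this
+ * header): an interrupt handler would typically mask the source it is
+ * about to service, e.g.
+ *
+ * if (status & TW_STATUS_COMMAND_INTERRUPT)
+ *         TW_MASK_COMMAND_INTERRUPT(tw_dev);
+ *
+ * where tw_dev is a hypothetical TW_Device_Extension pointer.
+ */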
+#define TW_PRINTK(h,a,b,c) { \
+ if (h) \
+  printk(KERN_WARNING "3w-9xxx: scsi%d: ERROR: (0x%02X:0x%04X): %s.\n",h->host_no,a,b,c); \
+ else \
+  printk(KERN_WARNING "3w-9xxx: ERROR: (0x%02X:0x%04X): %s.\n",a,b,c); \
+}
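+/*
+ * Example with illustrative values: TW_PRINTK(tw_dev->host, TW_DRIVER,
+ * 0x2, "Oops") prints "3w-9xxx: scsi0: ERROR: (0x06:0x0002): Oops."
+ * when host_no is 0 (TW_DRIVER == 6).
+ */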
+
+#pragma pack(1)
+
+/* Scatter Gather List Entry */
+typedef struct TAG_TW_SG_Entry {
+ unsigned long address;
+ u32 length;
+} TW_SG_Entry;
+
+/* Command Packet */
+typedef struct TW_Command {
+ unsigned char opcode__sgloffset;
+ unsigned char size;
+ unsigned char request_id;
+ unsigned char unit__hostid;
+ /* Second DWORD */
+ unsigned char status;
+ unsigned char flags;
+ union {
+ unsigned short block_count;
+ unsigned short parameter_count;
+ } byte6_offset;
+ union {
+ struct {
+ u32 lba;
+ TW_SG_Entry sgl[TW_ESCALADE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ u32 padding[2]; /* pad to 512 bytes */
+#else
+ u32 padding;
+#endif
+ } io;
+ struct {
+ TW_SG_Entry sgl[TW_ESCALADE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ u32 padding[3];
+#else
+ u32 padding[2];
+#endif
+ } param;
+ } byte8_offset;
+} TW_Command;
+
+/* Scatter gather element for 9000+ controllers */
+typedef struct TAG_TW_SG_Apache {
+ unsigned long address;
+ u32 length;
+} TW_SG_Apache;
+
+/* Command Packet for 9000+ controllers */
+typedef struct TAG_TW_Command_Apache {
+ unsigned char opcode__reserved;
+ unsigned char unit;
+ unsigned short request_id;
+ unsigned char status;
+ unsigned char sgl_offset;
+ unsigned short sgl_entries;
+ unsigned char cdb[16];
+ TW_SG_Apache sg_list[TW_APACHE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ unsigned char padding[8];
+#endif
+} TW_Command_Apache;
+
+/* New command packet header */
+typedef struct TAG_TW_Command_Apache_Header {
+ unsigned char sense_data[TW_SENSE_DATA_LENGTH];
+ struct {
+ char reserved[4];
+ unsigned short error;
+ unsigned char padding;
+ unsigned char severity__reserved;
+ } status_block;
+ unsigned char err_specific_desc[98];
+ struct {
+ unsigned char size_header;
+ unsigned short reserved;
+ unsigned char size_sense;
+ } header_desc;
+} TW_Command_Apache_Header;
+
+/* Full command packet: a header followed by a union of the two command packet formats */
+typedef struct TAG_TW_Command_Full {
+ TW_Command_Apache_Header header;
+ union {
+ TW_Command oldcommand;
+ TW_Command_Apache newcommand;
+ } command;
+} TW_Command_Full;
+
+/* Initconnection structure */
+typedef struct TAG_TW_Initconnect {
+ unsigned char opcode__reserved;
+ unsigned char size;
+ unsigned char request_id;
+ unsigned char res2;
+ unsigned char status;
+ unsigned char flags;
+ unsigned short message_credits;
+ u32 features;
+ unsigned short fw_srl;
+ unsigned short fw_arch_id;
+ unsigned short fw_branch;
+ unsigned short fw_build;
+ u32 result;
+} TW_Initconnect;
+
+/* Event info structure */
+typedef struct TAG_TW_Event
+{
+ unsigned int sequence_id;
+ unsigned int time_stamp_sec;
+ unsigned short aen_code;
+ unsigned char severity;
+ unsigned char retrieved;
+ unsigned char repeat_count;
+ unsigned char parameter_len;
+ unsigned char parameter_data[98];
+} TW_Event;
+
+typedef struct TAG_TW_Ioctl_Driver_Command {
+ unsigned int control_code;
+ unsigned int status;
+ unsigned int unique_id;
+ unsigned int sequence_id;
+ unsigned int os_specific;
+ unsigned int buffer_length;
+} TW_Ioctl_Driver_Command;
+
+typedef struct TAG_TW_Ioctl_Apache {
+ TW_Ioctl_Driver_Command driver_command;
+ char padding[488];
+ TW_Command_Full firmware_command;
+ char data_buffer[1];
+} TW_Ioctl_Buf_Apache;
+
+/* Lock structure for ioctl get/release lock */
+typedef struct TAG_TW_Lock {
+ unsigned long timeout_msec;
+ unsigned long time_remaining_msec;
+ unsigned long force_flag;
+} TW_Lock;
+
+/* GetParam descriptor */
+typedef struct {
+ unsigned short table_id;
+ unsigned short parameter_id;
+ unsigned short parameter_size_bytes;
+ unsigned short actual_parameter_size_bytes;
+ unsigned char data[1];
+} TW_Param_Apache, *PTW_Param_Apache;
+
+/* Response queue */
+typedef union TAG_TW_Response_Queue {
+ u32 response_id;
+ u32 value;
+} TW_Response_Queue;
+
+typedef struct TAG_TW_Info {
+ char *buffer;
+ int length;
+ int offset;
+ int position;
+} TW_Info;
+
+/* Compatibility information structure */
+typedef struct TAG_TW_Compatibility_Info
+{
+ char driver_version[32];
+ unsigned short working_srl;
+ unsigned short working_branch;
+ unsigned short working_build;
+} TW_Compatibility_Info;
+
+typedef struct TAG_TW_Device_Extension {
+ u32 *base_addr;
+ unsigned long *generic_buffer_virt[TW_Q_LENGTH];
+ unsigned long generic_buffer_phys[TW_Q_LENGTH];
+ TW_Command_Full *command_packet_virt[TW_Q_LENGTH];
+ unsigned long command_packet_phys[TW_Q_LENGTH];
+ struct pci_dev *tw_pci_dev;
+ struct scsi_cmnd *srb[TW_Q_LENGTH];
+ unsigned char free_queue[TW_Q_LENGTH];
+ unsigned char free_head;
+ unsigned char free_tail;
+ unsigned char pending_queue[TW_Q_LENGTH];
+ unsigned char pending_head;
+ unsigned char pending_tail;
+ int state[TW_Q_LENGTH];
+ unsigned int posted_request_count;
+ unsigned int max_posted_request_count;
+ unsigned int pending_request_count;
+ unsigned int max_pending_request_count;
+ unsigned int max_sgl_entries;
+ unsigned int sgl_entries;
+ unsigned int num_aborts;
+ unsigned int num_resets;
+ unsigned int sector_count;
+ unsigned int max_sector_count;
+ unsigned int aen_count;
+ struct Scsi_Host *host;
+ long flags;
+ int reset_print;
+ TW_Event *event_queue[TW_Q_LENGTH];
+ unsigned char error_index;
+ unsigned char event_queue_wrapped;
+ unsigned int error_sequence_id;
+ int ioctl_sem_lock;
+ u32 ioctl_msec;
+ int chrdev_request_id;
+ wait_queue_head_t ioctl_wqueue;
+ struct semaphore ioctl_sem;
+ char aen_clobber;
+ unsigned short working_srl;
+ unsigned short working_branch;
+ unsigned short working_build;
+} TW_Device_Extension;
+
+#pragma pack()
+
+#endif /* _3W_9XXX_H */
+
--- /dev/null
+/*
+ * fdomain.c -- Future Domain TMC-16x0 SCSI driver
+ * Author: Rickard E. Faith, faith@cs.unc.edu
+ * Copyright 1992-1996, 1998 Rickard E. Faith (faith@acm.org)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2, or (at your option) any
+ * later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+extern struct scsi_host_template fdomain_driver_template;
+extern int fdomain_setup(char *str);
+extern struct Scsi_Host *__fdomain_16x0_detect(struct scsi_host_template *tpnt);
+extern int fdomain_16x0_bus_reset(struct scsi_cmnd *SCpnt);
--- /dev/null
+/*
+ * ipr.c -- driver for IBM Power Linux RAID adapters
+ *
+ * Written By: Brian King, IBM Corporation
+ *
+ * Copyright (C) 2003, 2004 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/*
+ * Notes:
+ *
+ * This driver is used to control the following SCSI adapters:
+ *
+ * IBM iSeries: 5702, 5703, 2780, 5709, 570A, 570B
+ *
+ * IBM pSeries: PCI-X Dual Channel Ultra 320 SCSI RAID Adapter
+ * PCI-X Dual Channel Ultra 320 SCSI Adapter
+ * PCI-X Dual Channel Ultra 320 SCSI RAID Enablement Card
+ * Embedded SCSI adapter on p615 and p655 systems
+ *
+ * Supported Hardware Features:
+ * - Ultra 320 SCSI controller
+ * - PCI-X host interface
+ * - Embedded PowerPC RISC Processor and Hardware XOR DMA Engine
+ * - Non-Volatile Write Cache
+ * - Supports attachment of non-RAID disks, tape, and optical devices
+ * - RAID Levels 0, 5, 10
+ * - Hot spare
+ * - Background Parity Checking
+ * - Background Data Scrubbing
+ * - Ability to increase the capacity of an existing RAID 5 disk array
+ * by adding disks
+ *
+ * Driver Features:
+ * - Tagged command queuing
+ * - Adapter microcode download
+ * - PCI hot plug
+ * - SCSI device hot plug
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/wait.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/blkdev.h>
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/processor.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_request.h>
+#include "ipr.h"
+
+/*
+ * Global Data
+ */
+static struct list_head ipr_ioa_head = LIST_HEAD_INIT(ipr_ioa_head);
+static unsigned int ipr_log_level = IPR_DEFAULT_LOG_LEVEL;
+static unsigned int ipr_max_speed = 1;
+static int ipr_testmode = 0;
+static spinlock_t ipr_driver_lock = SPIN_LOCK_UNLOCKED;
+
+/* This table describes the differences between DMA controller chips */
+static const struct ipr_chip_cfg_t ipr_chip_cfg[] = {
+ { /* Gemstone */
+ .mailbox = 0x0042C,
+ .cache_line_size = 0x20,
+ {
+ .set_interrupt_mask_reg = 0x0022C,
+ .clr_interrupt_mask_reg = 0x00230,
+ .sense_interrupt_mask_reg = 0x0022C,
+ .clr_interrupt_reg = 0x00228,
+ .sense_interrupt_reg = 0x00224,
+ .ioarrin_reg = 0x00404,
+ .sense_uproc_interrupt_reg = 0x00214,
+ .set_uproc_interrupt_reg = 0x00214,
+ .clr_uproc_interrupt_reg = 0x00218
+ }
+ },
+ { /* Snipe */
+ .mailbox = 0x0052C,
+ .cache_line_size = 0x20,
+ {
+ .set_interrupt_mask_reg = 0x00288,
+ .clr_interrupt_mask_reg = 0x0028C,
+ .sense_interrupt_mask_reg = 0x00288,
+ .clr_interrupt_reg = 0x00284,
+ .sense_interrupt_reg = 0x00280,
+ .ioarrin_reg = 0x00504,
+ .sense_uproc_interrupt_reg = 0x00290,
+ .set_uproc_interrupt_reg = 0x00290,
+ .clr_uproc_interrupt_reg = 0x00294
+ }
+ },
+};
+
+static int ipr_max_bus_speeds[] = {
+ IPR_80MBs_SCSI_RATE, IPR_U160_SCSI_RATE, IPR_U320_SCSI_RATE
+};
+
+MODULE_AUTHOR("Brian King <brking@us.ibm.com>");
+MODULE_DESCRIPTION("IBM Power RAID SCSI Adapter Driver");
+module_param_named(max_speed, ipr_max_speed, uint, 0);
+MODULE_PARM_DESC(max_speed, "Maximum bus speed (0-2). Default: 1=U160. Speeds: 0=80 MB/s, 1=U160, 2=U320");
+module_param_named(log_level, ipr_log_level, uint, 0);
+MODULE_PARM_DESC(log_level, "Set to 0 - 4 for increasing verbosity of device driver");
+module_param_named(testmode, ipr_testmode, int, 0);
+MODULE_PARM_DESC(testmode, "DANGEROUS!!! Allows unsupported configurations");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(IPR_DRIVER_VERSION);
+
+static const char *ipr_gpdd_dev_end_states[] = {
+ "Command complete",
+ "Terminated by host",
+ "Terminated by device reset",
+ "Terminated by bus reset",
+ "Unknown",
+ "Command not started"
+};
+
+static const char *ipr_gpdd_dev_bus_phases[] = {
+ "Bus free",
+ "Arbitration",
+ "Selection",
+ "Message out",
+ "Command",
+ "Message in",
+ "Data out",
+ "Data in",
+ "Status",
+ "Reselection",
+ "Unknown"
+};
+
+/* A constant array of IOASCs/URCs/Error Messages */
+static const
+struct ipr_error_table_t ipr_error_table[] = {
+ {0x00000000, 1, 1,
+ "8155: An unknown error was received"},
+ {0x00330000, 0, 0,
+ "Soft underlength error"},
+ {0x005A0000, 0, 0,
+ "Command to be cancelled not found"},
+ {0x00808000, 0, 0,
+ "Qualified success"},
+ {0x01080000, 1, 1,
+ "FFFE: Soft device bus error recovered by the IOA"},
+ {0x01170600, 0, 1,
+ "FFF9: Device sector reassign successful"},
+ {0x01170900, 0, 1,
+ "FFF7: Media error recovered by device rewrite procedures"},
+ {0x01180200, 0, 1,
+ "7001: IOA sector reassignment successful"},
+ {0x01180500, 0, 1,
+ "FFF9: Soft media error. Sector reassignment recommended"},
+ {0x01180600, 0, 1,
+ "FFF7: Media error recovered by IOA rewrite procedures"},
+ {0x01418000, 0, 1,
+ "FF3D: Soft PCI bus error recovered by the IOA"},
+ {0x01440000, 1, 1,
+ "FFF6: Device hardware error recovered by the IOA"},
+ {0x01448100, 0, 1,
+ "FFF6: Device hardware error recovered by the device"},
+ {0x01448200, 1, 1,
+ "FF3D: Soft IOA error recovered by the IOA"},
+ {0x01448300, 0, 1,
+ "FFFA: Undefined device response recovered by the IOA"},
+ {0x014A0000, 1, 1,
+ "FFF6: Device bus error, message or command phase"},
+ {0x015D0000, 0, 1,
+ "FFF6: Failure prediction threshold exceeded"},
+ {0x015D9200, 0, 1,
+ "8009: Impending cache battery pack failure"},
+ {0x02040400, 0, 0,
+ "34FF: Disk device format in progress"},
+ {0x023F0000, 0, 0,
+ "Synchronization required"},
+ {0x024E0000, 0, 0,
+ "No ready, IOA shutdown"},
+ {0x02670100, 0, 1,
+ "3020: Storage subsystem configuration error"},
+ {0x03110B00, 0, 0,
+ "FFF5: Medium error, data unreadable, recommend reassign"},
+ {0x03110C00, 0, 0,
+ "7000: Medium error, data unreadable, do not reassign"},
+ {0x03310000, 0, 1,
+ "FFF3: Disk media format bad"},
+ {0x04050000, 0, 1,
+ "3002: Addressed device failed to respond to selection"},
+ {0x04080000, 1, 1,
+ "3100: Device bus error"},
+ {0x04080100, 0, 1,
+ "3109: IOA timed out a device command"},
+ {0x04088000, 0, 0,
+ "3120: SCSI bus is not operational"},
+ {0x04118000, 0, 1,
+ "9000: IOA reserved area data check"},
+ {0x04118100, 0, 1,
+ "9001: IOA reserved area invalid data pattern"},
+ {0x04118200, 0, 1,
+ "9002: IOA reserved area LRC error"},
+ {0x04320000, 0, 1,
+ "102E: Out of alternate sectors for disk storage"},
+ {0x04330000, 1, 1,
+ "FFF4: Data transfer underlength error"},
+ {0x04338000, 1, 1,
+ "FFF4: Data transfer overlength error"},
+ {0x043E0100, 0, 1,
+ "3400: Logical unit failure"},
+ {0x04408500, 0, 1,
+ "FFF4: Device microcode is corrupt"},
+ {0x04418000, 1, 1,
+ "8150: PCI bus error"},
+ {0x04430000, 1, 0,
+ "Unsupported device bus message received"},
+ {0x04440000, 1, 1,
+ "FFF4: Disk device problem"},
+ {0x04448200, 1, 1,
+ "8150: Permanent IOA failure"},
+ {0x04448300, 0, 1,
+ "3010: Disk device returned wrong response to IOA"},
+ {0x04448400, 0, 1,
+ "8151: IOA microcode error"},
+ {0x04448500, 0, 0,
+ "Device bus status error"},
+ {0x04448600, 0, 1,
+ "8157: IOA error requiring IOA reset to recover"},
+ {0x04490000, 0, 0,
+ "Message reject received from the device"},
+ {0x04449200, 0, 1,
+ "8008: A permanent cache battery pack failure occurred"},
+ {0x0444A000, 0, 1,
+ "9090: Disk unit has been modified after the last known status"},
+ {0x0444A200, 0, 1,
+ "9081: IOA detected device error"},
+ {0x0444A300, 0, 1,
+ "9082: IOA detected device error"},
+ {0x044A0000, 1, 1,
+ "3110: Device bus error, message or command phase"},
+ {0x04670400, 0, 1,
+ "9091: Incorrect hardware configuration change has been detected"},
+ {0x046E0000, 0, 1,
+ "FFF4: Command to logical unit failed"},
+ {0x05240000, 1, 0,
+ "Illegal request, invalid request type or request packet"},
+ {0x05250000, 0, 0,
+ "Illegal request, invalid resource handle"},
+ {0x05260000, 0, 0,
+ "Illegal request, invalid field in parameter list"},
+ {0x05260100, 0, 0,
+ "Illegal request, parameter not supported"},
+ {0x05260200, 0, 0,
+ "Illegal request, parameter value invalid"},
+ {0x052C0000, 0, 0,
+ "Illegal request, command sequence error"},
+ {0x06040500, 0, 1,
+ "9031: Array protection temporarily suspended, protection resuming"},
+ {0x06040600, 0, 1,
+ "9040: Array protection temporarily suspended, protection resuming"},
+ {0x06290000, 0, 1,
+ "FFFB: SCSI bus was reset"},
+ {0x06290500, 0, 0,
+ "FFFE: SCSI bus transition to single ended"},
+ {0x06290600, 0, 0,
+ "FFFE: SCSI bus transition to LVD"},
+ {0x06298000, 0, 1,
+ "FFFB: SCSI bus was reset by another initiator"},
+ {0x063F0300, 0, 1,
+ "3029: A device replacement has occurred"},
+ {0x064C8000, 0, 1,
+ "9051: IOA cache data exists for a missing or failed device"},
+ {0x06670100, 0, 1,
+ "9025: Disk unit is not supported at its physical location"},
+ {0x06670600, 0, 1,
+ "3020: IOA detected a SCSI bus configuration error"},
+ {0x06678000, 0, 1,
+ "3150: SCSI bus configuration error"},
+ {0x06690200, 0, 1,
+ "9041: Array protection temporarily suspended"},
+ {0x066B0200, 0, 1,
+ "9030: Array no longer protected due to missing or failed disk unit"},
+ {0x07270000, 0, 0,
+ "Failure due to other device"},
+ {0x07278000, 0, 1,
+ "9008: IOA does not support functions expected by devices"},
+ {0x07278100, 0, 1,
+ "9010: Cache data associated with attached devices cannot be found"},
+ {0x07278200, 0, 1,
+ "9011: Cache data belongs to devices other than those attached"},
+ {0x07278400, 0, 1,
+ "9020: Array missing 2 or more devices with only 1 device present"},
+ {0x07278500, 0, 1,
+ "9021: Array missing 2 or more devices with 2 or more devices present"},
+ {0x07278600, 0, 1,
+ "9022: Exposed array is missing a required device"},
+ {0x07278700, 0, 1,
+ "9023: Array member(s) not at required physical locations"},
+ {0x07278800, 0, 1,
+ "9024: Array not functional due to present hardware configuration"},
+ {0x07278900, 0, 1,
+ "9026: Array not functional due to present hardware configuration"},
+ {0x07278A00, 0, 1,
+ "9027: Array is missing a device and parity is out of sync"},
+ {0x07278B00, 0, 1,
+ "9028: Maximum number of arrays already exist"},
+ {0x07278C00, 0, 1,
+ "9050: Required cache data cannot be located for a disk unit"},
+ {0x07278D00, 0, 1,
+ "9052: Cache data exists for a device that has been modified"},
+ {0x07278F00, 0, 1,
+ "9054: IOA resources not available due to previous problems"},
+ {0x07279100, 0, 1,
+ "9092: Disk unit requires initialization before use"},
+ {0x07279200, 0, 1,
+ "9029: Incorrect hardware configuration change has been detected"},
+ {0x07279600, 0, 1,
+ "9060: One or more disk pairs are missing from an array"},
+ {0x07279700, 0, 1,
+ "9061: One or more disks are missing from an array"},
+ {0x07279800, 0, 1,
+ "9062: One or more disks are missing from an array"},
+ {0x07279900, 0, 1,
+ "9063: Maximum number of functional arrays has been exceeded"},
+ {0x0B260000, 0, 0,
+ "Aborted command, invalid descriptor"},
+ {0x0B5A0000, 0, 0,
+ "Command terminated by host"}
+};
+
+static const struct ipr_ses_table_entry ipr_ses_table[] = {
+ { "2104-DL1 ", "XXXXXXXXXXXXXXXX", 80 },
+ { "2104-TL1 ", "XXXXXXXXXXXXXXXX", 80 },
+ { "HSBP07M P U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Hidive 7 slot */
+ { "HSBP05M P U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Hidive 5 slot */
+ { "HSBP05M S U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Bowtie */
+ { "HSBP06E ASU2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* MartinFenning */
+ { "2104-DU3 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "2104-TU3 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "HSBP04C RSU2SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "HSBP06E RSU2SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "St V1S2 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "HSBPD4M PU3SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "VSBPD1H U3SCSI", "XXXXXXX*XXXXXXXX", 160 }
+};
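+/*
+ * In the table above, an 'X' in the compare string means "this byte of
+ * the product id must match", while any other character (e.g. '*')
+ * means "ignore this byte"; see ipr_find_ses_entry() below.
+ */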
+
+/*
+ * Function Prototypes
+ */
+static int ipr_reset_alert(struct ipr_cmnd *);
+static void ipr_process_ccn(struct ipr_cmnd *);
+static void ipr_process_error(struct ipr_cmnd *);
+static void ipr_reset_ioa_job(struct ipr_cmnd *);
+static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *,
+ enum ipr_shutdown_type);
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+/**
+ * ipr_trc_hook - Add a trace entry to the driver trace
+ * @ipr_cmd: ipr command struct
+ * @type: trace type
+ * @add_data: additional data
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_trc_hook(struct ipr_cmnd *ipr_cmd,
+ u8 type, u32 add_data)
+{
+ struct ipr_trace_entry *trace_entry;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
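+ /* trace_index is declared narrow enough in ipr.h that this
+ post-increment wraps around the trace buffer on its own (an
+ assumption, given there is no explicit masking here). */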
+ trace_entry = &ioa_cfg->trace[ioa_cfg->trace_index++];
+ trace_entry->time = jiffies;
+ trace_entry->op_code = ipr_cmd->ioarcb.cmd_pkt.cdb[0];
+ trace_entry->type = type;
+ trace_entry->cmd_index = ipr_cmd->cmd_index;
+ trace_entry->res_handle = ipr_cmd->ioarcb.res_handle;
+ trace_entry->u.add_data = add_data;
+}
+#else
+#define ipr_trc_hook(ipr_cmd, type, add_data) do { } while(0)
+#endif
+
+/**
+ * ipr_reinit_ipr_cmnd - Re-initialize an IPR Cmnd block for reuse
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reinit_ipr_cmnd(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+
+ memset(&ioarcb->cmd_pkt, 0, sizeof(struct ipr_cmd_pkt));
+ ioarcb->write_data_transfer_length = 0;
+ ioarcb->read_data_transfer_length = 0;
+ ioarcb->write_ioadl_len = 0;
+ ioarcb->read_ioadl_len = 0;
+ ioasa->ioasc = 0;
+ ioasa->residual_data_len = 0;
+
+ ipr_cmd->scsi_cmd = NULL;
+ ipr_cmd->sense_buffer[0] = 0;
+ ipr_cmd->dma_use_sg = 0;
+}
+
+/**
+ * ipr_init_ipr_cmnd - Initialize an IPR Cmnd block
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_init_ipr_cmnd(struct ipr_cmnd *ipr_cmd)
+{
+ ipr_reinit_ipr_cmnd(ipr_cmd);
+ ipr_cmd->u.scratch = 0;
+ init_timer(&ipr_cmd->timer);
+}
+
+/**
+ * ipr_get_free_ipr_cmnd - Get a free IPR Cmnd block
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * pointer to ipr command struct
+ **/
+static
+struct ipr_cmnd *ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd;
+
+ ipr_cmd = list_entry(ioa_cfg->free_q.next, struct ipr_cmnd, queue);
+ list_del(&ipr_cmd->queue);
+ ipr_init_ipr_cmnd(ipr_cmd);
+
+ return ipr_cmd;
+}
+
+/**
+ * ipr_unmap_sglist - Unmap scatterlist if mapped
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_unmap_sglist(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+
+ if (ipr_cmd->dma_use_sg) {
+ if (scsi_cmd->use_sg > 0) {
+ pci_unmap_sg(ioa_cfg->pdev, scsi_cmd->request_buffer,
+ scsi_cmd->use_sg,
+ scsi_cmd->sc_data_direction);
+ } else {
+ pci_unmap_single(ioa_cfg->pdev, ipr_cmd->dma_handle,
+ scsi_cmd->request_bufflen,
+ scsi_cmd->sc_data_direction);
+ }
+ }
+}
+
+/**
+ * ipr_mask_and_clear_interrupts - Mask all and clear specified interrupts
+ * @ioa_cfg: ioa config struct
+ * @clr_ints: interrupts to clear
+ *
+ * This function masks all interrupts on the adapter, then clears the
+ * interrupts specified in the mask
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_mask_and_clear_interrupts(struct ipr_ioa_cfg *ioa_cfg,
+ u32 clr_ints)
+{
+ volatile u32 int_reg;
+
+ /* Stop new interrupts */
+ ioa_cfg->allow_interrupts = 0;
+
+ /* Set interrupt mask to stop all new interrupts */
+ writel(~0, ioa_cfg->regs.set_interrupt_mask_reg);
+
+ /* Clear any pending interrupts */
+ writel(clr_ints, ioa_cfg->regs.clr_interrupt_reg);
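+ /* Read back to flush the posted MMIO writes; the value itself is unused */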
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+}
+
+/**
+ * ipr_save_pcix_cmd_reg - Save PCI-X command register
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_save_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+
+ if (pcix_cmd_reg == 0) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
+ return -EIO;
+ }
+
+ if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ &ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
+ return -EIO;
+ }
+
+ ioa_cfg->saved_pcix_cmd_reg |= PCI_X_CMD_DPERR_E | PCI_X_CMD_ERO;
+ return 0;
+}
+
+/**
+ * ipr_set_pcix_cmd_reg - Setup PCI-X command register
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_set_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+
+ if (pcix_cmd_reg) {
+ if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to setup PCI-X command register\n");
+ return -EIO;
+ }
+ } else {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Failed to setup PCI-X command register\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_scsi_eh_done - mid-layer done function for aborted ops
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler for
+ * ops generated by the SCSI mid-layer which are being aborted.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+
+ scsi_cmd->result |= (DID_ERROR << 16);
+
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ scsi_cmd->scsi_done(scsi_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+}
+
+/**
+ * ipr_fail_all_ops - Fails all outstanding ops.
+ * @ioa_cfg: ioa config struct
+ *
+ * This function fails all outstanding ops.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_fail_all_ops(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd, *temp;
+
+ ENTER;
+ list_for_each_entry_safe(ipr_cmd, temp, &ioa_cfg->pending_q, queue) {
+ list_del(&ipr_cmd->queue);
+
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_IOA_WAS_RESET);
+ ipr_cmd->ioasa.ilid = cpu_to_be32(IPR_DRIVER_ILID);
+
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, IPR_IOASC_IOA_WAS_RESET);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->done(ipr_cmd);
+ }
+
+ LEAVE;
+}
+
+/**
+ * ipr_do_req - Send driver initiated requests.
+ * @ipr_cmd: ipr command struct
+ * @done: done function
+ * @timeout_func: timeout function
+ * @timeout: timeout value
+ *
+ * This function sends the specified command to the adapter with the
+ * timeout given. The done function is invoked on command completion.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_do_req(struct ipr_cmnd *ipr_cmd,
+ void (*done) (struct ipr_cmnd *),
+ void (*timeout_func) (struct ipr_cmnd *), u32 timeout)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ ipr_cmd->done = done;
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + timeout;
+ ipr_cmd->timer.function = (void (*)(unsigned long))timeout_func;
+
+ add_timer(&ipr_cmd->timer);
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, 0);
+
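+ /* Make sure the IOARCB is committed to memory before the ioarrin
+ doorbell write below makes it visible to the adapter */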
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+}
+
+/**
+ * ipr_internal_cmd_done - Op done function for an internally generated op.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for an internally generated,
+ * blocking op. It simply wakes the sleeping thread.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_internal_cmd_done(struct ipr_cmnd *ipr_cmd)
+{
+ if (ipr_cmd->u.sibling)
+ ipr_cmd->u.sibling = NULL;
+ else
+ complete(&ipr_cmd->completion);
+}
+
+/**
+ * ipr_send_blocking_cmd - Send command and sleep on its completion.
+ * @ipr_cmd: ipr command struct
+ * @timeout_func: function to invoke if command times out
+ * @timeout: timeout
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_send_blocking_cmd(struct ipr_cmnd *ipr_cmd,
+ void (*timeout_func) (struct ipr_cmnd *ipr_cmd),
+ u32 timeout)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ init_completion(&ipr_cmd->completion);
+ ipr_do_req(ipr_cmd, ipr_internal_cmd_done, timeout_func, timeout);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ wait_for_completion(&ipr_cmd->completion);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+}
+
+/**
+ * ipr_send_hcam - Send an HCAM to the adapter.
+ * @ioa_cfg: ioa config struct
+ * @type: HCAM type
+ * @hostrcb: hostrcb struct
+ *
+ * This function will send a Host Controlled Async command to the adapter.
+ * If HCAMs are currently not allowed to be issued to the adapter, it will
+ * place the hostrcb on the free queue.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_send_hcam(struct ipr_ioa_cfg *ioa_cfg, u8 type,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioarcb *ioarcb;
+
+ if (ioa_cfg->allow_cmds) {
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_pending_q);
+
+ ipr_cmd->u.hostrcb = hostrcb;
+ ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_HCAM;
+ ioarcb->cmd_pkt.cdb[0] = IPR_HOST_CONTROLLED_ASYNC;
+ ioarcb->cmd_pkt.cdb[1] = type;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(hostrcb->hcam) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(hostrcb->hcam) & 0xff;
+
+ ioarcb->read_data_transfer_length = cpu_to_be32(sizeof(hostrcb->hcam));
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ipr_cmd->ioadl[0].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | sizeof(hostrcb->hcam));
+ ipr_cmd->ioadl[0].address = cpu_to_be32(hostrcb->hostrcb_dma);
+
+ if (type == IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE)
+ ipr_cmd->done = ipr_process_ccn;
+ else
+ ipr_cmd->done = ipr_process_error;
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_IOA_RES_ADDR);
+
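+ /* Order the IOARCB/IOADL setup above before ringing the doorbell */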
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+ } else {
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_free_q);
+ }
+}
+
+/**
+ * ipr_init_res_entry - Initialize a resource entry struct.
+ * @res: resource entry struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_init_res_entry(struct ipr_resource_entry *res)
+{
+ res->needs_sync_complete = 1;
+ res->in_erp = 0;
+ res->add_to_ml = 0;
+ res->del_from_ml = 0;
+ res->resetting_device = 0;
+ res->tcq_active = 0;
+ res->qdepth = IPR_MAX_CMD_PER_LUN;
+ res->sdev = NULL;
+}
+
+/**
+ * ipr_handle_config_change - Handle a config change from the adapter
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_handle_config_change(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_resource_entry *res = NULL;
+ struct ipr_config_table_entry *cfgte;
+ u32 is_ndn = 1;
+
+ cfgte = &hostrcb->hcam.u.ccn.cfgte;
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!memcmp(&res->cfgte.res_addr, &cfgte->res_addr,
+ sizeof(cfgte->res_addr))) {
+ is_ndn = 0;
+ break;
+ }
+ }
+
+ if (is_ndn) {
+ if (list_empty(&ioa_cfg->free_res_q)) {
+ ipr_send_hcam(ioa_cfg,
+ IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE,
+ hostrcb);
+ return;
+ }
+
+ res = list_entry(ioa_cfg->free_res_q.next,
+ struct ipr_resource_entry, queue);
+
+ list_del(&res->queue);
+ ipr_init_res_entry(res);
+ list_add_tail(&res->queue, &ioa_cfg->used_res_q);
+ }
+
+ memcpy(&res->cfgte, cfgte, sizeof(struct ipr_config_table_entry));
+
+ if (hostrcb->hcam.notify_type == IPR_HOST_RCB_NOTIF_TYPE_REM_ENTRY) {
+ if (res->sdev) {
+ res->sdev->hostdata = NULL;
+ res->del_from_ml = 1;
+ if (ioa_cfg->allow_ml_add_del)
+ schedule_work(&ioa_cfg->work_q);
+ } else
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ } else if (!res->sdev) {
+ res->add_to_ml = 1;
+ if (ioa_cfg->allow_ml_add_del)
+ schedule_work(&ioa_cfg->work_q);
+ }
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+}
+
+/**
+ * ipr_process_ccn - Op done function for a CCN.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for a configuration
+ * change notification host controlled async from the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ list_del(&hostrcb->queue);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ if (ioasc) {
+ if (ioasc != IPR_IOASC_IOA_WAS_RESET)
+ dev_err(&ioa_cfg->pdev->dev,
+ "Host RCB failed with IOASC: 0x%08X\n", ioasc);
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+ } else {
+ ipr_handle_config_change(ioa_cfg, hostrcb);
+ }
+}
+
+/**
+ * ipr_log_vpd - Log the passed VPD to the error log.
+ * @vpids: vendor/product id struct
+ * @serial_num: serial number string
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_vpd(struct ipr_std_inq_vpids *vpids, u8 *serial_num)
+{
+ char buffer[max_t(int, sizeof(struct ipr_std_inq_vpids),
+ IPR_SERIAL_NUM_LEN) + 1];
+
+ memcpy(buffer, vpids, sizeof(struct ipr_std_inq_vpids));
+ buffer[sizeof(struct ipr_std_inq_vpids)] = '\0';
+ ipr_err("Vendor/Product ID: %s\n", buffer);
+
+ memcpy(buffer, serial_num, IPR_SERIAL_NUM_LEN);
+ buffer[IPR_SERIAL_NUM_LEN] = '\0';
+ ipr_err(" Serial Number: %s\n", buffer);
+}
+
+/**
+ * ipr_log_cache_error - Log a cache error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_cache_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_hostrcb_type_02_error *error =
+ &hostrcb->hcam.u.error.u.type_02_error;
+
+ ipr_err("-----Current Configuration-----\n");
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&error->ioa_vpids, error->ioa_sn);
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&error->cfc_vpids, error->cfc_sn);
+
+ ipr_err("-----Expected Configuration-----\n");
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&error->ioa_last_attached_to_cfc_vpids,
+ error->ioa_last_attached_to_cfc_sn);
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&error->cfc_last_attached_to_ioa_vpids,
+ error->cfc_last_attached_to_ioa_sn);
+
+ ipr_err("Additional IOA Data: %08X %08X %08X\n",
+ be32_to_cpu(error->ioa_data[0]),
+ be32_to_cpu(error->ioa_data[1]),
+ be32_to_cpu(error->ioa_data[2]));
+}
+
+/**
+ * ipr_log_config_error - Log a configuration error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_config_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int errors_logged, i;
+ struct ipr_hostrcb_device_data_entry *dev_entry;
+ struct ipr_hostrcb_type_03_error *error;
+
+ error = &hostrcb->hcam.u.error.u.type_03_error;
+ errors_logged = be32_to_cpu(error->errors_logged);
+
+ ipr_err("Device Errors Detected/Logged: %d/%d\n",
+ be32_to_cpu(error->errors_detected), errors_logged);
+
+ dev_entry = error->dev_entry;
+
+ for (i = 0; i < errors_logged; i++, dev_entry++) {
+ ipr_err_separator;
+
+ if (dev_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Device %d: missing\n", i + 1);
+ } else {
+ ipr_err("Device %d: %d:%d:%d:%d\n", i + 1,
+ ioa_cfg->host->host_no, dev_entry->dev_res_addr.bus,
+ dev_entry->dev_res_addr.target, dev_entry->dev_res_addr.lun);
+ }
+ ipr_log_vpd(&dev_entry->dev_vpids, dev_entry->dev_sn);
+
+ ipr_err("-----New Device Information-----\n");
+ ipr_log_vpd(&dev_entry->new_dev_vpids, dev_entry->new_dev_sn);
+
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&dev_entry->ioa_last_with_dev_vpids,
+ dev_entry->ioa_last_with_dev_sn);
+
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&dev_entry->cfc_last_with_dev_vpids,
+ dev_entry->cfc_last_with_dev_sn);
+
+ ipr_err("Additional IOA Data: %08X %08X %08X %08X %08X\n",
+ be32_to_cpu(dev_entry->ioa_data[0]),
+ be32_to_cpu(dev_entry->ioa_data[1]),
+ be32_to_cpu(dev_entry->ioa_data[2]),
+ be32_to_cpu(dev_entry->ioa_data[3]),
+ be32_to_cpu(dev_entry->ioa_data[4]));
+ }
+}
+
+/**
+ * ipr_log_array_error - Log an array configuration error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_array_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int i;
+ struct ipr_hostrcb_type_04_error *error;
+ struct ipr_hostrcb_array_data_entry *array_entry;
+ u8 zero_sn[IPR_SERIAL_NUM_LEN];
+
+ memset(zero_sn, '0', IPR_SERIAL_NUM_LEN);
+
+ error = &hostrcb->hcam.u.error.u.type_04_error;
+
+ ipr_err_separator;
+
+ ipr_err("RAID %s Array Configuration: %d:%d:%d:%d\n",
+ error->protection_level,
+ ioa_cfg->host->host_no,
+ error->last_func_vset_res_addr.bus,
+ error->last_func_vset_res_addr.target,
+ error->last_func_vset_res_addr.lun);
+
+ ipr_err_separator;
+
+ array_entry = error->array_member;
+
+ for (i = 0; i < 18; i++) {
+ if (!memcmp(array_entry->serial_num, zero_sn, IPR_SERIAL_NUM_LEN))
+ continue;
+
+ if (error->exposed_mode_adn == i) {
+ ipr_err("Exposed Array Member %d:\n", i);
+ } else {
+ ipr_err("Array Member %d:\n", i);
+ }
+
+ ipr_log_vpd(&array_entry->vpids, array_entry->serial_num);
+
+ if (array_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Current Location: unknown\n");
+ } else {
+ ipr_err("Current Location: %d:%d:%d:%d\n",
+ ioa_cfg->host->host_no,
+ array_entry->dev_res_addr.bus,
+ array_entry->dev_res_addr.target,
+ array_entry->dev_res_addr.lun);
+ }
+
+ if (array_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Expected Location: unknown\n");
+ } else {
+ ipr_err("Expected Location: %d:%d:%d:%d\n",
+ ioa_cfg->host->host_no,
+ array_entry->expected_dev_res_addr.bus,
+ array_entry->expected_dev_res_addr.target,
+ array_entry->expected_dev_res_addr.lun);
+ }
+
+ ipr_err_separator;
+
+ if (i == 9)
+ array_entry = error->array_member2;
+ else
+ array_entry++;
+ }
+}
+
+/**
+ * ipr_log_generic_error - Log an adapter error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_generic_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int i;
+ int ioa_data_len = be32_to_cpu(hostrcb->hcam.length);
+
+ if (ioa_data_len == 0)
+ return;
+
+ ipr_err("IOA Error Data:\n");
+ ipr_err("Offset 0 1 2 3 4 5 6 7 8 9 A B C D E F\n");
+
+ for (i = 0; i < ioa_data_len / 4; i += 4) {
+ ipr_err("%08X: %08X %08X %08X %08X\n", i*4,
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+1]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+2]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+3]));
+ }
+}
+
+/**
+ * ipr_get_error - Find the specified IOASC in the ipr_error_table.
+ * @ioasc: IOASC
+ *
+ * This function will return the index into the ipr_error_table
+ * for the specified IOASC. If the IOASC is not in the table,
+ * 0 will be returned, which points to the entry used for unknown errors.
+ *
+ * Return value:
+ * index into the ipr_error_table
+ **/
+static u32 ipr_get_error(u32 ioasc)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ipr_error_table); i++)
+ if (ipr_error_table[i].ioasc == ioasc)
+ return i;
+
+ return 0;
+}
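+
+/*
+ * For example, ipr_get_error(0x04490000) returns the index of the
+ * "Message reject received from the device" entry above, while an IOASC
+ * not in the table falls back to index 0, the unknown error entry.
+ */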
+
+/**
+ * ipr_handle_log_data - Log an adapter error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * This function logs an adapter error to the system.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_handle_log_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ u32 ioasc;
+ int error_index;
+
+ if (hostrcb->hcam.notify_type != IPR_HOST_RCB_NOTIF_TYPE_ERROR_LOG_ENTRY)
+ return;
+
+ if (hostrcb->hcam.notifications_lost == IPR_HOST_RCB_NOTIFICATIONS_LOST)
+ dev_err(&ioa_cfg->pdev->dev, "Error notifications lost\n");
+
+ ioasc = be32_to_cpu(hostrcb->hcam.u.error.failing_dev_ioasc);
+
+ if (ioasc == IPR_IOASC_BUS_WAS_RESET ||
+ ioasc == IPR_IOASC_BUS_WAS_RESET_BY_OTHER) {
+ /* Tell the midlayer we had a bus reset so it will handle the UA properly */
+ scsi_report_bus_reset(ioa_cfg->host,
+ hostrcb->hcam.u.error.failing_dev_res_addr.bus);
+ }
+
+ error_index = ipr_get_error(ioasc);
+
+ if (!ipr_error_table[error_index].log_hcam)
+ return;
+
+ if (ipr_is_device(&hostrcb->hcam.u.error.failing_dev_res_addr)) {
+ ipr_res_err(ioa_cfg, hostrcb->hcam.u.error.failing_dev_res_addr,
+ "%s\n", ipr_error_table[error_index].error);
+ } else {
+ dev_err(&ioa_cfg->pdev->dev, "%s\n",
+ ipr_error_table[error_index].error);
+ }
+
+ /* Set indication we have logged an error */
+ ioa_cfg->errors_logged++;
+
+ if (ioa_cfg->log_level < IPR_DEFAULT_LOG_LEVEL)
+ return;
+
+ switch (hostrcb->hcam.overlay_id) {
+ case IPR_HOST_RCB_OVERLAY_ID_1:
+ ipr_log_generic_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_2:
+ ipr_log_cache_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_3:
+ ipr_log_config_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_4:
+ case IPR_HOST_RCB_OVERLAY_ID_6:
+ ipr_log_array_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_DEFAULT:
+ ipr_log_generic_error(ioa_cfg, hostrcb);
+ break;
+ default:
+ dev_err(&ioa_cfg->pdev->dev,
+ "Unknown error received. Overlay ID: %d\n",
+ hostrcb->hcam.overlay_id);
+ break;
+ }
+}
+
+/**
+ * ipr_process_error - Op done function for an adapter error log.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for an error log host
+ * controlled async from the adapter. It will log the error and
+ * send the HCAM back to the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_process_error(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ list_del(&hostrcb->queue);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ if (!ioasc) {
+ ipr_handle_log_data(ioa_cfg, hostrcb);
+ } else if (ioasc != IPR_IOASC_IOA_WAS_RESET) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Host RCB failed with IOASC: 0x%08X\n", ioasc);
+ }
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb);
+}
+
+/**
+ * ipr_timeout - An internally generated op has timed out.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function blocks host requests and initiates an
+ * adapter reset.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_timeout(struct ipr_cmnd *ipr_cmd)
+{
+ unsigned long lock_flags = 0;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter being reset due to command timeout.\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ if (!ioa_cfg->in_reset_reload || ioa_cfg->reset_cmd == ipr_cmd)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+/**
+ * ipr_reset_reload - Reset/Reload the IOA
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * This function resets the adapter and re-initializes it.
+ * This function assumes that all new host commands have been stopped.
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_reset_reload(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ if (!ioa_cfg->in_reset_reload)
+ ipr_initiate_ioa_reset(ioa_cfg, shutdown_type);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+
+ /* If we got hit with a host reset while we were already resetting
+ the adapter for some reason, and that reset failed, the adapter
+ is now dead and cannot be recovered. */
+ if (ioa_cfg->ioa_is_dead) {
+ ipr_trace;
+ return FAILED;
+ }
+
+ return SUCCESS;
+}
+
+/**
+ * ipr_find_ses_entry - Find matching SES in SES table
+ * @res: resource entry struct of SES
+ *
+ * Return value:
+ * pointer to SES table entry / NULL on failure
+ **/
+static const struct ipr_ses_table_entry *
+ipr_find_ses_entry(struct ipr_resource_entry *res)
+{
+ int i, j, matches;
+ const struct ipr_ses_table_entry *ste = ipr_ses_table;
+
+ for (i = 0; i < ARRAY_SIZE(ipr_ses_table); i++, ste++) {
+ for (j = 0, matches = 0; j < IPR_PROD_ID_LEN; j++) {
+ if (ste->compare_product_id_byte[j] == 'X') {
+ if (res->cfgte.std_inq_data.vpids.product_id[j] == ste->product_id[j])
+ matches++;
+ else
+ break;
+ } else
+ matches++;
+ }
+
+ if (matches == IPR_PROD_ID_LEN)
+ return ste;
+ }
+
+ return NULL;
+}
+
+/**
+ * ipr_get_max_scsi_speed - Determine max SCSI speed for a given bus
+ * @ioa_cfg: ioa config struct
+ * @bus: SCSI bus
+ * @bus_width: bus width
+ *
+ * Return value:
+ * SCSI bus speed in units of 100KHz, 1600 is 160 MHz
+ * For a 2-byte (wide) SCSI bus, the transfer rate in MB/sec is
+ * twice the clock rate in MHz (e.g. a wide-enabled bus at
+ * 160 MHz moves up to 320 MB/sec).
+ **/
+static u32 ipr_get_max_scsi_speed(struct ipr_ioa_cfg *ioa_cfg, u8 bus, u8 bus_width)
+{
+ struct ipr_resource_entry *res;
+ const struct ipr_ses_table_entry *ste;
+ u32 max_xfer_rate = IPR_MAX_SCSI_RATE(bus_width);
+
+ /* Loop through each config table entry in the config table buffer */
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!(IPR_IS_SES_DEVICE(res->cfgte.std_inq_data)))
+ continue;
+
+ if (bus != res->cfgte.res_addr.bus)
+ continue;
+
+ if (!(ste = ipr_find_ses_entry(res)))
+ continue;
+
+ max_xfer_rate = (ste->max_bus_speed_limit * 10) / (bus_width / 8);
+ }
+
+ return max_xfer_rate;
+}
+
+/**
+ * ipr_wait_iodbg_ack - Wait for an IODEBUG ACK from the IOA
+ * @ioa_cfg: ioa config struct
+ * @max_delay: max delay in micro-seconds to wait
+ *
+ * Busy-waits for an IODEBUG ACK from the IOA.
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_wait_iodbg_ack(struct ipr_ioa_cfg *ioa_cfg, int max_delay)
+{
+ volatile u32 pcii_reg;
+ int delay = 1;
+
+ /* Read interrupt reg until IOA signals IO Debug Acknowledge */
+ while (delay < max_delay) {
+ pcii_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
+ if (pcii_reg & IPR_PCII_IO_DEBUG_ACKNOWLEDGE)
+ return 0;
+
+ /* udelay cannot be used if delay is more than a few milliseconds */
+ if ((delay / 1000) > MAX_UDELAY_MS)
+ mdelay(delay / 1000);
+ else
+ udelay(delay);
+
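+ /* Double the poll interval each pass: 1us, 2us, 4us, ... up to max_delay */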
+ delay += delay;
+ }
+ return -EIO;
+}
+
+/**
+ * ipr_get_ldump_data_section - Dump IOA memory
+ * @ioa_cfg: ioa config struct
+ * @start_addr: adapter address to dump
+ * @dest: destination kernel buffer
+ * @length_in_words: length to dump in 4 byte words
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_get_ldump_data_section(struct ipr_ioa_cfg *ioa_cfg,
+ u32 start_addr,
+ u32 *dest, u32 length_in_words)
+{
+ volatile u32 temp_pcii_reg;
+ int i, delay = 0;
+
+ /* Write IOA interrupt reg starting LDUMP state */
+ writel((IPR_UPROCI_RESET_ALERT | IPR_UPROCI_IO_DEBUG_ALERT),
+ ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ /* Wait for IO debug acknowledge */
+ if (ipr_wait_iodbg_ack(ioa_cfg,
+ IPR_LDUMP_MAX_LONG_ACK_DELAY_IN_USEC)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA dump long data transfer timeout\n");
+ return -EIO;
+ }
+
+ /* Signal LDUMP interlocked - clear IO debug ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+
+ /* Write Mailbox with starting address */
+ writel(start_addr, ioa_cfg->ioa_mailbox);
+
+ /* Signal address valid - clear IOA Reset alert */
+ writel(IPR_UPROCI_RESET_ALERT,
+ ioa_cfg->regs.clr_uproc_interrupt_reg);
+
+ for (i = 0; i < length_in_words; i++) {
+ /* Wait for IO debug acknowledge */
+ if (ipr_wait_iodbg_ack(ioa_cfg,
+ IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA dump short data transfer timeout\n");
+ return -EIO;
+ }
+
+ /* Read data from mailbox and increment destination pointer */
+ *dest = cpu_to_be32(readl(ioa_cfg->ioa_mailbox));
+ dest++;
+
+ /* For all but the last word of data, signal data received */
+ if (i < (length_in_words - 1)) {
+ /* Signal dump data received - Clear IO debug Ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+ }
+ }
+
+ /* Signal end of block transfer. Set reset alert then clear IO debug ack */
+ writel(IPR_UPROCI_RESET_ALERT,
+ ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ writel(IPR_UPROCI_IO_DEBUG_ALERT,
+ ioa_cfg->regs.clr_uproc_interrupt_reg);
+
+ /* Signal dump data received - Clear IO debug Ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+
+ /* Wait for IOA to signal LDUMP exit - IOA reset alert will be cleared */
+ while (delay < IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC) {
+ temp_pcii_reg =
+ readl(ioa_cfg->regs.sense_uproc_interrupt_reg);
+
+ if (!(temp_pcii_reg & IPR_UPROCI_RESET_ALERT))
+ return 0;
+
+ udelay(10);
+ delay += 10;
+ }
+
+ return 0;
+}
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+/**
+ * ipr_sdt_copy - Copy Smart Dump Table to kernel buffer
+ * @ioa_cfg: ioa config struct
+ * @pci_address: adapter address
+ * @length: length of data to copy
+ *
+ * Copy data from PCI adapter to kernel buffer.
+ * Note: length MUST be a 4 byte multiple
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_sdt_copy(struct ipr_ioa_cfg *ioa_cfg,
+ unsigned long pci_address, u32 length)
+{
+ int bytes_copied = 0;
+ int cur_len, rc, rem_len, rem_page_len;
+ u32 *page;
+ unsigned long lock_flags = 0;
+ struct ipr_ioa_dump *ioa_dump = &ioa_cfg->dump->ioa_dump;
+
+ while (bytes_copied < length &&
+ (ioa_dump->hdr.len + bytes_copied) < IPR_MAX_IOA_DUMP_SIZE) {
+ if (ioa_dump->page_offset >= PAGE_SIZE ||
+ ioa_dump->page_offset == 0) {
+ page = (u32 *)__get_free_page(GFP_ATOMIC);
+
+ if (!page) {
+ ipr_trace;
+ return bytes_copied;
+ }
+
+ ioa_dump->page_offset = 0;
+ ioa_dump->ioa_data[ioa_dump->next_page_index] = page;
+ ioa_dump->next_page_index++;
+ } else
+ page = ioa_dump->ioa_data[ioa_dump->next_page_index - 1];
+
+ rem_len = length - bytes_copied;
+ rem_page_len = PAGE_SIZE - ioa_dump->page_offset;
+ cur_len = min(rem_len, rem_page_len);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->sdt_state == ABORT_DUMP) {
+ rc = -EIO;
+ } else {
+ rc = ipr_get_ldump_data_section(ioa_cfg,
+ pci_address + bytes_copied,
+ &page[ioa_dump->page_offset / 4],
+ (cur_len / sizeof(u32)));
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ if (!rc) {
+ ioa_dump->page_offset += cur_len;
+ bytes_copied += cur_len;
+ } else {
+ ipr_trace;
+ break;
+ }
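+		/* The dump can be large; let other tasks run between sections */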
+ schedule();
+ }
+
+ return bytes_copied;
+}
+
+/**
+ * ipr_init_dump_entry_hdr - Initialize a dump entry header.
+ * @hdr: dump entry header struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_init_dump_entry_hdr(struct ipr_dump_entry_header *hdr)
+{
+ hdr->eye_catcher = IPR_DUMP_EYE_CATCHER;
+ hdr->num_elems = 1;
+ hdr->offset = sizeof(*hdr);
+ hdr->status = IPR_DUMP_STATUS_SUCCESS;
+}
+
+/**
+ * ipr_dump_ioa_type_data - Fill in the adapter type in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_ioa_type_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+
+ ipr_init_dump_entry_hdr(&driver_dump->ioa_type_entry.hdr);
+ driver_dump->ioa_type_entry.hdr.len =
+ sizeof(struct ipr_dump_ioa_type_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->ioa_type_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ driver_dump->ioa_type_entry.hdr.id = IPR_DUMP_DRIVER_TYPE_ID;
+ driver_dump->ioa_type_entry.type = ioa_cfg->type;
+ driver_dump->ioa_type_entry.fw_version = (ucode_vpd->major_release << 24) |
+ (ucode_vpd->card_type << 16) | (ucode_vpd->minor_release[0] << 8) |
+ ucode_vpd->minor_release[1];
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_version_data - Fill in the driver version in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_version_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->version_entry.hdr);
+ driver_dump->version_entry.hdr.len =
+ sizeof(struct ipr_dump_version_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->version_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_ASCII;
+ driver_dump->version_entry.hdr.id = IPR_DUMP_DRIVER_VERSION_ID;
+ strcpy(driver_dump->version_entry.version, IPR_DRIVER_VERSION);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_trace_data - Fill in the IOA trace in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_trace_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->trace_entry.hdr);
+ driver_dump->trace_entry.hdr.len =
+ sizeof(struct ipr_dump_trace_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->trace_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ driver_dump->trace_entry.hdr.id = IPR_DUMP_TRACE_ID;
+ memcpy(driver_dump->trace_entry.trace, ioa_cfg->trace, IPR_TRACE_SIZE);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_location_data - Fill in the IOA location in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_location_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->location_entry.hdr);
+ driver_dump->location_entry.hdr.len =
+ sizeof(struct ipr_dump_location_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->location_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_ASCII;
+ driver_dump->location_entry.hdr.id = IPR_DUMP_LOCATION_ID;
+ strcpy(driver_dump->location_entry.location, ioa_cfg->pdev->dev.bus_id);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_get_ioa_dump - Perform a dump of the driver and adapter.
+ * @ioa_cfg: ioa config struct
+ * @dump: dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
+{
+ unsigned long start_addr, sdt_word;
+ unsigned long lock_flags = 0;
+ struct ipr_driver_dump *driver_dump = &dump->driver_dump;
+ struct ipr_ioa_dump *ioa_dump = &dump->ioa_dump;
+ u32 num_entries, start_off, end_off;
+ u32 bytes_to_copy, bytes_copied, rc;
+ struct ipr_sdt *sdt;
+ int i;
+
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->sdt_state != GET_DUMP) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ start_addr = readl(ioa_cfg->ioa_mailbox);
+
+ if (!ipr_sdt_is_fmt2(start_addr)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Invalid dump table format: %lx\n", start_addr);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ dev_err(&ioa_cfg->pdev->dev, "Dump of IOA initiated\n");
+
+ driver_dump->hdr.eye_catcher = IPR_DUMP_EYE_CATCHER;
+
+ /* Initialize the overall dump header */
+ driver_dump->hdr.len = sizeof(struct ipr_driver_dump);
+ driver_dump->hdr.num_entries = 1;
+ driver_dump->hdr.first_entry_offset = sizeof(struct ipr_dump_header);
+ driver_dump->hdr.status = IPR_DUMP_STATUS_SUCCESS;
+ driver_dump->hdr.os = IPR_DUMP_OS_LINUX;
+ driver_dump->hdr.driver_name = IPR_DUMP_DRIVER_NAME;
+
+ ipr_dump_version_data(ioa_cfg, driver_dump);
+ ipr_dump_location_data(ioa_cfg, driver_dump);
+ ipr_dump_ioa_type_data(ioa_cfg, driver_dump);
+ ipr_dump_trace_data(ioa_cfg, driver_dump);
+
+ /* Update dump_header */
+ driver_dump->hdr.len += sizeof(struct ipr_dump_entry_header);
+
+ /* IOA Dump entry */
+ ipr_init_dump_entry_hdr(&ioa_dump->hdr);
+ ioa_dump->format = IPR_SDT_FMT2;
+ ioa_dump->hdr.len = 0;
+ ioa_dump->hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ ioa_dump->hdr.id = IPR_DUMP_IOA_DUMP_ID;
+
+	/*
+	 * First entries in sdt are actually a list of dump addresses and
+	 * lengths to gather the real dump data. sdt represents the pointer
+	 * to the ioa generated dump table. Dump data will be extracted based
+	 * on entries in this table
+	 */
+ sdt = &ioa_dump->sdt;
+
+ rc = ipr_get_ldump_data_section(ioa_cfg, start_addr, (u32 *)sdt,
+ sizeof(struct ipr_sdt) / sizeof(u32));
+
+ /* Smart Dump table is ready to use and the first entry is valid */
+ if (rc || (be32_to_cpu(sdt->hdr.state) != IPR_FMT2_SDT_READY_TO_USE)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Dump of IOA failed. Dump table not valid: %d, %X.\n",
+ rc, be32_to_cpu(sdt->hdr.state));
+ driver_dump->hdr.status = IPR_DUMP_STATUS_FAILED;
+ ioa_cfg->sdt_state = DUMP_OBTAINED;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ num_entries = be32_to_cpu(sdt->hdr.num_entries_used);
+
+ if (num_entries > IPR_NUM_SDT_ENTRIES)
+ num_entries = IPR_NUM_SDT_ENTRIES;
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ for (i = 0; i < num_entries; i++) {
+ if (ioa_dump->hdr.len > IPR_MAX_IOA_DUMP_SIZE) {
+ driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS;
+ break;
+ }
+
+ if (sdt->entry[i].flags & IPR_SDT_VALID_ENTRY) {
+ sdt_word = be32_to_cpu(sdt->entry[i].bar_str_offset);
+ start_off = sdt_word & IPR_FMT2_MBX_ADDR_MASK;
+ end_off = be32_to_cpu(sdt->entry[i].end_offset);
+
+ if (ipr_sdt_is_fmt2(sdt_word) && sdt_word) {
+ bytes_to_copy = end_off - start_off;
+ if (bytes_to_copy > IPR_MAX_IOA_DUMP_SIZE) {
+ sdt->entry[i].flags &= ~IPR_SDT_VALID_ENTRY;
+ continue;
+ }
+
+ /* Copy data from adapter to driver buffers */
+ bytes_copied = ipr_sdt_copy(ioa_cfg, sdt_word,
+ bytes_to_copy);
+
+ ioa_dump->hdr.len += bytes_copied;
+
+ if (bytes_copied != bytes_to_copy) {
+ driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS;
+ break;
+ }
+ }
+ }
+ }
+
+ dev_err(&ioa_cfg->pdev->dev, "Dump of IOA completed.\n");
+
+ /* Update dump_header */
+ driver_dump->hdr.len += ioa_dump->hdr.len;
+ wmb();
+ ioa_cfg->sdt_state = DUMP_OBTAINED;
+ LEAVE;
+}
+
+#else
+#define ipr_get_ioa_dump(ioa_cfg, dump) do { } while (0)
+#endif
+
+/**
+ * ipr_worker_thread - Worker thread
+ * @data: ioa config struct
+ *
+ * Called at task level from a work thread. This function takes care
+ * of adding and removing devices from the mid-layer as configuration
+ * changes are detected by the adapter.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_worker_thread(void *data)
+{
+ unsigned long lock_flags;
+ struct ipr_resource_entry *res;
+ struct scsi_device *sdev;
+ struct ipr_dump *dump;
+ struct ipr_ioa_cfg *ioa_cfg = data;
+ u8 bus, target, lun;
+ int did_work;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->sdt_state == GET_DUMP) {
+ dump = ioa_cfg->dump;
+ if (!dump || !kobject_get(&dump->kobj)) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ ipr_get_ioa_dump(ioa_cfg, dump);
+ kobject_put(&dump->kobj);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->sdt_state == DUMP_OBTAINED)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+restart:
+ do {
+ did_work = 0;
+ if (!ioa_cfg->allow_cmds || !ioa_cfg->allow_ml_add_del) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (res->del_from_ml && res->sdev) {
+ did_work = 1;
+ sdev = res->sdev;
+ if (!scsi_device_get(sdev)) {
+ res->sdev = NULL;
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ }
+ break;
+ }
+ }
+	} while (did_work);
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (res->add_to_ml) {
+ bus = res->cfgte.res_addr.bus;
+ target = res->cfgte.res_addr.target;
+ lun = res->cfgte.res_addr.lun;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_add_device(ioa_cfg->host, bus, target, lun);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ goto restart;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+/**
+ * ipr_read_trace - Dump the adapter trace
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes read from the trace
+ **/
+static ssize_t ipr_read_trace(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int size = IPR_TRACE_SIZE;
+ char *src = (char *)ioa_cfg->trace;
+
+ if (off > size)
+ return 0;
+ if (off + count > size) {
+ size -= off;
+ count = size;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ memcpy(buf, &src[off], count);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return count;
+}
+
+static struct bin_attribute ipr_trace_attr = {
+ .attr = {
+ .name = "trace",
+ .mode = S_IRUGO,
+ },
+ .size = 0,
+ .read = ipr_read_trace,
+};
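+
+/*
+ * Example usage (sysfs path assumed; the host number may differ):
+ *	dd if=/sys/class/scsi_host/host0/trace of=/tmp/ipr_trace bs=4k
+ */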
+#endif
+
+/**
+ * ipr_show_fw_version - Show the firmware version
+ * @class_dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_fw_version(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+ unsigned long lock_flags = 0;
+ int len;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ len = snprintf(buf, PAGE_SIZE, "%02X%02X%02X%02X\n",
+ ucode_vpd->major_release, ucode_vpd->card_type,
+ ucode_vpd->minor_release[0],
+ ucode_vpd->minor_release[1]);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+static struct class_device_attribute ipr_fw_version_attr = {
+ .attr = {
+ .name = "fw_version",
+ .mode = S_IRUGO,
+ },
+ .show = ipr_show_fw_version,
+};
+
+/**
+ * ipr_show_log_level - Show the adapter's error logging level
+ * @class_dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_log_level(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int len;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ len = snprintf(buf, PAGE_SIZE, "%d\n", ioa_cfg->log_level);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+/**
+ * ipr_store_log_level - Change the adapter's error logging level
+ * @class_dev: class device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes consumed from the buffer
+ **/
+static ssize_t ipr_store_log_level(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->log_level = simple_strtoul(buf, NULL, 10);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return strlen(buf);
+}
+
+static struct class_device_attribute ipr_log_level_attr = {
+ .attr = {
+ .name = "log_level",
+ .mode = S_IRUGO | S_IWUSR,
+ },
+ .show = ipr_show_log_level,
+ .store = ipr_store_log_level
+};
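+
+/*
+ * Example usage (sysfs path assumed): change the error logging level with
+ *	echo 2 > /sys/class/scsi_host/host0/log_level
+ */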
+
+/**
+ * ipr_store_diagnostics - IOA Diagnostics interface
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will reset the adapter and wait a reasonable
+ * amount of time for any errors that the adapter might log.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_diagnostics(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int rc = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->errors_logged = 0;
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+
+ if (ioa_cfg->in_reset_reload) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ /* Wait for a second for any errors to be logged */
+ schedule_timeout(HZ);
+ } else {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return -EIO;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->in_reset_reload || ioa_cfg->errors_logged)
+ rc = -EIO;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ return rc;
+}
+
+static struct class_device_attribute ipr_diagnostics_attr = {
+ .attr = {
+ .name = "run_diagnostics",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_diagnostics
+};
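+
+/*
+ * Example usage (sysfs path assumed): any write starts the diagnostics;
+ * the write fails with -EIO if the adapter logged errors during the test:
+ *	echo 1 > /sys/class/scsi_host/host0/run_diagnostics
+ */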
+
+/**
+ * ipr_store_reset_adapter - Reset the adapter
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will reset the adapter.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_reset_adapter(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags;
+ int result = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (!ioa_cfg->in_reset_reload)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ return result;
+}
+
+static struct class_device_attribute ipr_ioa_reset_attr = {
+ .attr = {
+ .name = "reset_host",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_reset_adapter
+};
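+
+/*
+ * Example usage (sysfs path assumed): any write resets the adapter:
+ *	echo 1 > /sys/class/scsi_host/host0/reset_host
+ */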
+
+/**
+ * ipr_alloc_ucode_buffer - Allocates a microcode download buffer
+ * @buf_len: buffer length
+ *
+ * Allocates a DMA'able buffer in chunks and assembles a scatter/gather
+ * list to use for microcode download
+ *
+ * Return value:
+ * pointer to sglist / NULL on failure
+ **/
+static struct ipr_sglist *ipr_alloc_ucode_buffer(int buf_len)
+{
+ int sg_size, order, bsize_elem, num_elem, i, j;
+ struct ipr_sglist *sglist;
+ struct scatterlist *scatterlist;
+ struct page *page;
+
+ /* Get the minimum size per scatter/gather element */
+ sg_size = buf_len / (IPR_MAX_SGLIST - 1);
+
+ /* Get the actual size per element */
+ order = get_order(sg_size);
+
+ /* Determine the actual number of bytes per element */
+ bsize_elem = PAGE_SIZE * (1 << order);
+
+ /* Determine the actual number of sg entries needed */
+ if (buf_len % bsize_elem)
+ num_elem = (buf_len / bsize_elem) + 1;
+ else
+ num_elem = buf_len / bsize_elem;
+
+ /* Allocate a scatter/gather list for the DMA */
+ sglist = kmalloc(sizeof(struct ipr_sglist) +
+ (sizeof(struct scatterlist) * (num_elem - 1)),
+ GFP_KERNEL);
+
+ if (sglist == NULL) {
+ ipr_trace;
+ return NULL;
+ }
+
+ memset(sglist, 0, sizeof(struct ipr_sglist) +
+ (sizeof(struct scatterlist) * (num_elem - 1)));
+
+ scatterlist = sglist->scatterlist;
+
+ sglist->order = order;
+ sglist->num_sg = num_elem;
+
+ /* Allocate a bunch of sg elements */
+ for (i = 0; i < num_elem; i++) {
+ page = alloc_pages(GFP_KERNEL, order);
+ if (!page) {
+ ipr_trace;
+
+ /* Free up what we already allocated */
+ for (j = i - 1; j >= 0; j--)
+ __free_pages(scatterlist[j].page, order);
+ kfree(sglist);
+ return NULL;
+ }
+
+ scatterlist[i].page = page;
+ }
+
+ return sglist;
+}
+
+/**
+ * ipr_free_ucode_buffer - Frees a microcode download buffer
+ * @sglist: scatter/gather list pointer
+ *
+ * Free a DMA'able ucode download buffer previously allocated with
+ * ipr_alloc_ucode_buffer
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_free_ucode_buffer(struct ipr_sglist *sglist)
+{
+ int i;
+
+ for (i = 0; i < sglist->num_sg; i++)
+ __free_pages(sglist->scatterlist[i].page, sglist->order);
+
+ kfree(sglist);
+}
+
+/**
+ * ipr_copy_ucode_buffer - Copy user buffer to kernel buffer
+ * @sglist: scatter/gather list pointer
+ * @buffer: buffer pointer
+ * @len: buffer length
+ *
+ * Copy a microcode image from a user buffer into a buffer allocated by
+ * ipr_alloc_ucode_buffer
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
+ u8 *buffer, u32 len)
+{
+ int bsize_elem, i, result = 0;
+ struct scatterlist *scatterlist;
+ void *kaddr;
+
+ /* Determine the actual number of bytes per element */
+ bsize_elem = PAGE_SIZE * (1 << sglist->order);
+
+ scatterlist = sglist->scatterlist;
+
+ for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) {
+ kaddr = kmap(scatterlist[i].page);
+ memcpy(kaddr, buffer, bsize_elem);
+ kunmap(scatterlist[i].page);
+
+ scatterlist[i].length = bsize_elem;
+
+ if (result != 0) {
+ ipr_trace;
+ return result;
+ }
+ }
+
+ if (len % bsize_elem) {
+ kaddr = kmap(scatterlist[i].page);
+ memcpy(kaddr, buffer, len % bsize_elem);
+ kunmap(scatterlist[i].page);
+
+ scatterlist[i].length = len % bsize_elem;
+ }
+
+ sglist->buffer_len = len;
+ return result;
+}
+
+/**
+ * ipr_map_ucode_buffer - Map a microcode download buffer
+ * @ipr_cmd: ipr command struct
+ * @sglist: scatter/gather list
+ * @len: total length of download buffer
+ *
+ * Maps a microcode download scatter/gather list for DMA and
+ * builds the IOADL.
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_map_ucode_buffer(struct ipr_cmnd *ipr_cmd,
+ struct ipr_sglist *sglist, int len)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct scatterlist *scatterlist = sglist->scatterlist;
+ int i;
+
+ ipr_cmd->dma_use_sg = pci_map_sg(ioa_cfg->pdev, scatterlist,
+ sglist->num_sg, DMA_TO_DEVICE);
+
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(len);
+ ioarcb->write_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+
+ for (i = 0; i < ipr_cmd->dma_use_sg; i++) {
+ ioadl[i].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_WRITE | sg_dma_len(&scatterlist[i]));
+ ioadl[i].address =
+ cpu_to_be32(sg_dma_address(&scatterlist[i]));
+ }
+
+ if (likely(ipr_cmd->dma_use_sg)) {
+ ioadl[i-1].flags_and_data_len |=
+ cpu_to_be32(IPR_IOADL_FLAGS_LAST);
+ }
+ else {
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_store_update_fw - Update the firmware on the adapter
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will update the firmware on the adapter.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_update_fw(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_ucode_image_header *image_hdr;
+ const struct firmware *fw_entry;
+ struct ipr_sglist *sglist;
+ unsigned long lock_flags;
+ char fname[100];
+ char *src;
+ int len, result, dnld_size;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ len = snprintf(fname, 99, "%s", buf);
+ fname[len-1] = '\0';
+
+	if (request_firmware(&fw_entry, fname, &ioa_cfg->pdev->dev)) {
+ dev_err(&ioa_cfg->pdev->dev, "Firmware file %s not found\n", fname);
+ return -EIO;
+ }
+
+ image_hdr = (struct ipr_ucode_image_header *)fw_entry->data;
+
+ if (be32_to_cpu(image_hdr->header_length) > fw_entry->size ||
+ (ioa_cfg->vpd_cbs->page3_data.card_type &&
+ ioa_cfg->vpd_cbs->page3_data.card_type != image_hdr->card_type)) {
+ dev_err(&ioa_cfg->pdev->dev, "Invalid microcode buffer\n");
+ release_firmware(fw_entry);
+ return -EINVAL;
+ }
+
+ src = (u8 *)image_hdr + be32_to_cpu(image_hdr->header_length);
+ dnld_size = fw_entry->size - be32_to_cpu(image_hdr->header_length);
+ sglist = ipr_alloc_ucode_buffer(dnld_size);
+
+ if (!sglist) {
+ dev_err(&ioa_cfg->pdev->dev, "Microcode buffer allocation failed\n");
+ release_firmware(fw_entry);
+ return -ENOMEM;
+ }
+
+ result = ipr_copy_ucode_buffer(sglist, src, dnld_size);
+
+ if (result) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Microcode buffer copy to DMA buffer failed\n");
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+ return result;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->ucode_sglist) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ dev_err(&ioa_cfg->pdev->dev,
+ "Microcode download already in progress\n");
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+ return -EIO;
+ }
+
+ ioa_cfg->ucode_sglist = sglist;
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->ucode_sglist = NULL;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+
+ return count;
+}
+
+static struct class_device_attribute ipr_update_fw_attr = {
+ .attr = {
+ .name = "update_fw",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_update_fw
+};
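+
+/*
+ * Example usage (sysfs path and file name assumed): place the microcode
+ * image where the firmware loader can find it, then write its name to
+ * the attribute:
+ *	echo ipr_ucode.img > /sys/class/scsi_host/host0/update_fw
+ */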
+
+static struct class_device_attribute *ipr_ioa_attrs[] = {
+ &ipr_fw_version_attr,
+ &ipr_log_level_attr,
+ &ipr_diagnostics_attr,
+ &ipr_ioa_reset_attr,
+ &ipr_update_fw_attr,
+ NULL,
+};
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+/**
+ * ipr_read_dump - Dump the adapter
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes read from the dump
+ **/
+static ssize_t ipr_read_dump(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+ char *src;
+ int len;
+ size_t rc = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ dump = ioa_cfg->dump;
+
+ if (ioa_cfg->sdt_state != DUMP_OBTAINED || !dump || !kobject_get(&dump->kobj)) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ if (off > dump->driver_dump.hdr.len) {
+ kobject_put(&dump->kobj);
+ return 0;
+ }
+
+ if (off + count > dump->driver_dump.hdr.len) {
+ count = dump->driver_dump.hdr.len - off;
+ rc = count;
+ }
+
+ if (count && off < sizeof(dump->driver_dump)) {
+ if (off + count > sizeof(dump->driver_dump))
+ len = sizeof(dump->driver_dump) - off;
+ else
+ len = count;
+ src = (u8 *)&dump->driver_dump + off;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ off -= sizeof(dump->driver_dump);
+
+ if (count && off < offsetof(struct ipr_ioa_dump, ioa_data)) {
+ if (off + count > offsetof(struct ipr_ioa_dump, ioa_data))
+ len = offsetof(struct ipr_ioa_dump, ioa_data) - off;
+ else
+ len = count;
+ src = (u8 *)&dump->ioa_dump + off;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ off -= offsetof(struct ipr_ioa_dump, ioa_data);
+
+ while (count) {
+ if ((off & PAGE_MASK) != ((off + count) & PAGE_MASK))
+ len = PAGE_ALIGN(off) - off;
+ else
+ len = count;
+ src = (u8 *)dump->ioa_dump.ioa_data[(off & PAGE_MASK) >> PAGE_SHIFT];
+ src += off & ~PAGE_MASK;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ kobject_put(&dump->kobj);
+ return rc;
+}
+
+/**
+ * ipr_release_dump - Free adapter dump memory
+ * @kobj: kobject struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_release_dump(struct kobject *kobj)
+{
+ struct ipr_dump *dump = container_of(kobj,struct ipr_dump,kobj);
+ struct ipr_ioa_cfg *ioa_cfg = dump->ioa_cfg;
+ unsigned long lock_flags = 0;
+ int i;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->dump = NULL;
+ ioa_cfg->sdt_state = INACTIVE;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ for (i = 0; i < dump->ioa_dump.next_page_index; i++)
+ free_page((unsigned long) dump->ioa_dump.ioa_data[i]);
+
+ kfree(dump);
+ LEAVE;
+}
+
+static struct kobj_type ipr_dump_kobj_type = {
+ .release = ipr_release_dump,
+};
+
+/**
+ * ipr_alloc_dump - Prepare for adapter dump
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+ dump = kmalloc(sizeof(struct ipr_dump), GFP_KERNEL);
+
+ if (!dump) {
+ ipr_err("Dump memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ memset(dump, 0, sizeof(struct ipr_dump));
+ kobject_init(&dump->kobj);
+ dump->kobj.ktype = &ipr_dump_kobj_type;
+ dump->ioa_cfg = ioa_cfg;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (INACTIVE != ioa_cfg->sdt_state) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ kfree(dump);
+ return 0;
+ }
+
+ ioa_cfg->dump = dump;
+ ioa_cfg->sdt_state = WAIT_FOR_DUMP;
+ if (ioa_cfg->ioa_is_dead && !ioa_cfg->dump_taken) {
+ ioa_cfg->dump_taken = 1;
+ schedule_work(&ioa_cfg->work_q);
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ LEAVE;
+ return 0;
+}
+
+/**
+ * ipr_free_dump - Free adapter dump memory
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_free_dump(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ dump = ioa_cfg->dump;
+ if (!dump) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+ }
+
+ ioa_cfg->dump = NULL;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ kobject_put(&dump->kobj);
+
+ LEAVE;
+ return 0;
+}
+
+/**
+ * ipr_write_dump - Set up the dump state of the adapter
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_write_dump(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ int rc;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ if (buf[0] == '1')
+ rc = ipr_alloc_dump(ioa_cfg);
+ else if (buf[0] == '0')
+ rc = ipr_free_dump(ioa_cfg);
+ else
+ return -EINVAL;
+
+ if (rc)
+ return rc;
+ else
+ return count;
+}
+
+static struct bin_attribute ipr_dump_attr = {
+ .attr = {
+ .name = "dump",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .size = 0,
+ .read = ipr_read_dump,
+ .write = ipr_write_dump
+};
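+
+/*
+ * Example usage (sysfs path assumed): writing '1' prepares dump memory,
+ * the dump is then read back from the same attribute, and writing '0'
+ * frees it:
+ *	echo 1 > /sys/class/scsi_host/host0/dump
+ *	dd if=/sys/class/scsi_host/host0/dump of=/tmp/ipr_dump bs=4k
+ *	echo 0 > /sys/class/scsi_host/host0/dump
+ */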
+#else
+static int ipr_free_dump(struct ipr_ioa_cfg *ioa_cfg) { return 0; }
+#endif
+
+/**
+ * ipr_store_queue_depth - Change the device's queue depth
+ * @dev: device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes consumed from the buffer
+ **/
+static ssize_t ipr_store_queue_depth(struct device *dev,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ int qdepth = simple_strtoul(buf, NULL, 10);
+ int tagged = 0;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res) {
+ res->qdepth = qdepth;
+
+ if (ipr_is_gscsi(res) && res->tcq_active)
+ tagged = MSG_ORDERED_TAG;
+
+ len = strlen(buf);
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_adjust_queue_depth(sdev, tagged, qdepth);
+ return len;
+}
+
+static struct device_attribute ipr_queue_depth_attr = {
+ .attr = {
+ .name = "queue_depth",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = ipr_store_queue_depth
+};
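+
+/*
+ * Example usage (sysfs path assumed; the device address may differ):
+ *	echo 16 > /sys/bus/scsi/devices/0:0:1:0/queue_depth
+ */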
+
+/**
+ * ipr_show_tcq_enable - Show whether tagged command queuing is enabled for the device
+ * @dev: device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_tcq_enable(struct device *dev, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res)
+ len = snprintf(buf, PAGE_SIZE, "%d\n", res->tcq_active);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+/**
+ * ipr_store_tcq_enable - Change the device's TCQing state
+ * @dev: device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes consumed from the buffer
+ **/
+static ssize_t ipr_store_tcq_enable(struct device *dev,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ int tcq_active = simple_strtoul(buf, NULL, 10);
+ int qdepth = IPR_MAX_CMD_PER_LUN;
+ int tagged = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+
+ if (res) {
+ res->tcq_active = 0;
+ qdepth = res->qdepth;
+
+ if (ipr_is_gscsi(res) && sdev->tagged_supported) {
+ if (tcq_active) {
+ tagged = MSG_ORDERED_TAG;
+ res->tcq_active = 1;
+ }
+
+ len = strlen(buf);
+ } else if (tcq_active) {
+ len = -EINVAL;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_adjust_queue_depth(sdev, tagged, qdepth);
+ return len;
+}
+
+static struct device_attribute ipr_tcqing_attr = {
+ .attr = {
+ .name = "tcq_enable",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = ipr_store_tcq_enable,
+ .show = ipr_show_tcq_enable
+};
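+
+/*
+ * Example usage (sysfs path assumed): enable tagged command queuing on a
+ * device that supports it:
+ *	echo 1 > /sys/bus/scsi/devices/0:0:1:0/tcq_enable
+ */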
+
+/**
+ * ipr_show_adapter_handle - Show the adapter's resource handle for this device
+ * @dev: device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_adapter_handle(struct device *dev, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res)
+ len = snprintf(buf, PAGE_SIZE, "%08X\n", res->cfgte.res_handle);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+static struct device_attribute ipr_adapter_handle_attr = {
+ .attr = {
+ .name = "adapter_handle",
+ .mode = S_IRUSR,
+ },
+ .show = ipr_show_adapter_handle
+};
+
+static struct device_attribute *ipr_dev_attrs[] = {
+ &ipr_queue_depth_attr,
+ &ipr_tcqing_attr,
+ &ipr_adapter_handle_attr,
+ NULL,
+};
+
+/**
+ * ipr_biosparam - Return the HSC mapping
+ * @sdev: scsi device struct
+ * @block_device: block device pointer
+ * @capacity: capacity of the device
+ * @parm: Array containing returned HSC values.
+ *
+ * This function generates the HSC parms that fdisk uses.
+ * We want to make sure we return something that places partitions
+ * on 4k boundaries for best performance with the IOA.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_biosparam(struct scsi_device *sdev,
+ struct block_device *block_device,
+ sector_t capacity, int *parm)
+{
+ int heads, sectors, cylinders;
+
+ heads = 128;
+ sectors = 32;
+
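+	/*
+	 * 128 heads * 32 sectors = 4096 sectors (2MB) per cylinder, so
+	 * cylinder-aligned partitions always begin on a 4k boundary.
+	 */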
+ cylinders = capacity;
+ sector_div(cylinders, (128 * 32));
+
+ /* return result */
+ parm[0] = heads;
+ parm[1] = sectors;
+ parm[2] = cylinders;
+
+ return 0;
+}
+
+/**
+ * ipr_slave_destroy - Unconfigure a SCSI device
+ * @sdev: scsi device struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_slave_destroy(struct scsi_device *sdev)
+{
+ struct ipr_resource_entry *res;
+ struct ipr_ioa_cfg *ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *) sdev->hostdata;
+ if (res) {
+ sdev->hostdata = NULL;
+ res->sdev = NULL;
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+}
+
+/**
+ * ipr_slave_configure - Configure a SCSI device
+ * @sdev: scsi device struct
+ *
+ * This function configures the specified scsi device.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_slave_configure(struct scsi_device *sdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = sdev->hostdata;
+ if (res) {
+ if (ipr_is_af_dasd_device(res))
+ sdev->type = TYPE_RAID;
+ if (ipr_is_af_dasd_device(res) || ipr_is_ioa_resource(res))
+ sdev->scsi_level = 4;
+ if (ipr_is_vset_device(res))
+ sdev->timeout = IPR_VSET_RW_TIMEOUT;
+
+ sdev->allow_restart = 1;
+ scsi_adjust_queue_depth(sdev, 0, res->qdepth);
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+}
+
+/**
+ * ipr_slave_alloc - Prepare for commands to a device.
+ * @sdev: scsi device struct
+ *
+ * This function saves a pointer to the resource entry
+ * in the scsi device struct if the device exists. We
+ * can then use this pointer in ipr_queuecommand when
+ * handling new commands.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_slave_alloc(struct scsi_device *sdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags;
+
+ sdev->hostdata = NULL;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if ((res->cfgte.res_addr.bus == sdev->channel) &&
+ (res->cfgte.res_addr.target == sdev->id) &&
+ (res->cfgte.res_addr.lun == sdev->lun)) {
+ res->sdev = sdev;
+ res->add_to_ml = 0;
+ sdev->hostdata = res;
+ res->needs_sync_complete = 1;
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ return 0;
+}
+
+/**
+ * ipr_eh_host_reset - Reset the host adapter
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_host_reset(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ int rc;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter being reset as a result of error recovery.\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ rc = ipr_reset_reload(ioa_cfg, IPR_SHUTDOWN_ABBREV);
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_eh_dev_reset - Reset the device
+ * @scsi_cmd: scsi command struct
+ *
+ * This function issues a device reset to the affected device.
+ * A LUN reset will be sent to the device first. If that does
+ * not work, a target reset will be sent.
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_dev_reset(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_cmd_pkt *cmd_pkt;
+ u32 ioasc;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+
+ if (!res || (!ipr_is_gscsi(res) && !ipr_is_vset_device(res)))
+ return FAILED;
+
+ /*
+ * If we are currently going through reset/reload, return failed. This will force the
+ * mid-layer to call ipr_eh_host_reset, which will then go to sleep and wait for the
+ * reset to complete
+ */
+ if (ioa_cfg->in_reset_reload)
+ return FAILED;
+ if (ioa_cfg->ioa_is_dead)
+ return FAILED;
+
+ list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
+ if (ipr_cmd->ioarcb.res_handle == res->cfgte.res_handle) {
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+ }
+ }
+
+ res->resetting_device = 1;
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+
+ ipr_cmd->ioarcb.res_handle = res->cfgte.res_handle;
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_RESET_DEVICE;
+
+ ipr_sdev_err(scsi_cmd->device, "Resetting device\n");
+ ipr_send_blocking_cmd(ipr_cmd, ipr_timeout, IPR_DEVICE_RESET_TIMEOUT);
+
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ res->resetting_device = 0;
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ LEAVE;
+ return (IPR_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS);
+}
+
+/**
+ * ipr_bus_reset_done - Op done function for bus reset.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for a bus reset
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_bus_reset_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res;
+
+ ENTER;
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!memcmp(&res->cfgte.res_handle, &ipr_cmd->ioarcb.res_handle,
+ sizeof(res->cfgte.res_handle))) {
+ scsi_report_bus_reset(ioa_cfg->host, res->cfgte.res_addr.bus);
+ break;
+ }
+ }
+
+ /*
+ * If abort has not completed, indicate the reset has, else call the
+ * abort's done function to wake the sleeping eh thread
+ */
+ if (ipr_cmd->u.sibling->u.sibling)
+ ipr_cmd->u.sibling->u.sibling = NULL;
+ else
+ ipr_cmd->u.sibling->done(ipr_cmd->u.sibling);
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ LEAVE;
+}
+
+/**
+ * ipr_abort_timeout - An abort task has timed out
+ * @ipr_cmd: ipr command struct
+ *
+ * This function handles when an abort task times out. If this
+ * happens we issue a bus reset since we have resources tied
+ * up that must be freed before returning to the midlayer.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_abort_timeout(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_cmnd *reset_cmd;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_cmd_pkt *cmd_pkt;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ipr_cmd->completion.done || ioa_cfg->in_reset_reload) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ ipr_sdev_err(ipr_cmd->u.sdev, "Abort timed out. Resetting bus\n");
+ reset_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ipr_cmd->u.sibling = reset_cmd;
+ reset_cmd->u.sibling = ipr_cmd;
+ reset_cmd->ioarcb.res_handle = ipr_cmd->ioarcb.res_handle;
+ cmd_pkt = &reset_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_RESET_DEVICE;
+ cmd_pkt->cdb[2] = IPR_RESET_TYPE_SELECT | IPR_BUS_RESET;
+
+ ipr_do_req(reset_cmd, ipr_bus_reset_done, ipr_timeout, IPR_DEVICE_RESET_TIMEOUT);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+/**
+ * ipr_cancel_op - Cancel specified op
+ * @scsi_cmd: scsi command struct
+ *
+ * This function cancels the specified op.
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_cancel_op(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_cmd_pkt *cmd_pkt;
+ u32 ioasc, ioarcb_addr;
+ int op_found = 0;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *)scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+
+ if (!res || (!ipr_is_gscsi(res) && !ipr_is_vset_device(res)))
+ return FAILED;
+
+ list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
+ if (ipr_cmd->scsi_cmd == scsi_cmd) {
+ ipr_cmd->done = ipr_scsi_eh_done;
+ op_found = 1;
+ break;
+ }
+ }
+
+ if (!op_found)
+ return SUCCESS;
+
+ ioarcb_addr = be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr);
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ipr_cmd->ioarcb.res_handle = res->cfgte.res_handle;
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_ABORT_TASK;
+ cmd_pkt->cdb[2] = (ioarcb_addr >> 24) & 0xff;
+ cmd_pkt->cdb[3] = (ioarcb_addr >> 16) & 0xff;
+ cmd_pkt->cdb[4] = (ioarcb_addr >> 8) & 0xff;
+ cmd_pkt->cdb[5] = ioarcb_addr & 0xff;
+ ipr_cmd->u.sdev = scsi_cmd->device;
+
+ ipr_sdev_err(scsi_cmd->device, "Aborting command: %02X\n", scsi_cmd->cmnd[0]);
+ ipr_send_blocking_cmd(ipr_cmd, ipr_abort_timeout, IPR_ABORT_TASK_TIMEOUT);
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ /*
+ * If the abort task timed out and we sent a bus reset, we will get
+	 * one of the following responses to the abort
+ */
+ if (ioasc == IPR_IOASC_BUS_WAS_RESET || ioasc == IPR_IOASC_SYNC_REQUIRED) {
+ ioasc = 0;
+ ipr_trace;
+ }
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ res->needs_sync_complete = 1;
+
+ LEAVE;
+ return (IPR_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS);
+}
+
+/**
+ * ipr_eh_abort - Abort a single op
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_abort(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+
+	/*
+	 * If we are currently going through reset/reload, return failed. This
+	 * will force the mid-layer to call ipr_eh_host_reset, which will then
+	 * go to sleep and wait for the reset to complete
+	 */
+ if (ioa_cfg->in_reset_reload)
+ return FAILED;
+ if (ioa_cfg->ioa_is_dead)
+ return FAILED;
+ if (!scsi_cmd->device->hostdata)
+ return FAILED;
+
+ LEAVE;
+ return ipr_cancel_op(scsi_cmd);
+}
+
+/**
+ * ipr_handle_other_interrupt - Handle "other" interrupts
+ * @ioa_cfg: ioa config struct
+ * @int_reg: interrupt register
+ *
+ * Return value:
+ * IRQ_NONE / IRQ_HANDLED
+ **/
+static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg,
+ volatile u32 int_reg)
+{
+ irqreturn_t rc = IRQ_HANDLED;
+
+ if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) {
+ /* Mask the interrupt */
+ writel(IPR_PCII_IOA_TRANS_TO_OPER, ioa_cfg->regs.set_interrupt_mask_reg);
+
+ /* Clear the interrupt */
+ writel(IPR_PCII_IOA_TRANS_TO_OPER, ioa_cfg->regs.clr_interrupt_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
+ list_del(&ioa_cfg->reset_cmd->queue);
+ del_timer(&ioa_cfg->reset_cmd->timer);
+ ipr_reset_ioa_job(ioa_cfg->reset_cmd);
+ } else {
+ if (int_reg & IPR_PCII_IOA_UNIT_CHECKED)
+ ioa_cfg->ioa_unit_checked = 1;
+ else
+ dev_err(&ioa_cfg->pdev->dev,
+ "Permanent IOA failure. 0x%08X\n", int_reg);
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_isr - Interrupt service routine
+ * @irq: irq number
+ * @devp: pointer to ioa config struct
+ * @regs: pt_regs struct
+ *
+ * Return value:
+ * IRQ_NONE / IRQ_HANDLED
+ **/
+static irqreturn_t ipr_isr(int irq, void *devp, struct pt_regs *regs)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp;
+ unsigned long lock_flags = 0;
+ volatile u32 int_reg, int_mask_reg;
+ u32 ioasc;
+ u16 cmd_index;
+ struct ipr_cmnd *ipr_cmd;
+ irqreturn_t rc = IRQ_NONE;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ /* If interrupts are disabled, ignore the interrupt */
+ if (!ioa_cfg->allow_interrupts) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_NONE;
+ }
+
+ int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
+
+ /* If an interrupt on the adapter did not occur, ignore it */
+ if (unlikely((int_reg & IPR_PCII_OPER_INTERRUPTS) == 0)) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_NONE;
+ }
+
+ while (1) {
+ ipr_cmd = NULL;
+
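+		/*
+		 * Process the host RRQ until we run out of new responses: an
+		 * entry is new while its toggle bit matches ours, and the bit
+		 * we expect flips each time the queue wraps.
+		 */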
+ while ((be32_to_cpu(*ioa_cfg->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ ioa_cfg->toggle_bit) {
+
+ cmd_index = (be32_to_cpu(*ioa_cfg->hrrq_curr) &
+ IPR_HRRQ_REQ_RESP_HANDLE_MASK) >> IPR_HRRQ_REQ_RESP_HANDLE_SHIFT;
+
+ if (unlikely(cmd_index >= IPR_NUM_CMD_BLKS)) {
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev, "Invalid response handle from IOA\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_HANDLED;
+ }
+
+ ipr_cmd = ioa_cfg->ipr_cmnd_list[cmd_index];
+
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, ioasc);
+
+ list_del(&ipr_cmd->queue);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->done(ipr_cmd);
+
+ rc = IRQ_HANDLED;
+
+ if (ioa_cfg->hrrq_curr < ioa_cfg->hrrq_end) {
+ ioa_cfg->hrrq_curr++;
+ } else {
+ ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
+ ioa_cfg->toggle_bit ^= 1u;
+ }
+ }
+
+ if (ipr_cmd != NULL) {
+ /* Clear the PCI interrupt */
+ writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
+ } else
+ break;
+ }
+
+ if (unlikely(rc == IRQ_NONE))
+ rc = ipr_handle_other_interrupt(ioa_cfg, int_reg);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return rc;
+}
+
+/**
+ * ipr_build_ioadl - Build a scatter/gather list and map the buffer
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * 0 on success / -1 on failure
+ **/
+static int ipr_build_ioadl(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ int i;
+ struct scatterlist *sglist;
+ u32 length;
+ u32 ioadl_flags = 0;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+
+ length = scsi_cmd->request_bufflen;
+
+ if (length == 0)
+ return 0;
+
+ if (scsi_cmd->use_sg) {
+ ipr_cmd->dma_use_sg = pci_map_sg(ioa_cfg->pdev,
+ scsi_cmd->request_buffer,
+ scsi_cmd->use_sg,
+ scsi_cmd->sc_data_direction);
+
+ if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_WRITE;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(length);
+ ioarcb->write_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+ } else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_READ;
+ ioarcb->read_data_transfer_length = cpu_to_be32(length);
+ ioarcb->read_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+ }
+
+ sglist = scsi_cmd->request_buffer;
+
+ for (i = 0; i < ipr_cmd->dma_use_sg; i++) {
+ ioadl[i].flags_and_data_len =
+ cpu_to_be32(ioadl_flags | sg_dma_len(&sglist[i]));
+ ioadl[i].address =
+ cpu_to_be32(sg_dma_address(&sglist[i]));
+ }
+
+ if (likely(ipr_cmd->dma_use_sg)) {
+ ioadl[i-1].flags_and_data_len |=
+ cpu_to_be32(IPR_IOADL_FLAGS_LAST);
+ return 0;
+ } else
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
+ } else {
+ if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_WRITE;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(length);
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ } else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_READ;
+ ioarcb->read_data_transfer_length = cpu_to_be32(length);
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ }
+
+ ipr_cmd->dma_handle = pci_map_single(ioa_cfg->pdev,
+ scsi_cmd->request_buffer, length,
+ scsi_cmd->sc_data_direction);
+
+ if (likely(!pci_dma_mapping_error(ipr_cmd->dma_handle))) {
+ ipr_cmd->dma_use_sg = 1;
+ ioadl[0].flags_and_data_len =
+ cpu_to_be32(ioadl_flags | length | IPR_IOADL_FLAGS_LAST);
+ ioadl[0].address = cpu_to_be32(ipr_cmd->dma_handle);
+ return 0;
+ } else
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_single failed!\n");
+ }
+
+ return -1;
+}
+
+/**
+ * ipr_get_task_attributes - Translate SPI Q-Tag to task attributes
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * task attributes
+ **/
+static u8 ipr_get_task_attributes(struct scsi_cmnd *scsi_cmd)
+{
+ u8 tag[2];
+ u8 rc = IPR_FLAGS_LO_UNTAGGED_TASK;
+
+ if (scsi_populate_tag_msg(scsi_cmd, tag)) {
+ switch (tag[0]) {
+ case MSG_SIMPLE_TAG:
+ rc = IPR_FLAGS_LO_SIMPLE_TASK;
+ break;
+ case MSG_HEAD_TAG:
+ rc = IPR_FLAGS_LO_HEAD_OF_Q_TASK;
+ break;
+ case MSG_ORDERED_TAG:
+ rc = IPR_FLAGS_LO_ORDERED_TASK;
+ break;
+		}
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_erp_done - Process completion of ERP for a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function copies the sense buffer into the scsi_cmd
+ * struct and pushes the scsi_done function.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (IPR_IOASC_SENSE_KEY(ioasc) > 0) {
+ scsi_cmd->result |= (DID_ERROR << 16);
+ ipr_sdev_err(scsi_cmd->device,
+ "Request Sense failed with IOASC: 0x%08X\n", ioasc);
+ } else {
+ memcpy(scsi_cmd->sense_buffer, ipr_cmd->sense_buffer,
+ SCSI_SENSE_BUFFERSIZE);
+ }
+
+ if (res)
+ res->needs_sync_complete = 1;
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+}
+
+/**
+ * ipr_reinit_ipr_cmnd_for_erp - Re-initialize a cmnd block to be used for ERP
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reinit_ipr_cmnd_for_erp(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioarcb *ioarcb;
+ struct ipr_ioasa *ioasa;
+
+ ioarcb = &ipr_cmd->ioarcb;
+ ioasa = &ipr_cmd->ioasa;
+
+ memset(&ioarcb->cmd_pkt, 0, sizeof(struct ipr_cmd_pkt));
+ ioarcb->write_data_transfer_length = 0;
+ ioarcb->read_data_transfer_length = 0;
+ ioarcb->write_ioadl_len = 0;
+ ioarcb->read_ioadl_len = 0;
+ ioasa->ioasc = 0;
+ ioasa->residual_data_len = 0;
+}
+
+/**
+ * ipr_erp_request_sense - Send request sense to a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a request sense to a device as a result
+ * of a check condition.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_request_sense(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_cmd_pkt *cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+
+ ipr_reinit_ipr_cmnd_for_erp(ipr_cmd);
+
+ cmd_pkt->request_type = IPR_RQTYPE_SCSICDB;
+ cmd_pkt->cdb[0] = REQUEST_SENSE;
+ cmd_pkt->cdb[4] = SCSI_SENSE_BUFFERSIZE;
+ cmd_pkt->flags_hi |= IPR_FLAGS_HI_SYNC_OVERRIDE;
+ cmd_pkt->flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
+ cmd_pkt->timeout = cpu_to_be16(IPR_REQUEST_SENSE_TIMEOUT / HZ);
+
+ ipr_cmd->ioadl[0].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | SCSI_SENSE_BUFFERSIZE);
+ ipr_cmd->ioadl[0].address =
+ cpu_to_be32(ipr_cmd->sense_buffer_dma);
+
+ ipr_cmd->ioarcb.read_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ipr_cmd->ioarcb.read_data_transfer_length =
+ cpu_to_be32(SCSI_SENSE_BUFFERSIZE);
+
+ ipr_do_req(ipr_cmd, ipr_erp_done, ipr_timeout,
+ IPR_REQUEST_SENSE_TIMEOUT * 2);
+}
+
+/**
+ * ipr_erp_cancel_all - Send cancel all to a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a cancel all to a device to clear the
+ * queue. If we are running TCQ on the device, QERR is set to 1,
+ * which means all outstanding ops have been dropped on the floor.
+ * Cancel all will return them to us.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_cancel_all(struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ struct ipr_cmd_pkt *cmd_pkt;
+
+ res->in_erp = 1;
+
+ ipr_reinit_ipr_cmnd_for_erp(ipr_cmd);
+
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_CANCEL_ALL_REQUESTS;
+
+ ipr_do_req(ipr_cmd, ipr_erp_request_sense, ipr_timeout,
+ IPR_CANCEL_ALL_TIMEOUT);
+}
+
+/**
+ * ipr_dump_ioasa - Dump contents of IOASA
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler when ops
+ * fail. It will log the IOASA if appropriate. Only called
+ * for GPDD ops.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_dump_ioasa(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ int i;
+ u16 data_len;
+ u32 ioasc;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+ u32 *ioasa_data = (u32 *)ioasa;
+ int error_index;
+
+ ioasc = be32_to_cpu(ioasa->ioasc) & IPR_IOASC_IOASC_MASK;
+
+ if (0 == ioasc)
+ return;
+
+ if (ioa_cfg->log_level < IPR_DEFAULT_LOG_LEVEL)
+ return;
+
+ error_index = ipr_get_error(ioasc);
+
+ if (ioa_cfg->log_level < IPR_MAX_LOG_LEVEL) {
+ /* Don't log an error if the IOA already logged one */
+ if (ioasa->ilid != 0)
+ return;
+
+ if (ipr_error_table[error_index].log_ioasa == 0)
+ return;
+ }
+
+ ipr_sdev_err(ipr_cmd->scsi_cmd->device, "%s\n",
+ ipr_error_table[error_index].error);
+
+ if ((ioasa->u.gpdd.end_state < ARRAY_SIZE(ipr_gpdd_dev_end_states)) &&
+ (ioasa->u.gpdd.bus_phase < ARRAY_SIZE(ipr_gpdd_dev_bus_phases))) {
+ ipr_sdev_err(ipr_cmd->scsi_cmd->device,
+ "Device End state: %s Phase: %s\n",
+ ipr_gpdd_dev_end_states[ioasa->u.gpdd.end_state],
+ ipr_gpdd_dev_bus_phases[ioasa->u.gpdd.bus_phase]);
+ }
+
+ if (sizeof(struct ipr_ioasa) < be16_to_cpu(ioasa->ret_stat_len))
+ data_len = sizeof(struct ipr_ioasa);
+ else
+ data_len = be16_to_cpu(ioasa->ret_stat_len);
+
+ ipr_err("IOASA Dump:\n");
+
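+ /* Dump 16 bytes per line, prefixed with the byte offset. */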
+ for (i = 0; i < data_len / 4; i += 4) {
+ ipr_err("%08X: %08X %08X %08X %08X\n", i*4,
+ be32_to_cpu(ioasa_data[i]),
+ be32_to_cpu(ioasa_data[i+1]),
+ be32_to_cpu(ioasa_data[i+2]),
+ be32_to_cpu(ioasa_data[i+3]));
+ }
+}
+
+/**
+ * ipr_gen_sense - Generate SCSI sense data from an IOASA
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_gen_sense(struct ipr_cmnd *ipr_cmd)
+{
+ u32 failing_lba;
+ u8 *sense_buf = ipr_cmd->scsi_cmd->sense_buffer;
+ struct ipr_resource_entry *res = ipr_cmd->scsi_cmd->device->hostdata;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+ u32 ioasc = be32_to_cpu(ioasa->ioasc);
+
+ memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE);
+
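+ /* Driver-generated IOASCs carry no device sense data to report. */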
+ if (ioasc >= IPR_FIRST_DRIVER_IOASC)
+ return;
+
+ ipr_cmd->scsi_cmd->result = SAM_STAT_CHECK_CONDITION;
+
+ if (ipr_is_vset_device(res) &&
+ ioasc == IPR_IOASC_MED_DO_NOT_REALLOC &&
+ ioasa->u.vset.failing_lba_hi != 0) {
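+ /*
+ * A vset LBA wider than 32 bits does not fit the fixed-format
+ * information field, so build descriptor-format sense (response
+ * code 0x72): bytes 8-19 form an information descriptor holding
+ * the 64-bit failing LBA.
+ */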
+ sense_buf[0] = 0x72;
+ sense_buf[1] = IPR_IOASC_SENSE_KEY(ioasc);
+ sense_buf[2] = IPR_IOASC_SENSE_CODE(ioasc);
+ sense_buf[3] = IPR_IOASC_SENSE_QUAL(ioasc);
+
+ sense_buf[7] = 12;
+ sense_buf[8] = 0;
+ sense_buf[9] = 0x0A;
+ sense_buf[10] = 0x80;
+
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_hi);
+
+ sense_buf[12] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[13] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[14] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[15] = failing_lba & 0x000000ff;
+
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_lo);
+
+ sense_buf[16] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[17] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[18] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[19] = failing_lba & 0x000000ff;
+ } else {
+ sense_buf[0] = 0x70;
+ sense_buf[2] = IPR_IOASC_SENSE_KEY(ioasc);
+ sense_buf[12] = IPR_IOASC_SENSE_CODE(ioasc);
+ sense_buf[13] = IPR_IOASC_SENSE_QUAL(ioasc);
+
+ /* Illegal request */
+ if ((IPR_IOASC_SENSE_KEY(ioasc) == 0x05) &&
+ (be32_to_cpu(ioasa->ioasc_specific) & IPR_FIELD_POINTER_VALID)) {
+ sense_buf[7] = 10; /* additional length */
+
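+ /*
+ * Bytes 15-17 are the sense key specific field pointer: bit 7
+ * of byte 15 is SKSV, bit 6 (C/D) selects CDB vs parameter
+ * data, and bytes 16-17 give the offset of the bad field.
+ */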
+ /* IOARCB was in error */
+ if (IPR_IOASC_SENSE_CODE(ioasc) == 0x24)
+ sense_buf[15] = 0xC0;
+ else /* Parameter data was invalid */
+ sense_buf[15] = 0x80;
+
+ sense_buf[16] =
+ ((IPR_FIELD_POINTER_MASK &
+ be32_to_cpu(ioasa->ioasc_specific)) >> 8) & 0xff;
+ sense_buf[17] =
+ (IPR_FIELD_POINTER_MASK &
+ be32_to_cpu(ioasa->ioasc_specific)) & 0xff;
+ } else {
+ if (ioasc == IPR_IOASC_MED_DO_NOT_REALLOC) {
+ if (ipr_is_vset_device(res))
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_lo);
+ else
+ failing_lba = be32_to_cpu(ioasa->u.dasd.failing_lba);
+
+ sense_buf[0] |= 0x80; /* Or in the Valid bit */
+ sense_buf[3] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[4] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[5] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[6] = failing_lba & 0x000000ff;
+ }
+
+ sense_buf[7] = 6; /* additional length */
+ }
+ }
+}
+
+/**
+ * ipr_erp_start - Process an error response for a SCSI op
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * This function determines whether or not to initiate ERP
+ * on the affected device.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_start(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (!res) {
+ ipr_scsi_eh_done(ipr_cmd);
+ return;
+ }
+
+ if (ipr_is_gscsi(res))
+ ipr_dump_ioasa(ioa_cfg, ipr_cmd);
+ else
+ ipr_gen_sense(ipr_cmd);
+
+ switch (ioasc & IPR_IOASC_IOASC_MASK) {
+ case IPR_IOASC_ABORTED_CMD_TERM_BY_HOST:
+ scsi_cmd->result |= (DID_ERROR << 16);
+ break;
+ case IPR_IOASC_IR_RESOURCE_HANDLE:
+ scsi_cmd->result |= (DID_NO_CONNECT << 16);
+ break;
+ case IPR_IOASC_HW_SEL_TIMEOUT:
+ scsi_cmd->result |= (DID_NO_CONNECT << 16);
+ res->needs_sync_complete = 1;
+ break;
+ case IPR_IOASC_SYNC_REQUIRED:
+ if (!res->in_erp)
+ res->needs_sync_complete = 1;
+ scsi_cmd->result |= (DID_IMM_RETRY << 16);
+ break;
+ case IPR_IOASC_MED_DO_NOT_REALLOC: /* prevent retries */
+ scsi_cmd->result |= (DID_PASSTHROUGH << 16);
+ break;
+ case IPR_IOASC_BUS_WAS_RESET:
+ case IPR_IOASC_BUS_WAS_RESET_BY_OTHER:
+ /*
+ * Report the bus reset and ask for a retry. The device
+ * will give CC/UA the next command.
+ */
+ if (!res->resetting_device)
+ scsi_report_bus_reset(ioa_cfg->host, scsi_cmd->device->channel);
+ scsi_cmd->result |= (DID_ERROR << 16);
+ res->needs_sync_complete = 1;
+ break;
+ case IPR_IOASC_HW_DEV_BUS_STATUS:
+ scsi_cmd->result |= IPR_IOASC_SENSE_STATUS(ioasc);
+ if (IPR_IOASC_SENSE_STATUS(ioasc) == SAM_STAT_CHECK_CONDITION) {
+ ipr_erp_cancel_all(ipr_cmd);
+ return;
+ }
+ break;
+ case IPR_IOASC_NR_INIT_CMD_REQUIRED:
+ break;
+ default:
+ scsi_cmd->result |= (DID_ERROR << 16);
+ if (!ipr_is_vset_device(res))
+ res->needs_sync_complete = 1;
+ break;
+ }
+
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+}
+
+/**
+ * ipr_scsi_done - mid-layer done function
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler for
+ * ops generated by the SCSI mid-layer
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ scsi_cmd->resid = be32_to_cpu(ipr_cmd->ioasa.residual_data_len);
+
+ if (likely(IPR_IOASC_SENSE_KEY(ioasc) == 0)) {
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+ } else
+ ipr_erp_start(ioa_cfg, ipr_cmd);
+}
+
+/**
+ * ipr_save_ioafp_mode_select - Save adapter's mode select data
+ * @ioa_cfg: ioa config struct
+ * @scsi_cmd: scsi command struct
+ *
+ * This function saves mode select data for the adapter to
+ * use following an adapter reset.
+ *
+ * Return value:
+ * 0 on success / SCSI_MLQUEUE_HOST_BUSY on failure
+ **/
+static int ipr_save_ioafp_mode_select(struct ipr_ioa_cfg *ioa_cfg,
+ struct scsi_cmnd *scsi_cmd)
+{
+ if (!ioa_cfg->saved_mode_pages) {
+ ioa_cfg->saved_mode_pages = kmalloc(sizeof(struct ipr_mode_pages),
+ GFP_ATOMIC);
+ if (!ioa_cfg->saved_mode_pages) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA mode select buffer allocation failed\n");
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+ }
+
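+ /* cmnd[4] holds the MODE SELECT(6) parameter list length. */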
+ memcpy(ioa_cfg->saved_mode_pages, scsi_cmd->buffer, scsi_cmd->cmnd[4]);
+ ioa_cfg->saved_mode_page_len = scsi_cmd->cmnd[4];
+ return 0;
+}
+
+/**
+ * ipr_queuecommand - Queue a mid-layer request
+ * @scsi_cmd: scsi command struct
+ * @done: done function
+ *
+ * This function queues a request generated by the mid-layer.
+ *
+ * Return value:
+ * 0 on success
+ * SCSI_MLQUEUE_DEVICE_BUSY if device is busy
+ * SCSI_MLQUEUE_HOST_BUSY if host is busy
+ **/
+static int ipr_queuecommand(struct scsi_cmnd *scsi_cmd,
+ void (*done) (struct scsi_cmnd *))
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_ioarcb *ioarcb;
+ struct ipr_cmnd *ipr_cmd;
+ int rc = 0;
+
+ scsi_cmd->scsi_done = done;
+ ioa_cfg = (struct ipr_ioa_cfg *)scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+ scsi_cmd->result = (DID_OK << 16);
+
+ /*
+ * We are currently blocking all devices due to a host reset.
+ * We have told the host to stop giving us new requests, but
+ * ERP ops don't count. FIXME
+ */
+ if (unlikely(!ioa_cfg->allow_cmds))
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ /*
+ * FIXME - Create scsi_set_host_offline interface
+ * and the ioa_is_dead check can be removed
+ */
+ if (unlikely(ioa_cfg->ioa_is_dead || !res)) {
+ memset(scsi_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ scsi_cmd->result = (DID_NO_CONNECT << 16);
+ scsi_cmd->scsi_done(scsi_cmd);
+ return 0;
+ }
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ioarcb = &ipr_cmd->ioarcb;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ memcpy(ioarcb->cmd_pkt.cdb, scsi_cmd->cmnd, scsi_cmd->cmd_len);
+ ipr_cmd->scsi_cmd = scsi_cmd;
+ ioarcb->res_handle = res->cfgte.res_handle;
+ ipr_cmd->done = ipr_scsi_done;
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_GET_PHYS_LOC(res->cfgte.res_addr));
+
+ if (ipr_is_gscsi(res) || ipr_is_vset_device(res)) {
+ if (scsi_cmd->underflow == 0)
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
+
+ if (res->needs_sync_complete) {
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_SYNC_COMPLETE;
+ res->needs_sync_complete = 0;
+ }
+
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
+ ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_DELAY_AFTER_RST;
+ ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_ALIGNED_BFR;
+ ioarcb->cmd_pkt.flags_lo |= ipr_get_task_attributes(scsi_cmd);
+ }
+
+ if (!ipr_is_gscsi(res) && scsi_cmd->cmnd[0] >= 0xC0)
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+
+ if (ipr_is_ioa_resource(res) && scsi_cmd->cmnd[0] == MODE_SELECT)
+ rc = ipr_save_ioafp_mode_select(ioa_cfg, scsi_cmd);
+
+ if (likely(rc == 0))
+ rc = ipr_build_ioadl(ioa_cfg, ipr_cmd);
+
+ if (likely(rc == 0)) {
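+ /*
+ * The barrier orders the IOARCB updates ahead of the IOARRIN
+ * doorbell write so the adapter never sees a stale command block.
+ */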
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+ } else {
+ list_move_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_ioa_info - Get information about the card/driver
+ * @host: scsi host struct
+ *
+ * Return value:
+ * pointer to buffer with description string
+ **/
+static const char * ipr_ioa_info(struct Scsi_Host *host)
+{
+ static char buffer[512];
+ struct ipr_ioa_cfg *ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ ioa_cfg = (struct ipr_ioa_cfg *) host->hostdata;
+
+ spin_lock_irqsave(host->host_lock, lock_flags);
+ sprintf(buffer, "IBM %X Storage Adapter", ioa_cfg->type);
+ spin_unlock_irqrestore(host->host_lock, lock_flags);
+
+ return buffer;
+}
+
+static struct scsi_host_template driver_template = {
+ .module = THIS_MODULE,
+ .name = "IPR",
+ .info = ipr_ioa_info,
+ .queuecommand = ipr_queuecommand,
+ .eh_abort_handler = ipr_eh_abort,
+ .eh_device_reset_handler = ipr_eh_dev_reset,
+ .eh_host_reset_handler = ipr_eh_host_reset,
+ .slave_alloc = ipr_slave_alloc,
+ .slave_configure = ipr_slave_configure,
+ .slave_destroy = ipr_slave_destroy,
+ .bios_param = ipr_biosparam,
+ .can_queue = IPR_MAX_COMMANDS,
+ .this_id = -1,
+ .sg_tablesize = IPR_MAX_SGLIST,
+ .max_sectors = IPR_MAX_SECTORS,
+ .cmd_per_lun = IPR_MAX_CMD_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = ipr_ioa_attrs,
+ .sdev_attrs = ipr_dev_attrs,
+ .proc_name = IPR_NAME
+};
+
+#ifdef CONFIG_PPC_PSERIES
+static const u16 ipr_blocked_processors[] = {
+ PV_NORTHSTAR,
+ PV_PULSAR,
+ PV_POWER4,
+ PV_ICESTAR,
+ PV_SSTAR,
+ PV_POWER4p,
+ PV_630,
+ PV_630p
+};
+
+/**
+ * ipr_invalid_adapter - Determine if this adapter is supported on this hardware
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Adapters that use Gemstone revision < 3.1 do not work reliably on
+ * certain pSeries hardware. This function determines if the given
+ * adapter is in one of these configurations or not.
+ *
+ * Return value:
+ * 1 if adapter is not supported / 0 if adapter is supported
+ **/
+static int ipr_invalid_adapter(struct ipr_ioa_cfg *ioa_cfg)
+{
+ u8 rev_id;
+ int i;
+
+ if (ioa_cfg->type == 0x5702) {
+ if (pci_read_config_byte(ioa_cfg->pdev, PCI_REVISION_ID,
+ &rev_id) == PCIBIOS_SUCCESSFUL) {
+ if (rev_id < 4) {
+ for (i = 0; i < ARRAY_SIZE(ipr_blocked_processors); i++){
+ if (__is_processor(ipr_blocked_processors[i]))
+ return 1;
+ }
+ }
+ }
+ }
+ return 0;
+}
+#else
+#define ipr_invalid_adapter(ioa_cfg) 0
+#endif
+
+/**
+ * ipr_ioa_bringdown_done - IOA bring down completion.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function processes the completion of an adapter bring down.
+ * It wakes any reset sleepers.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioa_bringdown_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ioa_cfg->in_reset_reload = 0;
+ ioa_cfg->reset_retries = 0;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+ LEAVE;
+
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioa_reset_done - IOA reset completion.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function processes the completion of an adapter reset.
+ * It schedules any necessary mid-layer add/removes and
+ * wakes any reset sleepers.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioa_reset_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_hostrcb *hostrcb, *temp;
+ int i = 0;
+
+ ENTER;
+ ioa_cfg->in_reset_reload = 0;
+ ioa_cfg->allow_cmds = 1;
+ ioa_cfg->reset_cmd = NULL;
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (ioa_cfg->allow_ml_add_del && (res->add_to_ml || res->del_from_ml)) {
+ ipr_trace;
+ schedule_work(&ioa_cfg->work_q);
+ break;
+ }
+ }
+
+ list_for_each_entry_safe(hostrcb, temp, &ioa_cfg->hostrcb_free_q, queue) {
+ list_del(&hostrcb->queue);
+ if (i++ < IPR_NUM_LOG_HCAMS)
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb);
+ else
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+ }
+
+ dev_info(&ioa_cfg->pdev->dev, "IOA initialized.\n");
+
+ ioa_cfg->reset_retries = 0;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+
+ if (!ioa_cfg->allow_cmds)
+ scsi_block_requests(ioa_cfg->host);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_set_sup_dev_dflt - Initialize a Set Supported Device buffer
+ * @supported_dev: supported device struct
+ * @vpids: vendor product id struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_set_sup_dev_dflt(struct ipr_supported_device *supported_dev,
+ struct ipr_std_inq_vpids *vpids)
+{
+ memset(supported_dev, 0, sizeof(struct ipr_supported_device));
+ memcpy(&supported_dev->vpids, vpids, sizeof(struct ipr_std_inq_vpids));
+ supported_dev->num_records = 1;
+ supported_dev->data_length =
+ cpu_to_be16(sizeof(struct ipr_supported_device));
+ supported_dev->reserved = 0;
+}
+
+/**
+ * ipr_set_supported_devs - Send Set Supported Devices for a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Set Supported Devices to the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_set_supported_devs(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_supported_device *supp_dev = &ioa_cfg->vpd_cbs->supp_dev;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_resource_entry *res = ipr_cmd->u.res;
+
+ ipr_cmd->job_step = ipr_ioa_reset_done;
+
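+ /*
+ * One Set Supported Devices op is issued per AF DASD. Each
+ * completion re-enters this job step, and u.res remembers where
+ * the scan stopped so list_for_each_entry_continue can resume.
+ */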
+ list_for_each_entry_continue(res, &ioa_cfg->used_res_q, queue) {
+ if (!ipr_is_af_dasd_device(res))
+ continue;
+
+ ipr_cmd->u.res = res;
+ ipr_set_sup_dev_dflt(supp_dev, &res->cfgte.std_inq_data.vpids);
+
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+
+ ioarcb->cmd_pkt.cdb[0] = IPR_SET_SUPPORTED_DEVICES;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(struct ipr_supported_device) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(struct ipr_supported_device) & 0xff;
+
+ ioadl->flags_and_data_len = cpu_to_be32(IPR_IOADL_FLAGS_WRITE_LAST |
+ sizeof(struct ipr_supported_device));
+ ioadl->address = cpu_to_be32(ioa_cfg->vpd_cbs_dma +
+ offsetof(struct ipr_misc_cbs, supp_dev));
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->write_data_transfer_length =
+ cpu_to_be32(sizeof(struct ipr_supported_device));
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
+ IPR_SET_SUP_DEVICE_TIMEOUT);
+
+ ipr_cmd->job_step = ipr_set_supported_devs;
+ return IPR_RC_JOB_RETURN;
+ }
+
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_get_mode_page - Locate specified mode page
+ * @mode_pages: mode page buffer
+ * @page_code: page code to find
+ * @len: minimum required length for mode page
+ *
+ * Return value:
+ * pointer to mode page / NULL on failure
+ **/
+static void *ipr_get_mode_page(struct ipr_mode_pages *mode_pages,
+ u32 page_code, u32 len)
+{
+ struct ipr_mode_page_hdr *mode_hdr;
+ u32 page_length;
+ u32 length;
+
+ if (!mode_pages || (mode_pages->hdr.length == 0))
+ return NULL;
+
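+ /*
+ * hdr.length excludes itself, so length + 1 is the total mode
+ * data length; subtracting the 4-byte mode parameter header and
+ * any block descriptors leaves the bytes of mode page data.
+ */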
+ length = (mode_pages->hdr.length + 1) - 4 - mode_pages->hdr.block_desc_len;
+ mode_hdr = (struct ipr_mode_page_hdr *)
+ (mode_pages->data + mode_pages->hdr.block_desc_len);
+
+ while (length) {
+ if (IPR_GET_MODE_PAGE_CODE(mode_hdr) == page_code) {
+ if (mode_hdr->page_length >= (len - sizeof(struct ipr_mode_page_hdr)))
+ return mode_hdr;
+ break;
+ } else {
+ page_length = (sizeof(struct ipr_mode_page_hdr) +
+ mode_hdr->page_length);
+ length -= page_length;
+ mode_hdr = (struct ipr_mode_page_hdr *)
+ ((unsigned long)mode_hdr + page_length);
+ }
+ }
+ return NULL;
+}
+
+/**
+ * ipr_check_term_power - Check for term power errors
+ * @ioa_cfg: ioa config struct
+ * @mode_pages: IOAFP mode pages buffer
+ *
+ * Check the IOAFP's mode page 28 for term power errors
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_check_term_power(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_mode_pages *mode_pages)
+{
+ int i;
+ int entry_length;
+ struct ipr_dev_bus_entry *bus;
+ struct ipr_mode_page28 *mode_page;
+
+ mode_page = ipr_get_mode_page(mode_pages, 0x28,
+ sizeof(struct ipr_mode_page28));
+
+ entry_length = mode_page->entry_length;
+
+ bus = mode_page->bus;
+
+ for (i = 0; i < mode_page->num_entries; i++) {
+ if (bus->flags & IPR_SCSI_ATTR_NO_TERM_PWR) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Term power is absent on scsi bus %d\n",
+ bus->res_addr.bus);
+ }
+
+ bus = (struct ipr_dev_bus_entry *)((char *)bus + entry_length);
+ }
+}
+
+/**
+ * ipr_scsi_bus_speed_limit - Limit the SCSI speed based on SES table
+ * @ioa_cfg: ioa config struct
+ *
+ * Looks through the config table checking for SES devices. If
+ * an SES device is found in the SES table with a maximum SCSI
+ * bus speed, the bus speed is limited accordingly.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_bus_speed_limit(struct ipr_ioa_cfg *ioa_cfg)
+{
+ u32 max_xfer_rate;
+ int i;
+
+ for (i = 0; i < IPR_MAX_NUM_BUSES; i++) {
+ max_xfer_rate = ipr_get_max_scsi_speed(ioa_cfg, i,
+ ioa_cfg->bus_attr[i].bus_width);
+
+ if (max_xfer_rate < ioa_cfg->bus_attr[i].max_xfer_rate)
+ ioa_cfg->bus_attr[i].max_xfer_rate = max_xfer_rate;
+ }
+}
+
+/**
+ * ipr_modify_ioafp_mode_page_28 - Modify IOAFP Mode Page 28
+ * @ioa_cfg: ioa config struct
+ * @mode_pages: mode page 28 buffer
+ *
+ * Updates mode page 28 based on driver configuration
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_modify_ioafp_mode_page_28(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_mode_pages *mode_pages)
+{
+ int i, entry_length;
+ struct ipr_dev_bus_entry *bus;
+ struct ipr_bus_attributes *bus_attr;
+ struct ipr_mode_page28 *mode_page;
+
+ mode_page = ipr_get_mode_page(mode_pages, 0x28,
+ sizeof(struct ipr_mode_page28));
+
+ entry_length = mode_page->entry_length;
+
+ /* Loop for each device bus entry */
+ for (i = 0, bus = mode_page->bus;
+ i < mode_page->num_entries;
+ i++, bus = (struct ipr_dev_bus_entry *)((u8 *)bus + entry_length)) {
+ if (bus->res_addr.bus > IPR_MAX_NUM_BUSES) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Invalid resource address reported: 0x%08X\n",
+ IPR_GET_PHYS_LOC(bus->res_addr));
+ continue;
+ }
+
+ bus_attr = &ioa_cfg->bus_attr[i];
+ bus->extended_reset_delay = IPR_EXTENDED_RESET_DELAY;
+ bus->bus_width = bus_attr->bus_width;
+ bus->max_xfer_rate = cpu_to_be32(bus_attr->max_xfer_rate);
+ bus->flags &= ~IPR_SCSI_ATTR_QAS_MASK;
+ if (bus_attr->qas_enabled)
+ bus->flags |= IPR_SCSI_ATTR_ENABLE_QAS;
+ else
+ bus->flags |= IPR_SCSI_ATTR_DISABLE_QAS;
+ }
+}
+
+/**
+ * ipr_build_mode_select - Build a mode select command
+ * @ipr_cmd: ipr command struct
+ * @res_handle: resource handle to send command to
+ * @parm: Byte 2 of Mode Select command
+ * @dma_addr: DMA buffer address
+ * @xfer_len: data transfer length
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_build_mode_select(struct ipr_cmnd *ipr_cmd,
+ u32 res_handle, u8 parm, u32 dma_addr,
+ u8 xfer_len)
+{
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = res_handle;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->cmd_pkt.cdb[0] = MODE_SELECT;
+ ioarcb->cmd_pkt.cdb[1] = parm;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_WRITE_LAST | xfer_len);
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->write_data_transfer_length = cpu_to_be32(xfer_len);
+}
+
+/**
+ * ipr_ioafp_mode_select_page28 - Issue Mode Select Page 28 to IOA
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sets up the SCSI bus attributes and sends
+ * a Mode Select for Page 28 to activate them.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_mode_select_page28(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_mode_pages *mode_pages = &ioa_cfg->vpd_cbs->mode_pages;
+ int length;
+
+ ENTER;
+ if (ioa_cfg->saved_mode_pages) {
+ memcpy(mode_pages, ioa_cfg->saved_mode_pages,
+ ioa_cfg->saved_mode_page_len);
+ length = ioa_cfg->saved_mode_page_len;
+ } else {
+ ipr_scsi_bus_speed_limit(ioa_cfg);
+ ipr_check_term_power(ioa_cfg, mode_pages);
+ ipr_modify_ioafp_mode_page_28(ioa_cfg, mode_pages);
+ length = mode_pages->hdr.length + 1;
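+ /* The mode data length field is reserved (zero) for MODE SELECT. */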
+ mode_pages->hdr.length = 0;
+ }
+
+ ipr_build_mode_select(ipr_cmd, cpu_to_be32(IPR_IOA_RES_HANDLE), 0x11,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, mode_pages),
+ length);
+
+ ipr_cmd->job_step = ipr_set_supported_devs;
+ ipr_cmd->u.res = list_entry(ioa_cfg->used_res_q.next,
+ struct ipr_resource_entry, queue);
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_build_mode_sense - Builds a mode sense command
+ * @ipr_cmd: ipr command struct
+ * @res_handle: resource handle to send command to
+ * @parm: Byte 2 of mode sense command
+ * @dma_addr: DMA address of mode sense buffer
+ * @xfer_len: Size of DMA buffer
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_build_mode_sense(struct ipr_cmnd *ipr_cmd,
+ u32 res_handle,
+ u8 parm, u32 dma_addr, u8 xfer_len)
+{
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = res_handle;
+ ioarcb->cmd_pkt.cdb[0] = MODE_SENSE;
+ ioarcb->cmd_pkt.cdb[2] = parm;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | xfer_len);
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length = cpu_to_be32(xfer_len);
+}
+
+/**
+ * ipr_ioafp_mode_sense_page28 - Issue Mode Sense Page 28 to IOA
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Page 28 mode sense to the IOA to
+ * retrieve SCSI bus attributes.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_mode_sense_page28(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ipr_build_mode_sense(ipr_cmd, cpu_to_be32(IPR_IOA_RES_HANDLE),
+ 0x28, ioa_cfg->vpd_cbs_dma +
+ offsetof(struct ipr_misc_cbs, mode_pages),
+ sizeof(struct ipr_mode_pages));
+
+ ipr_cmd->job_step = ipr_ioafp_mode_select_page28;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_init_res_table - Initialize the resource table
+ * @ipr_cmd: ipr command struct
+ *
+ * This function looks through the existing resource table, comparing
+ * it with the config table. This function will take care of old/new
+ * devices and schedule adding/removing them from the mid-layer
+ * as appropriate.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_init_res_table(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res, *temp;
+ struct ipr_config_table_entry *cfgte;
+ int found, i;
+ LIST_HEAD(old_res);
+
+ ENTER;
+ if (ioa_cfg->cfg_table->hdr.flags & IPR_UCODE_DOWNLOAD_REQ)
+ dev_err(&ioa_cfg->pdev->dev, "Microcode download required\n");
+
+ list_for_each_entry_safe(res, temp, &ioa_cfg->used_res_q, queue)
+ list_move_tail(&res->queue, &old_res);
+
+ for (i = 0; i < ioa_cfg->cfg_table->hdr.num_entries; i++) {
+ cfgte = &ioa_cfg->cfg_table->dev[i];
+ found = 0;
+
+ list_for_each_entry_safe(res, temp, &old_res, queue) {
+ if (!memcmp(&res->cfgte.res_addr,
+ &cfgte->res_addr, sizeof(cfgte->res_addr))) {
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ found = 1;
+ break;
+ }
+ }
+
+ if (!found) {
+ if (list_empty(&ioa_cfg->free_res_q)) {
+ dev_err(&ioa_cfg->pdev->dev, "Too many devices attached\n");
+ break;
+ }
+
+ found = 1;
+ res = list_entry(ioa_cfg->free_res_q.next,
+ struct ipr_resource_entry, queue);
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ ipr_init_res_entry(res);
+ res->add_to_ml = 1;
+ }
+
+ if (found)
+ memcpy(&res->cfgte, cfgte, sizeof(struct ipr_config_table_entry));
+ }
+
+ list_for_each_entry_safe(res, temp, &old_res, queue) {
+ if (res->sdev) {
+ res->del_from_ml = 1;
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ } else {
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ }
+ }
+
+ ipr_cmd->job_step = ipr_ioafp_mode_sense_page28;
+
+ LEAVE;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_ioafp_query_ioa_cfg - Send a Query IOA Config to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Query IOA Configuration command
+ * to the adapter to retrieve the IOA configuration table.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_query_ioa_cfg(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+
+ ENTER;
+ dev_info(&ioa_cfg->pdev->dev, "Adapter firmware version: %02X%02X%02X%02X\n",
+ ucode_vpd->major_release, ucode_vpd->card_type,
+ ucode_vpd->minor_release[0], ucode_vpd->minor_release[1]);
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
+ ioarcb->cmd_pkt.cdb[0] = IPR_QUERY_IOA_CONFIG;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(struct ipr_config_table) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(struct ipr_config_table) & 0xff;
+
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length =
+ cpu_to_be32(sizeof(struct ipr_config_table));
+
+ ioadl->address = cpu_to_be32(ioa_cfg->cfg_table_dma);
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | sizeof(struct ipr_config_table));
+
+ ipr_cmd->job_step = ipr_init_res_table;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_inquiry - Send an Inquiry to the adapter.
+ * @ipr_cmd: ipr command struct
+ * @flags: CDB flags byte
+ * @page: inquiry page code
+ * @dma_addr: DMA address of the inquiry buffer
+ * @xfer_len: transfer length
+ *
+ * This utility function sends an inquiry to the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_ioafp_inquiry(struct ipr_cmnd *ipr_cmd, u8 flags, u8 page,
+ u32 dma_addr, u8 xfer_len)
+{
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+
+ ENTER;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
+ ioarcb->cmd_pkt.cdb[0] = INQUIRY;
+ ioarcb->cmd_pkt.cdb[1] = flags;
+ ioarcb->cmd_pkt.cdb[2] = page;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length = cpu_to_be32(xfer_len);
+
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | xfer_len);
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+ LEAVE;
+}
+
+/**
+ * ipr_ioafp_page3_inquiry - Send a Page 3 Inquiry to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Page 3 inquiry to the adapter
+ * to retrieve software VPD information.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_page3_inquiry(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ char type[5];
+
+ ENTER;
+
+ /* Grab the type out of the VPD and store it away */
+ memcpy(type, ioa_cfg->vpd_cbs->ioa_vpd.std_inq_data.vpids.product_id, 4);
+ type[4] = '\0';
+ ioa_cfg->type = simple_strtoul((char *)type, NULL, 16);
+
+ ipr_cmd->job_step = ipr_ioafp_query_ioa_cfg;
+
+ ipr_ioafp_inquiry(ipr_cmd, 1, 3,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, page3_data),
+ sizeof(struct ipr_inquiry_page3));
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_std_inquiry - Send a Standard Inquiry to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a standard inquiry to the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_std_inquiry(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_ioafp_page3_inquiry;
+
+ ipr_ioafp_inquiry(ipr_cmd, 0, 0,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, ioa_vpd),
+ sizeof(struct ipr_ioa_vpd));
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_indentify_hrrq - Send Identify Host RRQ.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends an Identify Host Request Response Queue
+ * command to establish the HRRQ with the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_indentify_hrrq(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ENTER;
+ dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n");
+
+ ioarcb->cmd_pkt.cdb[0] = IPR_ID_HOST_RR_Q;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
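+ /*
+ * The CDB carries the host RRQ's PCI address big-endian in bytes
+ * 2-5 and its length in bytes 7-8; each command block gets one
+ * u32 response slot in the queue.
+ */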
+ ioarcb->cmd_pkt.cdb[2] =
+ ((u32) ioa_cfg->host_rrq_dma >> 24) & 0xff;
+ ioarcb->cmd_pkt.cdb[3] =
+ ((u32) ioa_cfg->host_rrq_dma >> 16) & 0xff;
+ ioarcb->cmd_pkt.cdb[4] =
+ ((u32) ioa_cfg->host_rrq_dma >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[5] =
+ ((u32) ioa_cfg->host_rrq_dma) & 0xff;
+ ioarcb->cmd_pkt.cdb[7] =
+ ((sizeof(u32) * IPR_NUM_CMD_BLKS) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] =
+ (sizeof(u32) * IPR_NUM_CMD_BLKS) & 0xff;
+
+ ipr_cmd->job_step = ipr_ioafp_std_inquiry;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_timer_done - Adapter reset timer function
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function is used in adapter reset processing
+ * for timing events. If the reset_cmd pointer in the IOA
+ * config struct does not point to this command block, we are
+ * doing nested resets and fail_all_ops will take care of
+ * freeing the command block.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_timer_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->reset_cmd == ipr_cmd) {
+ list_del(&ipr_cmd->queue);
+ ipr_cmd->done(ipr_cmd);
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+}
+
+/**
+ * ipr_reset_start_timer - Start a timer for adapter reset job
+ * @ipr_cmd: ipr command struct
+ * @timeout: timeout value
+ *
+ * Description: This function is used in adapter reset processing
+ * for timing events. If the reset_cmd pointer in the IOA
+ * config struct does not point to this command block, we are
+ * doing nested resets and fail_all_ops will take care of
+ * freeing the command block.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_start_timer(struct ipr_cmnd *ipr_cmd,
+ unsigned long timeout)
+{
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->ioa_cfg->pending_q);
+ ipr_cmd->done = ipr_reset_ioa_job;
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + timeout;
+ ipr_cmd->timer.function = (void (*)(unsigned long))ipr_reset_timer_done;
+ add_timer(&ipr_cmd->timer);
+}
+
+/**
+ * ipr_init_ioa_mem - Initialize ioa_cfg control block
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_init_ioa_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ memset(ioa_cfg->host_rrq, 0, sizeof(u32) * IPR_NUM_CMD_BLKS);
+
+ /* Initialize Host RRQ pointers */
+ ioa_cfg->hrrq_start = ioa_cfg->host_rrq;
+ ioa_cfg->hrrq_end = &ioa_cfg->host_rrq[IPR_NUM_CMD_BLKS - 1];
+ ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
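+ /*
+ * The toggle bit flips on every wrap of the circular RRQ so the
+ * host can tell fresh entries from stale ones.
+ */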
+ ioa_cfg->toggle_bit = 1;
+
+ /* Zero out config table */
+ memset(ioa_cfg->cfg_table, 0, sizeof(struct ipr_config_table));
+}
+
+/**
+ * ipr_reset_enable_ioa - Enable the IOA following a reset.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function reinitializes some control blocks and
+ * enables destructive diagnostics on the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_enable_ioa(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ volatile u32 int_reg;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_ioafp_indentify_hrrq;
+ ipr_init_ioa_mem(ioa_cfg);
+
+ ioa_cfg->allow_interrupts = 1;
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
+ if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) {
+ writel((IPR_PCII_ERROR_INTERRUPTS | IPR_PCII_HRRQ_UPDATED),
+ ioa_cfg->regs.clr_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ /* Enable destructive diagnostics on IOA */
+ writel(IPR_DOORBELL, ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ writel(IPR_PCII_OPER_INTERRUPTS, ioa_cfg->regs.clr_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+
+ dev_info(&ioa_cfg->pdev->dev, "Initializing IOA.\n");
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + IPR_OPERATIONAL_TIMEOUT;
+ ipr_cmd->timer.function = (void (*)(unsigned long))ipr_timeout;
+ add_timer(&ipr_cmd->timer);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_wait_for_dump - Wait for a dump to timeout.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked when an adapter dump has run out
+ * of processing time.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_reset_wait_for_dump(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ if (ioa_cfg->sdt_state == GET_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_unit_check_no_data - Log a unit check/no data error log
+ * @ioa_cfg: ioa config struct
+ *
+ * Logs an error indicating the adapter unit checked, but for some
+ * reason, we were unable to fetch the unit check buffer.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_unit_check_no_data(struct ipr_ioa_cfg *ioa_cfg)
+{
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev, "IOA unit check with no data\n");
+}
+
+/**
+ * ipr_get_unit_check_buffer - Get the unit check buffer from the IOA
+ * @ioa_cfg: ioa config struct
+ *
+ * Fetches the unit check buffer from the adapter by clocking the data
+ * through the mailbox register.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_get_unit_check_buffer(struct ipr_ioa_cfg *ioa_cfg)
+{
+ unsigned long mailbox;
+ struct ipr_hostrcb *hostrcb;
+ struct ipr_uc_sdt sdt;
+ int rc, length;
+
+ mailbox = readl(ioa_cfg->ioa_mailbox);
+
+ if (!ipr_sdt_is_fmt2(mailbox)) {
+ ipr_unit_check_no_data(ioa_cfg);
+ return;
+ }
+
+ memset(&sdt, 0, sizeof(struct ipr_uc_sdt));
+ rc = ipr_get_ldump_data_section(ioa_cfg, mailbox, (u32 *) &sdt,
+ (sizeof(struct ipr_uc_sdt)) / sizeof(u32));
+
+ if (rc || (be32_to_cpu(sdt.hdr.state) != IPR_FMT2_SDT_READY_TO_USE) ||
+ !(sdt.entry[0].flags & IPR_SDT_VALID_ENTRY)) {
+ ipr_unit_check_no_data(ioa_cfg);
+ return;
+ }
+
+ /* Find length of the first sdt entry (UC buffer) */
+ length = (be32_to_cpu(sdt.entry[0].end_offset) -
+ be32_to_cpu(sdt.entry[0].bar_str_offset)) & IPR_FMT2_MBX_ADDR_MASK;
+
+ hostrcb = list_entry(ioa_cfg->hostrcb_free_q.next,
+ struct ipr_hostrcb, queue);
+ list_del(&hostrcb->queue);
+ memset(&hostrcb->hcam, 0, sizeof(hostrcb->hcam));
+
+ rc = ipr_get_ldump_data_section(ioa_cfg,
+ be32_to_cpu(sdt.entry[0].bar_str_offset),
+ (u32 *)&hostrcb->hcam,
+ min(length, (int)sizeof(hostrcb->hcam)) / sizeof(u32));
+
+ if (!rc)
+ ipr_handle_log_data(ioa_cfg, hostrcb);
+ else
+ ipr_unit_check_no_data(ioa_cfg);
+
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_free_q);
+}
+
+/**
+ * ipr_reset_restore_cfg_space - Restore PCI config space.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function restores the saved PCI config space of
+ * the adapter, fails all outstanding ops back to the callers, and
+ * fetches the dump/unit check if applicable to this reset.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc;
+
+ ENTER;
+ rc = pci_restore_state(ioa_cfg->pdev, ioa_cfg->pci_cfg_buf);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ if (ipr_set_pcix_cmd_reg(ioa_cfg)) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ ipr_fail_all_ops(ioa_cfg);
+
+ if (ioa_cfg->ioa_unit_checked) {
+ ioa_cfg->ioa_unit_checked = 0;
+ ipr_get_unit_check_buffer(ioa_cfg);
+ ipr_cmd->job_step = ipr_reset_alert;
+ ipr_reset_start_timer(ipr_cmd, 0);
+ return IPR_RC_JOB_RETURN;
+ }
+
+ if (ioa_cfg->in_ioa_bringdown) {
+ ipr_cmd->job_step = ipr_ioa_bringdown_done;
+ } else {
+ ipr_cmd->job_step = ipr_reset_enable_ioa;
+
+ if (GET_DUMP == ioa_cfg->sdt_state) {
+ ipr_reset_start_timer(ipr_cmd, IPR_DUMP_TIMEOUT);
+ ipr_cmd->job_step = ipr_reset_wait_for_dump;
+ schedule_work(&ioa_cfg->work_q);
+ return IPR_RC_JOB_RETURN;
+ }
+ }
+
+ LEAVE;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_reset_start_bist - Run BIST on the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function runs BIST on the adapter, then delays 2 seconds.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_start_bist(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc;
+
+ ENTER;
+ rc = pci_write_config_byte(ioa_cfg->pdev, PCI_BIST, PCI_BIST_START);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ rc = IPR_RC_JOB_CONTINUE;
+ } else {
+ ipr_cmd->job_step = ipr_reset_restore_cfg_space;
+ ipr_reset_start_timer(ipr_cmd, IPR_WAIT_FOR_BIST_TIMEOUT);
+ rc = IPR_RC_JOB_RETURN;
+ }
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_reset_allowed - Query whether or not IOA can be reset
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 if reset not allowed / non-zero if reset is allowed
+ **/
+static int ipr_reset_allowed(struct ipr_ioa_cfg *ioa_cfg)
+{
+ volatile u32 temp_reg;
+
+ temp_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+ return ((temp_reg & IPR_PCII_CRITICAL_OPERATION) == 0);
+}
+
+/**
+ * ipr_reset_wait_to_start_bist - Wait for permission to reset IOA.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function waits for adapter permission to run BIST,
+ * then runs BIST. If the adapter does not give permission after a
+ * reasonable time, we will reset the adapter anyway. Resetting
+ * the adapter without warning it risks losing the persistent
+ * error log on the adapter. If the adapter is
+ * reset while it is writing to the flash on the adapter, the flash
+ * segment will have bad ECC and be zeroed.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_wait_to_start_bist(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc = IPR_RC_JOB_RETURN;
+
+ if (!ipr_reset_allowed(ioa_cfg) && ipr_cmd->u.time_left) {
+ ipr_cmd->u.time_left -= IPR_CHECK_FOR_RESET_TIMEOUT;
+ ipr_reset_start_timer(ipr_cmd, IPR_CHECK_FOR_RESET_TIMEOUT);
+ } else {
+ ipr_cmd->job_step = ipr_reset_start_bist;
+ rc = IPR_RC_JOB_CONTINUE;
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_reset_alert_part2 - Alert the adapter of a pending reset
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function alerts the adapter that it will be reset.
+ * If memory space is not currently enabled, proceed directly
+ * to running BIST on the adapter. The timer must always be started
+ * so we guarantee we do not run BIST from ipr_isr.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_alert(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ u16 cmd_reg;
+ int rc;
+
+ ENTER;
+ rc = pci_read_config_word(ioa_cfg->pdev, PCI_COMMAND, &cmd_reg);
+
+ if ((rc == PCIBIOS_SUCCESSFUL) && (cmd_reg & PCI_COMMAND_MEMORY)) {
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
+ writel(IPR_UPROCI_RESET_ALERT, ioa_cfg->regs.set_uproc_interrupt_reg);
+ ipr_cmd->job_step = ipr_reset_wait_to_start_bist;
+ } else {
+ ipr_cmd->job_step = ipr_reset_start_bist;
+ }
+
+ ipr_cmd->u.time_left = IPR_WAIT_FOR_RESET_TIMEOUT;
+ ipr_reset_start_timer(ipr_cmd, IPR_CHECK_FOR_RESET_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_ucode_download_done - Microcode download completion
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function unmaps the microcode download buffer.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_reset_ucode_download_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_sglist *sglist = ioa_cfg->ucode_sglist;
+
+ pci_unmap_sg(ioa_cfg->pdev, sglist->scatterlist,
+ sglist->num_sg, DMA_TO_DEVICE);
+
+ ipr_cmd->job_step = ipr_reset_alert;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_reset_ucode_download - Download microcode to the adapter
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function checks to see if there is microcode
+ * to download to the adapter. If there is, a download is performed.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_ucode_download(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_sglist *sglist = ioa_cfg->ucode_sglist;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ if (!sglist)
+ return IPR_RC_JOB_CONTINUE;
+
+ ipr_cmd->ioarcb.res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ipr_cmd->ioarcb.cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0] = WRITE_BUFFER;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[1] = IPR_WR_BUF_DOWNLOAD_AND_SAVE;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[6] = (sglist->buffer_len & 0xff0000) >> 16;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[7] = (sglist->buffer_len & 0x00ff00) >> 8;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[8] = sglist->buffer_len & 0x0000ff;
+
+ if (ipr_map_ucode_buffer(ipr_cmd, sglist, sglist->buffer_len)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Failed to map microcode download buffer\n");
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ ipr_cmd->job_step = ipr_reset_ucode_download_done;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
+ IPR_WRITE_BUFFER_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_shutdown_ioa - Shutdown the adapter
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function issues an adapter shutdown of the
+ * specified type to the specified adapter as part of the
+ * adapter reset job.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_shutdown_ioa(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ enum ipr_shutdown_type shutdown_type = ipr_cmd->u.shutdown_type;
+ unsigned long timeout;
+ int rc = IPR_RC_JOB_CONTINUE;
+
+ ENTER;
+ if (shutdown_type != IPR_SHUTDOWN_NONE && !ioa_cfg->ioa_is_dead) {
+ ipr_cmd->ioarcb.res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ipr_cmd->ioarcb.cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0] = IPR_IOA_SHUTDOWN;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[1] = shutdown_type;
+
+ if (shutdown_type == IPR_SHUTDOWN_ABBREV)
+ timeout = IPR_ABBREV_SHUTDOWN_TIMEOUT;
+ else if (shutdown_type == IPR_SHUTDOWN_PREPARE_FOR_NORMAL)
+ timeout = IPR_INTERNAL_TIMEOUT;
+ else
+ timeout = IPR_SHUTDOWN_TIMEOUT;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, timeout);
+
+ rc = IPR_RC_JOB_RETURN;
+ ipr_cmd->job_step = ipr_reset_ucode_download;
+ } else
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_reset_ioa_job - Adapter reset job
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function is the job router for the adapter reset job.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_ioa_job(struct ipr_cmnd *ipr_cmd)
+{
+ u32 rc, ioasc;
+ unsigned long scratch = ipr_cmd->u.scratch;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
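+ /*
+ * Walk the reset state machine: each job_step either queues
+ * asynchronous work and returns IPR_RC_JOB_RETURN, or completes
+ * synchronously with IPR_RC_JOB_CONTINUE so the next step runs
+ * immediately in this loop.
+ */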
+ do {
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (ioa_cfg->reset_cmd != ipr_cmd) {
+ /*
+ * We are doing nested adapter resets and this is
+ * not the current reset job.
+ */
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return;
+ }
+
+ if (IPR_IOASC_SENSE_KEY(ioasc)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "0x%02X failed with IOASC: 0x%08X\n",
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0], ioasc);
+
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return;
+ }
+
+ ipr_reinit_ipr_cmnd(ipr_cmd);
+ ipr_cmd->u.scratch = scratch;
+ rc = ipr_cmd->job_step(ipr_cmd);
+ } while(rc == IPR_RC_JOB_CONTINUE);
+}
+
+/**
+ * _ipr_initiate_ioa_reset - Initiate an adapter reset
+ * @ioa_cfg: ioa config struct
+ * @job_step: first job step of reset job
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate the reset of the given adapter
+ * starting at the selected job step.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void _ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
+ int (*job_step) (struct ipr_cmnd *),
+ enum ipr_shutdown_type shutdown_type)
+{
+ struct ipr_cmnd *ipr_cmd;
+
+ ioa_cfg->in_reset_reload = 1;
+ ioa_cfg->allow_cmds = 0;
+ scsi_block_requests(ioa_cfg->host);
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ioa_cfg->reset_cmd = ipr_cmd;
+ ipr_cmd->job_step = job_step;
+ ipr_cmd->u.shutdown_type = shutdown_type;
+
+ ipr_reset_ioa_job(ipr_cmd);
+}
+
+/**
+ * ipr_initiate_ioa_reset - Initiate an adapter reset
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate the reset of the given adapter.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ if (ioa_cfg->ioa_is_dead)
+ return;
+
+ if (ioa_cfg->in_reset_reload && ioa_cfg->sdt_state == GET_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+
+ if (ioa_cfg->reset_retries++ > IPR_NUM_RESET_RELOAD_RETRIES) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA taken offline - error recovery failed\n");
+
+ ioa_cfg->reset_retries = 0;
+ ioa_cfg->ioa_is_dead = 1;
+
+ if (ioa_cfg->in_ioa_bringdown) {
+ ioa_cfg->reset_cmd = NULL;
+ ioa_cfg->in_reset_reload = 0;
+ ipr_fail_all_ops(ioa_cfg);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+ return;
+ } else {
+ ioa_cfg->in_ioa_bringdown = 1;
+ shutdown_type = IPR_SHUTDOWN_NONE;
+ }
+ }
+
+ _ipr_initiate_ioa_reset(ioa_cfg, ipr_reset_shutdown_ioa,
+ shutdown_type);
+}
+
+/**
+ * ipr_probe_ioa_part2 - Initializes IOAs found in ipr_probe_ioa(..)
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Description: This is the second phase of adapter initialization.
+ * This function takes care of initializing the adapter to the point
+ * where it can accept new commands.
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int __devinit ipr_probe_ioa_part2(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int rc = 0;
+ unsigned long host_lock_flags = 0;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+ dev_dbg(&ioa_cfg->pdev->dev, "ioa_cfg adx: 0x%p\n", ioa_cfg);
+ _ipr_initiate_ioa_reset(ioa_cfg, ipr_reset_enable_ioa, IPR_SHUTDOWN_NONE);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+
+ if (ioa_cfg->ioa_is_dead) {
+ rc = -EIO;
+ } else if (ipr_invalid_adapter(ioa_cfg)) {
+ if (!ipr_testmode)
+ rc = -EIO;
+
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter not supported in this hardware configuration.\n");
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_free_cmd_blks - Frees command blocks allocated for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_free_cmd_blks(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ for (i = 0; i < IPR_NUM_CMD_BLKS; i++) {
+ if (ioa_cfg->ipr_cmnd_list[i])
+ pci_pool_free(ioa_cfg->ipr_cmd_pool,
+ ioa_cfg->ipr_cmnd_list[i],
+ ioa_cfg->ipr_cmnd_list_dma[i]);
+
+ ioa_cfg->ipr_cmnd_list[i] = NULL;
+ }
+
+ if (ioa_cfg->ipr_cmd_pool)
+ pci_pool_destroy (ioa_cfg->ipr_cmd_pool);
+
+ ioa_cfg->ipr_cmd_pool = NULL;
+}
+
+/**
+ * ipr_free_mem - Frees memory allocated for an adapter
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_free_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ kfree(ioa_cfg->res_entries);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(struct ipr_misc_cbs),
+ ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma);
+ ipr_free_cmd_blks(ioa_cfg);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(u32) * IPR_NUM_CMD_BLKS,
+ ioa_cfg->host_rrq, ioa_cfg->host_rrq_dma);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(struct ipr_config_table),
+ ioa_cfg->cfg_table,
+ ioa_cfg->cfg_table_dma);
+
+ for (i = 0; i < IPR_NUM_HCAMS; i++) {
+ pci_free_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_hostrcb),
+ ioa_cfg->hostrcb[i],
+ ioa_cfg->hostrcb_dma[i]);
+ }
+
+ ipr_free_dump(ioa_cfg);
+ kfree(ioa_cfg->saved_mode_pages);
+ kfree(ioa_cfg->trace);
+}
+
+/**
+ * ipr_free_all_resources - Free all allocated resources for an adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function frees all allocated resources for the
+ * specified adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_free_all_resources(struct ipr_ioa_cfg *ioa_cfg)
+{
+ ENTER;
+ free_irq(ioa_cfg->pdev->irq, ioa_cfg);
+ iounmap((void *) ioa_cfg->hdw_dma_regs);
+ release_mem_region(ioa_cfg->hdw_dma_regs_pci,
+ pci_resource_len(ioa_cfg->pdev, 0));
+ ipr_free_mem(ioa_cfg);
+ scsi_host_put(ioa_cfg->host);
+ LEAVE;
+}
+
+/**
+ * ipr_alloc_cmd_blks - Allocate command blocks for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -ENOMEM on allocation failure
+ **/
+static int __devinit ipr_alloc_cmd_blks(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioarcb *ioarcb;
+ u32 dma_addr;
+ int i;
+
+ ioa_cfg->ipr_cmd_pool = pci_pool_create (IPR_NAME, ioa_cfg->pdev,
+ sizeof(struct ipr_cmnd), 8, 0);
+
+ if (!ioa_cfg->ipr_cmd_pool)
+ return -ENOMEM;
+
+ for (i = 0; i < IPR_NUM_CMD_BLKS; i++) {
+ ipr_cmd = pci_pool_alloc (ioa_cfg->ipr_cmd_pool, SLAB_KERNEL, &dma_addr);
+
+ if (!ipr_cmd) {
+ ipr_free_cmd_blks(ioa_cfg);
+ return -ENOMEM;
+ }
+
+ memset(ipr_cmd, 0, sizeof(*ipr_cmd));
+ ioa_cfg->ipr_cmnd_list[i] = ipr_cmd;
+ ioa_cfg->ipr_cmnd_list_dma[i] = dma_addr;
+
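+ /*
+ * The IOARCB, IOADL, IOASA, and sense buffer are all embedded in
+ * the command block, so their bus addresses are fixed offsets
+ * from the block's DMA handle.
+ */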
+ ioarcb = &ipr_cmd->ioarcb;
+ ioarcb->ioarcb_host_pci_addr = cpu_to_be32(dma_addr);
+ ioarcb->host_response_handle = cpu_to_be32(i << 2);
+ ioarcb->write_ioadl_addr =
+ cpu_to_be32(dma_addr + offsetof(struct ipr_cmnd, ioadl));
+ ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr;
+ ioarcb->ioasa_host_pci_addr =
+ cpu_to_be32(dma_addr + offsetof(struct ipr_cmnd, ioasa));
+ ioarcb->ioasa_len = cpu_to_be16(sizeof(struct ipr_ioasa));
+ ipr_cmd->cmd_index = i;
+ ipr_cmd->ioa_cfg = ioa_cfg;
+ ipr_cmd->sense_buffer_dma = dma_addr +
+ offsetof(struct ipr_cmnd, sense_buffer);
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_alloc_mem - Allocate memory for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / non-zero for error
+ **/
+static int __devinit ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ ENTER;
+ ioa_cfg->res_entries = kmalloc(sizeof(struct ipr_resource_entry) *
+ IPR_MAX_PHYSICAL_DEVS, GFP_KERNEL);
+
+ if (!ioa_cfg->res_entries)
+ goto cleanup;
+
+ memset(ioa_cfg->res_entries, 0,
+ sizeof(struct ipr_resource_entry) * IPR_MAX_PHYSICAL_DEVS);
+
+ for (i = 0; i < IPR_MAX_PHYSICAL_DEVS; i++)
+ list_add_tail(&ioa_cfg->res_entries[i].queue, &ioa_cfg->free_res_q);
+
+ ioa_cfg->vpd_cbs = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_misc_cbs),
+ &ioa_cfg->vpd_cbs_dma);
+
+ if (!ioa_cfg->vpd_cbs)
+ goto cleanup;
+
+ if (ipr_alloc_cmd_blks(ioa_cfg))
+ goto cleanup;
+
+ ioa_cfg->host_rrq = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(u32) * IPR_NUM_CMD_BLKS,
+ &ioa_cfg->host_rrq_dma);
+
+ if (!ioa_cfg->host_rrq)
+ goto cleanup;
+
+ ioa_cfg->cfg_table = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_config_table),
+ &ioa_cfg->cfg_table_dma);
+
+ if (!ioa_cfg->cfg_table)
+ goto cleanup;
+
+ for (i = 0; i < IPR_NUM_HCAMS; i++) {
+ ioa_cfg->hostrcb[i] = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_hostrcb),
+ &ioa_cfg->hostrcb_dma[i]);
+
+ if (!ioa_cfg->hostrcb[i])
+ goto cleanup;
+
+ memset(ioa_cfg->hostrcb[i], 0, sizeof(struct ipr_hostrcb));
+ ioa_cfg->hostrcb[i]->hostrcb_dma =
+ ioa_cfg->hostrcb_dma[i] + offsetof(struct ipr_hostrcb, hcam);
+ list_add_tail(&ioa_cfg->hostrcb[i]->queue, &ioa_cfg->hostrcb_free_q);
+ }
+
+ ioa_cfg->trace = kmalloc(sizeof(struct ipr_trace_entry) *
+ IPR_NUM_TRACE_ENTRIES, GFP_KERNEL);
+
+ if (!ioa_cfg->trace)
+ goto cleanup;
+
+ memset(ioa_cfg->trace, 0,
+ sizeof(struct ipr_trace_entry) * IPR_NUM_TRACE_ENTRIES);
+
+ LEAVE;
+ return 0;
+
+cleanup:
+ ipr_free_mem(ioa_cfg);
+
+ LEAVE;
+ return -ENOMEM;
+}
+
+/**
+ * ipr_initialize_bus_attr - Initialize SCSI bus attributes to default values
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * none
+ **/
+static void __devinit ipr_initialize_bus_attr(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ for (i = 0; i < IPR_MAX_NUM_BUSES; i++) {
+ ioa_cfg->bus_attr[i].bus = i;
+ ioa_cfg->bus_attr[i].qas_enabled = 0;
+ ioa_cfg->bus_attr[i].bus_width = IPR_DEFAULT_BUS_WIDTH;
+ if (ipr_max_speed < ARRAY_SIZE(ipr_max_bus_speeds))
+ ioa_cfg->bus_attr[i].max_xfer_rate = ipr_max_bus_speeds[ipr_max_speed];
+ else
+ ioa_cfg->bus_attr[i].max_xfer_rate = IPR_U160_SCSI_RATE;
+ }
+}
+
+/**
+ * ipr_init_ioa_cfg - Initialize IOA config struct
+ * @ioa_cfg: ioa config struct
+ * @host: scsi host struct
+ * @pdev: PCI dev struct
+ *
+ * Return value:
+ * none
+ **/
+static void __devinit ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
+ struct Scsi_Host *host, struct pci_dev *pdev)
+{
+ ioa_cfg->host = host;
+ ioa_cfg->pdev = pdev;
+ ioa_cfg->log_level = ipr_log_level;
+ sprintf(ioa_cfg->eye_catcher, IPR_EYECATCHER);
+ sprintf(ioa_cfg->trace_start, IPR_TRACE_START_LABEL);
+ sprintf(ioa_cfg->ipr_free_label, IPR_FREEQ_LABEL);
+ sprintf(ioa_cfg->ipr_pending_label, IPR_PENDQ_LABEL);
+ sprintf(ioa_cfg->cfg_table_start, IPR_CFG_TBL_START);
+ sprintf(ioa_cfg->resource_table_label, IPR_RES_TABLE_LABEL);
+ sprintf(ioa_cfg->ipr_hcam_label, IPR_HCAM_LABEL);
+ sprintf(ioa_cfg->ipr_cmd_label, IPR_CMD_LABEL);
+
+ INIT_LIST_HEAD(&ioa_cfg->free_q);
+ INIT_LIST_HEAD(&ioa_cfg->pending_q);
+ INIT_LIST_HEAD(&ioa_cfg->hostrcb_free_q);
+ INIT_LIST_HEAD(&ioa_cfg->hostrcb_pending_q);
+ INIT_LIST_HEAD(&ioa_cfg->free_res_q);
+ INIT_LIST_HEAD(&ioa_cfg->used_res_q);
+ INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread, ioa_cfg);
+ init_waitqueue_head(&ioa_cfg->reset_wait_q);
+ ioa_cfg->sdt_state = INACTIVE;
+
+ ipr_initialize_bus_attr(ioa_cfg);
+
+ host->max_id = IPR_MAX_NUM_TARGETS_PER_BUS;
+ host->max_lun = IPR_MAX_NUM_LUNS_PER_TARGET;
+ host->max_channel = IPR_MAX_BUS_TO_SCAN;
+ host->unique_id = host->host_no;
+ host->max_cmd_len = IPR_MAX_CDB_LEN;
+ pci_set_drvdata(pdev, ioa_cfg);
+
+ memcpy(&ioa_cfg->regs, &ioa_cfg->chip_cfg->regs, sizeof(ioa_cfg->regs));
+
+ ioa_cfg->regs.set_interrupt_mask_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.clr_interrupt_mask_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.sense_interrupt_mask_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.clr_interrupt_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.sense_interrupt_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.ioarrin_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.sense_uproc_interrupt_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.set_uproc_interrupt_reg += ioa_cfg->hdw_dma_regs;
+ ioa_cfg->regs.clr_uproc_interrupt_reg += ioa_cfg->hdw_dma_regs;
+}
+
+/**
+ * ipr_probe_ioa - Allocates memory and does first stage of initialization
+ * @pdev: PCI device struct
+ * @dev_id: PCI device id struct
+ *
+ * Return value:
+ * 0 on success / non-zero on failure
+ **/
+static int __devinit ipr_probe_ioa(struct pci_dev *pdev,
+ const struct pci_device_id *dev_id)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct Scsi_Host *host;
+ unsigned long ipr_regs, ipr_regs_pci;
+	int rc = PCIBIOS_SUCCESSFUL;
+
+ ENTER;
+
+ if ((rc = pci_enable_device(pdev))) {
+ dev_err(&pdev->dev, "Cannot enable adapter\n");
+ return rc;
+ }
+
+ dev_info(&pdev->dev, "Found IOA with IRQ: %d\n", pdev->irq);
+
+ host = scsi_host_alloc(&driver_template, sizeof(*ioa_cfg));
+
+ if (!host) {
+ dev_err(&pdev->dev, "call to scsi_host_alloc failed!\n");
+ return -ENOMEM;
+ }
+
+ ioa_cfg = (struct ipr_ioa_cfg *)host->hostdata;
+ memset(ioa_cfg, 0, sizeof(struct ipr_ioa_cfg));
+
+ ioa_cfg->chip_cfg = (const struct ipr_chip_cfg_t *)dev_id->driver_data;
+
+ ipr_regs_pci = pci_resource_start(pdev, 0);
+
+ if (!request_mem_region(ipr_regs_pci,
+ pci_resource_len(pdev, 0), IPR_NAME)) {
+ dev_err(&pdev->dev,
+ "Couldn't register memory range of registers\n");
+ scsi_host_put(host);
+ return -ENOMEM;
+ }
+
+ ipr_regs = (unsigned long)ioremap(ipr_regs_pci,
+ pci_resource_len(pdev, 0));
+
+ if (!ipr_regs) {
+ dev_err(&pdev->dev,
+ "Couldn't map memory range of registers\n");
+ release_mem_region(ipr_regs_pci, pci_resource_len(pdev, 0));
+ scsi_host_put(host);
+ return -ENOMEM;
+ }
+
+ ioa_cfg->hdw_dma_regs = ipr_regs;
+ ioa_cfg->hdw_dma_regs_pci = ipr_regs_pci;
+ ioa_cfg->ioa_mailbox = ioa_cfg->chip_cfg->mailbox + ipr_regs;
+
+ ipr_init_ioa_cfg(ioa_cfg, host, pdev);
+
+ pci_set_master(pdev);
+ rc = pci_set_dma_mask(pdev, 0xffffffff);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ dev_err(&pdev->dev, "Failed to set PCI DMA mask\n");
+ rc = -EIO;
+ goto cleanup_nomem;
+ }
+
+ rc = pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE,
+ ioa_cfg->chip_cfg->cache_line_size);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ dev_err(&pdev->dev, "Write of cache line size failed\n");
+ rc = -EIO;
+ goto cleanup_nomem;
+ }
+
+ /* Save away PCI config space for use following IOA reset */
+ rc = pci_save_state(pdev, ioa_cfg->pci_cfg_buf);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ dev_err(&pdev->dev, "Failed to save PCI config space\n");
+ rc = -EIO;
+ goto cleanup_nomem;
+ }
+
+ if ((rc = ipr_save_pcix_cmd_reg(ioa_cfg)))
+ goto cleanup_nomem;
+
+ if ((rc = ipr_set_pcix_cmd_reg(ioa_cfg)))
+ goto cleanup_nomem;
+
+ if ((rc = ipr_alloc_mem(ioa_cfg)))
+ goto cleanup;
+
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
+ rc = request_irq(pdev->irq, ipr_isr, SA_SHIRQ, IPR_NAME, ioa_cfg);
+
+ if (rc) {
+ dev_err(&pdev->dev, "Couldn't register IRQ %d! rc=%d\n",
+ pdev->irq, rc);
+ goto cleanup_nolog;
+ }
+
+ spin_lock(&ipr_driver_lock);
+ list_add_tail(&ioa_cfg->queue, &ipr_ioa_head);
+ spin_unlock(&ipr_driver_lock);
+
+ LEAVE;
+ return 0;
+
+cleanup:
+ dev_err(&pdev->dev, "Couldn't allocate enough memory for device driver!\n");
+cleanup_nolog:
+ ipr_free_mem(ioa_cfg);
+cleanup_nomem:
+ iounmap((void *) ipr_regs);
+ release_mem_region(ipr_regs_pci, pci_resource_len(pdev, 0));
+ scsi_host_put(host);
+
+ return rc;
+}
+
+/**
+ * ipr_scan_vsets - Scans for VSET devices
+ * @ioa_cfg: ioa config struct
+ *
+ * Description: VSET resources do not follow SAM: their LUNs can be sparse,
+ * with no LUN 0, so the normal host scan will not find them and we have to
+ * scan for them ourselves.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scan_vsets(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int target, lun;
+
+ for (target = 0; target < IPR_MAX_NUM_TARGETS_PER_BUS; target++)
+		for (lun = 0; lun < IPR_MAX_NUM_VSET_LUNS_PER_TARGET; lun++)
+ scsi_add_device(ioa_cfg->host, IPR_VSET_BUS, target, lun);
+}
+
+/**
+ * ipr_initiate_ioa_bringdown - Bring down an adapter
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate bringing down the adapter.
+ * This consists of issuing an IOA shutdown to the adapter
+ * to flush the cache, and running BIST.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_initiate_ioa_bringdown(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ ENTER;
+ if (ioa_cfg->sdt_state == WAIT_FOR_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+ ioa_cfg->reset_retries = 0;
+ ioa_cfg->in_ioa_bringdown = 1;
+ ipr_initiate_ioa_reset(ioa_cfg, shutdown_type);
+ LEAVE;
+}
+
+/**
+ * __ipr_remove - Remove a single adapter
+ * @pdev: pci device struct
+ *
+ * Adapter hot plug remove entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void __ipr_remove(struct pci_dev *pdev)
+{
+ unsigned long host_lock_flags = 0;
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+ ipr_initiate_ioa_bringdown(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+
+ spin_lock(&ipr_driver_lock);
+ list_del(&ioa_cfg->queue);
+ spin_unlock(&ipr_driver_lock);
+
+ if (ioa_cfg->sdt_state == ABORT_DUMP)
+ ioa_cfg->sdt_state = WAIT_FOR_DUMP;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+
+ ipr_free_all_resources(ioa_cfg);
+
+ LEAVE;
+}
+
+/**
+ * ipr_remove - IOA hot plug remove entry point
+ * @pdev: pci device struct
+ *
+ * Adapter hot plug remove entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_remove(struct pci_dev *pdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
+
+ ENTER;
+
+ ioa_cfg->allow_cmds = 0;
+ flush_scheduled_work();
+ ipr_remove_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+ ipr_remove_dump_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_dump_attr);
+ scsi_remove_host(ioa_cfg->host);
+
+ __ipr_remove(pdev);
+
+ LEAVE;
+}
+
+/**
+ * ipr_probe - Adapter hot plug add entry point
+ * @pdev:	PCI device struct
+ * @dev_id:	PCI device id struct
+ *
+ * Return value:
+ * 0 on success / non-zero on failure
+ **/
+static int __devinit ipr_probe(struct pci_dev *pdev,
+ const struct pci_device_id *dev_id)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ int rc;
+
+ rc = ipr_probe_ioa(pdev, dev_id);
+
+ if (rc)
+ return rc;
+
+ ioa_cfg = pci_get_drvdata(pdev);
+ rc = ipr_probe_ioa_part2(ioa_cfg);
+
+ if (rc) {
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = scsi_add_host(ioa_cfg->host, &pdev->dev);
+
+ if (rc) {
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = ipr_create_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+
+ if (rc) {
+ scsi_remove_host(ioa_cfg->host);
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = ipr_create_dump_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_dump_attr);
+
+ if (rc) {
+ ipr_remove_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+ scsi_remove_host(ioa_cfg->host);
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ scsi_scan_host(ioa_cfg->host);
+ ipr_scan_vsets(ioa_cfg);
+ scsi_add_device(ioa_cfg->host, IPR_IOA_BUS, IPR_IOA_TARGET, IPR_IOA_LUN);
+ ioa_cfg->allow_ml_add_del = 1;
+ schedule_work(&ioa_cfg->work_q);
+ return 0;
+}
+
+/**
+ * ipr_shutdown - Shutdown handler.
+ * @dev: device struct
+ *
+ * This function is invoked upon system shutdown/reboot. It will issue
+ * an adapter shutdown to the adapter to flush the write cache.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_shutdown(struct device *dev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(to_pci_dev(dev));
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ipr_initiate_ioa_bringdown(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+}
+
+static struct pci_device_id ipr_pci_table[] __devinitdata = {
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_5702,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_572E,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_5703,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_573D,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_2780,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[1] },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, ipr_pci_table);
+
+static struct pci_driver ipr_driver = {
+ .name = IPR_NAME,
+ .id_table = ipr_pci_table,
+ .probe = ipr_probe,
+ .remove = ipr_remove,
+ .driver = {
+ .shutdown = ipr_shutdown,
+ },
+};
+
+/**
+ * ipr_init - Module entry point
+ *
+ * Return value:
+ * 0 on success / non-zero on failure
+ **/
+static int __init ipr_init(void)
+{
+ ipr_info("IBM Power RAID SCSI Device Driver version: %s %s\n",
+ IPR_DRIVER_VERSION, IPR_DRIVER_DATE);
+
+	return pci_module_init(&ipr_driver);
+}
+
+/**
+ * ipr_exit - Module unload
+ *
+ * Module unload entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void __exit ipr_exit(void)
+{
+ pci_unregister_driver(&ipr_driver);
+}
+
+module_init(ipr_init);
+module_exit(ipr_exit);
--- /dev/null
+/*
+ * ipr.h -- driver for IBM Power Linux RAID adapters
+ *
+ * Written By: Brian King, IBM Corporation
+ *
+ * Copyright (C) 2003, 2004 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#ifndef _IPR_H
+#define _IPR_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/list.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#ifdef CONFIG_KDB
+#include <linux/kdb.h>
+#endif
+
+/*
+ * Literals
+ */
+#define IPR_DRIVER_VERSION "2.0.7"
+#define IPR_DRIVER_DATE "(May 21, 2004)"
+
+/*
+ * IPR_DBG_TRACE: Setting this to 1 will turn on some general function tracing
+ * resulting in a bunch of extra debugging printks to the console
+ *
+ * IPR_DEBUG: Setting this to 1 will turn on some error path tracing.
+ * Enables the ipr_trace macro.
+ */
+#ifdef IPR_DEBUG_ALL
+#define IPR_DEBUG 1
+#define IPR_DBG_TRACE 1
+#else
+#define IPR_DEBUG 0
+#define IPR_DBG_TRACE 0
+#endif
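+
+/*
+ * For example (assuming the usual kbuild flow), both can be turned on
+ * without editing this file by adding -DIPR_DEBUG_ALL to EXTRA_CFLAGS
+ * when building the driver.
+ */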
+
+/*
+ * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
+ * ops per device for devices not running tagged command queuing.
+ * This can be adjusted at runtime through sysfs device attributes.
+ */
+#define IPR_MAX_CMD_PER_LUN 6
+
+/*
+ * IPR_NUM_BASE_CMD_BLKS: This defines the maximum number of
+ * ops the mid-layer can send to the adapter.
+ */
+#define IPR_NUM_BASE_CMD_BLKS 100
+
+#define IPR_SUBS_DEV_ID_2780 0x0264
+#define IPR_SUBS_DEV_ID_5702 0x0266
+#define IPR_SUBS_DEV_ID_5703 0x0278
+#define IPR_SUBS_DEV_ID_572E 0x02D3
+#define IPR_SUBS_DEV_ID_573D 0x02D4
+
+#define IPR_NAME "ipr"
+
+/*
+ * Return codes
+ */
+#define IPR_RC_JOB_CONTINUE 1
+#define IPR_RC_JOB_RETURN 2
+
+/*
+ * IOASCs
+ */
+#define IPR_IOASC_NR_INIT_CMD_REQUIRED 0x02040200
+#define IPR_IOASC_SYNC_REQUIRED 0x023f0000
+#define IPR_IOASC_MED_DO_NOT_REALLOC 0x03110C00
+#define IPR_IOASC_HW_SEL_TIMEOUT 0x04050000
+#define IPR_IOASC_HW_DEV_BUS_STATUS 0x04448500
+#define IPR_IOASC_IOASC_MASK 0xFFFFFF00
+#define IPR_IOASC_SCSI_STATUS_MASK 0x000000FF
+#define IPR_IOASC_IR_RESOURCE_HANDLE 0x05250000
+#define IPR_IOASC_BUS_WAS_RESET 0x06290000
+#define IPR_IOASC_BUS_WAS_RESET_BY_OTHER 0x06298000
+#define IPR_IOASC_ABORTED_CMD_TERM_BY_HOST 0x0B5A0000
+
+#define IPR_FIRST_DRIVER_IOASC 0x10000000
+#define IPR_IOASC_IOA_WAS_RESET 0x10000001
+#define IPR_IOASC_PCI_ACCESS_ERROR 0x10000002
+
+#define IPR_NUM_LOG_HCAMS 2
+#define IPR_NUM_CFG_CHG_HCAMS 2
+#define IPR_NUM_HCAMS (IPR_NUM_LOG_HCAMS + IPR_NUM_CFG_CHG_HCAMS)
+#define IPR_MAX_NUM_TARGETS_PER_BUS 0x10
+#define IPR_MAX_NUM_LUNS_PER_TARGET 256
+#define IPR_MAX_NUM_VSET_LUNS_PER_TARGET 8
+#define IPR_VSET_BUS 0xff
+#define IPR_IOA_BUS 0xff
+#define IPR_IOA_TARGET 0xff
+#define IPR_IOA_LUN 0xff
+#define IPR_MAX_NUM_BUSES 4
+#define IPR_MAX_BUS_TO_SCAN IPR_MAX_NUM_BUSES
+
+#define IPR_NUM_RESET_RELOAD_RETRIES 3
+
+/* We need resources for HCAMS, IOA reset, IOA bringdown, and ERP */
+#define IPR_NUM_INTERNAL_CMD_BLKS (IPR_NUM_HCAMS + \
+ ((IPR_NUM_RESET_RELOAD_RETRIES + 1) * 2) + 3)
+
+#define IPR_MAX_COMMANDS IPR_NUM_BASE_CMD_BLKS
+#define IPR_NUM_CMD_BLKS (IPR_NUM_BASE_CMD_BLKS + \
+ IPR_NUM_INTERNAL_CMD_BLKS)
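+
+/*
+ * Worked out with the defaults above (illustrative only): 4 HCAMs +
+ * ((3 + 1) * 2) reset/bringdown blocks + 3 for ERP = 15 internal command
+ * blocks, so IPR_NUM_CMD_BLKS = 100 + 15 = 115.
+ */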
+
+#define IPR_MAX_PHYSICAL_DEVS 192
+
+#define IPR_MAX_SGLIST 64
+#define IPR_MAX_SECTORS 512
+#define IPR_MAX_CDB_LEN 16
+
+#define IPR_DEFAULT_BUS_WIDTH 16
+#define IPR_80MBs_SCSI_RATE ((80 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_U160_SCSI_RATE ((160 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_U320_SCSI_RATE ((320 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_MAX_SCSI_RATE(width) ((320 * 10) / ((width) / 8))
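+
+/*
+ * Worked example (illustrative): with the default 16-bit bus there are
+ * IPR_DEFAULT_BUS_WIDTH / 8 = 2 byte lanes, so
+ * IPR_U160_SCSI_RATE = (160 * 10) / 2 = 800 and
+ * IPR_MAX_SCSI_RATE(8) = (320 * 10) / 1 = 3200.
+ */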
+
+#define IPR_IOA_RES_HANDLE 0xffffffff
+#define IPR_IOA_RES_ADDR 0x00ffffff
+
+/*
+ * Adapter Commands
+ */
+#define IPR_RESET_DEVICE 0xC3
+#define IPR_RESET_TYPE_SELECT 0x80
+#define IPR_LUN_RESET 0x40
+#define IPR_TARGET_RESET 0x20
+#define IPR_BUS_RESET 0x10
+#define IPR_ID_HOST_RR_Q 0xC4
+#define IPR_QUERY_IOA_CONFIG 0xC5
+#define IPR_ABORT_TASK 0xC7
+#define IPR_CANCEL_ALL_REQUESTS 0xCE
+#define IPR_HOST_CONTROLLED_ASYNC 0xCF
+#define IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE 0x01
+#define IPR_HCAM_CDB_OP_CODE_LOG_DATA 0x02
+#define IPR_SET_SUPPORTED_DEVICES 0xFB
+#define IPR_IOA_SHUTDOWN 0xF7
+#define IPR_WR_BUF_DOWNLOAD_AND_SAVE 0x05
+
+/*
+ * Timeouts
+ */
+#define IPR_SHUTDOWN_TIMEOUT (10 * 60 * HZ)
+#define IPR_VSET_RW_TIMEOUT (2 * 60 * HZ)
+#define IPR_ABBREV_SHUTDOWN_TIMEOUT (10 * HZ)
+#define IPR_DEVICE_RESET_TIMEOUT (30 * HZ)
+#define IPR_CANCEL_ALL_TIMEOUT (30 * HZ)
+#define IPR_ABORT_TASK_TIMEOUT (30 * HZ)
+#define IPR_INTERNAL_TIMEOUT (30 * HZ)
+#define IPR_WRITE_BUFFER_TIMEOUT (10 * 60 * HZ)
+#define IPR_SET_SUP_DEVICE_TIMEOUT (2 * 60 * HZ)
+#define IPR_REQUEST_SENSE_TIMEOUT (10 * HZ)
+#define IPR_OPERATIONAL_TIMEOUT (5 * 60 * HZ)
+#define IPR_WAIT_FOR_RESET_TIMEOUT (2 * HZ)
+#define IPR_CHECK_FOR_RESET_TIMEOUT (HZ / 10)
+#define IPR_WAIT_FOR_BIST_TIMEOUT (2 * HZ)
+#define IPR_DUMP_TIMEOUT (15 * HZ)
+
+/*
+ * SCSI Literals
+ */
+#define IPR_VENDOR_ID_LEN 8
+#define IPR_PROD_ID_LEN 16
+#define IPR_SERIAL_NUM_LEN 8
+
+/*
+ * Hardware literals
+ */
+#define IPR_FMT2_MBX_ADDR_MASK 0x0fffffff
+#define IPR_FMT2_MBX_BAR_SEL_MASK 0xf0000000
+#define IPR_FMT2_MKR_BAR_SEL_SHIFT 28
+#define IPR_GET_FMT2_BAR_SEL(mbx) \
+(((mbx) & IPR_FMT2_MBX_BAR_SEL_MASK) >> IPR_FMT2_MKR_BAR_SEL_SHIFT)
+#define IPR_SDT_FMT2_BAR0_SEL 0x0
+#define IPR_SDT_FMT2_BAR1_SEL 0x1
+#define IPR_SDT_FMT2_BAR2_SEL 0x2
+#define IPR_SDT_FMT2_BAR3_SEL 0x3
+#define IPR_SDT_FMT2_BAR4_SEL 0x4
+#define IPR_SDT_FMT2_BAR5_SEL 0x5
+#define IPR_SDT_FMT2_EXP_ROM_SEL 0x8
+#define IPR_FMT2_SDT_READY_TO_USE 0xC4D4E3F2
+#define IPR_DOORBELL 0x82800000
+
+#define IPR_PCII_IOA_TRANS_TO_OPER (0x80000000 >> 0)
+#define IPR_PCII_IOARCB_XFER_FAILED (0x80000000 >> 3)
+#define IPR_PCII_IOA_UNIT_CHECKED (0x80000000 >> 4)
+#define IPR_PCII_NO_HOST_RRQ (0x80000000 >> 5)
+#define IPR_PCII_CRITICAL_OPERATION (0x80000000 >> 6)
+#define IPR_PCII_IO_DEBUG_ACKNOWLEDGE (0x80000000 >> 7)
+#define IPR_PCII_IOARRIN_LOST (0x80000000 >> 27)
+#define IPR_PCII_MMIO_ERROR (0x80000000 >> 28)
+#define IPR_PCII_PROC_ERR_STATE (0x80000000 >> 29)
+#define IPR_PCII_HRRQ_UPDATED (0x80000000 >> 30)
+#define IPR_PCII_CORE_ISSUED_RST_REQ (0x80000000 >> 31)
+
+#define IPR_PCII_ERROR_INTERRUPTS \
+(IPR_PCII_IOARCB_XFER_FAILED | IPR_PCII_IOA_UNIT_CHECKED | \
+IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_LOST | IPR_PCII_MMIO_ERROR)
+
+#define IPR_PCII_OPER_INTERRUPTS \
+(IPR_PCII_ERROR_INTERRUPTS | IPR_PCII_HRRQ_UPDATED | IPR_PCII_IOA_TRANS_TO_OPER)
+
+#define IPR_UPROCI_RESET_ALERT (0x80000000 >> 7)
+#define IPR_UPROCI_IO_DEBUG_ALERT (0x80000000 >> 9)
+
+#define IPR_LDUMP_MAX_LONG_ACK_DELAY_IN_USEC 200000 /* 200 ms */
+#define IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC 200000 /* 200 ms */
+
+/*
+ * Dump literals
+ */
+#define IPR_MAX_IOA_DUMP_SIZE (4 * 1024 * 1024)
+#define IPR_NUM_SDT_ENTRIES 511
+#define IPR_MAX_NUM_DUMP_PAGES ((IPR_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
+
+/*
+ * Misc literals
+ */
+#define IPR_NUM_IOADL_ENTRIES IPR_MAX_SGLIST
+
+/*
+ * Adapter interface types
+ */
+
+struct ipr_res_addr {
+ u8 reserved;
+ u8 bus;
+ u8 target;
+ u8 lun;
+#define IPR_GET_PHYS_LOC(res_addr) \
+ (((res_addr).bus << 16) | ((res_addr).target << 8) | (res_addr).lun)
+}__attribute__((packed, aligned (4)));
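+
+/*
+ * For example (illustrative only), a device at bus 0, target 3, LUN 0
+ * packs to IPR_GET_PHYS_LOC(res_addr) == 0x000300.
+ */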
+
+struct ipr_std_inq_vpids {
+ u8 vendor_id[IPR_VENDOR_ID_LEN];
+ u8 product_id[IPR_PROD_ID_LEN];
+}__attribute__((packed));
+
+struct ipr_std_inq_data {
+ u8 peri_qual_dev_type;
+#define IPR_STD_INQ_PERI_QUAL(peri) ((peri) >> 5)
+#define IPR_STD_INQ_PERI_DEV_TYPE(peri) ((peri) & 0x1F)
+
+ u8 removeable_medium_rsvd;
+#define IPR_STD_INQ_REMOVEABLE_MEDIUM 0x80
+
+#define IPR_IS_DASD_DEVICE(std_inq) \
+((IPR_STD_INQ_PERI_DEV_TYPE((std_inq).peri_qual_dev_type) == TYPE_DISK) && \
+!(((std_inq).removeable_medium_rsvd) & IPR_STD_INQ_REMOVEABLE_MEDIUM))
+
+#define IPR_IS_SES_DEVICE(std_inq) \
+(IPR_STD_INQ_PERI_DEV_TYPE((std_inq).peri_qual_dev_type) == TYPE_ENCLOSURE)
+
+ u8 version;
+ u8 aen_naca_fmt;
+ u8 additional_len;
+ u8 sccs_rsvd;
+ u8 bq_enc_multi;
+ u8 sync_cmdq_flags;
+
+ struct ipr_std_inq_vpids vpids;
+
+ u8 ros_rsvd_ram_rsvd[4];
+
+ u8 serial_num[IPR_SERIAL_NUM_LEN];
+}__attribute__ ((packed));
+
+struct ipr_config_table_entry {
+ u8 service_level;
+ u8 array_id;
+ u8 flags;
+#define IPR_IS_IOA_RESOURCE 0x80
+#define IPR_IS_ARRAY_MEMBER 0x20
+#define IPR_IS_HOT_SPARE 0x10
+
+ u8 rsvd_subtype;
+#define IPR_RES_SUBTYPE(res) (((res)->cfgte.rsvd_subtype) & 0x0f)
+#define IPR_SUBTYPE_AF_DASD 0
+#define IPR_SUBTYPE_GENERIC_SCSI 1
+#define IPR_SUBTYPE_VOLUME_SET 2
+
+ struct ipr_res_addr res_addr;
+ u32 res_handle;
+ u32 reserved4[2];
+ struct ipr_std_inq_data std_inq_data;
+}__attribute__ ((packed, aligned (4)));
+
+struct ipr_config_table_hdr {
+ u8 num_entries;
+ u8 flags;
+#define IPR_UCODE_DOWNLOAD_REQ 0x10
+ u16 reserved;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_config_table {
+ struct ipr_config_table_hdr hdr;
+ struct ipr_config_table_entry dev[IPR_MAX_PHYSICAL_DEVS];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_cfg_ch_not {
+ struct ipr_config_table_entry cfgte;
+ u8 reserved[936];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_supported_device {
+ u16 data_length;
+ u8 reserved;
+ u8 num_records;
+ struct ipr_std_inq_vpids vpids;
+ u8 reserved2[16];
+}__attribute__((packed, aligned (4)));
+
+/* Command packet structure */
+struct ipr_cmd_pkt {
+ u16 reserved; /* Reserved by IOA */
+ u8 request_type;
+#define IPR_RQTYPE_SCSICDB 0x00
+#define IPR_RQTYPE_IOACMD 0x01
+#define IPR_RQTYPE_HCAM 0x02
+
+ u8 luntar_luntrn;
+
+ u8 flags_hi;
+#define IPR_FLAGS_HI_WRITE_NOT_READ 0x80
+#define IPR_FLAGS_HI_NO_ULEN_CHK 0x20
+#define IPR_FLAGS_HI_SYNC_OVERRIDE 0x10
+#define IPR_FLAGS_HI_SYNC_COMPLETE 0x08
+#define IPR_FLAGS_HI_NO_LINK_DESC 0x04
+
+ u8 flags_lo;
+#define IPR_FLAGS_LO_ALIGNED_BFR 0x20
+#define IPR_FLAGS_LO_DELAY_AFTER_RST 0x10
+#define IPR_FLAGS_LO_UNTAGGED_TASK 0x00
+#define IPR_FLAGS_LO_SIMPLE_TASK 0x02
+#define IPR_FLAGS_LO_ORDERED_TASK 0x04
+#define IPR_FLAGS_LO_HEAD_OF_Q_TASK 0x06
+#define IPR_FLAGS_LO_ACA_TASK 0x08
+
+ u8 cdb[16];
+ u16 timeout;
+}__attribute__ ((packed, aligned(4)));
+
+/* IOA Request Control Block 128 bytes */
+struct ipr_ioarcb {
+ u32 ioarcb_host_pci_addr;
+ u32 reserved;
+ u32 res_handle;
+ u32 host_response_handle;
+ u32 reserved1;
+ u32 reserved2;
+ u32 reserved3;
+
+ u32 write_data_transfer_length;
+ u32 read_data_transfer_length;
+ u32 write_ioadl_addr;
+ u32 write_ioadl_len;
+ u32 read_ioadl_addr;
+ u32 read_ioadl_len;
+
+ u32 ioasa_host_pci_addr;
+ u16 ioasa_len;
+ u16 reserved4;
+
+ struct ipr_cmd_pkt cmd_pkt;
+
+ u32 add_cmd_parms_len;
+ u32 add_cmd_parms[10];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioadl_desc {
+ u32 flags_and_data_len;
+#define IPR_IOADL_FLAGS_MASK 0xff000000
+#define IPR_IOADL_GET_FLAGS(x) (be32_to_cpu(x) & IPR_IOADL_FLAGS_MASK)
+#define IPR_IOADL_DATA_LEN_MASK 0x00ffffff
+#define IPR_IOADL_GET_DATA_LEN(x) (be32_to_cpu(x) & IPR_IOADL_DATA_LEN_MASK)
+#define IPR_IOADL_FLAGS_READ 0x48000000
+#define IPR_IOADL_FLAGS_READ_LAST 0x49000000
+#define IPR_IOADL_FLAGS_WRITE 0x68000000
+#define IPR_IOADL_FLAGS_WRITE_LAST 0x69000000
+#define IPR_IOADL_FLAGS_LAST 0x01000000
+
+ u32 address;
+}__attribute__((packed, aligned (8)));
+
+struct ipr_ioasa_vset {
+ u32 failing_lba_hi;
+ u32 failing_lba_lo;
+ u32 ioa_data[22];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_af_dasd {
+ u32 failing_lba;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_gpdd {
+ u8 end_state;
+ u8 bus_phase;
+ u16 reserved;
+ u32 ioa_data[23];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_raw {
+ u32 ioa_data[24];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa {
+ u32 ioasc;
+#define IPR_IOASC_SENSE_KEY(ioasc) ((ioasc) >> 24)
+#define IPR_IOASC_SENSE_CODE(ioasc) (((ioasc) & 0x00ff0000) >> 16)
+#define IPR_IOASC_SENSE_QUAL(ioasc) (((ioasc) & 0x0000ff00) >> 8)
+#define IPR_IOASC_SENSE_STATUS(ioasc) ((ioasc) & 0x000000ff)
+
+ u16 ret_stat_len; /* Length of the returned IOASA */
+
+ u16 avail_stat_len; /* Total Length of status available. */
+
+ u32 residual_data_len; /* number of bytes in the host data */
+ /* buffers that were not used by the IOARCB command. */
+
+ u32 ilid;
+#define IPR_NO_ILID 0
+#define IPR_DRIVER_ILID 0xffffffff
+
+ u32 fd_ioasc;
+
+ u32 fd_phys_locator;
+
+ u32 fd_res_handle;
+
+ u32 ioasc_specific; /* status code specific field */
+#define IPR_IOASC_SPECIFIC_MASK 0x00ffffff
+#define IPR_FIELD_POINTER_VALID (0x80000000 >> 8)
+#define IPR_FIELD_POINTER_MASK 0x0000ffff
+
+ union {
+ struct ipr_ioasa_vset vset;
+ struct ipr_ioasa_af_dasd dasd;
+ struct ipr_ioasa_gpdd gpdd;
+ struct ipr_ioasa_raw raw;
+ } u;
+}__attribute__((packed, aligned (4)));
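+
+/*
+ * Decoding example (illustrative): for ioasc == IPR_IOASC_HW_SEL_TIMEOUT
+ * (0x04050000), IPR_IOASC_SENSE_KEY() yields 0x04, IPR_IOASC_SENSE_CODE()
+ * yields 0x05, and IPR_IOASC_SENSE_QUAL()/IPR_IOASC_SENSE_STATUS() are 0.
+ */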
+
+struct ipr_mode_parm_hdr {
+ u8 length;
+ u8 medium_type;
+ u8 device_spec_parms;
+ u8 block_desc_len;
+}__attribute__((packed));
+
+struct ipr_mode_pages {
+ struct ipr_mode_parm_hdr hdr;
+ u8 data[255 - sizeof(struct ipr_mode_parm_hdr)];
+}__attribute__((packed));
+
+struct ipr_mode_page_hdr {
+ u8 ps_page_code;
+#define IPR_MODE_PAGE_PS 0x80
+#define IPR_GET_MODE_PAGE_CODE(hdr) ((hdr)->ps_page_code & 0x3F)
+ u8 page_length;
+}__attribute__ ((packed));
+
+struct ipr_dev_bus_entry {
+ struct ipr_res_addr res_addr;
+ u8 flags;
+#define IPR_SCSI_ATTR_ENABLE_QAS 0x80
+#define IPR_SCSI_ATTR_DISABLE_QAS 0x40
+#define IPR_SCSI_ATTR_QAS_MASK 0xC0
+#define IPR_SCSI_ATTR_ENABLE_TM 0x20
+#define IPR_SCSI_ATTR_NO_TERM_PWR 0x10
+#define IPR_SCSI_ATTR_TM_SUPPORTED 0x08
+#define IPR_SCSI_ATTR_LVD_TO_SE_NOT_ALLOWED 0x04
+
+ u8 scsi_id;
+ u8 bus_width;
+ u8 extended_reset_delay;
+#define IPR_EXTENDED_RESET_DELAY 7
+
+ u32 max_xfer_rate;
+
+ u8 spinup_delay;
+ u8 reserved3;
+ u16 reserved4;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_mode_page28 {
+ struct ipr_mode_page_hdr hdr;
+ u8 num_entries;
+ u8 entry_length;
+ struct ipr_dev_bus_entry bus[0];
+}__attribute__((packed));
+
+struct ipr_ioa_vpd {
+ struct ipr_std_inq_data std_inq_data;
+ u8 ascii_part_num[12];
+ u8 reserved[40];
+ u8 ascii_plant_code[4];
+}__attribute__((packed));
+
+struct ipr_inquiry_page3 {
+ u8 peri_qual_dev_type;
+ u8 page_code;
+ u8 reserved1;
+ u8 page_length;
+ u8 ascii_len;
+ u8 reserved2[3];
+ u8 load_id[4];
+ u8 major_release;
+ u8 card_type;
+ u8 minor_release[2];
+ u8 ptf_number[4];
+ u8 patch_number[4];
+}__attribute__((packed));
+
+struct ipr_hostrcb_device_data_entry {
+ struct ipr_std_inq_vpids dev_vpids;
+ u8 dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_res_addr dev_res_addr;
+ struct ipr_std_inq_vpids new_dev_vpids;
+ u8 new_dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids ioa_last_with_dev_vpids;
+ u8 ioa_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_last_with_dev_vpids;
+ u8 cfc_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data[5];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_array_data_entry {
+ struct ipr_std_inq_vpids vpids;
+ u8 serial_num[IPR_SERIAL_NUM_LEN];
+ struct ipr_res_addr expected_dev_res_addr;
+ struct ipr_res_addr dev_res_addr;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_ff_error {
+ u32 ioa_data[246];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_01_error {
+ u32 seek_counter;
+ u32 read_counter;
+ u8 sense_data[32];
+ u32 ioa_data[236];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_02_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids ioa_last_attached_to_cfc_vpids;
+ u8 ioa_last_attached_to_cfc_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_last_attached_to_ioa_vpids;
+ u8 cfc_last_attached_to_ioa_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data[3];
+ u8 reserved[844];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_03_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ u32 errors_detected;
+ u32 errors_logged;
+ u8 ioa_data[12];
+ struct ipr_hostrcb_device_data_entry dev_entry[3];
+ u8 reserved[444];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_04_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ u8 ioa_data[12];
+ struct ipr_hostrcb_array_data_entry array_member[10];
+ u32 exposed_mode_adn;
+ u32 array_id;
+ struct ipr_std_inq_vpids incomp_dev_vpids;
+ u8 incomp_dev_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data2;
+ struct ipr_hostrcb_array_data_entry array_member2[8];
+ struct ipr_res_addr last_func_vset_res_addr;
+ u8 vset_serial_num[IPR_SERIAL_NUM_LEN];
+ u8 protection_level[8];
+ u8 reserved[124];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_error {
+ u32 failing_dev_ioasc;
+ struct ipr_res_addr failing_dev_res_addr;
+ u32 failing_dev_res_handle;
+ u32 prc;
+ union {
+ struct ipr_hostrcb_type_ff_error type_ff_error;
+ struct ipr_hostrcb_type_01_error type_01_error;
+ struct ipr_hostrcb_type_02_error type_02_error;
+ struct ipr_hostrcb_type_03_error type_03_error;
+ struct ipr_hostrcb_type_04_error type_04_error;
+ } u;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_raw {
+ u32 data[sizeof(struct ipr_hostrcb_error)/sizeof(u32)];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hcam {
+ u8 op_code;
+#define IPR_HOST_RCB_OP_CODE_CONFIG_CHANGE 0xE1
+#define IPR_HOST_RCB_OP_CODE_LOG_DATA 0xE2
+
+ u8 notify_type;
+#define IPR_HOST_RCB_NOTIF_TYPE_EXISTING_CHANGED 0x00
+#define IPR_HOST_RCB_NOTIF_TYPE_NEW_ENTRY 0x01
+#define IPR_HOST_RCB_NOTIF_TYPE_REM_ENTRY 0x02
+#define IPR_HOST_RCB_NOTIF_TYPE_ERROR_LOG_ENTRY 0x10
+#define IPR_HOST_RCB_NOTIF_TYPE_INFORMATION_ENTRY 0x11
+
+ u8 notifications_lost;
+#define IPR_HOST_RCB_NO_NOTIFICATIONS_LOST 0
+#define IPR_HOST_RCB_NOTIFICATIONS_LOST 0x80
+
+ u8 flags;
+#define IPR_HOSTRCB_INTERNAL_OPER 0x80
+#define IPR_HOSTRCB_ERR_RESP_SENT 0x40
+
+ u8 overlay_id;
+#define IPR_HOST_RCB_OVERLAY_ID_1 0x01
+#define IPR_HOST_RCB_OVERLAY_ID_2 0x02
+#define IPR_HOST_RCB_OVERLAY_ID_3 0x03
+#define IPR_HOST_RCB_OVERLAY_ID_4 0x04
+#define IPR_HOST_RCB_OVERLAY_ID_6 0x06
+#define IPR_HOST_RCB_OVERLAY_ID_DEFAULT 0xFF
+
+ u8 reserved1[3];
+ u32 ilid;
+ u32 time_since_last_ioa_reset;
+ u32 reserved2;
+ u32 length;
+
+ union {
+ struct ipr_hostrcb_error error;
+ struct ipr_hostrcb_cfg_ch_not ccn;
+ struct ipr_hostrcb_raw raw;
+ } u;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb {
+ struct ipr_hcam hcam;
+ u32 hostrcb_dma;
+ struct list_head queue;
+};
+
+/* IPR smart dump table structures */
+struct ipr_sdt_entry {
+ u32 bar_str_offset;
+ u32 end_offset;
+ u8 entry_byte;
+ u8 reserved[3];
+
+ u8 flags;
+#define IPR_SDT_ENDIAN 0x80
+#define IPR_SDT_VALID_ENTRY 0x20
+
+ u8 resv;
+ u16 priority;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_sdt_header {
+ u32 state;
+ u32 num_entries;
+ u32 num_entries_used;
+ u32 dump_size;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_sdt {
+ struct ipr_sdt_header hdr;
+ struct ipr_sdt_entry entry[IPR_NUM_SDT_ENTRIES];
+}__attribute__((packed, aligned (4)));
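+
+/*
+ * Sizing note (illustrative): the 16-byte header plus 511 16-byte entries
+ * makes struct ipr_sdt exactly 8KB.
+ */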
+
+struct ipr_uc_sdt {
+ struct ipr_sdt_header hdr;
+ struct ipr_sdt_entry entry[1];
+}__attribute__((packed, aligned (4)));
+
+/*
+ * Driver types
+ */
+struct ipr_bus_attributes {
+ u8 bus;
+ u8 qas_enabled;
+ u8 bus_width;
+ u8 reserved;
+ u32 max_xfer_rate;
+};
+
+struct ipr_resource_entry {
+ struct ipr_config_table_entry cfgte;
+ u8 needs_sync_complete:1;
+ u8 in_erp:1;
+ u8 add_to_ml:1;
+ u8 del_from_ml:1;
+ u8 resetting_device:1;
+ u8 tcq_active:1;
+
+ int qdepth;
+ struct scsi_device *sdev;
+ struct list_head queue;
+};
+
+struct ipr_resource_hdr {
+ u16 num_entries;
+ u16 reserved;
+};
+
+struct ipr_resource_table {
+ struct ipr_resource_hdr hdr;
+ struct ipr_resource_entry dev[IPR_MAX_PHYSICAL_DEVS];
+};
+
+struct ipr_misc_cbs {
+ struct ipr_ioa_vpd ioa_vpd;
+ struct ipr_inquiry_page3 page3_data;
+ struct ipr_mode_pages mode_pages;
+ struct ipr_supported_device supp_dev;
+};
+
+struct ipr_interrupts {
+ unsigned long set_interrupt_mask_reg;
+ unsigned long clr_interrupt_mask_reg;
+ unsigned long sense_interrupt_mask_reg;
+ unsigned long clr_interrupt_reg;
+
+ unsigned long sense_interrupt_reg;
+ unsigned long ioarrin_reg;
+ unsigned long sense_uproc_interrupt_reg;
+ unsigned long set_uproc_interrupt_reg;
+ unsigned long clr_uproc_interrupt_reg;
+};
+
+struct ipr_chip_cfg_t {
+ u32 mailbox;
+ u8 cache_line_size;
+ struct ipr_interrupts regs;
+};
+
+enum ipr_shutdown_type {
+ IPR_SHUTDOWN_NORMAL = 0x00,
+ IPR_SHUTDOWN_PREPARE_FOR_NORMAL = 0x40,
+ IPR_SHUTDOWN_ABBREV = 0x80,
+ IPR_SHUTDOWN_NONE = 0x100
+};
+
+struct ipr_trace_entry {
+ u32 time;
+
+ u8 op_code;
+ u8 type;
+#define IPR_TRACE_START 0x00
+#define IPR_TRACE_FINISH 0xff
+ u16 cmd_index;
+
+ u32 res_handle;
+ union {
+ u32 ioasc;
+ u32 add_data;
+ u32 res_addr;
+ } u;
+};
+
+struct ipr_sglist {
+ u32 order;
+ u32 num_sg;
+ u32 buffer_len;
+ struct scatterlist scatterlist[1];
+};
+
+enum ipr_sdt_state {
+ INACTIVE,
+ WAIT_FOR_DUMP,
+ GET_DUMP,
+ ABORT_DUMP,
+ DUMP_OBTAINED
+};
+
+/* Per-controller data */
+struct ipr_ioa_cfg {
+ char eye_catcher[8];
+#define IPR_EYECATCHER "iprcfg"
+
+ struct list_head queue;
+
+ u8 allow_interrupts:1;
+ u8 in_reset_reload:1;
+ u8 in_ioa_bringdown:1;
+ u8 ioa_unit_checked:1;
+ u8 ioa_is_dead:1;
+ u8 dump_taken:1;
+ u8 allow_cmds:1;
+ u8 allow_ml_add_del:1;
+
+ u16 type; /* CCIN of the card */
+
+ u8 log_level;
+#define IPR_MAX_LOG_LEVEL 4
+#define IPR_DEFAULT_LOG_LEVEL 2
+
+#define IPR_NUM_TRACE_INDEX_BITS 8
+#define IPR_NUM_TRACE_ENTRIES (1 << IPR_NUM_TRACE_INDEX_BITS)
+#define IPR_TRACE_SIZE (sizeof(struct ipr_trace_entry) * IPR_NUM_TRACE_ENTRIES)
+ char trace_start[8];
+#define IPR_TRACE_START_LABEL "trace"
+ struct ipr_trace_entry *trace;
+ u32 trace_index:IPR_NUM_TRACE_INDEX_BITS;
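+
+	/*
+	 * Sizing note (illustrative): with 8 index bits the trace holds 256
+	 * 16-byte entries (4KB), and since trace_index is an 8-bit bit-field
+	 * it wraps back to entry 0 automatically when the buffer fills.
+	 */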
+
+ /*
+ * Queue for free command blocks
+ */
+ char ipr_free_label[8];
+#define IPR_FREEQ_LABEL "free-q"
+ struct list_head free_q;
+
+ /*
+ * Queue for command blocks outstanding to the adapter
+ */
+ char ipr_pending_label[8];
+#define IPR_PENDQ_LABEL "pend-q"
+ struct list_head pending_q;
+
+ char cfg_table_start[8];
+#define IPR_CFG_TBL_START "cfg"
+ struct ipr_config_table *cfg_table;
+ u32 cfg_table_dma;
+
+ char resource_table_label[8];
+#define IPR_RES_TABLE_LABEL "res_tbl"
+ struct ipr_resource_entry *res_entries;
+ struct list_head free_res_q;
+ struct list_head used_res_q;
+
+ char ipr_hcam_label[8];
+#define IPR_HCAM_LABEL "hcams"
+ struct ipr_hostrcb *hostrcb[IPR_NUM_HCAMS];
+ u32 hostrcb_dma[IPR_NUM_HCAMS];
+ struct list_head hostrcb_free_q;
+ struct list_head hostrcb_pending_q;
+
+ u32 *host_rrq;
+ u32 host_rrq_dma;
+#define IPR_HRRQ_REQ_RESP_HANDLE_MASK 0xfffffffc
+#define IPR_HRRQ_RESP_BIT_SET 0x00000002
+#define IPR_HRRQ_TOGGLE_BIT 0x00000001
+#define IPR_HRRQ_REQ_RESP_HANDLE_SHIFT 2
+ volatile u32 *hrrq_start;
+ volatile u32 *hrrq_end;
+ volatile u32 *hrrq_curr;
+ volatile u32 toggle_bit;
+
+ struct ipr_bus_attributes bus_attr[IPR_MAX_NUM_BUSES];
+
+ const struct ipr_chip_cfg_t *chip_cfg;
+
+ unsigned long hdw_dma_regs; /* iomapped PCI memory space */
+ unsigned long hdw_dma_regs_pci; /* raw PCI memory space */
+ unsigned long ioa_mailbox;
+ struct ipr_interrupts regs;
+
+ u32 pci_cfg_buf[64];
+ u16 saved_pcix_cmd_reg;
+ u16 reset_retries;
+
+ u32 errors_logged;
+
+ struct Scsi_Host *host;
+ struct pci_dev *pdev;
+ struct ipr_sglist *ucode_sglist;
+ struct ipr_mode_pages *saved_mode_pages;
+ u8 saved_mode_page_len;
+
+ struct work_struct work_q;
+
+ wait_queue_head_t reset_wait_q;
+
+ struct ipr_dump *dump;
+ enum ipr_sdt_state sdt_state;
+
+ struct ipr_misc_cbs *vpd_cbs;
+ u32 vpd_cbs_dma;
+
+ struct pci_pool *ipr_cmd_pool;
+
+ struct ipr_cmnd *reset_cmd;
+
+ char ipr_cmd_label[8];
+#define IPR_CMD_LABEL "ipr_cmnd"
+ struct ipr_cmnd *ipr_cmnd_list[IPR_NUM_CMD_BLKS];
+ u32 ipr_cmnd_list_dma[IPR_NUM_CMD_BLKS];
+};
+
+struct ipr_cmnd {
+ struct ipr_ioarcb ioarcb;
+ struct ipr_ioasa ioasa;
+ struct ipr_ioadl_desc ioadl[IPR_NUM_IOADL_ENTRIES];
+ struct list_head queue;
+ struct scsi_cmnd *scsi_cmd;
+ struct completion completion;
+ struct timer_list timer;
+ void (*done) (struct ipr_cmnd *);
+ int (*job_step) (struct ipr_cmnd *);
+ u16 cmd_index;
+ u8 sense_buffer[SCSI_SENSE_BUFFERSIZE];
+ dma_addr_t sense_buffer_dma;
+ unsigned short dma_use_sg;
+ dma_addr_t dma_handle;
+ union {
+ enum ipr_shutdown_type shutdown_type;
+ struct ipr_hostrcb *hostrcb;
+ unsigned long time_left;
+ unsigned long scratch;
+ struct ipr_resource_entry *res;
+ struct ipr_cmnd *sibling;
+ struct scsi_device *sdev;
+ } u;
+
+ struct ipr_ioa_cfg *ioa_cfg;
+};
+
+struct ipr_ses_table_entry {
+ char product_id[17];
+ char compare_product_id_byte[17];
+ u32 max_bus_speed_limit; /* MB/sec limit for this backplane */
+};
+
+struct ipr_dump_header {
+ u32 eye_catcher;
+#define IPR_DUMP_EYE_CATCHER 0xC5D4E3F2
+ u32 len;
+ u32 num_entries;
+ u32 first_entry_offset;
+ u32 status;
+#define IPR_DUMP_STATUS_SUCCESS 0
+#define IPR_DUMP_STATUS_QUAL_SUCCESS 2
+#define IPR_DUMP_STATUS_FAILED 0xffffffff
+ u32 os;
+#define IPR_DUMP_OS_LINUX 0x4C4E5558
+ u32 driver_name;
+#define IPR_DUMP_DRIVER_NAME 0x49505232
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_entry_header {
+ u32 eye_catcher;
+#define IPR_DUMP_EYE_CATCHER 0xC5D4E3F2
+ u32 len;
+ u32 num_elems;
+ u32 offset;
+ u32 data_type;
+#define IPR_DUMP_DATA_TYPE_ASCII 0x41534349
+#define IPR_DUMP_DATA_TYPE_BINARY 0x42494E41
+ u32 id;
+#define IPR_DUMP_IOA_DUMP_ID 0x494F4131
+#define IPR_DUMP_LOCATION_ID 0x4C4F4341
+#define IPR_DUMP_TRACE_ID 0x54524143
+#define IPR_DUMP_DRIVER_VERSION_ID 0x44525652
+#define IPR_DUMP_DRIVER_TYPE_ID 0x54595045
+#define IPR_DUMP_IOA_CTRL_BLK 0x494F4342
+#define IPR_DUMP_PEND_OPS 0x414F5053
+ u32 status;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_location_entry {
+ struct ipr_dump_entry_header hdr;
+ u8 location[BUS_ID_SIZE];
+}__attribute__((packed));
+
+struct ipr_dump_trace_entry {
+ struct ipr_dump_entry_header hdr;
+ u32 trace[IPR_TRACE_SIZE / sizeof(u32)];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_version_entry {
+ struct ipr_dump_entry_header hdr;
+ u8 version[sizeof(IPR_DRIVER_VERSION)];
+};
+
+struct ipr_dump_ioa_type_entry {
+ struct ipr_dump_entry_header hdr;
+ u32 type;
+ u32 fw_version;
+};
+
+struct ipr_driver_dump {
+ struct ipr_dump_header hdr;
+ struct ipr_dump_version_entry version_entry;
+ struct ipr_dump_location_entry location_entry;
+ struct ipr_dump_ioa_type_entry ioa_type_entry;
+ struct ipr_dump_trace_entry trace_entry;
+}__attribute__((packed));
+
+struct ipr_ioa_dump {
+ struct ipr_dump_entry_header hdr;
+ struct ipr_sdt sdt;
+ u32 *ioa_data[IPR_MAX_NUM_DUMP_PAGES];
+ u32 reserved;
+ u32 next_page_index;
+ u32 page_offset;
+ u32 format;
+#define IPR_SDT_FMT2 2
+#define IPR_SDT_UNKNOWN 3
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump {
+ struct kobject kobj;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_driver_dump driver_dump;
+ struct ipr_ioa_dump ioa_dump;
+};
+
+struct ipr_error_table_t {
+ u32 ioasc;
+ int log_ioasa;
+ int log_hcam;
+ char *error;
+};
+
+struct ipr_software_inq_lid_info {
+ u32 load_id;
+ u32 timestamp[3];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ucode_image_header {
+ u32 header_length;
+ u32 lid_table_offset;
+ u8 major_release;
+ u8 card_type;
+ u8 minor_release[2];
+ u8 reserved[20];
+ char eyecatcher[16];
+ u32 num_lids;
+ struct ipr_software_inq_lid_info lid[1];
+}__attribute__((packed, aligned (4)));
+
+/*
+ * Macros
+ */
+#if IPR_DEBUG
+#define IPR_DBG_CMD(CMD) do { CMD; } while (0)
+#else
+#define IPR_DBG_CMD(CMD)
+#endif
+
+#define ipr_breakpoint_data KERN_ERR IPR_NAME\
+": %s: %s: Line: %d ioa_cfg: %p\n", __FILE__, \
+__FUNCTION__, __LINE__, ioa_cfg
+
+#if defined(CONFIG_KDB) && !defined(CONFIG_PPC_ISERIES)
+#define ipr_breakpoint {printk(ipr_breakpoint_data); KDB_ENTER();}
+#define ipr_breakpoint_or_die {printk(ipr_breakpoint_data); KDB_ENTER();}
+#else
+#define ipr_breakpoint
+#define ipr_breakpoint_or_die panic(ipr_breakpoint_data)
+#endif
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+#define ipr_create_trace_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
+#define ipr_remove_trace_file(kobj, attr) sysfs_remove_bin_file(kobj, attr)
+#else
+#define ipr_create_trace_file(kobj, attr) 0
+#define ipr_remove_trace_file(kobj, attr) do { } while(0)
+#endif
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+#define ipr_create_dump_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
+#define ipr_remove_dump_file(kobj, attr) sysfs_remove_bin_file(kobj, attr)
+#else
+#define ipr_create_dump_file(kobj, attr) 0
+#define ipr_remove_dump_file(kobj, attr) do { } while(0)
+#endif
+
+/*
+ * Error logging macros
+ */
+#define ipr_err(...) printk(KERN_ERR IPR_NAME ": "__VA_ARGS__)
+#define ipr_info(...) printk(KERN_INFO IPR_NAME ": "__VA_ARGS__)
+#define ipr_crit(...) printk(KERN_CRIT IPR_NAME ": "__VA_ARGS__)
+#define ipr_warn(...) printk(KERN_WARNING IPR_NAME": "__VA_ARGS__)
+#define ipr_dbg(...) IPR_DBG_CMD(printk(KERN_INFO IPR_NAME ": "__VA_ARGS__))
+
+#define ipr_sdev_printk(level, sdev, fmt, ...) \
+ printk(level IPR_NAME ": %d:%d:%d:%d: " fmt, sdev->host->host_no, \
+ sdev->channel, sdev->id, sdev->lun, ##__VA_ARGS__)
+
+#define ipr_sdev_err(sdev, fmt, ...) \
+ ipr_sdev_printk(KERN_ERR, sdev, fmt, ##__VA_ARGS__)
+
+#define ipr_sdev_info(sdev, fmt, ...) \
+ ipr_sdev_printk(KERN_INFO, sdev, fmt, ##__VA_ARGS__)
+
+#define ipr_sdev_dbg(sdev, fmt, ...) \
+ IPR_DBG_CMD(ipr_sdev_printk(KERN_INFO, sdev, fmt, ##__VA_ARGS__))
+
+#define ipr_res_printk(level, ioa_cfg, res, fmt, ...) \
+ printk(level IPR_NAME ": %d:%d:%d:%d: " fmt, ioa_cfg->host->host_no, \
+ res.bus, res.target, res.lun, ##__VA_ARGS__)
+
+#define ipr_res_err(ioa_cfg, res, fmt, ...) \
+ ipr_res_printk(KERN_ERR, ioa_cfg, res, fmt, ##__VA_ARGS__)
+#define ipr_res_dbg(ioa_cfg, res, fmt, ...) \
+ IPR_DBG_CMD(ipr_res_printk(KERN_INFO, ioa_cfg, res, fmt, ##__VA_ARGS__))
+
+#define ipr_trace ipr_dbg("%s: %s: Line: %d\n",\
+ __FILE__, __FUNCTION__, __LINE__)
+
+#if IPR_DBG_TRACE
+#define ENTER printk(KERN_INFO IPR_NAME": Entering %s\n", __FUNCTION__)
+#define LEAVE printk(KERN_INFO IPR_NAME": Leaving %s\n", __FUNCTION__)
+#else
+#define ENTER
+#define LEAVE
+#endif
+
+#define ipr_err_separator \
+ipr_err("----------------------------------------------------------\n")
+
+
+/*
+ * Inlines
+ */
+
+/**
+ * ipr_is_ioa_resource - Determine if a resource is the IOA
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if IOA / 0 if not IOA
+ **/
+static inline int ipr_is_ioa_resource(struct ipr_resource_entry *res)
+{
+ return (res->cfgte.flags & IPR_IS_IOA_RESOURCE) ? 1 : 0;
+}
+
+/**
+ * ipr_is_af_dasd_device - Determine if a resource is an AF DASD
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if AF DASD / 0 if not AF DASD
+ **/
+static inline int ipr_is_af_dasd_device(struct ipr_resource_entry *res)
+{
+ if (IPR_IS_DASD_DEVICE(res->cfgte.std_inq_data) &&
+ !ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_AF_DASD)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_vset_device - Determine if a resource is a VSET
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if VSET / 0 if not VSET
+ **/
+static inline int ipr_is_vset_device(struct ipr_resource_entry *res)
+{
+ if (IPR_IS_DASD_DEVICE(res->cfgte.std_inq_data) &&
+ !ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_VOLUME_SET)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_gscsi - Determine if a resource is a generic scsi resource
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if GSCSI / 0 if not GSCSI
+ **/
+static inline int ipr_is_gscsi(struct ipr_resource_entry *res)
+{
+ if (!ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_GENERIC_SCSI)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_device - Determine if resource address is that of a device
+ * @res_addr: resource address struct
+ *
+ * Return value:
+ * 1 if AF / 0 if not AF
+ **/
+static inline int ipr_is_device(struct ipr_res_addr *res_addr)
+{
+ if ((res_addr->bus < IPR_MAX_NUM_BUSES) &&
+ (res_addr->target < IPR_MAX_NUM_TARGETS_PER_BUS))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * ipr_sdt_is_fmt2 - Determine if a SDT address is in format 2
+ * @sdt_word: SDT address
+ *
+ * Return value:
+ * 1 if format 2 / 0 if not
+ **/
+static inline int ipr_sdt_is_fmt2(u32 sdt_word)
+{
+ u32 bar_sel = IPR_GET_FMT2_BAR_SEL(sdt_word);
+
+ switch (bar_sel) {
+ case IPR_SDT_FMT2_BAR0_SEL:
+ case IPR_SDT_FMT2_BAR1_SEL:
+ case IPR_SDT_FMT2_BAR2_SEL:
+ case IPR_SDT_FMT2_BAR3_SEL:
+ case IPR_SDT_FMT2_BAR4_SEL:
+ case IPR_SDT_FMT2_BAR5_SEL:
+ case IPR_SDT_FMT2_EXP_ROM_SEL:
+ return 1;
+	}
+
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+* sym53c500_cs.c Bob Tracy (rct@frus.com)
+*
+* A rewrite of the pcmcia-cs add-on driver for newer (circa 1997)
+* New Media Bus Toaster PCMCIA SCSI cards using the Symbios Logic
+* 53c500 controller: intended for use with 2.6 and later kernels.
+* The pcmcia-cs add-on version of this driver is not supported
+* beyond 2.4. It consisted of three files with history/copyright
+* information as follows:
+*
+* SYM53C500.h
+* Bob Tracy (rct@frus.com)
+* Original by Tom Corner (tcorner@via.at).
+* Adapted from NCR53c406a.h which is Copyrighted (C) 1994
+* Normunds Saumanis (normunds@rx.tech.swh.lv)
+*
+* SYM53C500.c
+* Bob Tracy (rct@frus.com)
+* Original driver by Tom Corner (tcorner@via.at) was adapted
+* from NCR53c406a.c which is Copyrighted (C) 1994, 1995, 1996
+* Normunds Saumanis (normunds@fi.ibm.com)
+*
+* sym53c500.c
+* Bob Tracy (rct@frus.com)
+* Original by Tom Corner (tcorner@via.at) was adapted from a
+* driver for the Qlogic SCSI card written by
+* David Hinds (dhinds@allegro.stanford.edu).
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms of the GNU General Public License as published by the
+* Free Software Foundation; either version 2, or (at your option) any
+* later version.
+*
+* This program is distributed in the hope that it will be useful, but
+* WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+* General Public License for more details.
+*/
+
+#define SYM53C500_DEBUG 0
+#define VERBOSE_SYM53C500_DEBUG 0
+
+/*
+* Set this to 0 if you encounter kernel lockups while transferring
+* data in PIO mode. Note this can be changed via "sysfs".
+*/
+#define USE_FAST_PIO 1
+
+/* =============== End of user configurable parameters ============== */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/ioport.h>
+#include <linux/blkdev.h>
+#include <linux/spinlock.h>
+#include <linux/bitops.h>
+
+#include <asm/io.h>
+#include <asm/dma.h>
+#include <asm/irq.h>
+
+#include <scsi/scsi_ioctl.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/cistpl.h>
+#include <pcmcia/ds.h>
+#include <pcmcia/ciscode.h>
+
+/* ================================================================== */
+
+#ifdef PCMCIA_DEBUG
+static int pc_debug = PCMCIA_DEBUG;
+module_param(pc_debug, int, 0);
+#define DEBUG(n, args...) if (pc_debug>(n)) printk(KERN_DEBUG args)
+static char *version =
+"sym53c500_cs.c 0.9b 2004/05/10 (Bob Tracy)";
+#else
+#define DEBUG(n, args...)
+#endif
+
+/* ================================================================== */
+
+/* Parameters that can be set with 'insmod' */
+
+/* Bit map of interrupts to choose from */
+static unsigned int irq_mask = 0xdeb8;	/* 3-5, 7, 9-12, 14, 15 */
+static int irq_list[4] = { -1 };
+static int num_irqs = 1;
+
+module_param(irq_mask, int, 0);
+MODULE_PARM_DESC(irq_mask, "IRQ mask bits (default: 0xdeb8)");
+module_param_array(irq_list, int, num_irqs, 0);
+MODULE_PARM_DESC(irq_list, "Comma-separated list of up to 4 IRQs to try (default: auto select).");
+
+/* ================================================================== */
+
+#define SYNC_MODE 0 /* Synchronous transfer mode */
+
+/* Default configuration */
+#define C1_IMG 0x07 /* ID=7 */
+#define C2_IMG 0x48 /* FE SCSI2 */
+#define C3_IMG 0x20 /* CDB */
+#define C4_IMG 0x04 /* ANE */
+#define C5_IMG 0xa4 /* ? changed from b6= AA PI SIE POL */
+#define C7_IMG 0x80 /* added for SYM53C500 t. corner */
+
+/* Hardware Registers: offsets from io_port (base) */
+
+/* Control Register Set 0 */
+#define TC_LSB 0x00 /* transfer counter lsb */
+#define TC_MSB 0x01 /* transfer counter msb */
+#define SCSI_FIFO 0x02 /* scsi fifo register */
+#define CMD_REG 0x03 /* command register */
+#define STAT_REG 0x04 /* status register */
+#define DEST_ID 0x04 /* selection/reselection bus id */
+#define INT_REG 0x05 /* interrupt status register */
+#define SRTIMOUT 0x05 /* select/reselect timeout reg */
+#define SEQ_REG 0x06 /* sequence step register */
+#define SYNCPRD 0x06 /* synchronous transfer period */
+#define FIFO_FLAGS 0x07 /* indicates # of bytes in fifo */
+#define SYNCOFF 0x07 /* synchronous offset register */
+#define CONFIG1 0x08 /* configuration register */
+#define CLKCONV 0x09 /* clock conversion register */
+/* #define TESTREG 0x0A */ /* test mode register */
+#define CONFIG2 0x0B /* configuration 2 register */
+#define CONFIG3 0x0C /* configuration 3 register */
+#define CONFIG4 0x0D /* configuration 4 register */
+#define TC_HIGH 0x0E /* transfer counter high */
+/* #define FIFO_BOTTOM 0x0F */ /* reserve FIFO byte register */
+
+/* Control Register Set 1 */
+/* #define JUMPER_SENSE 0x00 */ /* jumper sense port reg (r/w) */
+/* #define SRAM_PTR 0x01 */ /* SRAM address pointer reg (r/w) */
+/* #define SRAM_DATA 0x02 */ /* SRAM data register (r/w) */
+#define PIO_FIFO 0x04 /* PIO FIFO registers (r/w) */
+/* #define PIO_FIFO1 0x05 */ /* */
+/* #define PIO_FIFO2 0x06 */ /* */
+/* #define PIO_FIFO3 0x07 */ /* */
+#define PIO_STATUS 0x08 /* PIO status (r/w) */
+/* #define ATA_CMD 0x09 */ /* ATA command/status reg (r/w) */
+/* #define ATA_ERR 0x0A */ /* ATA features/error reg (r/w) */
+#define PIO_FLAG 0x0B /* PIO flag interrupt enable (r/w) */
+#define CONFIG5 0x09 /* configuration 5 register */
+/* #define SIGNATURE 0x0E */ /* signature register (r) */
+/* #define CONFIG6 0x0F */ /* configuration 6 register (r) */
+#define CONFIG7 0x0d
+
+/* select register set 0 */
+#define REG0(x) (outb(C4_IMG, (x) + CONFIG4))
+/* select register set 1 */
+#define REG1(x) \
+	do { outb(C7_IMG, (x) + CONFIG7); outb(C5_IMG, (x) + CONFIG5); } while (0)
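+
+/*
+ * Typical banked access pattern (sketch, mirroring the interrupt handler
+ * below): switch to set 1 for the PIO status register, then back to set 0
+ * for the core SCSI registers:
+ *
+ *	REG1(port_base);
+ *	pio_status = inb(port_base + PIO_STATUS);
+ *	REG0(port_base);
+ *	status = inb(port_base + STAT_REG);
+ */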
+
+#if SYM53C500_DEBUG
+#define DEB(x) x
+#else
+#define DEB(x)
+#endif
+
+#if VERBOSE_SYM53C500_DEBUG
+#define VDEB(x) x
+#else
+#define VDEB(x)
+#endif
+
+#define LOAD_DMA_COUNT(x, count) \
+do { \
+	outb((count) & 0xff, (x) + TC_LSB); \
+	outb(((count) >> 8) & 0xff, (x) + TC_MSB); \
+	outb(((count) >> 16) & 0xff, (x) + TC_HIGH); \
+} while (0)
+
+/* Chip commands */
+#define DMA_OP 0x80
+
+#define SCSI_NOP 0x00
+#define FLUSH_FIFO 0x01
+#define CHIP_RESET 0x02
+#define SCSI_RESET 0x03
+#define RESELECT 0x40
+#define SELECT_NO_ATN 0x41
+#define SELECT_ATN 0x42
+#define SELECT_ATN_STOP 0x43
+#define ENABLE_SEL 0x44
+#define DISABLE_SEL 0x45
+#define SELECT_ATN3 0x46
+#define RESELECT3 0x47
+#define TRANSFER_INFO 0x10
+#define INIT_CMD_COMPLETE 0x11
+#define MSG_ACCEPT 0x12
+#define TRANSFER_PAD 0x18
+#define SET_ATN 0x1a
+#define RESET_ATN 0x1b
+#define SEND_MSG 0x20
+#define SEND_STATUS 0x21
+#define SEND_DATA 0x22
+#define DISCONN_SEQ 0x23
+#define TERMINATE_SEQ 0x24
+#define TARG_CMD_COMPLETE 0x25
+#define DISCONN 0x27
+#define RECV_MSG 0x28
+#define RECV_CMD 0x29
+#define RECV_DATA 0x2a
+#define RECV_CMD_SEQ 0x2b
+#define TARGET_ABORT_DMA 0x04
+
+/* ================================================================== */
+
+struct scsi_info_t {
+ dev_link_t link;
+ dev_node_t node;
+ struct Scsi_Host *host;
+ unsigned short manf_id;
+};
+
+/*
+* Repository for per-instance host data.
+*/
+struct sym53c500_data {
+ struct scsi_cmnd *current_SC;
+ int fast_pio;
+};
+
+enum Phase {
+ idle,
+ data_out,
+ data_in,
+ command_ph,
+ status_ph,
+ message_out,
+ message_in
+};
+
+/* ================================================================== */
+
+/*
+* Global (within this module) variables other than
+* sym53c500_driver_template (the scsi_host_template).
+*/
+static dev_link_t *dev_list;
+static dev_info_t dev_info = "sym53c500_cs";
+
+/* ================================================================== */
+
+static void
+chip_init(int io_port)
+{
+ REG1(io_port);
+ outb(0x01, io_port + PIO_STATUS);
+ outb(0x00, io_port + PIO_FLAG);
+
+ outb(C4_IMG, io_port + CONFIG4); /* REG0(io_port); */
+ outb(C3_IMG, io_port + CONFIG3);
+ outb(C2_IMG, io_port + CONFIG2);
+ outb(C1_IMG, io_port + CONFIG1);
+
+ outb(0x05, io_port + CLKCONV); /* clock conversion factor */
+ outb(0x9C, io_port + SRTIMOUT); /* Selection timeout */
+ outb(0x05, io_port + SYNCPRD); /* Synchronous transfer period */
+ outb(SYNC_MODE, io_port + SYNCOFF); /* synchronous mode */
+}
+
+static void
+SYM53C500_int_host_reset(int io_port)
+{
+ outb(C4_IMG, io_port + CONFIG4); /* REG0(io_port); */
+ outb(CHIP_RESET, io_port + CMD_REG);
+ outb(SCSI_NOP, io_port + CMD_REG); /* required after reset */
+ outb(SCSI_RESET, io_port + CMD_REG);
+ chip_init(io_port);
+}
+
+static __inline__ int
+SYM53C500_pio_read(int fast_pio, int base, unsigned char *request, unsigned int reqlen)
+{
+ int i;
+ int len; /* current scsi fifo size */
+
+ REG1(base);
+ while (reqlen) {
+ i = inb(base + PIO_STATUS);
+ /* VDEB(printk("pio_status=%x\n", i)); */
+ if (i & 0x80)
+ return 0;
+
+ switch (i & 0x1e) {
+ default:
+ case 0x10: /* fifo empty */
+ len = 0;
+ break;
+ case 0x0:
+ len = 1;
+ break;
+ case 0x8: /* fifo 1/3 full */
+ len = 42;
+ break;
+ case 0xc: /* fifo 2/3 full */
+ len = 84;
+ break;
+ case 0xe: /* fifo full */
+ len = 128;
+ break;
+ }
+
+ if ((i & 0x40) && len == 0) { /* fifo empty and interrupt occurred */
+ return 0;
+ }
+
+ if (len) {
+ if (len > reqlen)
+ len = reqlen;
+
+ if (fast_pio && len > 3) {
+ insl(base + PIO_FIFO, request, len >> 2);
+ request += len & 0xfc;
+ reqlen -= len & 0xfc;
+ } else {
+ while (len--) {
+ *request++ = inb(base + PIO_FIFO);
+ reqlen--;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+static __inline__ int
+SYM53C500_pio_write(int fast_pio, int base, unsigned char *request, unsigned int reqlen)
+{
+ int i = 0;
+ int len; /* current scsi fifo size */
+
+ REG1(base);
+ while (reqlen && !(i & 0x40)) {
+ i = inb(base + PIO_STATUS);
+ /* VDEB(printk("pio_status=%x\n", i)); */
+ if (i & 0x80) /* error */
+ return 0;
+
+ switch (i & 0x1e) {
+ case 0x10:
+ len = 128;
+ break;
+ case 0x0:
+ len = 84;
+ break;
+ case 0x8:
+ len = 42;
+ break;
+ case 0xc:
+ len = 1;
+ break;
+ default:
+ case 0xe:
+ len = 0;
+ break;
+ }
+
+ if (len) {
+ if (len > reqlen)
+ len = reqlen;
+
+ if (fast_pio && len > 3) {
+ outsl(base + PIO_FIFO, request, len >> 2);
+ request += len & 0xfc;
+ reqlen -= len & 0xfc;
+ } else {
+ while (len--) {
+ outb(*request++, base + PIO_FIFO);
+ reqlen--;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+static irqreturn_t
+SYM53C500_intr(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long flags;
+ struct Scsi_Host *dev = dev_id;
+ DEB(unsigned char fifo_size;)
+ DEB(unsigned char seq_reg;)
+ unsigned char status, int_reg;
+ unsigned char pio_status;
+ struct scatterlist *sglist;
+ unsigned int sgcount;
+ int port_base = dev->io_port;
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)dev->hostdata;
+ struct scsi_cmnd *curSC = data->current_SC;
+ int fast_pio = data->fast_pio;
+
+ spin_lock_irqsave(dev->host_lock, flags);
+
+ VDEB(printk("SYM53C500_intr called\n"));
+
+ REG1(port_base);
+ pio_status = inb(port_base + PIO_STATUS);
+ REG0(port_base);
+ status = inb(port_base + STAT_REG);
+ DEB(seq_reg = inb(port_base + SEQ_REG));
+ int_reg = inb(port_base + INT_REG);
+ DEB(fifo_size = inb(port_base + FIFO_FLAGS) & 0x1f);
+
+#if SYM53C500_DEBUG
+ printk("status=%02x, seq_reg=%02x, int_reg=%02x, fifo_size=%02x",
+ status, seq_reg, int_reg, fifo_size);
+ printk(", pio=%02x\n", pio_status);
+#endif /* SYM53C500_DEBUG */
+
+ if (int_reg & 0x80) { /* SCSI reset intr */
+ DEB(printk("SYM53C500: reset intr received\n"));
+ curSC->result = DID_RESET << 16;
+ goto idle_out;
+ }
+
+ if (pio_status & 0x80) {
+ printk("SYM53C500: Warning: PIO error!\n");
+ curSC->result = DID_ERROR << 16;
+ goto idle_out;
+ }
+
+ if (status & 0x20) { /* Parity error */
+ printk("SYM53C500: Warning: parity error!\n");
+ curSC->result = DID_PARITY << 16;
+ goto idle_out;
+ }
+
+ if (status & 0x40) { /* Gross error */
+ printk("SYM53C500: Warning: gross error!\n");
+ curSC->result = DID_ERROR << 16;
+ goto idle_out;
+ }
+
+ if (int_reg & 0x20) { /* Disconnect */
+ DEB(printk("SYM53C500: disconnect intr received\n"));
+ if (curSC->SCp.phase != message_in) { /* Unexpected disconnect */
+ curSC->result = DID_NO_CONNECT << 16;
+ } else { /* Command complete, return status and message */
+ curSC->result = (curSC->SCp.Status & 0xff)
+ | ((curSC->SCp.Message & 0xff) << 8) | (DID_OK << 16);
+ }
+ goto idle_out;
+ }
+
+ switch (status & 0x07) { /* scsi phase */
+ case 0x00: /* DATA-OUT */
+ if (int_reg & 0x10) { /* Target requesting info transfer */
+ curSC->SCp.phase = data_out;
+ VDEB(printk("SYM53C500: Data-Out phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ LOAD_DMA_COUNT(port_base, curSC->request_bufflen); /* Max transfer size */
+ outb(TRANSFER_INFO | DMA_OP, port_base + CMD_REG);
+ if (!curSC->use_sg) /* Don't use scatter-gather */
+ SYM53C500_pio_write(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
+ else { /* use scatter-gather */
+ sgcount = curSC->use_sg;
+ sglist = curSC->request_buffer;
+ while (sgcount--) {
+ SYM53C500_pio_write(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
+ sglist++;
+ }
+ }
+ REG0(port_base);
+ }
+ break;
+
+ case 0x01: /* DATA-IN */
+ if (int_reg & 0x10) { /* Target requesting info transfer */
+ curSC->SCp.phase = data_in;
+ VDEB(printk("SYM53C500: Data-In phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ LOAD_DMA_COUNT(port_base, curSC->request_bufflen); /* Max transfer size */
+ outb(TRANSFER_INFO | DMA_OP, port_base + CMD_REG);
+ if (!curSC->use_sg) /* Don't use scatter-gather */
+ SYM53C500_pio_read(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
+ else { /* Use scatter-gather */
+ sgcount = curSC->use_sg;
+ sglist = curSC->request_buffer;
+ while (sgcount--) {
+ SYM53C500_pio_read(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
+ sglist++;
+ }
+ }
+ REG0(port_base);
+ }
+ break;
+
+ case 0x02: /* COMMAND */
+ curSC->SCp.phase = command_ph;
+ printk("SYM53C500: Warning: Unknown interrupt occurred in command phase!\n");
+ break;
+
+ case 0x03: /* STATUS */
+ curSC->SCp.phase = status_ph;
+ VDEB(printk("SYM53C500: Status phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ outb(INIT_CMD_COMPLETE, port_base + CMD_REG);
+ break;
+
+ case 0x04: /* Reserved */
+ case 0x05: /* Reserved */
+ printk("SYM53C500: WARNING: Reserved phase!!!\n");
+ break;
+
+ case 0x06: /* MESSAGE-OUT */
+ DEB(printk("SYM53C500: Message-Out phase\n"));
+ curSC->SCp.phase = message_out;
+ outb(SET_ATN, port_base + CMD_REG); /* Reject the message */
+ outb(MSG_ACCEPT, port_base + CMD_REG);
+ break;
+
+ case 0x07: /* MESSAGE-IN */
+ VDEB(printk("SYM53C500: Message-In phase\n"));
+ curSC->SCp.phase = message_in;
+
+ curSC->SCp.Status = inb(port_base + SCSI_FIFO);
+ curSC->SCp.Message = inb(port_base + SCSI_FIFO);
+
+ VDEB(printk("SCSI FIFO size=%d\n", inb(port_base + FIFO_FLAGS) & 0x1f));
+ DEB(printk("Status = %02x Message = %02x\n", curSC->SCp.Status, curSC->SCp.Message));
+
+ if (curSC->SCp.Message == SAVE_POINTERS || curSC->SCp.Message == DISCONNECT) {
+ outb(SET_ATN, port_base + CMD_REG); /* Reject message */
+ DEB(printk("Discarding SAVE_POINTERS message\n"));
+ }
+ outb(MSG_ACCEPT, port_base + CMD_REG);
+ break;
+ }
+out:
+ spin_unlock_irqrestore(dev->host_lock, flags);
+ return IRQ_HANDLED;
+
+idle_out:
+ curSC->SCp.phase = idle;
+ curSC->scsi_done(curSC);
+ goto out;
+}
+
+static void
+SYM53C500_release(dev_link_t *link)
+{
+ struct scsi_info_t *info = link->priv;
+ struct Scsi_Host *shost = info->host;
+
+ DEBUG(0, "SYM53C500_release(0x%p)\n", link);
+
+ /*
+ * Do this before releasing/freeing resources.
+ */
+ scsi_remove_host(shost);
+
+ /*
+ * Interrupts getting hosed on card removal. Try
+ * the following code, mostly from qlogicfas.c.
+ */
+ if (shost->irq)
+ free_irq(shost->irq, shost);
+ if (shost->dma_channel != 0xff)
+ free_dma(shost->dma_channel);
+ if (shost->io_port && shost->n_io_port)
+ release_region(shost->io_port, shost->n_io_port);
+
+ link->dev = NULL;
+
+ pcmcia_release_configuration(link->handle);
+ pcmcia_release_io(link->handle, &link->io);
+ pcmcia_release_irq(link->handle, &link->irq);
+
+ link->state &= ~DEV_CONFIG;
+
+ scsi_host_put(shost);
+} /* SYM53C500_release */
+
+static const char*
+SYM53C500_info(struct Scsi_Host *SChost)
+{
+ static char info_msg[256];
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SChost->hostdata;
+
+ DEB(printk("SYM53C500_info called\n"));
+ (void)snprintf(info_msg, sizeof(info_msg),
+ "SYM53C500 at 0x%lx, IRQ %d, %s PIO mode.",
+ SChost->io_port, SChost->irq, data->fast_pio ? "fast" : "slow");
+ return (info_msg);
+}
+
+static int
+SYM53C500_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+{
+ int i;
+ int port_base = SCpnt->device->host->io_port;
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SCpnt->device->host->hostdata;
+
+ VDEB(printk("SYM53C500_queue called\n"));
+
+ DEB(printk("cmd=%02x, cmd_len=%02x, target=%02x, lun=%02x, bufflen=%d\n",
+ SCpnt->cmnd[0], SCpnt->cmd_len, SCpnt->device->id,
+ SCpnt->device->lun, SCpnt->request_bufflen));
+
+ VDEB(for (i = 0; i < SCpnt->cmd_len; i++)
+ printk("cmd[%d]=%02x ", i, SCpnt->cmnd[i]));
+ VDEB(printk("\n"));
+
+ data->current_SC = SCpnt;
+ data->current_SC->scsi_done = done;
+ data->current_SC->SCp.phase = command_ph;
+ data->current_SC->SCp.Status = 0;
+ data->current_SC->SCp.Message = 0;
+
+ /* We are locked here already by the mid layer */
+ REG0(port_base);
+ outb(SCpnt->device->id, port_base + DEST_ID); /* set destination */
+ outb(FLUSH_FIFO, port_base + CMD_REG); /* reset the fifos */
+
+ for (i = 0; i < SCpnt->cmd_len; i++) {
+ outb(SCpnt->cmnd[i], port_base + SCSI_FIFO);
+ }
+ outb(SELECT_NO_ATN, port_base + CMD_REG);
+
+ return 0;
+}
+
+static int
+SYM53C500_host_reset(struct scsi_cmnd *SCpnt)
+{
+ int port_base = SCpnt->device->host->io_port;
+
+ DEB(printk("SYM53C500_host_reset called\n"));
+ SYM53C500_int_host_reset(port_base);
+
+ return SUCCESS;
+}
+
+static int
+SYM53C500_biosparm(struct scsi_device *disk,
+ struct block_device *dev,
+ sector_t capacity, int *info_array)
+{
+ int size;
+
+ DEB(printk("SYM53C500_biosparm called\n"));
+
+ size = capacity;
+ info_array[0] = 64; /* heads */
+ info_array[1] = 32; /* sectors */
+ info_array[2] = size >> 11; /* cylinders */
+ if (info_array[2] > 1024) { /* big disk */
+ info_array[0] = 255;
+ info_array[1] = 63;
+ info_array[2] = size / (255 * 63);
+ }
+ return 0;
+}
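+
+/*
+ * Worked example for the mapping above: a 2 GB disk reports
+ * capacity = 4194304 sectors.  With 64 heads * 32 sectors = 2048
+ * sectors per cylinder, size >> 11 yields 2048 cylinders; that is
+ * over 1024, so the big-disk translation applies instead:
+ * 255 * 63 = 16065 sectors per cylinder, 4194304 / 16065 = 261
+ * cylinders.
+ */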
+
+static ssize_t
+SYM53C500_show_pio(struct class_device *cdev, char *buf)
+{
+ struct Scsi_Host *SHp = class_to_shost(cdev);
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SHp->hostdata;
+
+ return snprintf(buf, 4, "%d\n", data->fast_pio);
+}
+
+static ssize_t
+SYM53C500_store_pio(struct class_device *cdev, const char *buf, size_t count)
+{
+ int pio;
+ struct Scsi_Host *SHp = class_to_shost(cdev);
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SHp->hostdata;
+
+ pio = simple_strtoul(buf, NULL, 0);
+ if (pio == 0 || pio == 1) {
+ data->fast_pio = pio;
+ return count;
+ }
+ else
+ return -EINVAL;
+}
+
+/*
+ * SCSI HBA device attributes we want to
+ * make available via sysfs.
+ */
+static struct class_device_attribute SYM53C500_pio_attr = {
+ .attr = {
+ .name = "fast_pio",
+ .mode = (S_IRUGO | S_IWUSR),
+ },
+ .show = SYM53C500_show_pio,
+ .store = SYM53C500_store_pio,
+};
+
+static struct class_device_attribute *SYM53C500_shost_attrs[] = {
+ &SYM53C500_pio_attr,
+ NULL,
+};
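+
+/*
+ * The attribute appears as /sys/class/scsi_host/hostN/fast_pio once the
+ * host is registered.  Purely as an illustration (the "host0" instance
+ * below is hypothetical -- write "1" for fast PIO, "0" for slow),
+ * userspace could toggle it like this:
+ *
+ *	#include <stdio.h>
+ *
+ *	int main(void)
+ *	{
+ *		FILE *f = fopen("/sys/class/scsi_host/host0/fast_pio", "w");
+ *
+ *		if (!f)
+ *			return 1;
+ *		fputs("0\n", f);
+ *		return fclose(f) ? 1 : 0;
+ *	}
+ */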
+
+/*
+ * scsi_host_template initializer
+ */
+static struct scsi_host_template sym53c500_driver_template = {
+ .module = THIS_MODULE,
+ .name = "SYM53C500",
+ .info = SYM53C500_info,
+ .queuecommand = SYM53C500_queue,
+ .eh_host_reset_handler = SYM53C500_host_reset,
+ .bios_param = SYM53C500_biosparm,
+ .proc_name = "SYM53C500",
+ .can_queue = 1,
+ .this_id = 7,
+ .sg_tablesize = 32,
+ .cmd_per_lun = 1,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = SYM53C500_shost_attrs
+};
+
+#define CS_CHECK(fn, ret) \
+do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
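+
+/*
+ * For example, CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple))
+ * expands to
+ *
+ *	do {
+ *		last_fn = GetFirstTuple;
+ *		if ((last_ret = pcmcia_get_first_tuple(handle, &tuple)) != 0)
+ *			goto cs_failed;
+ *	} while (0)
+ *
+ * so every Card Services call made this way funnels its error to the
+ * cs_failed label in SYM53C500_config() below.
+ */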
+
+static void
+SYM53C500_config(dev_link_t *link)
+{
+ client_handle_t handle = link->handle;
+ struct scsi_info_t *info = link->priv;
+ tuple_t tuple;
+ cisparse_t parse;
+ int i, last_ret, last_fn;
+ int irq_level, port_base;
+ unsigned short tuple_data[32];
+ struct Scsi_Host *host;
+ struct scsi_host_template *tpnt = &sym53c500_driver_template;
+ struct sym53c500_data *data;
+
+ DEBUG(0, "SYM53C500_config(0x%p)\n", link);
+
+ tuple.TupleData = (cisdata_t *)tuple_data;
+ tuple.TupleDataMax = 64;
+ tuple.TupleOffset = 0;
+ tuple.DesiredTuple = CISTPL_CONFIG;
+ CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
+ CS_CHECK(GetTupleData, pcmcia_get_tuple_data(handle, &tuple));
+ CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, &parse));
+ link->conf.ConfigBase = parse.config.base;
+
+ tuple.DesiredTuple = CISTPL_MANFID;
+ if ((pcmcia_get_first_tuple(handle, &tuple) == CS_SUCCESS) &&
+ (pcmcia_get_tuple_data(handle, &tuple) == CS_SUCCESS))
+ info->manf_id = le16_to_cpu(tuple.TupleData[0]);
+
+ /* Configure card */
+ link->state |= DEV_CONFIG;
+
+ tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
+ CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
+ while (1) {
+ if (pcmcia_get_tuple_data(handle, &tuple) != 0 ||
+ pcmcia_parse_tuple(handle, &tuple, &parse) != 0)
+ goto next_entry;
+ link->conf.ConfigIndex = parse.cftable_entry.index;
+ link->io.BasePort1 = parse.cftable_entry.io.win[0].base;
+ link->io.NumPorts1 = parse.cftable_entry.io.win[0].len;
+
+ if (link->io.BasePort1 != 0) {
+ i = pcmcia_request_io(handle, &link->io);
+ if (i == CS_SUCCESS)
+ break;
+ }
+next_entry:
+ CS_CHECK(GetNextTuple, pcmcia_get_next_tuple(handle, &tuple));
+ }
+
+ CS_CHECK(RequestIRQ, pcmcia_request_irq(handle, &link->irq));
+ CS_CHECK(RequestConfiguration, pcmcia_request_configuration(handle, &link->conf));
+
+ /*
+ * That's the trouble with copying liberally from another driver.
+ * Some things probably aren't relevant, and I suspect this entire
+ * section dealing with manufacturer IDs can be scrapped. --rct
+ */
+ if ((info->manf_id == MANFID_MACNICA) ||
+ (info->manf_id == MANFID_PIONEER) ||
+ (info->manf_id == 0x0098)) {
+ /* set ATAcmd */
+ outb(0xb4, link->io.BasePort1 + 0xd);
+ outb(0x24, link->io.BasePort1 + 0x9);
+ outb(0x04, link->io.BasePort1 + 0xd);
+ }
+
+ /*
+ * irq_level == 0 implies tpnt->can_queue == 0, which
+ * is not supported in 2.6. Thus, only irq_level > 0
+ * will be allowed.
+ *
+ * Possible port_base values are as follows:
+ *
+ * 0x130, 0x230, 0x280, 0x290,
+ * 0x320, 0x330, 0x340, 0x350
+ */
+ port_base = link->io.BasePort1;
+ irq_level = link->irq.AssignedIRQ;
+
+ DEB(printk("SYM53C500: port_base=0x%x, irq=%d, fast_pio=%d\n",
+ port_base, irq_level, USE_FAST_PIO);)
+
+ chip_init(port_base);
+
+ host = scsi_host_alloc(tpnt, sizeof(struct sym53c500_data));
+ if (!host) {
+ printk("SYM53C500: Unable to register host, giving up.\n");
+ goto err_release;
+ }
+
+ data = (struct sym53c500_data *)host->hostdata;
+
+ if (irq_level > 0) {
+ if (request_irq(irq_level, SYM53C500_intr, 0, "SYM53C500", host)) {
+ printk("SYM53C500: unable to allocate IRQ %d\n", irq_level);
+ goto err_free_scsi;
+ }
+ DEB(printk("SYM53C500: allocated IRQ %d\n", irq_level));
+ } else if (irq_level == 0) {
+ DEB(printk("SYM53C500: No interrupts detected\n"));
+ goto err_free_scsi;
+ } else {
+ DEB(printk("SYM53C500: Shouldn't get here!\n"));
+ goto err_free_scsi;
+ }
+
+ host->unique_id = port_base;
+ host->irq = irq_level;
+ host->io_port = port_base;
+ host->n_io_port = 0x10;
+ host->dma_channel = -1;
+
+ /*
+ * Note fast_pio is set to USE_FAST_PIO by
+ * default, but can be changed via "sysfs".
+ */
+ data->fast_pio = USE_FAST_PIO;
+
+ sprintf(info->node.dev_name, "scsi%d", host->host_no);
+ link->dev = &info->node;
+ info->host = host;
+
+ if (scsi_add_host(host, NULL))
+ goto err_free_irq;
+
+ scsi_scan_host(host);
+
+ goto out; /* SUCCESS */
+
+err_free_irq:
+ free_irq(irq_level, host);
+err_free_scsi:
+ scsi_host_put(host);
+err_release:
+ release_region(port_base, 0x10);
+ printk(KERN_INFO "sym53c500_cs: no SCSI devices found\n");
+
+out:
+ link->state &= ~DEV_CONFIG_PENDING;
+ return;
+
+cs_failed:
+ cs_error(link->handle, last_fn, last_ret);
+ SYM53C500_release(link);
+ return;
+} /* SYM53C500_config */
+
+static int
+SYM53C500_event(event_t event, int priority, event_callback_args_t *args)
+{
+ dev_link_t *link = args->client_data;
+ struct scsi_info_t *info = link->priv;
+
+ DEBUG(1, "SYM53C500_event(0x%06x)\n", event);
+
+ switch (event) {
+ case CS_EVENT_CARD_REMOVAL:
+ link->state &= ~DEV_PRESENT;
+ if (link->state & DEV_CONFIG)
+ SYM53C500_release(link);
+ break;
+ case CS_EVENT_CARD_INSERTION:
+ link->state |= DEV_PRESENT | DEV_CONFIG_PENDING;
+ SYM53C500_config(link);
+ break;
+ case CS_EVENT_PM_SUSPEND:
+ link->state |= DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_RESET_PHYSICAL:
+ if (link->state & DEV_CONFIG)
+ pcmcia_release_configuration(link->handle);
+ break;
+ case CS_EVENT_PM_RESUME:
+ link->state &= ~DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_CARD_RESET:
+ if (link->state & DEV_CONFIG) {
+ pcmcia_request_configuration(link->handle, &link->conf);
+ /* See earlier comment about manufacturer IDs. */
+ if ((info->manf_id == MANFID_MACNICA) ||
+ (info->manf_id == MANFID_PIONEER) ||
+ (info->manf_id == 0x0098)) {
+ outb(0x80, link->io.BasePort1 + 0xd);
+ outb(0x24, link->io.BasePort1 + 0x9);
+ outb(0x04, link->io.BasePort1 + 0xd);
+ }
+ /*
+ * If things don't work after a "resume",
+ * this is a good place to start looking.
+ */
+ SYM53C500_int_host_reset(link->io.BasePort1);
+ }
+ break;
+ }
+ return 0;
+} /* SYM53C500_event */
+
+static void
+SYM53C500_detach(dev_link_t *link)
+{
+ dev_link_t **linkp;
+
+ DEBUG(0, "SYM53C500_detach(0x%p)\n", link);
+
+ /* Locate device structure */
+ for (linkp = &dev_list; *linkp; linkp = &(*linkp)->next)
+ if (*linkp == link)
+ break;
+ if (*linkp == NULL)
+ return;
+
+ if (link->state & DEV_CONFIG)
+ SYM53C500_release(link);
+
+ if (link->handle)
+ pcmcia_deregister_client(link->handle);
+
+ /* Unlink device structure, free bits. */
+ *linkp = link->next;
+ kfree(link->priv);
+ link->priv = NULL;
+} /* SYM53C500_detach */
+
+static dev_link_t *
+SYM53C500_attach(void)
+{
+ struct scsi_info_t *info;
+ client_reg_t client_reg;
+ dev_link_t *link;
+ int i, ret;
+
+ DEBUG(0, "SYM53C500_attach()\n");
+
+ /* Create new SCSI device */
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return NULL;
+ memset(info, 0, sizeof(*info));
+ link = &info->link;
+ link->priv = info;
+ link->io.NumPorts1 = 16;
+ link->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
+ link->io.IOAddrLines = 10;
+ link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
+ link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
+ if (irq_list[0] == -1)
+ link->irq.IRQInfo2 = irq_mask;
+ else
+ for (i = 0; i < 4; i++)
+ link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->conf.Attributes = CONF_ENABLE_IRQ;
+ link->conf.Vcc = 50;
+ link->conf.IntType = INT_MEMORY_AND_IO;
+ link->conf.Present = PRESENT_OPTION;
+
+ /* Register with Card Services */
+ link->next = dev_list;
+ dev_list = link;
+ client_reg.dev_info = &dev_info;
+ client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
+ client_reg.event_handler = &SYM53C500_event;
+ client_reg.EventMask = CS_EVENT_RESET_REQUEST | CS_EVENT_CARD_RESET |
+ CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
+ CS_EVENT_PM_SUSPEND | CS_EVENT_PM_RESUME;
+ client_reg.Version = 0x0210;
+ client_reg.event_callback_args.client_data = link;
+ ret = pcmcia_register_client(&link->handle, &client_reg);
+ if (ret != 0) {
+ cs_error(link->handle, RegisterClient, ret);
+ SYM53C500_detach(link);
+ return NULL;
+ }
+
+ return link;
+} /* SYM53C500_attach */
+
+MODULE_AUTHOR("Bob Tracy <rct@frus.com>");
+MODULE_DESCRIPTION("SYM53C500 PCMCIA SCSI driver");
+MODULE_LICENSE("GPL");
+
+static struct pcmcia_driver sym53c500_cs_driver = {
+ .owner = THIS_MODULE,
+ .drv = {
+ .name = "sym53c500_cs",
+ },
+ .attach = SYM53C500_attach,
+ .detach = SYM53C500_detach,
+};
+
+static int __init
+init_sym53c500_cs(void)
+{
+ return pcmcia_register_driver(&sym53c500_cs_driver);
+}
+
+static void __exit
+exit_sym53c500_cs(void)
+{
+ pcmcia_unregister_driver(&sym53c500_cs_driver);
+}
+
+module_init(init_sym53c500_cs);
+module_exit(exit_sym53c500_cs);
--- /dev/null
+/* to be used by qlogicfas and qlogic_cs */
+#ifndef __QLOGICFAS408_H
+#define __QLOGICFAS408_H
+
+/*----------------------------------------------------------------*/
+/* Configuration */
+
+/* Set the following to 1 to max out the speed of the PIO PseudoDMA
+ transfers; 0 tends to be slower, but more stable. */
+
+#define QL_TURBO_PDMA 1
+
+/* This should be 1 to enable parity detection */
+
+#define QL_ENABLE_PARITY 1
+
+/* This will reset all devices when the driver is initialized (during bootup).
+ The other Linux drivers don't do this, but the DOS drivers do, and after
+ using DOS or some kind of crash or lockup this will bring things back
+ without requiring a cold boot. It does take some time to recover from a
+ reset, so it is slower, and I have seen timeouts where devices weren't
+ recognized when this was set. */
+
+#define QL_RESET_AT_START 0
+
+/* crystal frequency in megahertz (for offsets 5 and 9)
+ Please set this for your card. Most Qlogic cards are 40 MHz. The
+ Control Concepts ISA (not VLB) is 24 MHz */
+
+#define XTALFREQ 40
+
+/**********/
+/* DANGER! modify these at your own risk */
+/* SLOWCABLE can usually be reset to zero if you have a clean setup and
+ proper termination. The rest are for synchronous transfers and other
+ advanced features if your device can transfer faster than 5 MB/sec.
+ If you are really curious, email me for a quick howto until I have
+ something official */
+/**********/
+
+/*****/
+/* config register 1 (offset 8) options */
+/* This needs to be set to 1 if your cabling is long or noisy */
+#define SLOWCABLE 1
+
+/*****/
+/* offset 0xc */
+/* This will set fast (10 MHz) synchronous timing when set to 1.
+ For this to have an effect, FASTCLK must also be 1 */
+#define FASTSCSI 0
+
+/* This when set to 1 will set a faster sync transfer rate */
+#define FASTCLK 0 /*(XTALFREQ>25?1:0)*/
+
+/*****/
+/* offset 6 */
+/* This is the sync transfer divisor; XTALFREQ/X will be the maximum
+ achievable data rate (assuming the rest of the system is capable
+ and set properly) */
+#define SYNCXFRPD 5 /*(XTALFREQ/5)*/
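+/* e.g. with XTALFREQ = 40 and SYNCXFRPD = 5, the ceiling is 40/5 = 8 MHz */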
+
+/*****/
+/* offset 7 */
+/* This is the count of how many synchronous transfers can take place,
+ i.e. how many REQs can occur before an ACK is given.
+ The maximum value for this is 15; the upper bits can modify
+ REQ/ACK assertion and deassertion during synchronous transfers.
+ If this is 0, the bus will only transfer asynchronously */
+#define SYNCOFFST 0
+/* for the curious, bits 7&6 control the deassertion delay in 1/2 cycles
+ of the 40 MHz clock. If FASTCLK is 1, specifying 01 (1/2) will
+ cause the deassertion to be early by 1/2 clock. Bits 5&4 control
+ the assertion delay, also in 1/2 clocks (FASTCLK is ignored here). */
+
+/*----------------------------------------------------------------*/
+
+struct qlogicfas408_priv {
+ int qbase; /* Port */
+ int qinitid; /* initiator ID */
+ int qabort; /* Flag to cause an abort */
+ int qlirq; /* IRQ being used */
+ int int_type; /* type of irq, 2 for ISA board, 0 for PCMCIA */
+ char qinfo[80]; /* description */
+ Scsi_Cmnd *qlcmd; /* current command being processed */
+ struct Scsi_Host *shost; /* pointer back to host */
+ struct qlogicfas408_priv *next; /* next private struct */
+};
+
+/* The qlogic card uses two register maps - These macros select which one */
+#define REG0 ( outb( inb( qbase + 0xd ) & 0x7f , qbase + 0xd ), outb( 4 , qbase + 0xd ))
+#define REG1 ( outb( inb( qbase + 0xd ) | 0x80 , qbase + 0xd ), outb( 0xb4 | int_type, qbase + 0xd ))
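+/* REG0 clears bit 7 of the control register at qbase + 0xd and then writes
+   4 there; REG1 sets bit 7 and writes 0xb4 | int_type.  Both expect 'qbase'
+   (and REG1 also 'int_type') to be in scope at the point of use. */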
+
+/* following is watchdog timeout in microseconds */
+#define WATCHDOG 5000000
+
+/*----------------------------------------------------------------*/
+/* the following will set the monitor border color (useful for finding
+ where something crashed or got stuck, and as a simple profiler) */
+
+#define rtrc(i) {}
+
+#define get_priv_by_cmd(x) (struct qlogicfas408_priv *)&((x)->device->host->hostdata[0])
+#define get_priv_by_host(x) (struct qlogicfas408_priv *)&((x)->hostdata[0])
+
+irqreturn_t qlogicfas408_ihandl(int irq, void *dev_id, struct pt_regs *regs);
+int qlogicfas408_queuecommand(Scsi_Cmnd * cmd, void (*done) (Scsi_Cmnd *));
+int qlogicfas408_biosparam(struct scsi_device * disk,
+ struct block_device *dev,
+ sector_t capacity, int ip[]);
+int qlogicfas408_abort(Scsi_Cmnd * cmd);
+int qlogicfas408_bus_reset(Scsi_Cmnd * cmd);
+int qlogicfas408_host_reset(Scsi_Cmnd * cmd);
+int qlogicfas408_device_reset(Scsi_Cmnd * cmd);
+const char *qlogicfas408_info(struct Scsi_Host *host);
+int qlogicfas408_get_chip_type(int qbase, int int_type);
+void qlogicfas408_setup(int qbase, int id, int int_type);
+int qlogicfas408_detect(int qbase, int int_type);
+void qlogicfas408_disable_ints(struct qlogicfas408_priv *priv);
+#endif /* __QLOGICFAS408_H */
+
--- /dev/null
+/*
+ * sata_nv.c - NVIDIA nForce SATA
+ *
+ * Copyright 2004 NVIDIA Corp. All rights reserved.
+ * Copyright 2004 Andrew Chew
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_nv"
+#define DRV_VERSION "0.01"
+
+#define NV_PORTS 2
+#define NV_PIO_MASK 0x1f
+#define NV_UDMA_MASK 0x7f
+#define NV_PORT0_BMDMA_REG_OFFSET 0x00
+#define NV_PORT1_BMDMA_REG_OFFSET 0x08
+#define NV_PORT0_SCR_REG_OFFSET 0x00
+#define NV_PORT1_SCR_REG_OFFSET 0x40
+
+#define NV_INT_STATUS 0x10
+#define NV_INT_STATUS_PDEV_INT 0x01
+#define NV_INT_STATUS_PDEV_PM 0x02
+#define NV_INT_STATUS_PDEV_ADDED 0x04
+#define NV_INT_STATUS_PDEV_REMOVED 0x08
+#define NV_INT_STATUS_SDEV_INT 0x10
+#define NV_INT_STATUS_SDEV_PM 0x20
+#define NV_INT_STATUS_SDEV_ADDED 0x40
+#define NV_INT_STATUS_SDEV_REMOVED 0x80
+#define NV_INT_STATUS_PDEV_HOTPLUG (NV_INT_STATUS_PDEV_ADDED | \
+ NV_INT_STATUS_PDEV_REMOVED)
+#define NV_INT_STATUS_SDEV_HOTPLUG (NV_INT_STATUS_SDEV_ADDED | \
+ NV_INT_STATUS_SDEV_REMOVED)
+#define NV_INT_STATUS_HOTPLUG (NV_INT_STATUS_PDEV_HOTPLUG | \
+ NV_INT_STATUS_SDEV_HOTPLUG)
+
+#define NV_INT_ENABLE 0x11
+#define NV_INT_ENABLE_PDEV_MASK 0x01
+#define NV_INT_ENABLE_PDEV_PM 0x02
+#define NV_INT_ENABLE_PDEV_ADDED 0x04
+#define NV_INT_ENABLE_PDEV_REMOVED 0x08
+#define NV_INT_ENABLE_SDEV_MASK 0x10
+#define NV_INT_ENABLE_SDEV_PM 0x20
+#define NV_INT_ENABLE_SDEV_ADDED 0x40
+#define NV_INT_ENABLE_SDEV_REMOVED 0x80
+#define NV_INT_ENABLE_PDEV_HOTPLUG (NV_INT_ENABLE_PDEV_ADDED | \
+ NV_INT_ENABLE_PDEV_REMOVED)
+#define NV_INT_ENABLE_SDEV_HOTPLUG (NV_INT_ENABLE_SDEV_ADDED | \
+ NV_INT_ENABLE_SDEV_REMOVED)
+#define NV_INT_ENABLE_HOTPLUG (NV_INT_ENABLE_PDEV_HOTPLUG | \
+ NV_INT_ENABLE_SDEV_HOTPLUG)
+
+#define NV_INT_CONFIG 0x12
+#define NV_INT_CONFIG_METHD 0x01 // 0 = INT, 1 = SMI
+
+static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static irqreturn_t nv_interrupt (int irq, void *dev_instance, struct pt_regs *regs);
+static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg);
+static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+static void nv_host_stop (struct ata_host_set *host_set);
+
+static struct pci_device_id nv_pci_tbl[] = {
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2S_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { 0, } /* terminate list */
+};
+
+static struct pci_driver nv_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = nv_pci_tbl,
+ .probe = nv_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+static Scsi_Host_Template nv_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = ATA_MAX_PRD,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ .use_clustering = ATA_SHT_USE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = ATA_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations nv_ops = {
+ .port_disable = ata_port_disable,
+ .tf_load = ata_tf_load_pio,
+ .tf_read = ata_tf_read_pio,
+ .exec_command = ata_exec_command_pio,
+ .check_status = ata_check_status_pio,
+ .phy_reset = sata_phy_reset,
+ .bmdma_setup = ata_bmdma_setup_pio,
+ .bmdma_start = ata_bmdma_start_pio,
+ .qc_prep = ata_qc_prep,
+ .qc_issue = ata_qc_issue_prot,
+ .eng_timeout = ata_eng_timeout,
+ .irq_handler = nv_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .scr_read = nv_scr_read,
+ .scr_write = nv_scr_write,
+ .port_start = ata_port_start,
+ .port_stop = ata_port_stop,
+ .host_stop = nv_host_stop,
+};
+
+MODULE_AUTHOR("NVIDIA");
+MODULE_DESCRIPTION("low-level driver for NVIDIA nForce SATA controller");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, nv_pci_tbl);
+
+static irqreturn_t nv_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ata_host_set *host_set = dev_instance;
+ unsigned int i;
+ unsigned int handled = 0;
+ unsigned long flags;
+ u8 intr_status;
+ u8 intr_enable;
+
+ spin_lock_irqsave(&host_set->lock, flags);
+
+ for (i = 0; i < host_set->n_ports; i++) {
+ struct ata_port *ap;
+
+ ap = host_set->ports[i];
+ if (!ap) /* skip ports that failed to initialize */
+ continue;
+
+ if (!(ap->flags & ATA_FLAG_PORT_DISABLED)) {
+ struct ata_queued_cmd *qc;
+
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (qc && (!(qc->tf.ctl & ATA_NIEN)))
+ handled += ata_host_intr(ap, qc);
+ }
+
+ intr_status = inb(ap->ioaddr.scr_addr + NV_INT_STATUS);
+ intr_enable = inb(ap->ioaddr.scr_addr + NV_INT_ENABLE);
+
+ // Clear interrupt status.
+ outb(0xff, ap->ioaddr.scr_addr + NV_INT_STATUS);
+
+ if (intr_status & NV_INT_STATUS_HOTPLUG) {
+ if (intr_status & NV_INT_STATUS_PDEV_ADDED) {
+ printk(KERN_WARNING "ata%u: "
+ "Primary device added\n", ap->id);
+ }
+
+ if (intr_status & NV_INT_STATUS_PDEV_REMOVED) {
+ printk(KERN_WARNING "ata%u: "
+ "Primary device removed\n", ap->id);
+ }
+
+ if (intr_status & NV_INT_STATUS_SDEV_ADDED) {
+ printk(KERN_WARNING "ata%u: "
+ "Secondary device added\n", ap->id);
+ }
+
+ if (intr_status & NV_INT_STATUS_SDEV_REMOVED) {
+ printk(KERN_WARNING "ata%u: "
+ "Secondary device removed\n", ap->id);
+ }
+ }
+ }
+
+ spin_unlock_irqrestore(&host_set->lock, flags);
+
+ return IRQ_RETVAL(handled);
+}
+
+static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg)
+{
+ if (sc_reg > SCR_CONTROL)
+ return 0xffffffffU;
+
+ return inl(ap->ioaddr.scr_addr + (sc_reg * 4));
+}
+
+static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
+{
+ if (sc_reg > SCR_CONTROL)
+ return;
+
+ outl(val, ap->ioaddr.scr_addr + (sc_reg * 4));
+}
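+
+/*
+ * SCR registers are spaced one 32-bit word apart: SCR_STATUS (index 0)
+ * is at scr_addr + 0, SCR_ERROR (index 1) at scr_addr + 4 and
+ * SCR_CONTROL (index 2) at scr_addr + 8; higher indices are rejected
+ * above.
+ */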
+
+static void nv_host_stop (struct ata_host_set *host_set)
+{
+ int i;
+
+ for (i = 0; i < host_set->n_ports; i++) {
+ u8 intr_mask;
+
+ // Disable hotplug event interrupts.
+ intr_mask = inb(host_set->ports[i]->ioaddr.scr_addr +
+ NV_INT_ENABLE);
+ intr_mask &= ~(NV_INT_ENABLE_HOTPLUG);
+ outb(intr_mask, host_set->ports[i]->ioaddr.scr_addr +
+ NV_INT_ENABLE);
+ }
+}
+
+static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int printed_version = 0;
+ struct ata_probe_ent *probe_ent = NULL;
+ int i;
+ int rc;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+ rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+ rc = pci_set_consistent_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+
+ probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
+ if (!probe_ent) {
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ memset(probe_ent, 0, sizeof(*probe_ent));
+ INIT_LIST_HEAD(&probe_ent->node);
+
+ probe_ent->pdev = pdev;
+ probe_ent->sht = &nv_sht;
+ probe_ent->host_flags = ATA_FLAG_SATA |
+ ATA_FLAG_SATA_RESET |
+ ATA_FLAG_SRST |
+ ATA_FLAG_NO_LEGACY;
+ probe_ent->port_ops = &nv_ops;
+ probe_ent->n_ports = NV_PORTS;
+ probe_ent->irq = pdev->irq;
+ probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->pio_mask = NV_PIO_MASK;
+ probe_ent->udma_mask = NV_UDMA_MASK;
+
+ probe_ent->port[0].cmd_addr = pci_resource_start(pdev, 0);
+ ata_std_ports(&probe_ent->port[0]);
+ probe_ent->port[0].altstatus_addr =
+ probe_ent->port[0].ctl_addr =
+ pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS;
+ probe_ent->port[0].bmdma_addr =
+ pci_resource_start(pdev, 4) | NV_PORT0_BMDMA_REG_OFFSET;
+ probe_ent->port[0].scr_addr =
+ pci_resource_start(pdev, 5) | NV_PORT0_SCR_REG_OFFSET;
+
+ probe_ent->port[1].cmd_addr = pci_resource_start(pdev, 2);
+ ata_std_ports(&probe_ent->port[1]);
+ probe_ent->port[1].altstatus_addr =
+ probe_ent->port[1].ctl_addr =
+ pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS;
+ probe_ent->port[1].bmdma_addr =
+ pci_resource_start(pdev, 4) | NV_PORT1_BMDMA_REG_OFFSET;
+ probe_ent->port[1].scr_addr =
+ pci_resource_start(pdev, 5) | NV_PORT1_SCR_REG_OFFSET;
+
+ pci_set_master(pdev);
+
+ rc = ata_device_add(probe_ent);
+ if (rc != NV_PORTS) {
+ kfree(probe_ent); /* ata_device_add failed; don't leak the probe struct */
+ rc = -ENODEV;
+ goto err_out_regions;
+ }
+
+ // Enable hotplug event interrupts.
+ for (i = 0; i < probe_ent->n_ports; i++) {
+ u8 intr_mask;
+
+ outb(NV_INT_STATUS_HOTPLUG, probe_ent->port[i].scr_addr +
+ NV_INT_STATUS);
+
+ intr_mask = inb(probe_ent->port[i].scr_addr + NV_INT_ENABLE);
+ intr_mask |= NV_INT_ENABLE_HOTPLUG;
+ outb(intr_mask, probe_ent->port[i].scr_addr + NV_INT_ENABLE);
+ }
+
+ kfree(probe_ent);
+
+ return 0;
+
+err_out_regions:
+ pci_release_regions(pdev);
+
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
+static int __init nv_init(void)
+{
+ return pci_module_init(&nv_pci_driver);
+}
+
+static void __exit nv_exit(void)
+{
+ pci_unregister_driver(&nv_pci_driver);
+}
+
+module_init(nv_init);
+module_exit(nv_exit);
--- /dev/null
+/*
+ * sata_promise.h - Promise SATA common definitions and inline funcs
+ *
+ * Copyright 2003-2004 Red Hat, Inc.
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ */
+
+#ifndef __SATA_PROMISE_H__
+#define __SATA_PROMISE_H__
+
+#include <linux/ata.h>
+
+enum pdc_packet_bits {
+ PDC_PKT_READ = (1 << 2),
+ PDC_PKT_NODATA = (1 << 3),
+
+ PDC_PKT_SIZEMASK = (1 << 7) | (1 << 6) | (1 << 5),
+ PDC_PKT_CLEAR_BSY = (1 << 4),
+ PDC_PKT_WAIT_DRDY = (1 << 3) | (1 << 4),
+ PDC_LAST_REG = (1 << 3),
+
+ PDC_REG_DEVCTL = (1 << 3) | (1 << 2) | (1 << 1),
+};
+
+static inline unsigned int pdc_pkt_header(struct ata_taskfile *tf,
+ dma_addr_t sg_table,
+ unsigned int devno, u8 *buf)
+{
+ u8 dev_reg;
+ u32 *buf32 = (u32 *) buf;
+
+ /* set control bits (byte 0), zero delay seq id (byte 3),
+ * and seq id (byte 2)
+ */
+ switch (tf->protocol) {
+ case ATA_PROT_DMA:
+ if (!(tf->flags & ATA_TFLAG_WRITE))
+ buf32[0] = cpu_to_le32(PDC_PKT_READ);
+ else
+ buf32[0] = 0;
+ break;
+
+ case ATA_PROT_NODATA:
+ buf32[0] = cpu_to_le32(PDC_PKT_NODATA);
+ break;
+
+ default:
+ BUG();
+ break;
+ }
+
+ buf32[1] = cpu_to_le32(sg_table); /* S/G table addr */
+ buf32[2] = 0; /* no next-packet */
+
+ if (devno == 0)
+ dev_reg = ATA_DEVICE_OBS;
+ else
+ dev_reg = ATA_DEVICE_OBS | ATA_DEV1;
+
+ /* select device */
+ buf[12] = (1 << 5) | PDC_PKT_CLEAR_BSY | ATA_REG_DEVICE;
+ buf[13] = dev_reg;
+
+ /* device control register */
+ buf[14] = (1 << 5) | PDC_REG_DEVCTL;
+ buf[15] = tf->ctl;
+
+ return 16; /* offset of next byte */
+}
+
+static inline unsigned int pdc_pkt_footer(struct ata_taskfile *tf, u8 *buf,
+ unsigned int i)
+{
+ if (tf->flags & ATA_TFLAG_DEVICE) {
+ buf[i++] = (1 << 5) | ATA_REG_DEVICE;
+ buf[i++] = tf->device;
+ }
+
+ /* and finally the command itself; also includes end-of-pkt marker */
+ buf[i++] = (1 << 5) | PDC_LAST_REG | ATA_REG_CMD;
+ buf[i++] = tf->command;
+
+ return i;
+}
+
+static inline unsigned int pdc_prep_lba28(struct ata_taskfile *tf, u8 *buf, unsigned int i)
+{
+ /* the "(1 << 5)" should be read "(count << 5)" */
+
+ /* ATA command block registers */
+ buf[i++] = (1 << 5) | ATA_REG_FEATURE;
+ buf[i++] = tf->feature;
+
+ buf[i++] = (1 << 5) | ATA_REG_NSECT;
+ buf[i++] = tf->nsect;
+
+ buf[i++] = (1 << 5) | ATA_REG_LBAL;
+ buf[i++] = tf->lbal;
+
+ buf[i++] = (1 << 5) | ATA_REG_LBAM;
+ buf[i++] = tf->lbam;
+
+ buf[i++] = (1 << 5) | ATA_REG_LBAH;
+ buf[i++] = tf->lbah;
+
+ return i;
+}
+
+static inline unsigned int pdc_prep_lba48(struct ata_taskfile *tf, u8 *buf, unsigned int i)
+{
+ /* the "(2 << 5)" should be read "(count << 5)" */
+
+ /* ATA command block registers */
+ buf[i++] = (2 << 5) | ATA_REG_FEATURE;
+ buf[i++] = tf->hob_feature;
+ buf[i++] = tf->feature;
+
+ buf[i++] = (2 << 5) | ATA_REG_NSECT;
+ buf[i++] = tf->hob_nsect;
+ buf[i++] = tf->nsect;
+
+ buf[i++] = (2 << 5) | ATA_REG_LBAL;
+ buf[i++] = tf->hob_lbal;
+ buf[i++] = tf->lbal;
+
+ buf[i++] = (2 << 5) | ATA_REG_LBAM;
+ buf[i++] = tf->hob_lbam;
+ buf[i++] = tf->lbam;
+
+ buf[i++] = (2 << 5) | ATA_REG_LBAH;
+ buf[i++] = tf->hob_lbah;
+ buf[i++] = tf->lbah;
+
+ return i;
+}
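+
+/*
+ * Putting the helpers together -- an illustrative sketch only, not a
+ * function the driver itself defines: a complete LBA28 DMA packet is
+ * the 16-byte header, five register/value pairs, then the footer with
+ * the command byte and end-of-packet marker.
+ */
+static inline unsigned int pdc_pkt_build_lba28_example(struct ata_taskfile *tf,
+						       dma_addr_t sg_table,
+						       unsigned int devno, u8 *buf)
+{
+	unsigned int i;
+
+	i = pdc_pkt_header(tf, sg_table, devno, buf);	/* i == 16 */
+	i = pdc_prep_lba28(tf, buf, i);			/* taskfile registers */
+	return pdc_pkt_footer(tf, buf, i);		/* command + marker */
+}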
+
+
+#endif /* __SATA_PROMISE_H__ */
--- /dev/null
+#
+# Makefile for the Motorola CPM (SCC/SMC) UART driver
+#
+
+obj-$(CONFIG_SERIAL_CPM) += cpm_uart.o
+
+# Select the correct platform objects.
+cpm_uart-objs-$(CONFIG_CPM2) += cpm_uart_cpm2.o
+cpm_uart-objs-$(CONFIG_8xx) += cpm_uart_cpm1.o
+
+cpm_uart-objs := cpm_uart_core.o $(cpm_uart-objs-y)
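+
+# With CONFIG_SERIAL_CPM=y and CONFIG_CPM2=y, for example, this resolves to
+#   cpm_uart-objs := cpm_uart_core.o cpm_uart_cpm2.o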
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart.h
+ *
+ * Driver for CPM (SCC/SMC) serial ports
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ *
+ */
+#ifndef CPM_UART_H
+#define CPM_UART_H
+
+#include <linux/config.h>
+
+#if defined(CONFIG_CPM2)
+#include "cpm_uart_cpm2.h"
+#elif defined(CONFIG_8xx)
+#include "cpm_uart_cpm1.h"
+#endif
+
+#ifndef CONFIG_SERIAL_8250
+#define SERIAL_CPM_MAJOR TTY_MAJOR
+#define SERIAL_CPM_MINOR 64
+#else
+#define SERIAL_CPM_MAJOR 204
+#define SERIAL_CPM_MINOR 42
+#endif
+
+#define IS_SMC(pinfo) (pinfo->flags & FLAG_SMC)
+#define IS_DISCARDING(pinfo) (pinfo->flags & FLAG_DISCARDING)
+#define FLAG_DISCARDING 0x00000004 /* when set, the port is being reconfigured and new tx is dropped */
+#define FLAG_SMC 0x00000002
+#define FLAG_CONSOLE 0x00000001
+
+#define UART_SMC1 0
+#define UART_SMC2 1
+#define UART_SCC1 2
+#define UART_SCC2 3
+#define UART_SCC3 4
+#define UART_SCC4 5
+
+#define UART_NR 6
+
+#define RX_NUM_FIFO 4
+#define RX_BUF_SIZE 32
+#define TX_NUM_FIFO 4
+#define TX_BUF_SIZE 32
+
+struct uart_cpm_port {
+ struct uart_port port;
+ u16 rx_nrfifos;
+ u16 rx_fifosize;
+ u16 tx_nrfifos;
+ u16 tx_fifosize;
+ smc_t *smcp;
+ smc_uart_t *smcup;
+ scc_t *sccp;
+ scc_uart_t *sccup;
+ volatile cbd_t *rx_bd_base;
+ volatile cbd_t *rx_cur;
+ volatile cbd_t *tx_bd_base;
+ volatile cbd_t *tx_cur;
+ unsigned char *tx_buf;
+ unsigned char *rx_buf;
+ u32 flags;
+ void (*set_lineif)(struct uart_cpm_port *);
+ u8 brg;
+ uint dp_addr;
+ void *mem_addr;
+ dma_addr_t dma_addr;
+ /* helpers */
+ int baud;
+ int bits;
+};
+
+extern int cpm_uart_port_map[UART_NR];
+extern int cpm_uart_nr;
+extern struct uart_cpm_port cpm_uart_ports[UART_NR];
+
+/* these are located in their respective files */
+void cpm_line_cr_cmd(int line, int cmd);
+int cpm_uart_init_portdesc(void);
+int cpm_uart_allocbuf(struct uart_cpm_port *pinfo, unsigned int is_con);
+void cpm_uart_freebuf(struct uart_cpm_port *pinfo);
+
+void smc1_lineif(struct uart_cpm_port *pinfo);
+void smc2_lineif(struct uart_cpm_port *pinfo);
+void scc1_lineif(struct uart_cpm_port *pinfo);
+void scc2_lineif(struct uart_cpm_port *pinfo);
+void scc3_lineif(struct uart_cpm_port *pinfo);
+void scc4_lineif(struct uart_cpm_port *pinfo);
+
+#endif /* CPM_UART_H */
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart.c
+ *
+ * Driver for CPM (SCC/SMC) serial ports; core driver
+ *
+ * Based on arch/ppc/cpm2_io/uart.c by Dan Malek
+ * Based on ppc8xx.c by Thomas Gleixner
+ * Based on drivers/serial/amba.c by Russell King
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com) (CPM2)
+ * Pantelis Antoniou (panto@intracom.gr) (CPM1)
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ * (C) 2004 Intracom, S.A.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/device.h>
+#include <linux/bootmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/delay.h>
+
+#if defined(CONFIG_SERIAL_CPM_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+#include <linux/kernel.h>
+
+#include "cpm_uart.h"
+
+/***********************************************************************/
+
+/* Track which ports are configured as uarts */
+int cpm_uart_port_map[UART_NR];
+/* How many ports did we config as uarts */
+int cpm_uart_nr;
+
+/**************************************************************/
+
+static int cpm_uart_tx_pump(struct uart_port *port);
+static void cpm_uart_init_smc(struct uart_cpm_port *pinfo, int bits, u16 cval);
+static void cpm_uart_init_scc(struct uart_cpm_port *pinfo, int sbits, u16 sval);
+
+/**************************************************************/
+
+/*
+ * Check whether all transmit buffers have been processed
+ */
+static unsigned int cpm_uart_tx_empty(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile cbd_t *bdp = pinfo->tx_bd_base;
+ int ret = 0;
+
+ while (1) {
+ if (bdp->cbd_sc & BD_SC_READY)
+ break;
+
+ if (bdp->cbd_sc & BD_SC_WRAP) {
+ ret = TIOCSER_TEMT;
+ break;
+ }
+ bdp++;
+ }
+
+ pr_debug("CPM uart[%d]:tx_empty: %d\n", port->line, ret);
+
+ return ret;
+}
+
+static void cpm_uart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ /* Whee. Do nothing. */
+}
+
+static unsigned int cpm_uart_get_mctrl(struct uart_port *port)
+{
+ /* Whee. Do nothing. */
+ return TIOCM_CAR | TIOCM_DSR | TIOCM_CTS;
+}
+
+/*
+ * Stop transmitter
+ */
+static void cpm_uart_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:stop tx\n", port->line);
+
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm &= ~SMCM_TX;
+ else
+ sccp->scc_sccm &= ~UART_SCCM_TX;
+}
+
+/*
+ * Start transmitter
+ */
+static void cpm_uart_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:start tx\n", port->line);
+
+ /* if in the middle of discarding return */
+ if (IS_DISCARDING(pinfo))
+ return;
+
+ if (IS_SMC(pinfo)) {
+ if (smcp->smc_smcm & SMCM_TX)
+ return;
+ } else {
+ if (sccp->scc_sccm & UART_SCCM_TX)
+ return;
+ }
+
+ if (cpm_uart_tx_pump(port) != 0) {
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm |= SMCM_TX;
+ else
+ sccp->scc_sccm |= UART_SCCM_TX;
+ }
+}
+
+/*
+ * Stop receiver
+ */
+static void cpm_uart_stop_rx(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:stop rx\n", port->line);
+
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm &= ~SMCM_RX;
+ else
+ sccp->scc_sccm &= ~UART_SCCM_RX;
+}
+
+/*
+ * Enable Modem status interrupts
+ */
+static void cpm_uart_enable_ms(struct uart_port *port)
+{
+ pr_debug("CPM uart[%d]:enable ms\n", port->line);
+}
+
+/*
+ * Generate a break.
+ */
+static void cpm_uart_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int line = pinfo - cpm_uart_ports;
+
+ pr_debug("CPM uart[%d]:break ctrl, break_state: %d\n", port->line,
+ break_state);
+
+ if (break_state)
+ cpm_line_cr_cmd(line, CPM_CR_STOP_TX);
+ else
+ cpm_line_cr_cmd(line, CPM_CR_RESTART_TX);
+}
+
+/*
+ * Transmit characters, refill buffer descriptor, if possible
+ */
+static void cpm_uart_int_tx(struct uart_port *port, struct pt_regs *regs)
+{
+ pr_debug("CPM uart[%d]:TX INT\n", port->line);
+
+ cpm_uart_tx_pump(port);
+}
+
+/*
+ * Receive characters
+ */
+static void cpm_uart_int_rx(struct uart_port *port, struct pt_regs *regs)
+{
+ int i;
+ unsigned char ch, *cp;
+ struct tty_struct *tty = port->info->tty;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile cbd_t *bdp;
+ u16 status;
+ unsigned int flg;
+
+ pr_debug("CPM uart[%d]:RX INT\n", port->line);
+
+ /* Just loop through the closed BDs and copy the characters into
+ * the buffer.
+ */
+ bdp = pinfo->rx_cur;
+ for (;;) {
+ /* get status */
+ status = bdp->cbd_sc;
+ /* If this one is empty, return happy */
+ if (status & BD_SC_EMPTY)
+ break;
+
+ /* get the number of characters and check for space in the flip buffer */
+ i = bdp->cbd_datlen;
+
+ /* If there is not enough room in the tty flip buffer, we try
+ * again later, on the next rx interrupt or a timeout
+ */
+ if ((tty->flip.count + i) >= TTY_FLIPBUF_SIZE) {
+ tty->flip.work.func((void *)tty);
+ if ((tty->flip.count + i) >= TTY_FLIPBUF_SIZE) {
+ printk(KERN_WARNING "TTY_DONT_FLIP set\n");
+ return;
+ }
+ }
+
+ /* get pointer */
+ cp = (unsigned char *)bus_to_virt(bdp->cbd_bufaddr);
+
+ /* loop through the buffer */
+ while (i-- > 0) {
+ ch = *cp++;
+ port->icount.rx++;
+ flg = TTY_NORMAL;
+
+ if (status &
+ (BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV))
+ goto handle_error;
+ if (uart_handle_sysrq_char(port, ch, regs))
+ continue;
+
+ error_return:
+ *tty->flip.char_buf_ptr++ = ch;
+ *tty->flip.flag_buf_ptr++ = flg;
+ tty->flip.count++;
+
+ } /* End while (i--) */
+
+ /* This BD is ready to be used again. Clear status and get the next one */
+ bdp->cbd_sc &= ~(BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV);
+ bdp->cbd_sc |= BD_SC_EMPTY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->rx_bd_base;
+ else
+ bdp++;
+ } /* End for (;;) */
+
+ /* Write back buffer pointer */
+ pinfo->rx_cur = (volatile cbd_t *) bdp;
+
+ /* activate BH processing */
+ tty_flip_buffer_push(tty);
+
+ return;
+
+ /* Error processing */
+
+ handle_error:
+ /* Statistics */
+ if (status & BD_SC_BR)
+ port->icount.brk++;
+ if (status & BD_SC_PR)
+ port->icount.parity++;
+ if (status & BD_SC_FR)
+ port->icount.frame++;
+ if (status & BD_SC_OV)
+ port->icount.overrun++;
+
+ /* Mask out ignored conditions */
+ status &= port->read_status_mask;
+
+ /* Handle the remaining ones */
+ if (status & BD_SC_BR)
+ flg = TTY_BREAK;
+ else if (status & BD_SC_PR)
+ flg = TTY_PARITY;
+ else if (status & BD_SC_FR)
+ flg = TTY_FRAME;
+
+ /* an overrun does not affect the current character! */
+ if (status & BD_SC_OV) {
+ ch = 0;
+ flg = TTY_OVERRUN;
+ /* We skip this buffer */
+ /* CHECK: is there really nothing sensible left in it? */
+ /* ASSUMPTION: it contains nothing valid */
+ i = 0;
+ }
+#ifdef SUPPORT_SYSRQ
+ port->sysrq = 0;
+#endif
+ goto error_return;
+}
+
+/*
+ * Asynchronous mode interrupt handler
+ */
+static irqreturn_t cpm_uart_int(int irq, void *data, struct pt_regs *regs)
+{
+ u8 events;
+ struct uart_port *port = (struct uart_port *)data;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:IRQ\n", port->line);
+
+ if (IS_SMC(pinfo)) {
+ events = smcp->smc_smce;
+ if (events & SMCM_BRKE)
+ uart_handle_break(port);
+ if (events & SMCM_RX)
+ cpm_uart_int_rx(port, regs);
+ if (events & SMCM_TX)
+ cpm_uart_int_tx(port, regs);
+ smcp->smc_smce = events;
+ } else {
+ events = sccp->scc_scce;
+ if (events & UART_SCCM_BRKE)
+ uart_handle_break(port);
+ if (events & UART_SCCM_RX)
+ cpm_uart_int_rx(port, regs);
+ if (events & UART_SCCM_TX)
+ cpm_uart_int_tx(port, regs);
+ sccp->scc_scce = events;
+ }
+ return (events) ? IRQ_HANDLED : IRQ_NONE;
+}
+
+static int cpm_uart_startup(struct uart_port *port)
+{
+ int retval;
+
+ pr_debug("CPM uart[%d]:startup\n", port->line);
+
+ /* Install interrupt handler. */
+ retval = request_irq(port->irq, cpm_uart_int, 0, "cpm_uart", port);
+ if (retval)
+ return retval;
+
+ return 0;
+}
+
+/*
+ * Shutdown the uart
+ */
+static void cpm_uart_shutdown(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int line = pinfo - cpm_uart_ports;
+
+ pr_debug("CPM uart[%d]:shutdown\n", port->line);
+
+ /* free interrupt handler */
+ free_irq(port->irq, port);
+
+ /* If the port is not the console, disable Rx and Tx. */
+ if (!(pinfo->flags & FLAG_CONSOLE)) {
+ /* Stop uarts */
+ if (IS_SMC(pinfo)) {
+ volatile smc_t *smcp = pinfo->smcp;
+ smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ smcp->smc_smcm &= ~(SMCM_RX | SMCM_TX);
+ } else {
+ volatile scc_t *sccp = pinfo->sccp;
+ sccp->scc_gsmrl &= ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ sccp->scc_sccm &= ~(UART_SCCM_TX | UART_SCCM_RX);
+ }
+
+ /* Shut them really down and reinit buffer descriptors */
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+ }
+}
+
+static void cpm_uart_set_termios(struct uart_port *port,
+ struct termios *termios, struct termios *old)
+{
+ int baud;
+ unsigned long flags;
+ u16 cval, scval;
+ int bits, sbits;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int line = pinfo - cpm_uart_ports;
+ volatile cbd_t *bdp;
+
+ pr_debug("CPM uart[%d]:set_termios\n", port->line);
+
+ spin_lock_irqsave(&port->lock, flags);
+ /* disable uart interrupts */
+ if (IS_SMC(pinfo))
+ pinfo->smcp->smc_smcm &= ~(SMCM_RX | SMCM_TX);
+ else
+ pinfo->sccp->scc_sccm &= ~(UART_SCCM_TX | UART_SCCM_RX);
+ pinfo->flags |= FLAG_DISCARDING;
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ /* if a previous configuration exists, wait for tx to finish */
+ if (pinfo->baud != 0 && pinfo->bits != 0) {
+
+ /* point to the last txed bd */
+ bdp = pinfo->tx_cur;
+ if (bdp == pinfo->tx_bd_base)
+ bdp = pinfo->tx_bd_base + (pinfo->tx_nrfifos - 1);
+ else
+ bdp--;
+
+ /* wait for it to be transmitted */
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ schedule();
+
+ /* and delay for the hw fifo to drain */
+ udelay((3 * 1000000 * pinfo->bits) / pinfo->baud);
+ }
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ /* Send the CPM an initialize command. */
+ cpm_line_cr_cmd(line, CPM_CR_STOP_TX);
+
+ /* Stop uart */
+ if (IS_SMC(pinfo))
+ pinfo->smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ else
+ pinfo->sccp->scc_gsmrl &= ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+
+ /* Send the CPM an initialize command. */
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk / 16);
+
+ /* Character length programmed into the mode register is the
+ * sum of: 1 start bit, number of data bits, 0 or 1 parity bit,
+ * 1 or 2 stop bits, minus 1.
+ * The value 'bits' counts this for us.
+ */
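+ /*
+ * Worked example: 8N1 gives 1 start + 8 data + 0 parity + 1 stop - 1 = 9,
+ * which is bits = 8 from CS8 plus the single increment applied below.
+ */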
+ cval = 0;
+ scval = 0;
+
+ /* byte size */
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ bits = 5;
+ break;
+ case CS6:
+ bits = 6;
+ break;
+ case CS7:
+ bits = 7;
+ break;
+ case CS8:
+ bits = 8;
+ break;
+ /* Never happens, but GCC is too dumb to figure it out */
+ default:
+ bits = 8;
+ break;
+ }
+ sbits = bits - 5;
+
+ if (termios->c_cflag & CSTOPB) {
+ cval |= SMCMR_SL; /* Two stops */
+ scval |= SCU_PSMR_SL;
+ bits++;
+ }
+
+ if (termios->c_cflag & PARENB) {
+ cval |= SMCMR_PEN;
+ scval |= SCU_PSMR_PEN;
+ bits++;
+ if (!(termios->c_cflag & PARODD)) {
+ cval |= SMCMR_PM_EVEN;
+ scval |= (SCU_PSMR_REVP | SCU_PSMR_TEVP);
+ }
+ }
+
+ /*
+ * Set up parity check flag
+ */
+#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK))
+
+ port->read_status_mask = (BD_SC_EMPTY | BD_SC_OV);
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= BD_SC_FR | BD_SC_PR;
+ if ((termios->c_iflag & BRKINT) || (termios->c_iflag & PARMRK))
+ port->read_status_mask |= BD_SC_BR;
+
+ /*
+ * Characters to ignore
+ */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= BD_SC_PR | BD_SC_FR;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= BD_SC_BR;
+ /*
+ * If we're ignoring parity and break indicators, ignore
+ * overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= BD_SC_OV;
+ }
+ /*
+ * !!! ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->read_status_mask &= ~BD_SC_EMPTY;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ cpm_set_brg(pinfo->brg - 1, baud);
+
+ /* The start bit cancels against the "minus 1" in the character-length
+ * formula, so it is not counted here; we only need to add one for the
+ * stop bits (there is always at least one).
+ */
+ bits++;
+
+ /* re-init */
+ if (IS_SMC(pinfo))
+ cpm_uart_init_smc(pinfo, bits, cval);
+ else
+ cpm_uart_init_scc(pinfo, sbits, scval);
+
+ pinfo->baud = baud;
+ pinfo->bits = bits;
+
+ pinfo->flags &= ~FLAG_DISCARDING;
+ spin_unlock_irqrestore(&port->lock, flags);
+
+}
+
+static const char *cpm_uart_type(struct uart_port *port)
+{
+ pr_debug("CPM uart[%d]:uart_type\n", port->line);
+
+ return port->type == PORT_CPM ? "CPM UART" : NULL;
+}
+
+/*
+ * verify the new serial_struct (for TIOCSSERIAL).
+ */
+static int cpm_uart_verify_port(struct uart_port *port,
+ struct serial_struct *ser)
+{
+ int ret = 0;
+
+ pr_debug("CPM uart[%d]:verify_port\n", port->line);
+
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_CPM)
+ ret = -EINVAL;
+ if (ser->irq < 0 || ser->irq >= NR_IRQS)
+ ret = -EINVAL;
+ if (ser->baud_base < 9600)
+ ret = -EINVAL;
+ return ret;
+}
+
+/*
+ * Transmit characters, refill buffer descriptor, if possible
+ */
+static int cpm_uart_tx_pump(struct uart_port *port)
+{
+ volatile cbd_t *bdp;
+ unsigned char *p;
+ int count;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ struct circ_buf *xmit = &port->info->xmit;
+
+ /* Handle xon/xoff */
+ if (port->x_char) {
+ /* Pick next descriptor and fill from buffer */
+ bdp = pinfo->tx_cur;
+
+ p = bus_to_virt(bdp->cbd_bufaddr);
+ *p++ = port->x_char; /* send the flow-control character itself */
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+ /* Get next BD. */
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->tx_bd_base;
+ else
+ bdp++;
+ pinfo->tx_cur = bdp;
+
+ port->icount.tx++;
+ port->x_char = 0;
+ return 1;
+ }
+
+ if (uart_circ_empty(xmit) || uart_tx_stopped(port)) {
+ cpm_uart_stop_tx(port, 0);
+ return 0;
+ }
+
+ /* Pick next descriptor and fill from buffer */
+ bdp = pinfo->tx_cur;
+
+ while (!(bdp->cbd_sc & BD_SC_READY) && (xmit->tail != xmit->head)) {
+ count = 0;
+ p = bus_to_virt(bdp->cbd_bufaddr);
+ while (count < pinfo->tx_fifosize) {
+ *p++ = xmit->buf[xmit->tail];
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+ count++;
+ if (xmit->head == xmit->tail)
+ break;
+ }
+ bdp->cbd_datlen = count;
+ bdp->cbd_sc |= BD_SC_READY;
+ /* Get next BD. */
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->tx_bd_base;
+ else
+ bdp++;
+ }
+ pinfo->tx_cur = bdp;
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
+ if (uart_circ_empty(xmit)) {
+ cpm_uart_stop_tx(port, 0);
+ return 0;
+ }
+
+ return 1;
+}
+
+static void cpm_uart_init_scc(struct uart_cpm_port *pinfo, int bits, u16 scval)
+{
+ int line = pinfo - cpm_uart_ports;
+ volatile scc_t *scp;
+ volatile scc_uart_t *sup;
+ u8 *mem_addr;
+ volatile cbd_t *bdp;
+ int i;
+
+ pr_debug("CPM uart[%d]:init_scc\n", pinfo->port.line);
+
+ scp = pinfo->sccp;
+ sup = pinfo->sccup;
+
+ /* Set the physical address of the host memory
+ * buffers in the buffer descriptors, and the
+ * virtual address for us to work with.
+ */
+ pinfo->rx_cur = pinfo->rx_bd_base;
+ mem_addr = pinfo->mem_addr;
+ for (bdp = pinfo->rx_bd_base, i = 0; i < pinfo->rx_nrfifos; i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_EMPTY | BD_SC_INTRPT | (i < (pinfo->rx_nrfifos - 1) ? 0 : BD_SC_WRAP);
+ mem_addr += pinfo->rx_fifosize;
+ }
+
+ /* Set the physical address of the host memory
+ * buffers in the buffer descriptors, and the
+ * virtual address for us to work with.
+ */
+ mem_addr = pinfo->mem_addr + L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize);
+ pinfo->tx_cur = pinfo->tx_bd_base;
+ for (bdp = pinfo->tx_bd_base, i = 0; i < pinfo->tx_nrfifos; i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_INTRPT | (i < (pinfo->tx_nrfifos - 1) ? 0 : BD_SC_WRAP);
+ mem_addr += pinfo->tx_fifosize;
+ }
+
+ /* Store address */
+ pinfo->sccup->scc_genscc.scc_rbase = (unsigned char *)pinfo->rx_bd_base - DPRAM_BASE;
+ pinfo->sccup->scc_genscc.scc_tbase = (unsigned char *)pinfo->tx_bd_base - DPRAM_BASE;
+
+ /* Set up the uart parameters in the
+ * parameter ram.
+ */
+
+ cpm_set_scc_fcr(sup);
+
+ sup->scc_genscc.scc_mrblr = pinfo->rx_fifosize;
+ sup->scc_maxidl = pinfo->rx_fifosize;
+ sup->scc_brkcr = 1;
+ sup->scc_parec = 0;
+ sup->scc_frmec = 0;
+ sup->scc_nosec = 0;
+ sup->scc_brkec = 0;
+ sup->scc_uaddr1 = 0;
+ sup->scc_uaddr2 = 0;
+ sup->scc_toseq = 0;
+ sup->scc_char1 = 0x8000;
+ sup->scc_char2 = 0x8000;
+ sup->scc_char3 = 0x8000;
+ sup->scc_char4 = 0x8000;
+ sup->scc_char5 = 0x8000;
+ sup->scc_char6 = 0x8000;
+ sup->scc_char7 = 0x8000;
+ sup->scc_char8 = 0x8000;
+ sup->scc_rccm = 0xc0ff;
+
+ /* Send the CPM an initialize command.
+ */
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+
+ /* Set UART mode, 8 bit, no parity, one stop.
+ * Enable receive and transmit.
+ */
+ scp->scc_gsmrh = 0;
+ scp->scc_gsmrl =
+ (SCC_GSMRL_MODE_UART | SCC_GSMRL_TDCR_16 | SCC_GSMRL_RDCR_16);
+
+ /* Enable rx interrupts and clear all pending events. */
+ scp->scc_sccm = UART_SCCM_RX;
+ scp->scc_scce = 0xffff;
+ scp->scc_dsr = 0x7e7e;
+ scp->scc_psmr = (bits << 12) | scval;
+
+ scp->scc_gsmrl |= (SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+}
+
+static void cpm_uart_init_smc(struct uart_cpm_port *pinfo, int bits, u16 cval)
+{
+ int line = pinfo - cpm_uart_ports;
+ volatile smc_t *sp;
+ volatile smc_uart_t *up;
+ volatile u8 *mem_addr;
+ volatile cbd_t *bdp;
+ int i;
+
+ pr_debug("CPM uart[%d]:init_smc\n", pinfo->port.line);
+
+ sp = pinfo->smcp;
+ up = pinfo->smcup;
+
+ /* Set the physical address of the host memory
+ * buffers in the buffer descriptors, and the
+ * virtual address for us to work with.
+ */
+ mem_addr = pinfo->mem_addr;
+ pinfo->rx_cur = pinfo->rx_bd_base;
+ for (bdp = pinfo->rx_bd_base, i = 0; i < pinfo->rx_nrfifos; i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_EMPTY | BD_SC_INTRPT | (i < (pinfo->rx_nrfifos - 1) ? 0 : BD_SC_WRAP);
+ mem_addr += pinfo->rx_fifosize;
+ }
+
+ /* Now do the same for the transmit buffer
+ * descriptors.
+ */
+ mem_addr = pinfo->mem_addr + L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize);
+ pinfo->tx_cur = pinfo->tx_bd_base;
+ for (bdp = pinfo->tx_bd_base, i = 0; i < pinfo->tx_nrfifos; i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_INTRPT | (i < (pinfo->tx_nrfifos - 1) ? 0 : BD_SC_WRAP);
+ mem_addr += pinfo->tx_fifosize;
+ }
+
+ /* Store address */
+ pinfo->smcup->smc_rbase = (u_char *)pinfo->rx_bd_base - DPRAM_BASE;
+ pinfo->smcup->smc_tbase = (u_char *)pinfo->tx_bd_base - DPRAM_BASE;
+
+ /* Set up the uart parameters in the
+ * parameter ram.
+ */
+ cpm_set_smc_fcr(up);
+
+ /* Using idle character time requires some additional tuning. */
+ up->smc_mrblr = pinfo->rx_fifosize;
+ up->smc_maxidl = pinfo->rx_fifosize;
+ up->smc_brkcr = 1;
+
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+
+ /* Set UART mode, according to the parameters */
+ sp->smc_smcmr = smcr_mk_clen(bits) | cval | SMCMR_SM_UART;
+
+ /* Enable only rx interrupts clear all pending events. */
+ sp->smc_smcm = SMCM_RX;
+ sp->smc_smce = 0xff;
+
+ sp->smc_smcmr |= (SMCMR_REN | SMCMR_TEN);
+}
+
+/*
+ * Initialize port. This is called from early_console stuff
+ * so we have to be careful here !
+ */
+static int cpm_uart_request_port(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int ret;
+
+ pr_debug("CPM uart[%d]:request port\n", port->line);
+
+ if (pinfo->flags & FLAG_CONSOLE)
+ return 0;
+
+ /*
+ * Set up any port IO, connect any baud rate generators,
+ * etc. This is expected to be handled by board-dependent
+ * code.
+ */
+ if (pinfo->set_lineif)
+ pinfo->set_lineif(pinfo);
+
+ ret = cpm_uart_allocbuf(pinfo, 0);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static void cpm_uart_release_port(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+
+ if (!(pinfo->flags & FLAG_CONSOLE))
+ cpm_uart_freebuf(pinfo);
+}
+
+/*
+ * Configure/autoconfigure the port.
+ */
+static void cpm_uart_config_port(struct uart_port *port, int flags)
+{
+ pr_debug("CPM uart[%d]:config_port\n", port->line);
+
+ if (flags & UART_CONFIG_TYPE) {
+ port->type = PORT_CPM;
+ cpm_uart_request_port(port);
+ }
+}
+static struct uart_ops cpm_uart_pops = {
+ .tx_empty = cpm_uart_tx_empty,
+ .set_mctrl = cpm_uart_set_mctrl,
+ .get_mctrl = cpm_uart_get_mctrl,
+ .stop_tx = cpm_uart_stop_tx,
+ .start_tx = cpm_uart_start_tx,
+ .stop_rx = cpm_uart_stop_rx,
+ .enable_ms = cpm_uart_enable_ms,
+ .break_ctl = cpm_uart_break_ctl,
+ .startup = cpm_uart_startup,
+ .shutdown = cpm_uart_shutdown,
+ .set_termios = cpm_uart_set_termios,
+ .type = cpm_uart_type,
+ .release_port = cpm_uart_release_port,
+ .request_port = cpm_uart_request_port,
+ .config_port = cpm_uart_config_port,
+ .verify_port = cpm_uart_verify_port,
+};
+
+struct uart_cpm_port cpm_uart_ports[UART_NR] = {
+ [UART_SMC1] = {
+ .port = {
+ .irq = SMC1_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .flags = FLAG_SMC,
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = smc1_lineif,
+ },
+ [UART_SMC2] = {
+ .port = {
+ .irq = SMC2_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .flags = FLAG_SMC,
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = smc2_lineif,
+ },
+ [UART_SCC1] = {
+ .port = {
+ .irq = SCC1_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc1_lineif,
+ },
+ [UART_SCC2] = {
+ .port = {
+ .irq = SCC2_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc2_lineif,
+ },
+ [UART_SCC3] = {
+ .port = {
+ .irq = SCC3_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc3_lineif,
+ },
+ [UART_SCC4] = {
+ .port = {
+ .irq = SCC4_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc4_lineif,
+ },
+};
+
+#ifdef CONFIG_SERIAL_CPM_CONSOLE
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * Note that this is called with interrupts already disabled
+ */
+static void cpm_uart_console_write(struct console *co, const char *s,
+ u_int count)
+{
+ struct uart_cpm_port *pinfo =
+ &cpm_uart_ports[cpm_uart_port_map[co->index]];
+ unsigned int i;
+ volatile cbd_t *bdp, *bdbase;
+ volatile unsigned char *cp;
+
+ if (IS_DISCARDING(pinfo))
+ return;
+
+ /* Get the address of the host memory buffer.
+ */
+ bdp = pinfo->tx_cur;
+ bdbase = pinfo->tx_bd_base;
+
+ /*
+ * Now, do each character. This is not as bad as it looks
+ * since this is a holding FIFO and not a transmitting FIFO.
+ * We could add the complexity of filling the entire transmit
+ * buffer, but we would just wait longer between accesses.
+ */
+ for (i = 0; i < count; i++, s++) {
+ /* Wait for the transmitter fifo to empty.
+ * BD_SC_READY set means the CPM still owns this
+ * descriptor, i.e. it is not yet free for us to refill.
+ */
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ /* Send the character out.
+ * If the buffer address is in the CPM DPRAM, don't
+ * convert it.
+ */
+ if ((uint) (bdp->cbd_bufaddr) > (uint) CPM_ADDR)
+ cp = (unsigned char *) (bdp->cbd_bufaddr);
+ else
+ cp = bus_to_virt(bdp->cbd_bufaddr);
+
+ *cp = *s;
+
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+
+ /* if a LF, also do CR... */
+ if (*s == 10) {
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ if ((uint) (bdp->cbd_bufaddr) > (uint) CPM_ADDR)
+ cp = (unsigned char *) (bdp->cbd_bufaddr);
+ else
+ cp = bus_to_virt(bdp->cbd_bufaddr);
+
+ *cp = 13;
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+ }
+ }
+
+ /*
+ * Finally, wait for the transmitter to finish with
+ * the last descriptor before saving our position.
+ */
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ pinfo->tx_cur = (volatile cbd_t *) bdp;
+}
+
+/*
+ * Set up the console. Be careful, this is called early!
+ */
+static int __init cpm_uart_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ struct uart_cpm_port *pinfo;
+ int baud = 38400;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+ int ret;
+
+ port =
+ (struct uart_port *)&cpm_uart_ports[cpm_uart_port_map[co->index]];
+ pinfo = (struct uart_cpm_port *)port;
+
+ pinfo->flags |= FLAG_CONSOLE;
+
+ if (options) {
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+ } else {
+ bd_t *bd = (bd_t *) __res;
+
+ if (bd->bi_baudrate)
+ baud = bd->bi_baudrate;
+ else
+ baud = 9600;
+ }
+
+ /*
+ * Set up any port IO, connect any baud rate generators,
+ * etc. This is expected to be handled by board-dependent
+ * code.
+ */
+ if (pinfo->set_lineif)
+ pinfo->set_lineif(pinfo);
+
+ ret = cpm_uart_allocbuf(pinfo, 1);
+ if (ret)
+ return ret;
+
+ uart_set_options(port, co, baud, parity, bits, flow);
+
+ return 0;
+}
+
+extern struct uart_driver cpm_reg;
+static struct console cpm_scc_uart_console = {
+ .name "ttyCPM",
+ .write cpm_uart_console_write,
+ .device uart_console_device,
+ .setup cpm_uart_console_setup,
+ .flags CON_PRINTBUFFER,
+ .index -1,
+ .data = &cpm_reg,
+};
+
+int __init cpm_uart_console_init(void)
+{
+ int ret = cpm_uart_init_portdesc();
+
+ if (!ret)
+ register_console(&cpm_scc_uart_console);
+ return ret;
+}
+
+console_initcall(cpm_uart_console_init);
+
+#define CPM_UART_CONSOLE &cpm_scc_uart_console
+#else
+#define CPM_UART_CONSOLE NULL
+#endif
+
+static struct uart_driver cpm_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "ttyCPM",
+ .dev_name = "ttyCPM",
+ .major = SERIAL_CPM_MAJOR,
+ .minor = SERIAL_CPM_MINOR,
+ .cons = CPM_UART_CONSOLE,
+};
+
+static int __init cpm_uart_init(void)
+{
+ int ret, i;
+
+ printk(KERN_INFO "Serial: CPM driver $Revision: 0.01 $\n");
+
+#ifndef CONFIG_SERIAL_CPM_CONSOLE
+ ret = cpm_uart_init_portdesc();
+ if (ret)
+ return ret;
+#endif
+
+ cpm_reg.nr = cpm_uart_nr;
+ ret = uart_register_driver(&cpm_reg);
+
+ if (ret)
+ return ret;
+
+ for (i = 0; i < cpm_uart_nr; i++) {
+ int con = cpm_uart_port_map[i];
+ cpm_uart_ports[con].port.line = i;
+ cpm_uart_ports[con].port.flags = UPF_BOOT_AUTOCONF;
+ uart_add_one_port(&cpm_reg, &cpm_uart_ports[con].port);
+ }
+
+ return ret;
+}
+
+static void __exit cpm_uart_exit(void)
+{
+ int i;
+
+ for (i = 0; i < cpm_uart_nr; i++) {
+ int con = cpm_uart_port_map[i];
+ uart_remove_one_port(&cpm_reg, &cpm_uart_ports[con].port);
+ }
+
+ uart_unregister_driver(&cpm_reg);
+}
+
+module_init(cpm_uart_init);
+module_exit(cpm_uart_exit);
+
+MODULE_AUTHOR("Kumar Gala/Antoniou Pantelis");
+MODULE_DESCRIPTION("CPM SCC/SMC port driver $Revision: 0.01 $");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CHARDEV(SERIAL_CPM_MAJOR, SERIAL_CPM_MINOR);
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart_cpm1.c
+ *
+ * Driver for CPM (SCC/SMC) serial ports; CPM1 definitions
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com) (CPM2)
+ * Pantelis Antoniou (panto@intracom.gr) (CPM1)
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ * (C) 2004 Intracom, S.A.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/device.h>
+#include <linux/bootmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include <linux/serial_core.h>
+#include <linux/kernel.h>
+
+#include "cpm_uart.h"
+
+/**************************************************************/
+
+void cpm_line_cr_cmd(int line, int cmd)
+{
+ ushort val;
+ volatile cpm8xx_t *cp = cpmp;
+
+ switch (line) {
+ case UART_SMC1:
+ val = mk_cr_cmd(CPM_CR_CH_SMC1, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SMC2:
+ val = mk_cr_cmd(CPM_CR_CH_SMC2, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC1:
+ val = mk_cr_cmd(CPM_CR_CH_SCC1, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC2:
+ val = mk_cr_cmd(CPM_CR_CH_SCC2, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC3:
+ val = mk_cr_cmd(CPM_CR_CH_SCC3, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC4:
+ val = mk_cr_cmd(CPM_CR_CH_SCC4, cmd) | CPM_CR_FLG;
+ break;
+ default:
+ return;
+
+ }
+ cp->cp_cpcr = val;
+ while (cp->cp_cpcr & CPM_CR_FLG) ;
+}
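+
+/*
+ * Usage example (this is what the SCC/SMC init code in the core
+ * driver does):
+ *
+ * cpm_line_cr_cmd(UART_SMC1, CPM_CR_INIT_TRX);
+ *
+ * issues an "init RX and TX params" command for SMC1 and spins until
+ * the CP clears CPM_CR_FLG, i.e. until the command has been accepted.
+ */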
+
+void smc1_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile cpm8xx_t *cp = cpmp;
+
+ cp->cp_pbpar |= 0x000000c0;
+ cp->cp_pbdir &= ~0x000000c0;
+ cp->cp_pbodr &= ~0x000000c0;
+
+ pinfo->brg = 1;
+}
+
+void smc2_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SMC2: insert port configuration here */
+ pinfo->brg = 2;
+}
+
+void scc1_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC1: insert port configuration here */
+ pinfo->brg = 1;
+}
+
+void scc2_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC2: insert port configuration here */
+ pinfo->brg = 2;
+}
+
+void scc3_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC3: insert port configuration here */
+ pinfo->brg = 3;
+}
+
+void scc4_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC4: insert port configuration here */
+ pinfo->brg = 4;
+}
+
+/*
+ * Allocate DP-RAM and memory buffers. We need to allocate transmit and
+ * receive buffer descriptors from dual port RAM, and a character
+ * buffer area from host memory. If we are allocating for the console we
+ * need to do it from bootmem.
+ */
+int cpm_uart_allocbuf(struct uart_cpm_port *pinfo, unsigned int is_con)
+{
+ int dpmemsz, memsz;
+ u8 *dp_mem;
+ uint dp_addr;
+ u8 *mem_addr;
+ dma_addr_t dma_addr;
+
+ pr_debug("CPM uart[%d]:allocbuf\n", pinfo->port.line);
+
+ dpmemsz = sizeof(cbd_t) * (pinfo->rx_nrfifos + pinfo->tx_nrfifos);
+ dp_mem = m8xx_cpm_dpalloc(dpmemsz);
+ if (dp_mem == NULL) {
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate buffer descriptors\n");
+ return -ENOMEM;
+ }
+ dp_addr = m8xx_cpm_dpram_offset(dp_mem);
+
+ memsz = L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos * pinfo->tx_fifosize);
+ if (is_con) {
+ mem_addr = (u8 *) m8xx_cpm_hostalloc(memsz);
+ dma_addr = 0;
+ } else
+ mem_addr = dma_alloc_coherent(NULL, memsz, &dma_addr,
+ GFP_KERNEL);
+
+ if (mem_addr == NULL) {
+ m8xx_cpm_dpfree(dp_mem);
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate coherent memory\n");
+ return -ENOMEM;
+ }
+
+ pinfo->dp_addr = dp_addr;
+ pinfo->mem_addr = mem_addr;
+ pinfo->dma_addr = dma_addr;
+
+ pinfo->rx_buf = mem_addr;
+ pinfo->tx_buf = pinfo->rx_buf + L1_CACHE_ALIGN(pinfo->rx_nrfifos
+ * pinfo->rx_fifosize);
+
+ pinfo->rx_bd_base = (volatile cbd_t *)dp_mem;
+ pinfo->tx_bd_base = pinfo->rx_bd_base + pinfo->rx_nrfifos;
+
+ return 0;
+}
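+
+/*
+ * Worked example of the layout above (illustrative only; the real
+ * sizes come from the RX/TX_NUM_FIFO and RX/TX_BUF_SIZE constants used
+ * when the ports are declared). Assuming rx_nrfifos = tx_nrfifos = 4,
+ * rx_fifosize = tx_fifosize = 32 and a 32-byte L1 cache line:
+ *
+ * memsz = L1_CACHE_ALIGN(128) + L1_CACHE_ALIGN(128) = 256 bytes
+ * rx_buf = mem_addr (bytes 0..127)
+ * tx_buf = mem_addr + 128 (bytes 128..255)
+ *
+ * The 4 + 4 buffer descriptors themselves live in DP-RAM instead,
+ * rx_bd_base first and tx_bd_base immediately behind it.
+ */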
+
+void cpm_uart_freebuf(struct uart_cpm_port *pinfo)
+{
+ dma_free_coherent(NULL, L1_CACHE_ALIGN(pinfo->rx_nrfifos *
+ pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos *
+ pinfo->tx_fifosize), pinfo->mem_addr,
+ pinfo->dma_addr);
+
+ m8xx_cpm_dpfree(m8xx_cpm_dpram_addr(pinfo->dp_addr));
+}
+
+/* Setup any dynamic params in the uart desc */
+int cpm_uart_init_portdesc(void)
+{
+ pr_debug("CPM uart[-]:init portdesc\n");
+
+ cpm_uart_nr = 0;
+#ifdef CONFIG_SERIAL_CPM_SMC1
+ cpm_uart_ports[UART_SMC1].smcp = &cpmp->cp_smc[0];
+ cpm_uart_ports[UART_SMC1].smcup =
+ (smc_uart_t *) & cpmp->cp_dparam[PROFF_SMC1];
+ cpm_uart_ports[UART_SMC1].port.mapbase =
+ (unsigned long)&cpmp->cp_smc[0];
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SMC2
+ cpm_uart_ports[UART_SMC2].smcp = &cpmp->cp_smc[1];
+ cpm_uart_ports[UART_SMC2].smcup =
+ (smc_uart_t *) & cpmp->cp_dparam[PROFF_SMC2];
+ cpm_uart_ports[UART_SMC2].port.mapbase =
+ (unsigned long)&cpmp->cp_smc[1];
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC1
+ cpm_uart_ports[UART_SCC1].sccp = &cpmp->cp_scc[0];
+ cpm_uart_ports[UART_SCC1].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC1];
+ cpm_uart_ports[UART_SCC1].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[0];
+ cpm_uart_ports[UART_SCC1].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC1].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC2
+ cpm_uart_ports[UART_SCC2].sccp = &cpmp->cp_scc[1];
+ cpm_uart_ports[UART_SCC2].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC2];
+ cpm_uart_ports[UART_SCC2].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[1];
+ cpm_uart_ports[UART_SCC2].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC2].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC3
+ cpm_uart_ports[UART_SCC3].sccp = &cpmp->cp_scc[2];
+ cpm_uart_ports[UART_SCC3].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC3];
+ cpm_uart_ports[UART_SCC3].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[2];
+ cpm_uart_ports[UART_SCC3].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC3].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC3].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC3;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC4
+ cpm_uart_ports[UART_SCC4].sccp = &cpmp->cp_scc[3];
+ cpm_uart_ports[UART_SCC4].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC4];
+ cpm_uart_ports[UART_SCC4].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[3];
+ cpm_uart_ports[UART_SCC4].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC4].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC4].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC4;
+#endif
+ return 0;
+}
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart_cpm1.h
+ *
+ * Driver for CPM (SCC/SMC) serial ports
+ *
+ * definitions for cpm1
+ *
+ */
+
+#ifndef CPM_UART_CPM1_H
+#define CPM_UART_CPM1_H
+
+#include <asm/commproc.h>
+
+/* defines for IRQs */
+#define SMC1_IRQ (CPM_IRQ_OFFSET + CPMVEC_SMC1)
+#define SMC2_IRQ (CPM_IRQ_OFFSET + CPMVEC_SMC2)
+#define SCC1_IRQ (CPM_IRQ_OFFSET + CPMVEC_SCC1)
+#define SCC2_IRQ (CPM_IRQ_OFFSET + CPMVEC_SCC2)
+#define SCC3_IRQ (CPM_IRQ_OFFSET + CPMVEC_SCC3)
+#define SCC4_IRQ (CPM_IRQ_OFFSET + CPMVEC_SCC4)
+
+/* the CPM address */
+#define CPM_ADDR IMAP_ADDR
+
+static inline void cpm_set_brg(int brg, int baud)
+{
+ m8xx_cpm_setbrg(brg, baud);
+}
+
+static inline void cpm_set_scc_fcr(volatile scc_uart_t * sup)
+{
+ sup->scc_genscc.scc_rfcr = SMC_EB;
+ sup->scc_genscc.scc_tfcr = SMC_EB;
+}
+
+static inline void cpm_set_smc_fcr(volatile smc_uart_t * up)
+{
+ up->smc_rfcr = SMC_EB;
+ up->smc_tfcr = SMC_EB;
+}
+
+#define DPRAM_BASE ((unsigned char *)&cpmp->cp_dpmem[0])
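+
+/* DPRAM_BASE is subtracted from a buffer descriptor pointer to turn it
+ * into the DP-RAM-relative offset that scc_rbase/smc_rbase expect,
+ * e.g. (u_char *)pinfo->rx_bd_base - DPRAM_BASE in the init code. */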
+
+#endif
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart_cpm2.c
+ *
+ * Driver for CPM (SCC/SMC) serial ports; CPM2 definitions
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com) (CPM2)
+ * Pantelis Antoniou (panto@intracom.gr) (CPM1)
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ * (C) 2004 Intracom, S.A.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/device.h>
+#include <linux/bootmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include <linux/serial_core.h>
+#include <linux/kernel.h>
+
+#include "cpm_uart.h"
+
+/**************************************************************/
+
+void cpm_line_cr_cmd(int line, int cmd)
+{
+ volatile cpm_cpm2_t *cp = cpmp;
+ ulong val;
+
+ switch (line) {
+ case UART_SMC1:
+ val = mk_cr_cmd(CPM_CR_SMC1_PAGE, CPM_CR_SMC1_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ case UART_SMC2:
+ val = mk_cr_cmd(CPM_CR_SMC2_PAGE, CPM_CR_SMC2_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC1:
+ val = mk_cr_cmd(CPM_CR_SCC1_PAGE, CPM_CR_SCC1_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC2:
+ val = mk_cr_cmd(CPM_CR_SCC2_PAGE, CPM_CR_SCC2_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC3:
+ val = mk_cr_cmd(CPM_CR_SCC3_PAGE, CPM_CR_SCC3_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC4:
+ val = mk_cr_cmd(CPM_CR_SCC4_PAGE, CPM_CR_SCC4_SBLOCK, 0,
+ cmd) | CPM_CR_FLG;
+ break;
+ default:
+ return;
+
+ }
+ cp->cp_cpcr = val;
+ while (cp->cp_cpcr & CPM_CR_FLG) ;
+}
+
+void smc1_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+
+ /* SMC1 is only on port D */
+ io->iop_ppard |= 0x00c00000;
+ io->iop_pdird |= 0x00400000;
+ io->iop_pdird &= ~0x00800000;
+ io->iop_psord &= ~0x00c00000;
+
+ /* Wire BRG1 to SMC1 */
+ cpm2_immr->im_cpmux.cmx_smr &= 0x0f;
+ pinfo->brg = 1;
+}
+
+void smc2_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+
+ /* SMC2 is only on port A */
+ io->iop_ppara |= 0x00c00000;
+ io->iop_pdira |= 0x00400000;
+ io->iop_pdira &= ~0x00800000;
+ io->iop_psora &= ~0x00c00000;
+
+ /* Wire BRG2 to SMC2 */
+ cpm2_immr->im_cpmux.cmx_smr &= 0xf0;
+ pinfo->brg = 2;
+}
+
+void scc1_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+
+ /* Use Port D for SCC1 instead of other functions. */
+ io->iop_ppard |= 0x00000003;
+ io->iop_psord &= ~0x00000001; /* Rx */
+ io->iop_psord |= 0x00000002; /* Tx */
+ io->iop_pdird &= ~0x00000001; /* Rx */
+ io->iop_pdird |= 0x00000002; /* Tx */
+
+ /* Wire BRG1 to SCC1 */
+ cpm2_immr->im_cpmux.cmx_scr &= ~0x00ffffff;
+ cpm2_immr->im_cpmux.cmx_scr |= 0x00000000;
+ pinfo->brg = 1;
+}
+
+void scc2_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+ io->iop_pparb |= 0x008b0000;
+ io->iop_pdirb |= 0x00880000;
+ io->iop_psorb |= 0x00880000;
+ io->iop_pdirb &= ~0x00030000;
+ io->iop_psorb &= ~0x00030000;
+ cpm2_immr->im_cpmux.cmx_scr &= ~0xff00ffff;
+ cpm2_immr->im_cpmux.cmx_scr |= 0x00090000;
+ pinfo->brg = 2;
+}
+
+void scc3_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+ io->iop_pparb |= 0x008b0000;
+ io->iop_pdirb |= 0x00880000;
+ io->iop_psorb |= 0x00880000;
+ io->iop_pdirb &= ~0x00030000;
+ io->iop_psorb &= ~0x00030000;
+ cpm2_immr->im_cpmux.cmx_scr &= ~0xffff00ff;
+ cpm2_immr->im_cpmux.cmx_scr |= 0x00001200;
+ pinfo->brg = 3;
+}
+
+void scc4_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+
+ io->iop_ppard |= 0x00000600;
+ io->iop_psord &= ~0x00000600; /* Tx/Rx */
+ io->iop_pdird &= ~0x00000200; /* Rx */
+ io->iop_pdird |= 0x00000400; /* Tx */
+
+ cpm2_immr->im_cpmux.cmx_scr &= ~0xffffff00;
+ cpm2_immr->im_cpmux.cmx_scr |= 0x0000001b;
+ pinfo->brg = 4;
+}
+
+/*
+ * Allocate DP-RAM and memory buffers. We need to allocate transmit and
+ * receive buffer descriptors from dual port RAM, and a character
+ * buffer area from host memory. If we are allocating for the console we
+ * need to do it from bootmem.
+ */
+int cpm_uart_allocbuf(struct uart_cpm_port *pinfo, unsigned int is_con)
+{
+ int dpmemsz, memsz;
+ u8 *dp_mem;
+ uint dp_addr;
+ u8 *mem_addr;
+ dma_addr_t dma_addr = 0;
+
+ pr_debug("CPM uart[%d]:allocbuf\n", pinfo->port.line);
+
+ dpmemsz = sizeof(cbd_t) * (pinfo->rx_nrfifos + pinfo->tx_nrfifos);
+ dp_mem = cpm2_dpalloc(dpmemsz, 8);
+ if (dp_mem == NULL) {
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate buffer descriptors\n");
+ return -ENOMEM;
+ }
+
+ dp_addr = cpm2_dpram_offset(dp_mem);
+
+ memsz = L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos * pinfo->tx_fifosize);
+ if (is_con)
+ mem_addr = alloc_bootmem(memsz);
+ else
+ mem_addr = dma_alloc_coherent(NULL, memsz, &dma_addr,
+ GFP_KERNEL);
+
+ if (mem_addr == NULL) {
+ cpm2_dpfree(dp_mem);
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate coherent memory\n");
+ return -ENOMEM;
+ }
+
+ pinfo->dp_addr = dp_addr;
+ pinfo->mem_addr = mem_addr;
+ pinfo->dma_addr = dma_addr;
+
+ pinfo->rx_buf = mem_addr;
+ pinfo->tx_buf = pinfo->rx_buf + L1_CACHE_ALIGN(pinfo->rx_nrfifos
+ * pinfo->rx_fifosize);
+
+ pinfo->rx_bd_base = (volatile cbd_t *)dp_mem;
+ pinfo->tx_bd_base = pinfo->rx_bd_base + pinfo->rx_nrfifos;
+
+ return 0;
+}
+
+void cpm_uart_freebuf(struct uart_cpm_port *pinfo)
+{
+ dma_free_coherent(NULL, L1_CACHE_ALIGN(pinfo->rx_nrfifos *
+ pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos *
+ pinfo->tx_fifosize), pinfo->mem_addr,
+ pinfo->dma_addr);
+
+ cpm2_dpfree(&pinfo->dp_addr);
+}
+
+/* Setup any dynamic params in the uart desc */
+int cpm_uart_init_portdesc(void)
+{
+ pr_debug("CPM uart[-]:init portdesc\n");
+
+ cpm_uart_nr = 0;
+#ifdef CONFIG_SERIAL_CPM_SMC1
+ cpm_uart_ports[UART_SMC1].smcp = (smc_t *) & cpm2_immr->im_smc[0];
+ cpm_uart_ports[UART_SMC1].smcup =
+ (smc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SMC1];
+ cpm_uart_ports[UART_SMC1].port.mapbase =
+ (unsigned long)&cpm2_immr->im_smc[0];
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SMC2
+ cpm_uart_ports[UART_SMC2].smcp = (smc_t *) & cpm2_immr->im_smc[1];
+ cpm_uart_ports[UART_SMC2].smcup =
+ (smc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SMC2];
+ cpm_uart_ports[UART_SMC2].port.mapbase =
+ (unsigned long)&cpm2_immr->im_smc[1];
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC1
+ cpm_uart_ports[UART_SCC1].sccp = (scc_t *) & cpm2_immr->im_scc[0];
+ cpm_uart_ports[UART_SCC1].sccup =
+ (scc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SCC1];
+ cpm_uart_ports[UART_SCC1].port.mapbase =
+ (unsigned long)&cpm2_immr->im_scc[0];
+ cpm_uart_ports[UART_SCC1].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC1].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC2
+ cpm_uart_ports[UART_SCC2].sccp = (scc_t *) & cpm2_immr->im_scc[1];
+ cpm_uart_ports[UART_SCC2].sccup =
+ (scc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SCC2];
+ cpm_uart_ports[UART_SCC2].port.mapbase =
+ (unsigned long)&cpm2_immr->im_scc[1];
+ cpm_uart_ports[UART_SCC2].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC2].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC3
+ cpm_uart_ports[UART_SCC3].sccp = (scc_t *) & cpm2_immr->im_scc[2];
+ cpm_uart_ports[UART_SCC3].sccup =
+ (scc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SCC3];
+ cpm_uart_ports[UART_SCC3].port.mapbase =
+ (unsigned long)&cpm2_immr->im_scc[2];
+ cpm_uart_ports[UART_SCC3].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC3].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC3].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC3;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC4
+ cpm_uart_ports[UART_SCC4].sccp = (scc_t *) & cpm2_immr->im_scc[3];
+ cpm_uart_ports[UART_SCC4].sccup =
+ (scc_uart_t *) & cpm2_immr->im_dprambase[PROFF_SCC4];
+ cpm_uart_ports[UART_SCC4].port.mapbase =
+ (unsigned long)&cpm2_immr->im_scc[3];
+ cpm_uart_ports[UART_SCC4].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC4].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC4].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC4;
+#endif
+
+ return 0;
+}
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart_cpm2.h
+ *
+ * Driver for CPM (SCC/SMC) serial ports
+ *
+ * definitions for cpm2
+ *
+ */
+
+#ifndef CPM_UART_CPM2_H
+#define CPM_UART_CPM2_H
+
+#include <asm/cpm2.h>
+
+/* defines for IRQs */
+#define SMC1_IRQ SIU_INT_SMC1
+#define SMC2_IRQ SIU_INT_SMC2
+#define SCC1_IRQ SIU_INT_SCC1
+#define SCC2_IRQ SIU_INT_SCC2
+#define SCC3_IRQ SIU_INT_SCC3
+#define SCC4_IRQ SIU_INT_SCC4
+
+/* the CPM address */
+#define CPM_ADDR CPM_MAP_ADDR
+
+static inline void cpm_set_brg(int brg, int baud)
+{
+ cpm2_setbrg(brg, baud);
+}
+
+static inline void cpm_set_scc_fcr(volatile scc_uart_t * sup)
+{
+ sup->scc_genscc.scc_rfcr = CPMFCR_GBL | CPMFCR_EB;
+ sup->scc_genscc.scc_tfcr = CPMFCR_GBL | CPMFCR_EB;
+}
+
+static inline void cpm_set_smc_fcr(volatile smc_uart_t * up)
+{
+ up->smc_rfcr = CPMFCR_GBL | CPMFCR_EB;
+ up->smc_tfcr = CPMFCR_GBL | CPMFCR_EB;
+}
+
+#define DPRAM_BASE ((unsigned char *)&cpm2_immr->im_dprambase[0])
+
+#endif
--- /dev/null
+/*
+ * drivers/serial/mpc52xx_uart.c
+ *
+ * Driver for the PSCs of the Freescale MPC52xx configured as UARTs.
+ *
+ * FIXME According to the user manual, the status bits in the status register
+ * are only updated when the peripherals access the FIFO, not when the
+ * CPU accesses them. Since we use these bits to know when to stop writing
+ * and reading, they may not be updated in time and a race condition may
+ * exist. I haven't been able to prove this, but if any problem arises,
+ * it might be worth checking. The TX/RX FIFO status registers should be
+ * used in addition.
+ * Update: Actually, they seem updated ... At least the bits we use.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Some of the code has been inspired/copied from the 2.4 code written
+ * by Dale Farnsworth <dfarnsworth@mvista.com>.
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 MontaVista, Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+/* OCP Usage :
+ *
+ * This driver uses the OCP model. To load the serial driver for one of the
+ * PSCs, just add this to the core_ocp table:
+ *
+ * {
+ * .vendor = OCP_VENDOR_FREESCALE,
+ * .function = OCP_FUNC_PSC_UART,
+ * .index = 0,
+ * .paddr = MPC52xx_PSC1,
+ * .irq = MPC52xx_PSC1_IRQ,
+ * .pm = OCP_CPM_NA,
+ * },
+ *
+ * This is for PSC1; replace the paddr and irq according to the PSC you want
+ * to use. The driver sets all necessary registers to place the PSC in UART
+ * mode without DCD. However, the pin multiplexing isn't changed and should
+ * be set either by the bootloader or in the platform init code.
+ * The index field must be equal to the PSC index (e.g. 0 for PSC1, 1 for
+ * PSC2, and so on), so PSC1 is mapped to /dev/ttyS0, PSC2 to /dev/ttyS1 and
+ * so on. Be warned, this is an ABSOLUTE REQUIREMENT! It is needed mainly for
+ * the console code: without this 1:1 mapping, at early boot time, when we
+ * are parsing the kernel arg console=ttyS?, we wouldn't know which PSC it
+ * will be mapped to, because the OCP stuff is not yet initialized. (A
+ * sketched PSC2 entry follows below.)
+ */
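+
+/* A sketch of the matching PSC2 entry, assuming that MPC52xx_PSC2 and
+ * MPC52xx_PSC2_IRQ are defined analogously to the PSC1 constants
+ * (hypothetical names, not verified here):
+ *
+ * {
+ * .vendor = OCP_VENDOR_FREESCALE,
+ * .function = OCP_FUNC_PSC_UART,
+ * .index = 1, (PSC2 -> /dev/ttyS1)
+ * .paddr = MPC52xx_PSC2, (hypothetical)
+ * .irq = MPC52xx_PSC2_IRQ, (hypothetical)
+ * .pm = OCP_CPM_NA,
+ * },
+ */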
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/sysrq.h>
+#include <linux/console.h>
+
+#include <asm/delay.h>
+#include <asm/io.h>
+#include <asm/ocp.h>
+
+#include <asm/mpc52xx.h>
+#include <asm/mpc52xx_psc.h>
+
+#if defined(CONFIG_SERIAL_MPC52xx_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+
+
+
+#define ISR_PASS_LIMIT 256 /* Max number of iterations in the interrupt handler */
+
+
+static struct uart_port mpc52xx_uart_ports[MPC52xx_PSC_MAXNUM];
+ /* Rem: - We use the read_status_mask as a shadow of
+ * psc->mpc52xx_psc_imr
+ * - It's important that this array is all zero on start, as
+ * we use it to know whether it's initialized or not! If it's
+ * not guaranteed to be cleared, then a memset(...,0,...)
+ * should be added to the console_init
+ */
+
+#define PSC(port) ((struct mpc52xx_psc *)((port)->membase))
+
+
+/* Forward declaration of the interruption handling routine */
+static irqreturn_t mpc52xx_uart_int(int irq,void *dev_id,struct pt_regs *regs);
+
+
+/* Simple macro to test if a port is the console or not. This one is taken
+ * from serial_core.c and maybe should be moved to serial_core.h ? */
+#ifdef CONFIG_SERIAL_CORE_CONSOLE
+#define uart_console(port) ((port)->cons && (port)->cons->index == (port)->line)
+#else
+#define uart_console(port) (0)
+#endif
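+
+/* uart_console(port) is used below in mpc52xx_uart_probe() to skip
+ * UPF_IOREMAP for the console port: the console code already ioremaps
+ * its registers itself in mpc52xx_console_setup(). */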
+
+
+/* ======================================================================== */
+/* UART operations */
+/* ======================================================================== */
+
+static unsigned int
+mpc52xx_uart_tx_empty(struct uart_port *port)
+{
+ int status = in_be16(&PSC(port)->mpc52xx_psc_status);
+ return (status & MPC52xx_PSC_SR_TXEMP) ? TIOCSER_TEMT : 0;
+}
+
+static void
+mpc52xx_uart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ /* Not implemented */
+}
+
+static unsigned int
+mpc52xx_uart_get_mctrl(struct uart_port *port)
+{
+ /* Not implemented */
+ return TIOCM_CTS | TIOCM_DSR | TIOCM_CAR;
+}
+
+static void
+mpc52xx_uart_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ /* port->lock taken by caller */
+ port->read_status_mask &= ~MPC52xx_PSC_IMR_TXRDY;
+ out_be16(&PSC(port)->mpc52xx_psc_imr,port->read_status_mask);
+}
+
+static void
+mpc52xx_uart_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ /* port->lock taken by caller */
+ port->read_status_mask |= MPC52xx_PSC_IMR_TXRDY;
+ out_be16(&PSC(port)->mpc52xx_psc_imr,port->read_status_mask);
+}
+
+static void
+mpc52xx_uart_send_xchar(struct uart_port *port, char ch)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&port->lock, flags);
+
+ port->x_char = ch;
+ if (ch) {
+ /* Make sure tx interrupts are on */
+ /* Truly necessary ??? They should be on anyway */
+ port->read_status_mask |= MPC52xx_PSC_IMR_TXRDY;
+ out_be16(&PSC(port)->mpc52xx_psc_imr,port->read_status_mask);
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static void
+mpc52xx_uart_stop_rx(struct uart_port *port)
+{
+ /* port->lock taken by caller */
+ port->read_status_mask &= ~MPC52xx_PSC_IMR_RXRDY;
+ out_be16(&PSC(port)->mpc52xx_psc_imr,port->read_status_mask);
+}
+
+static void
+mpc52xx_uart_enable_ms(struct uart_port *port)
+{
+ /* Not implemented */
+}
+
+static void
+mpc52xx_uart_break_ctl(struct uart_port *port, int ctl)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&port->lock, flags);
+
+ if ( ctl == -1 )
+ out_8(&PSC(port)->command,MPC52xx_PSC_START_BRK);
+ else
+ out_8(&PSC(port)->command,MPC52xx_PSC_STOP_BRK);
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static int
+mpc52xx_uart_startup(struct uart_port *port)
+{
+ struct mpc52xx_psc *psc = PSC(port);
+
+ /* Reset/activate the port, clear and enable interrupts */
+ out_8(&psc->command,MPC52xx_PSC_RST_RX);
+ out_8(&psc->command,MPC52xx_PSC_RST_TX);
+
+ out_be32(&psc->sicr,0); /* UART mode DCD ignored */
+
+ out_be16(&psc->mpc52xx_psc_clock_select, 0xdd00); /* /16 prescaler on */
+
+ out_8(&psc->rfcntl, 0x00);
+ out_be16(&psc->rfalarm, 0x1ff);
+ out_8(&psc->tfcntl, 0x07);
+ out_be16(&psc->tfalarm, 0x80);
+
+ port->read_status_mask |= MPC52xx_PSC_IMR_RXRDY | MPC52xx_PSC_IMR_TXRDY;
+ out_be16(&psc->mpc52xx_psc_imr,port->read_status_mask);
+
+ out_8(&psc->command,MPC52xx_PSC_TX_ENABLE);
+ out_8(&psc->command,MPC52xx_PSC_RX_ENABLE);
+
+ return 0;
+}
+
+static void
+mpc52xx_uart_shutdown(struct uart_port *port)
+{
+ struct mpc52xx_psc *psc = PSC(port);
+
+ /* Shut down the port, interrupt and all */
+ out_8(&psc->command,MPC52xx_PSC_RST_RX);
+ out_8(&psc->command,MPC52xx_PSC_RST_TX);
+
+ port->read_status_mask = 0;
+ out_be16(&psc->mpc52xx_psc_imr,port->read_status_mask);
+}
+
+static void
+mpc52xx_uart_set_termios(struct uart_port *port, struct termios *new,
+ struct termios *old)
+{
+ struct mpc52xx_psc *psc = PSC(port);
+ unsigned long flags;
+ unsigned char mr1, mr2;
+ unsigned short ctr;
+ unsigned int j, baud, quot;
+
+ /* Prepare what we're gonna write */
+ mr1 = 0;
+
+ switch (new->c_cflag & CSIZE) {
+ case CS5: mr1 |= MPC52xx_PSC_MODE_5_BITS;
+ break;
+ case CS6: mr1 |= MPC52xx_PSC_MODE_6_BITS;
+ break;
+ case CS7: mr1 |= MPC52xx_PSC_MODE_7_BITS;
+ break;
+ case CS8:
+ default: mr1 |= MPC52xx_PSC_MODE_8_BITS;
+ }
+
+ if (new->c_cflag & PARENB) {
+ mr1 |= (new->c_cflag & PARODD) ?
+ MPC52xx_PSC_MODE_PARODD : MPC52xx_PSC_MODE_PAREVEN;
+ } else
+ mr1 |= MPC52xx_PSC_MODE_PARNONE;
+
+
+ mr2 = 0;
+
+ if (new->c_cflag & CSTOPB)
+ mr2 |= MPC52xx_PSC_MODE_TWO_STOP;
+ else
+ mr2 |= ((new->c_cflag & CSIZE) == CS5) ?
+ MPC52xx_PSC_MODE_ONE_STOP_5_BITS :
+ MPC52xx_PSC_MODE_ONE_STOP;
+
+
+ baud = uart_get_baud_rate(port, new, old, 0, port->uartclk/16);
+ quot = uart_get_divisor(port, baud);
+ ctr = quot & 0xffff;
+
+ /* Get the lock */
+ spin_lock_irqsave(&port->lock, flags);
+
+ /* Update the per-port timeout */
+ uart_update_timeout(port, new->c_cflag, baud);
+
+ /* Do our best to flush TX & RX, so we don't lose anything */
+ /* But we don't wait indefinitely ! */
+ j = 5000000; /* Maximum wait */
+ /* FIXME Can't receive chars since set_termios might be called at early
+ * boot for the console, all stuff is not yet ready to receive at that
+ * time and that just makes the kernel oops */
+ /* while (j-- && mpc52xx_uart_int_rx_chars(port)); */
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP) &&
+ --j)
+ udelay(1);
+
+ if (!j)
+ printk( KERN_ERR "mpc52xx_uart.c: "
+ "Unable to flush RX & TX fifos in-time in set_termios."
+ "Some chars may have been lost.\n" );
+
+ /* Reset the TX & RX */
+ out_8(&psc->command,MPC52xx_PSC_RST_RX);
+ out_8(&psc->command,MPC52xx_PSC_RST_TX);
+
+ /* Send new mode settings */
+ out_8(&psc->command,MPC52xx_PSC_SEL_MODE_REG_1);
+ out_8(&psc->mode,mr1);
+ out_8(&psc->mode,mr2);
+ out_8(&psc->ctur,ctr >> 8);
+ out_8(&psc->ctlr,ctr & 0xff);
+
+ /* Reenable TX & RX */
+ out_8(&psc->command,MPC52xx_PSC_TX_ENABLE);
+ out_8(&psc->command,MPC52xx_PSC_RX_ENABLE);
+
+ /* We're all set, release the lock */
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static const char *
+mpc52xx_uart_type(struct uart_port *port)
+{
+ return port->type == PORT_MPC52xx ? "MPC52xx PSC" : NULL;
+}
+
+static void
+mpc52xx_uart_release_port(struct uart_port *port)
+{
+ if (port->flags & UPF_IOREMAP) { /* remapped by us ? */
+ iounmap(port->membase);
+ port->membase = NULL;
+ }
+}
+
+static int
+mpc52xx_uart_request_port(struct uart_port *port)
+{
+ if (port->flags & UPF_IOREMAP) /* Need to remap ? */
+ port->membase = ioremap(port->mapbase, sizeof(struct mpc52xx_psc));
+
+ return port->membase != NULL ? 0 : -EBUSY;
+}
+
+static void
+mpc52xx_uart_config_port(struct uart_port *port, int flags)
+{
+ if ( (flags & UART_CONFIG_TYPE) &&
+ (mpc52xx_uart_request_port(port) == 0) )
+ port->type = PORT_MPC52xx;
+}
+
+static int
+mpc52xx_uart_verify_port(struct uart_port *port, struct serial_struct *ser)
+{
+ if ( ser->type != PORT_UNKNOWN && ser->type != PORT_MPC52xx )
+ return -EINVAL;
+
+ if ( (ser->irq != port->irq) ||
+ (ser->io_type != SERIAL_IO_MEM) ||
+ (ser->baud_base != port->uartclk) ||
+ // FIXME Should check addresses/irq as well ?
+ (ser->hub6 != 0 ) )
+ return -EINVAL;
+
+ return 0;
+}
+
+
+static struct uart_ops mpc52xx_uart_ops = {
+ .tx_empty = mpc52xx_uart_tx_empty,
+ .set_mctrl = mpc52xx_uart_set_mctrl,
+ .get_mctrl = mpc52xx_uart_get_mctrl,
+ .stop_tx = mpc52xx_uart_stop_tx,
+ .start_tx = mpc52xx_uart_start_tx,
+ .send_xchar = mpc52xx_uart_send_xchar,
+ .stop_rx = mpc52xx_uart_stop_rx,
+ .enable_ms = mpc52xx_uart_enable_ms,
+ .break_ctl = mpc52xx_uart_break_ctl,
+ .startup = mpc52xx_uart_startup,
+ .shutdown = mpc52xx_uart_shutdown,
+ .set_termios = mpc52xx_uart_set_termios,
+/* .pm = mpc52xx_uart_pm, Not supported yet */
+/* .set_wake = mpc52xx_uart_set_wake, Not supported yet */
+ .type = mpc52xx_uart_type,
+ .release_port = mpc52xx_uart_release_port,
+ .request_port = mpc52xx_uart_request_port,
+ .config_port = mpc52xx_uart_config_port,
+ .verify_port = mpc52xx_uart_verify_port
+};
+
+
+/* ======================================================================== */
+/* Interrupt handling */
+/* ======================================================================== */
+
+static inline int
+mpc52xx_uart_int_rx_chars(struct uart_port *port, struct pt_regs *regs)
+{
+ struct tty_struct *tty = port->info->tty;
+ unsigned char ch;
+ unsigned short status;
+
+ /* While we can read, do so ! */
+ while ( (status = in_be16(&PSC(port)->mpc52xx_psc_status)) &
+ MPC52xx_PSC_SR_RXRDY) {
+
+ /* If we are full, just stop reading */
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE)
+ break;
+
+ /* Get the char */
+ ch = in_8(&PSC(port)->mpc52xx_psc_buffer_8);
+
+ /* Handle sysreq char */
+#ifdef SUPPORT_SYSRQ
+ if (uart_handle_sysrq_char(port, ch, regs)) {
+ port->sysrq = 0;
+ continue;
+ }
+#endif
+
+ /* Store it */
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = 0;
+ port->icount.rx++;
+
+ if ( status & (MPC52xx_PSC_SR_PE |
+ MPC52xx_PSC_SR_FE |
+ MPC52xx_PSC_SR_RB |
+ MPC52xx_PSC_SR_OE) ) {
+
+ if (status & MPC52xx_PSC_SR_RB) {
+ *tty->flip.flag_buf_ptr = TTY_BREAK;
+ uart_handle_break(port);
+ } else if (status & MPC52xx_PSC_SR_PE)
+ *tty->flip.flag_buf_ptr = TTY_PARITY;
+ else if (status & MPC52xx_PSC_SR_FE)
+ *tty->flip.flag_buf_ptr = TTY_FRAME;
+ if (status & MPC52xx_PSC_SR_OE) {
+ /*
+ * Overrun is special, since it's
+ * reported immediately, and doesn't
+ * affect the current character
+ */
+ if (tty->flip.count < (TTY_FLIPBUF_SIZE-1)) {
+ tty->flip.flag_buf_ptr++;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ }
+ *tty->flip.flag_buf_ptr = TTY_OVERRUN;
+ }
+
+ /* Clear error condition */
+ out_8(&PSC(port)->command,MPC52xx_PSC_RST_ERR_STAT);
+
+ }
+
+ tty->flip.char_buf_ptr++;
+ tty->flip.flag_buf_ptr++;
+ tty->flip.count++;
+
+ }
+
+ tty_flip_buffer_push(tty);
+
+ return in_be16(&PSC(port)->mpc52xx_psc_status) & MPC52xx_PSC_SR_RXRDY;
+}
+
+static inline int
+mpc52xx_uart_int_tx_chars(struct uart_port *port)
+{
+ struct circ_buf *xmit = &port->info->xmit;
+
+ /* Process out of band chars */
+ if (port->x_char) {
+ out_8(&PSC(port)->mpc52xx_psc_buffer_8, port->x_char);
+ port->icount.tx++;
+ port->x_char = 0;
+ return 1;
+ }
+
+ /* Nothing to do ? */
+ if (uart_circ_empty(xmit) || uart_tx_stopped(port)) {
+ mpc52xx_uart_stop_tx(port,0);
+ return 0;
+ }
+
+ /* Send chars */
+ while (in_be16(&PSC(port)->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXRDY) {
+ out_8(&PSC(port)->mpc52xx_psc_buffer_8, xmit->buf[xmit->tail]);
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+ if (uart_circ_empty(xmit))
+ break;
+ }
+
+ /* Wake up */
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
+ /* Maybe we're done after all */
+ if (uart_circ_empty(xmit)) {
+ mpc52xx_uart_stop_tx(port,0);
+ return 0;
+ }
+
+ return 1;
+}
+
+static irqreturn_t
+mpc52xx_uart_int(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct uart_port *port = (struct uart_port *) dev_id;
+ unsigned long pass = ISR_PASS_LIMIT;
+ unsigned int keepgoing;
+ unsigned short status;
+
+ if ( irq != port->irq ) {
+ printk( KERN_WARNING
+ "mpc52xx_uart_int : " \
+ "Received wrong int %d. Waiting for %d\n",
+ irq, port->irq);
+ return IRQ_NONE;
+ }
+
+ spin_lock(&port->lock);
+
+ /* While we have stuff to do, we continue */
+ do {
+ /* If we don't find anything to do, we stop */
+ keepgoing = 0;
+
+ /* Read status */
+ status = in_be16(&PSC(port)->mpc52xx_psc_isr);
+ status &= port->read_status_mask;
+
+ /* Do we need to receive chars ? */
+ /* For this RX interrupts must be on and some chars waiting */
+ if ( status & MPC52xx_PSC_IMR_RXRDY )
+ keepgoing |= mpc52xx_uart_int_rx_chars(port, regs);
+
+ /* Do we need to send chars ? */
+ /* For this, TX must be ready and TX interrupt enabled */
+ if ( status & MPC52xx_PSC_IMR_TXRDY )
+ keepgoing |= mpc52xx_uart_int_tx_chars(port);
+
+ /* Limit number of iteration */
+ if ( !(--pass) )
+ keepgoing = 0;
+
+ } while (keepgoing);
+
+ spin_unlock(&port->lock);
+
+ return IRQ_HANDLED;
+}
+
+
+/* ======================================================================== */
+/* Console ( if applicable ) */
+/* ======================================================================== */
+
+#ifdef CONFIG_SERIAL_MPC52xx_CONSOLE
+
+static void __init
+mpc52xx_console_get_options(struct uart_port *port,
+ int *baud, int *parity, int *bits, int *flow)
+{
+ struct mpc52xx_psc *psc = PSC(port);
+ unsigned char mr1;
+
+ /* Read the mode registers */
+ out_8(&psc->command,MPC52xx_PSC_SEL_MODE_REG_1);
+ mr1 = in_8(&psc->mode);
+
+ /* CT{U,L}R are write-only ! */
+ *baud = __res.bi_baudrate ?
+ __res.bi_baudrate : CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD;
+
+ /* Parse them */
+ switch (mr1 & MPC52xx_PSC_MODE_BITS_MASK) {
+ case MPC52xx_PSC_MODE_5_BITS: *bits = 5; break;
+ case MPC52xx_PSC_MODE_6_BITS: *bits = 6; break;
+ case MPC52xx_PSC_MODE_7_BITS: *bits = 7; break;
+ case MPC52xx_PSC_MODE_8_BITS:
+ default: *bits = 8;
+ }
+
+ if (mr1 & MPC52xx_PSC_MODE_PARNONE)
+ *parity = 'n';
+ else
+ *parity = mr1 & MPC52xx_PSC_MODE_PARODD ? 'o' : 'e';
+}
+
+static void
+mpc52xx_console_write(struct console *co, const char *s, unsigned int count)
+{
+ struct uart_port *port = &mpc52xx_uart_ports[co->index];
+ struct mpc52xx_psc *psc = PSC(port);
+ unsigned int i, j;
+
+ /* Disable interrupts */
+ out_be16(&psc->mpc52xx_psc_imr, 0);
+
+ /* Wait for the TX buffer to be empty */
+ j = 5000000; /* Maximum wait */
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP) &&
+ --j)
+ udelay(1);
+
+ /* Write all the chars */
+ for ( i=0 ; i<count ; i++ ) {
+
+ /* Send the char */
+ out_8(&psc->mpc52xx_psc_buffer_8, *s);
+
+ /* Line return handling */
+ if ( *s++ == '\n' )
+ out_8(&psc->mpc52xx_psc_buffer_8, '\r');
+
+ /* Wait for the TX buffer to be empty */
+ j = 20000; /* Maximum wait */
+ while (!(in_be16(&psc->mpc52xx_psc_status) &
+ MPC52xx_PSC_SR_TXEMP) && --j)
+ udelay(1);
+ }
+
+ /* Restore interrupt state */
+ out_be16(&psc->mpc52xx_psc_imr, port->read_status_mask);
+}
+
+static int __init
+mpc52xx_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port = &mpc52xx_uart_ports[co->index];
+
+ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ if (co->index < 0 || co->index >= MPC52xx_PSC_MAXNUM)
+ return -EINVAL;
+
+ /* Basic port init. Needed since we use some uart_??? functions before
+ * the real init, for early access */
+ port->lock = SPIN_LOCK_UNLOCKED;
+ port->uartclk = __res.bi_ipbfreq / 2; /* Look at CTLR doc */
+ port->ops = &mpc52xx_uart_ops;
+ port->mapbase = MPC52xx_PSCx(co->index);
+
+ /* We ioremap ourselves */
+ port->membase = ioremap(port->mapbase, sizeof(struct mpc52xx_psc));
+ if (port->membase == NULL)
+ return -EBUSY;
+
+ /* Set up the port parameters according to the options */
+ if (options)
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+ else
+ mpc52xx_console_get_options(port, &baud, &parity, &bits, &flow);
+
+ return uart_set_options(port, co, baud, parity, bits, flow);
+}
+
+
+extern struct uart_driver mpc52xx_uart_driver;
+
+static struct console mpc52xx_console = {
+ .name = "ttyS",
+ .write = mpc52xx_console_write,
+ .device = uart_console_device,
+ .setup = mpc52xx_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1, /* Specified on the cmdline (e.g. console=ttyS0 ) */
+ .data = &mpc52xx_uart_driver,
+};
+
+
+static int __init
+mpc52xx_console_init(void)
+{
+ register_console(&mpc52xx_console);
+ return 0;
+}
+
+console_initcall(mpc52xx_console_init);
+
+#define MPC52xx_PSC_CONSOLE &mpc52xx_console
+#else
+#define MPC52xx_PSC_CONSOLE NULL
+#endif
+
+
+/* ======================================================================== */
+/* UART Driver */
+/* ======================================================================== */
+
+static struct uart_driver mpc52xx_uart_driver = {
+ .owner = THIS_MODULE,
+ .driver_name = "mpc52xx_psc_uart",
+ .dev_name = "ttyS",
+ .devfs_name = "ttyS",
+ .major = TTY_MAJOR,
+ .minor = 64,
+ .nr = MPC52xx_PSC_MAXNUM,
+ .cons = MPC52xx_PSC_CONSOLE,
+};
+
+
+/* ======================================================================== */
+/* OCP Driver */
+/* ======================================================================== */
+
+static int __devinit
+mpc52xx_uart_probe(struct ocp_device *ocp)
+{
+ struct uart_port *port = NULL;
+ int idx, ret;
+
+ /* Get the corresponding port struct */
+ idx = ocp->def->index;
+ if (idx < 0 || idx >= MPC52xx_PSC_MAXNUM)
+ return -EINVAL;
+
+ port = &mpc52xx_uart_ports[idx];
+
+ /* Init the port structure */
+ port->lock = SPIN_LOCK_UNLOCKED;
+ port->mapbase = ocp->def->paddr;
+ port->irq = ocp->def->irq;
+ port->uartclk = __res.bi_ipbfreq / 2; /* Look at CTLR doc */
+ port->fifosize = 255; /* Should be 512 ! But it can't be */
+ /* stored in an unsigned char */
+ port->iotype = UPIO_MEM;
+ port->flags = UPF_BOOT_AUTOCONF |
+ ( uart_console(port) ? 0 : UPF_IOREMAP );
+ port->line = idx;
+ port->ops = &mpc52xx_uart_ops;
+ port->read_status_mask = 0;
+
+ /* Requests the mem & irqs */
+ /* Unlike other serial drivers, we reserve the resources here, so we
+ * can detect early if multiple drivers use the same PSC. Special
+ * care must be taken with the console PSC
+ */
+ ret = request_irq(
+ port->irq, mpc52xx_uart_int,
+ SA_INTERRUPT | SA_SAMPLE_RANDOM, "mpc52xx_psc_uart", port);
+ if (ret)
+ goto error;
+
+ ret = request_mem_region(port->mapbase, sizeof(struct mpc52xx_psc),
+ "mpc52xx_psc_uart") != NULL ? 0 : -EBUSY;
+ if (ret)
+ goto free_irq;
+
+ /* Add the port to the uart sub-system */
+ ret = uart_add_one_port(&mpc52xx_uart_driver, port);
+ if (ret)
+ goto release_mem;
+
+ ocp_set_drvdata(ocp, (void*)port);
+
+ return 0;
+
+
+release_mem:
+ release_mem_region(port->mapbase, sizeof(struct mpc52xx_psc));
+
+free_irq:
+ free_irq(port->irq, port);
+
+error:
+ if (uart_console(port))
+ printk( "mpc52xx_uart.c: Error during resource alloction for "
+ "the console port !!! Check that the console PSC is "
+ "not used by another OCP driver !!!\n" );
+
+ return ret;
+}
+
+static void
+mpc52xx_uart_remove(struct ocp_device *ocp)
+{
+ struct uart_port *port = (struct uart_port *) ocp_get_drvdata(ocp);
+
+ ocp_set_drvdata(ocp, NULL);
+
+ if (port) {
+ uart_remove_one_port(&mpc52xx_uart_driver, port);
+ release_mem_region(port->mapbase, sizeof(struct mpc52xx_psc));
+ free_irq(port->irq, mpc52xx_uart_int);
+ }
+}
+
+#ifdef CONFIG_PM
+static int
+mpc52xx_uart_suspend(struct ocp_device *ocp, u32 state)
+{
+ struct uart_port *port = (struct uart_port *) ocp_get_drvdata(ocp);
+
+ uart_suspend_port(&mpc52xx_uart_driver, port);
+
+ return 0;
+}
+
+static int
+mpc52xx_uart_resume(struct ocp_device *ocp)
+{
+ struct uart_port *port = (struct uart_port *) ocp_get_drvdata(ocp);
+
+ uart_resume_port(&mpc52xx_uart_driver, port);
+
+ return 0;
+}
+#endif
+
+static struct ocp_device_id mpc52xx_uart_ids[] __devinitdata = {
+ { .vendor = OCP_VENDOR_FREESCALE, .function = OCP_FUNC_PSC_UART },
+ { .vendor = OCP_VENDOR_INVALID /* Terminating entry */ }
+};
+
+MODULE_DEVICE_TABLE(ocp, mpc52xx_uart_ids);
+
+static struct ocp_driver mpc52xx_uart_ocp_driver = {
+ .name = "mpc52xx_psc_uart",
+ .id_table = mpc52xx_uart_ids,
+ .probe = mpc52xx_uart_probe,
+ .remove = mpc52xx_uart_remove,
+#ifdef CONFIG_PM
+ .suspend = mpc52xx_uart_suspend,
+ .resume = mpc52xx_uart_resume,
+#endif
+};
+
+
+/* ======================================================================== */
+/* Module */
+/* ======================================================================== */
+
+static int __init
+mpc52xx_uart_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO "Serial: MPC52xx PSC driver\n");
+
+ ret = uart_register_driver(&mpc52xx_uart_driver);
+ if (ret)
+ return ret;
+
+ ret = ocp_register_driver(&mpc52xx_uart_ocp_driver);
+
+ return ret;
+}
+
+static void __exit
+mpc52xx_uart_exit(void)
+{
+ ocp_unregister_driver(&mpc52xx_uart_ocp_driver);
+ uart_unregister_driver(&mpc52xx_uart_driver);
+}
+
+
+module_init(mpc52xx_uart_init);
+module_exit(mpc52xx_uart_exit);
+
+MODULE_AUTHOR("Sylvain Munaut <tnt@246tNt.com>");
+MODULE_DESCRIPTION("Freescale MPC52xx PSC UART");
+MODULE_LICENSE("GPL");
--- /dev/null
+/* drivers/serial/serial_lh7a40x.c
+ *
+ * Copyright (C) 2004 Coastal Environmental Systems
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+
+/* Driver for Sharp LH7A40X embedded serial ports
+ *
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ * Based on drivers/serial/amba.c, by Deep Blue Solutions Ltd.
+ *
+ * ---
+ *
+ * This driver supports the embedded UARTs of the Sharp LH7A40X series
+ * CPUs. While similar to the 16550 and other UART chips, there is
+ * nothing close to register compatibility. Moreover, some of the
+ * modem control lines are not available, either in the chip or they
+ * are lacking in the board-level implementation.
+ *
+ * - Use of SIRDIS
+ * For simplicity, we disable the IR functions of any UART whenever
+ * we enable it.
+ *
+ */
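+
+/* A minimal sketch of the SIRDIS rule above, assuming the UART control
+ * register macro is named UART_R_CON in asm/arch/serial.h (a
+ * hypothetical name, not shown in this file):
+ *
+ * UR (port, UART_R_CON) = UARTEN | SIRDIS;
+ *
+ * i.e. whenever a UART is enabled, its IR block is explicitly disabled
+ * at the same time.
+ */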
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#if defined(CONFIG_SERIAL_LH7A40X_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+
+#include <asm/arch/serial.h>
+
+#define DEV_MAJOR 204
+#define DEV_MINOR 16
+#define DEV_NR 3
+
+#define ISR_LOOP_LIMIT 256
+
+#define UR(p,o) _UR ((p)->membase, o)
+#define _UR(b,o) (*((volatile unsigned int*)(((unsigned char*) b) + (o))))
+#define BIT_CLR(p,o,m) UR(p,o) = UR(p,o) & (~(unsigned int)m)
+#define BIT_SET(p,o,m) UR(p,o) = UR(p,o) | ( (unsigned int)m)
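+
+/* For example, BIT_SET (port, UART_R_INTEN, TxInt) enables the TX
+ * interrupt and BIT_CLR (port, UART_R_INTEN, TxInt) disables it again,
+ * exactly as the start_tx/stop_tx handlers below do. */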
+
+#define UART_REG_SIZE 32
+
+#define UARTEN (0x01) /* UART enable */
+#define SIRDIS (0x02) /* Serial IR disable (UART1 only) */
+
+#define RxEmpty (0x10)
+#define TxEmpty (0x80)
+#define TxFull (0x20)
+#define nRxRdy RxEmpty
+#define nTxRdy TxFull
+#define TxBusy (0x08)
+
+#define RxBreak (0x0800)
+#define RxOverrunError (0x0400)
+#define RxParityError (0x0200)
+#define RxFramingError (0x0100)
+#define RxError (RxBreak | RxOverrunError | RxParityError | RxFramingError)
+
+#define DCD (0x04)
+#define DSR (0x02)
+#define CTS (0x01)
+
+#define RxInt (0x01)
+#define TxInt (0x02)
+#define ModemInt (0x04)
+#define RxTimeoutInt (0x08)
+
+#define MSEOI (0x10)
+
+#define WLEN_8 (0x60)
+#define WLEN_7 (0x40)
+#define WLEN_6 (0x20)
+#define WLEN_5 (0x00)
+#define WLEN (0x60) /* Mask for all word-length bits */
+#define STP2 (0x08)
+#define PEN (0x02) /* Parity Enable */
+#define EPS (0x04) /* Even Parity Set */
+#define FEN (0x10) /* FIFO Enable */
+#define BRK (0x01) /* Send Break */
+
+
+struct uart_port_lh7a40x {
+ struct uart_port port;
+ unsigned int statusPrev; /* Most recently read modem status */
+};
+
+static void lh7a40xuart_stop_tx (struct uart_port* port, unsigned int tty_stop)
+{
+ BIT_CLR (port, UART_R_INTEN, TxInt);
+}
+
+static void lh7a40xuart_start_tx (struct uart_port* port,
+ unsigned int tty_start)
+{
+ BIT_SET (port, UART_R_INTEN, TxInt);
+
+ /* *** FIXME: do I need to check for startup of the
+ transmitter? The old driver did, but AMBA
+ doesn't. */
+}
+
+static void lh7a40xuart_stop_rx (struct uart_port* port)
+{
+ /* Disable receive and receive-timeout interrupts */
+ BIT_CLR (port, UART_R_INTEN, RxTimeoutInt | RxInt);
+}
+
+static void lh7a40xuart_enable_ms (struct uart_port* port)
+{
+ BIT_SET (port, UART_R_INTEN, ModemInt);
+}
+
+static void
+#ifdef SUPPORT_SYSRQ
+lh7a40xuart_rx_chars (struct uart_port* port, struct pt_regs* regs)
+#else
+lh7a40xuart_rx_chars (struct uart_port* port)
+#endif
+{
+ struct tty_struct* tty = port->info->tty;
+ int cbRxMax = 256; /* (Gross) limit on receive */
+ unsigned int data; /* Received data and status */
+
+ while (!(UR (port, UART_R_STATUS) & nRxRdy) && --cbRxMax) {
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ tty->flip.work.func((void*)tty);
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ printk(KERN_WARNING "TTY_DONT_FLIP set\n");
+ return;
+ }
+ }
+
+ data = UR (port, UART_R_DATA);
+
+ *tty->flip.char_buf_ptr = (unsigned char) data;
+ *tty->flip.flag_buf_ptr = TTY_NORMAL;
+ ++port->icount.rx;
+
+ if (data & RxError) { /* Quick check, short-circuit */
+ if (data & RxBreak) {
+ data &= ~(RxFramingError | RxParityError);
+ ++port->icount.brk;
+ if (uart_handle_break (port))
+ continue;
+ }
+ else if (data & RxParityError)
+ ++port->icount.parity;
+ else if (data & RxFramingError)
+ ++port->icount.frame;
+ if (data & RxOverrunError)
+ ++port->icount.overrun;
+
+ /* Mask by termios, leave Rx'd byte */
+ data &= port->read_status_mask | 0xff;
+
+ if (data & RxBreak)
+ *tty->flip.flag_buf_ptr = TTY_BREAK;
+ else if (data & RxParityError)
+ *tty->flip.flag_buf_ptr = TTY_PARITY;
+ else if (data & RxFramingError)
+ *tty->flip.flag_buf_ptr = TTY_FRAME;
+ }
+
+ if (uart_handle_sysrq_char (port, (unsigned char) data, regs))
+ continue;
+
+ if ((data & port->ignore_status_mask) == 0) {
+ ++tty->flip.flag_buf_ptr;
+ ++tty->flip.char_buf_ptr;
+ ++tty->flip.count;
+ }
+ if ((data & RxOverrunError)
+ && tty->flip.count < TTY_FLIPBUF_SIZE) {
+ /*
+ * Overrun is special, since it's reported
+ * immediately, and doesn't affect the current
+ * character
+ */
+ *tty->flip.char_buf_ptr++ = 0;
+ *tty->flip.flag_buf_ptr++ = TTY_OVERRUN;
+ ++tty->flip.count;
+ }
+ }
+ tty_flip_buffer_push (tty);
+ return;
+}
+
+static void lh7a40xuart_tx_chars (struct uart_port* port)
+{
+ struct circ_buf* xmit = &port->info->xmit;
+ int cbTxMax = port->fifosize;
+
+ if (port->x_char) {
+ UR (port, UART_R_DATA) = port->x_char;
+ ++port->icount.tx;
+ port->x_char = 0;
+ return;
+ }
+ if (uart_circ_empty (xmit) || uart_tx_stopped (port)) {
+ lh7a40xuart_stop_tx (port, 0);
+ return;
+ }
+
+ /* Unlike the AMBA UART, the lh7a40x UART does not guarantee
+ that at least half of the FIFO is empty. Instead, we check
+ status for every character. Using the AMBA method causes
+ the transmitter to drop characters. */
+
+ do {
+ UR (port, UART_R_DATA) = xmit->buf[xmit->tail];
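+ /* UART_XMIT_SIZE is a power of two, so the mask below wraps the ring index */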
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ ++port->icount.tx;
+ if (uart_circ_empty(xmit))
+ break;
+ } while (!(UR (port, UART_R_STATUS) & nTxRdy)
+ && cbTxMax--);
+
+ if (uart_circ_chars_pending (xmit) < WAKEUP_CHARS)
+ uart_write_wakeup (port);
+
+ if (uart_circ_empty (xmit))
+ lh7a40xuart_stop_tx (port, 0);
+}
+
+static void lh7a40xuart_modem_status (struct uart_port* port)
+{
+ unsigned int status = UR (port, UART_R_STATUS);
+ unsigned int delta
+ = status ^ ((struct uart_port_lh7a40x*) port)->statusPrev;
+
+ BIT_SET (port, UART_R_RAWISR, MSEOI); /* Clear modem status intr */
+
+ if (!delta) /* Only happens if we missed 2 transitions */
+ return;
+
+ ((struct uart_port_lh7a40x*) port)->statusPrev = status;
+
+ if (delta & DCD)
+ uart_handle_dcd_change (port, status & DCD);
+
+ if (delta & DSR)
+ ++port->icount.dsr;
+
+ if (delta & CTS)
+ uart_handle_cts_change (port, status & CTS);
+
+ wake_up_interruptible (&port->info->delta_msr_wait);
+}
+
+static irqreturn_t lh7a40xuart_int (int irq, void* dev_id,
+ struct pt_regs* regs)
+{
+ struct uart_port* port = dev_id;
+ unsigned int cLoopLimit = ISR_LOOP_LIMIT;
+ unsigned int isr = UR (port, UART_R_ISR);
+
+
+ do {
+ if (isr & (RxInt | RxTimeoutInt))
+#ifdef SUPPORT_SYSRQ
+ lh7a40xuart_rx_chars(port, regs);
+#else
+ lh7a40xuart_rx_chars(port);
+#endif
+ if (isr & ModemInt)
+ lh7a40xuart_modem_status (port);
+ if (isr & TxInt)
+ lh7a40xuart_tx_chars (port);
+
+ if (--cLoopLimit == 0)
+ break;
+
+ isr = UR (port, UART_R_ISR);
+ } while (isr & (RxInt | TxInt | RxTimeoutInt));
+
+ return IRQ_HANDLED;
+}
+
+static unsigned int lh7a40xuart_tx_empty (struct uart_port* port)
+{
+ return (UR (port, UART_R_STATUS) & TxEmpty) ? TIOCSER_TEMT : 0;
+}
+
+static unsigned int lh7a40xuart_get_mctrl (struct uart_port* port)
+{
+ unsigned int result = 0;
+ unsigned int status = UR (port, UART_R_STATUS);
+
+ if (status & DCD)
+ result |= TIOCM_CAR;
+ if (status & DSR)
+ result |= TIOCM_DSR;
+ if (status & CTS)
+ result |= TIOCM_CTS;
+
+ return result;
+}
+
+static void lh7a40xuart_set_mctrl (struct uart_port* port, unsigned int mctrl)
+{
+ /* None of the ports supports DTR. UART1 supports RTS through GPIO. */
+ /* Note, kernel appears to be setting DTR and RTS on console. */
+
+ /* *** FIXME: this deserves more work. There's some work in
+ tracing all of the IO pins. */
+#if 0
+ if( port->mapbase == UART1_PHYS) {
+ gpioRegs_t *gpio = (gpioRegs_t *)IO_ADDRESS(GPIO_PHYS);
+
+ if (mctrl & TIOCM_RTS)
+ gpio->pbdr &= ~GPIOB_UART1_RTS;
+ else
+ gpio->pbdr |= GPIOB_UART1_RTS;
+ }
+#endif
+}
+
+static void lh7a40xuart_break_ctl (struct uart_port* port, int break_state)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (break_state == -1)
+ BIT_SET (port, UART_R_FCON, BRK); /* Assert break */
+ else
+ BIT_CLR (port, UART_R_FCON, BRK); /* Deassert break */
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static int lh7a40xuart_startup (struct uart_port* port)
+{
+ int retval;
+
+ retval = request_irq (port->irq, lh7a40xuart_int, 0,
+ "serial_lh7a40x", port);
+ if (retval)
+ return retval;
+
+ /* Initial modem control-line settings */
+ ((struct uart_port_lh7a40x*) port)->statusPrev
+ = UR (port, UART_R_STATUS);
+
+ /* There is presently no configuration option to enable IR.
+ Thus, we always disable it. */
+
+ BIT_SET (port, UART_R_CON, UARTEN | SIRDIS);
+ BIT_SET (port, UART_R_INTEN, RxTimeoutInt | RxInt);
+
+ return 0;
+}
+
+static void lh7a40xuart_shutdown (struct uart_port* port)
+{
+ free_irq (port->irq, port);
+ BIT_CLR (port, UART_R_FCON, BRK | FEN);
+ BIT_CLR (port, UART_R_CON, UARTEN);
+}
+
+static void lh7a40xuart_set_termios (struct uart_port* port,
+ struct termios* termios,
+ struct termios* old)
+{
+ unsigned int con;
+ unsigned int inten;
+ unsigned int fcon;
+ unsigned long flags;
+ unsigned int baud;
+ unsigned int quot;
+
+ baud = uart_get_baud_rate (port, termios, old, 8, port->uartclk/16);
+ quot = uart_get_divisor (port, baud); /* the -1 is applied when writing BRCON below */
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ fcon = WLEN_5;
+ break;
+ case CS6:
+ fcon = WLEN_6;
+ break;
+ case CS7:
+ fcon = WLEN_7;
+ break;
+ case CS8:
+ default:
+ fcon = WLEN_8;
+ break;
+ }
+ if (termios->c_cflag & CSTOPB)
+ fcon |= STP2;
+ if (termios->c_cflag & PARENB) {
+ fcon |= PEN;
+ if (!(termios->c_cflag & PARODD))
+ fcon |= EPS;
+ }
+ if (port->fifosize > 1)
+ fcon |= FEN;
+
+ spin_lock_irqsave (&port->lock, flags);
+
+ uart_update_timeout (port, termios->c_cflag, baud);
+
+ port->read_status_mask = RxOverrunError;
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= RxFramingError | RxParityError;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ port->read_status_mask |= RxBreak;
+
+ /* Figure mask for status we ignore */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= RxFramingError | RxParityError;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= RxBreak;
+ /* Ignore overrun when ignoring parity */
+ /* *** FIXME: is this in the right place? */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= RxOverrunError;
+ }
+
+ /* Ignore all receive errors when receive disabled */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->ignore_status_mask |= RxError;
+
+ con = UR (port, UART_R_CON);
+ inten = (UR (port, UART_R_INTEN) & ~ModemInt);
+
+ if (UART_ENABLE_MS (port, termios->c_cflag))
+ inten |= ModemInt;
+
+ BIT_CLR (port, UART_R_CON, UARTEN); /* Disable UART */
+ UR (port, UART_R_INTEN) = 0; /* Disable interrupts */
+ UR (port, UART_R_BRCON) = quot - 1; /* Set baud rate divisor */
+ UR (port, UART_R_FCON) = fcon; /* Set FIFO and frame ctrl */
+ UR (port, UART_R_INTEN) = inten; /* Enable interrupts */
+ UR (port, UART_R_CON) = con; /* Restore UART mode */
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static const char* lh7a40xuart_type (struct uart_port* port)
+{
+ return port->type == PORT_LH7A40X ? "LH7A40X" : NULL;
+}
+
+static void lh7a40xuart_release_port (struct uart_port* port)
+{
+ release_mem_region (port->mapbase, UART_REG_SIZE);
+}
+
+static int lh7a40xuart_request_port (struct uart_port* port)
+{
+ return request_mem_region (port->mapbase, UART_REG_SIZE,
+ "serial_lh7a40x") != NULL
+ ? 0 : -EBUSY;
+}
+
+static void lh7a40xuart_config_port (struct uart_port* port, int flags)
+{
+ if (flags & UART_CONFIG_TYPE) {
+ port->type = PORT_LH7A40X;
+ lh7a40xuart_request_port (port);
+ }
+}
+
+static int lh7a40xuart_verify_port (struct uart_port* port,
+ struct serial_struct* ser)
+{
+ int ret = 0;
+
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_LH7A40X)
+ ret = -EINVAL;
+ if (ser->irq < 0 || ser->irq >= NR_IRQS)
+ ret = -EINVAL;
+ if (ser->baud_base < 9600) /* *** FIXME: is this true? */
+ ret = -EINVAL;
+ return ret;
+}
+
+static struct uart_ops lh7a40x_uart_ops = {
+ .tx_empty = lh7a40xuart_tx_empty,
+ .set_mctrl = lh7a40xuart_set_mctrl,
+ .get_mctrl = lh7a40xuart_get_mctrl,
+ .stop_tx = lh7a40xuart_stop_tx,
+ .start_tx = lh7a40xuart_start_tx,
+ .stop_rx = lh7a40xuart_stop_rx,
+ .enable_ms = lh7a40xuart_enable_ms,
+ .break_ctl = lh7a40xuart_break_ctl,
+ .startup = lh7a40xuart_startup,
+ .shutdown = lh7a40xuart_shutdown,
+ .set_termios = lh7a40xuart_set_termios,
+ .type = lh7a40xuart_type,
+ .release_port = lh7a40xuart_release_port,
+ .request_port = lh7a40xuart_request_port,
+ .config_port = lh7a40xuart_config_port,
+ .verify_port = lh7a40xuart_verify_port,
+};
+
+static struct uart_port_lh7a40x lh7a40x_ports[DEV_NR] = {
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART1_PHYS),
+ .mapbase = UART1_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART1INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 0,
+ },
+ },
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART2_PHYS),
+ .mapbase = UART2_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART2INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 1,
+ },
+ },
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART3_PHYS),
+ .mapbase = UART3_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART3INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 2,
+ },
+ },
+};
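+
+/* All three ports share a 7.3728 MHz uartclk (14.7456 MHz / 2); with the
+ * 16x divisor used in set_termios this allows baud rates up to 460800. */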
+
+#ifndef CONFIG_SERIAL_LH7A40X_CONSOLE
+# define LH7A40X_CONSOLE NULL
+#else
+# define LH7A40X_CONSOLE &lh7a40x_console
+
+
+static void lh7a40xuart_console_write (struct console* co,
+ const char* s,
+ unsigned int count)
+{
+ struct uart_port* port = &lh7a40x_ports[co->index].port;
+ unsigned int con = UR (port, UART_R_CON);
+ unsigned int inten = UR (port, UART_R_INTEN);
+
+
+ UR (port, UART_R_INTEN) = 0; /* Disable all interrupts */
+ BIT_SET (port, UART_R_CON, UARTEN | SIRDIS); /* Enable UART */
+
+ for (; count-- > 0; ++s) {
+ while (UR (port, UART_R_STATUS) & nTxRdy)
+ ;
+ UR (port, UART_R_DATA) = *s;
+ if (*s == '\n') {
+ while ((UR (port, UART_R_STATUS) & TxBusy))
+ ;
+ UR (port, UART_R_DATA) = '\r';
+ }
+ }
+
+ /* Wait until all characters are sent */
+ while (UR (port, UART_R_STATUS) & TxBusy)
+ ;
+
+ /* Restore control and interrupt mask */
+ UR (port, UART_R_CON) = con;
+ UR (port, UART_R_INTEN) = inten;
+}
+
+static void __init lh7a40xuart_console_get_options (struct uart_port* port,
+ int* baud,
+ int* parity,
+ int* bits)
+{
+ if (UR (port, UART_R_CON) & UARTEN) {
+ unsigned int fcon = UR (port, UART_R_FCON);
+ unsigned int quot = UR (port, UART_R_BRCON) + 1;
+
+ switch (fcon & (PEN | EPS)) {
+ default: *parity = 'n'; break;
+ case PEN: *parity = 'o'; break;
+ case PEN | EPS: *parity = 'e'; break;
+ }
+
+ switch (fcon & WLEN) {
+ default:
+ case WLEN_8: *bits = 8; break;
+ case WLEN_7: *bits = 7; break;
+ case WLEN_6: *bits = 6; break;
+ case WLEN_5: *bits = 5; break;
+ }
+
+ *baud = port->uartclk/(16*quot);
+ }
+}
+
+static int __init lh7a40xuart_console_setup (struct console* co, char* options)
+{
+ struct uart_port* port;
+ int baud = 38400;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ if (co->index >= DEV_NR) /* Bounds check on device number */
+ co->index = 0;
+ port = &lh7a40x_ports[co->index].port;
+
+ if (options)
+ uart_parse_options (options, &baud, &parity, &bits, &flow);
+ else
+ lh7a40xuart_console_get_options (port, &baud, &parity, &bits);
+
+ return uart_set_options (port, co, baud, parity, bits, flow);
+}
+
+static struct uart_driver lh7a40x_reg;
+static struct console lh7a40x_console = {
+ .name = "ttyAM",
+ .write = lh7a40xuart_console_write,
+ .device = uart_console_device,
+ .setup = lh7a40xuart_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &lh7a40x_reg,
+};
+
+static int __init lh7a40xuart_console_init(void)
+{
+ register_console (&lh7a40x_console);
+ return 0;
+}
+
+console_initcall (lh7a40xuart_console_init);
+
+#endif
+
+static struct uart_driver lh7a40x_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "ttyAM",
+ .dev_name = "ttyAM",
+ .major = DEV_MAJOR,
+ .minor = DEV_MINOR,
+ .nr = DEV_NR,
+ .cons = LH7A40X_CONSOLE,
+};
+
+static int __init lh7a40xuart_init(void)
+{
+ int ret;
+
+ printk (KERN_INFO "serial: LH7A40X serial driver\n");
+
+ ret = uart_register_driver (&lh7a40x_reg);
+
+ if (ret == 0) {
+ int i;
+
+ for (i = 0; i < DEV_NR; i++)
+ uart_add_one_port (&lh7a40x_reg,
+ &lh7a40x_ports[i].port);
+ }
+ return ret;
+}
+
+static void __exit lh7a40xuart_exit(void)
+{
+ int i;
+
+ for (i = 0; i < DEV_NR; i++)
+ uart_remove_one_port (&lh7a40x_reg, &lh7a40x_ports[i].port);
+
+ uart_unregister_driver (&lh7a40x_reg);
+}
+
+module_init (lh7a40xuart_init);
+module_exit (lh7a40xuart_exit);
+
+MODULE_AUTHOR ("Marc Singer");
+MODULE_DESCRIPTION ("Sharp LH7A40X serial port driver");
+MODULE_LICENSE ("GPL");
--- /dev/null
+/*
+ * C-Brick Serial Port (and console) driver for SGI Altix machines.
+ *
+ * This driver is NOT suitable for talking to the l1-controller for
+ * anything other than 'console activities' --- please use the l1
+ * driver for that.
+ *
+ *
+ * Copyright (c) 2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1500 Crittenden Lane,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/NoticeExplan
+ */
+
+#include <linux/config.h>
+#include <linux/interrupt.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/module.h>
+#include <linux/sysrq.h>
+#include <linux/circ_buf.h>
+#include <linux/serial_reg.h>
+#include <linux/delay.h> /* for mdelay */
+#include <linux/miscdevice.h>
+#include <linux/serial_core.h>
+
+#include <asm/sn/simulator.h>
+#include <asm/sn/sn2/sn_private.h>
+#include <asm/sn/sn_sal.h>
+
+/* number of characters we can transmit to the SAL console at a time */
+#define SN_SAL_MAX_CHARS 120
+
+/* 64K; when we're asynch, this must be at least printk's LOG_BUF_LEN to
+ * avoid losing chars (and always has to be a power of 2) */
+#define SN_SAL_BUFFER_SIZE (64 * (1 << 10))
+
+#define SN_SAL_UART_FIFO_DEPTH 16
+#define SN_SAL_UART_FIFO_SPEED_CPS (9600/10) /* 960 cps; parenthesized so the timeout math below groups correctly */
+
+/* sn_transmit_chars() calling args */
+#define TRANSMIT_BUFFERED 0
+#define TRANSMIT_RAW 1
+
+/* To use dynamic numbers only and not use the assigned major and minor,
+ * define the following. */
+/* #define USE_DYNAMIC_MINOR 1 */ /* use dynamic minor number */
+#define USE_DYNAMIC_MINOR 0 /* Don't rely on misc_register dynamic minor */
+
+/* Device name we're using */
+#define DEVICE_NAME "ttySG"
+#define DEVICE_NAME_DYNAMIC "ttySG0" /* need full name for misc_register */
+/* The major/minor we are using, ignored for USE_DYNAMIC_MINOR */
+#define DEVICE_MAJOR 204
+#define DEVICE_MINOR 40
+
+/*
+ * Port definition - this kinda drives it all
+ */
+struct sn_cons_port {
+ struct timer_list sc_timer;
+ struct uart_port sc_port;
+ struct sn_sal_ops {
+ int (*sal_puts_raw) (const char *s, int len);
+ int (*sal_puts) (const char *s, int len);
+ int (*sal_getc) (void);
+ int (*sal_input_pending) (void);
+ void (*sal_wakeup_transmit) (struct sn_cons_port *, int);
+ } *sc_ops;
+ unsigned long sc_interrupt_timeout;
+ int sc_is_asynch;
+};
+
+static struct sn_cons_port sal_console_port;
+
+/* Only used if USE_DYNAMIC_MINOR is set to 1 */
+static struct miscdevice misc; /* used with misc_register for dynamic */
+
+extern u64 master_node_bedrock_address;
+extern void early_sn_setup(void);
+
+#undef DEBUG
+#ifdef DEBUG
+static int sn_debug_printf(const char *fmt, ...);
+#define DPRINTF(x...) sn_debug_printf(x)
+#else
+#define DPRINTF(x...) do { } while (0)
+#endif
+
+/* Prototypes */
+static int snt_hw_puts_raw(const char *, int);
+static int snt_hw_puts_buffered(const char *, int);
+static int snt_poll_getc(void);
+static int snt_poll_input_pending(void);
+static int snt_sim_puts(const char *, int);
+static int snt_sim_getc(void);
+static int snt_sim_input_pending(void);
+static int snt_intr_getc(void);
+static int snt_intr_input_pending(void);
+static void sn_transmit_chars(struct sn_cons_port *, int);
+
+/* A table for polling */
+static struct sn_sal_ops poll_ops = {
+ .sal_puts_raw = snt_hw_puts_raw,
+ .sal_puts = snt_hw_puts_raw,
+ .sal_getc = snt_poll_getc,
+ .sal_input_pending = snt_poll_input_pending
+};
+
+/* A table for the simulator */
+static struct sn_sal_ops sim_ops = {
+ .sal_puts_raw = snt_sim_puts,
+ .sal_puts = snt_sim_puts,
+ .sal_getc = snt_sim_getc,
+ .sal_input_pending = snt_sim_input_pending
+};
+
+/* A table for interrupts enabled */
+static struct sn_sal_ops intr_ops = {
+ .sal_puts_raw = snt_hw_puts_raw,
+ .sal_puts = snt_hw_puts_buffered,
+ .sal_getc = snt_intr_getc,
+ .sal_input_pending = snt_intr_input_pending,
+ .sal_wakeup_transmit = sn_transmit_chars
+};
+
+/* The console does output in two distinctly different ways:
+ * synchronous (raw) and asynchronous (buffered). Initially, early_printk
+ * does synchronous output; any data written goes directly to the SAL
+ * to be output (incidentally, it is internally buffered by the SAL).
+ * After interrupts and timers are initialized and available for use,
+ * the console init code switches to asynchronous output. This is
+ * also the earliest opportunity to begin polling for console input.
+ * After console initialization, console output and tty (serial port)
+ * output is buffered and sent to the SAL asynchronously (either by
+ * timer callback or by UART interrupt). */
+
+
+/* routines for running the console in polling mode */
+
+/**
+ * snt_poll_getc - Get a character from the console in polling mode
+ *
+ */
+static int
+snt_poll_getc(void)
+{
+ int ch;
+
+ ia64_sn_console_getc(&ch);
+ return ch;
+}
+
+/**
+ * snt_poll_input_pending - Check if any input is waiting - polling mode.
+ *
+ */
+static int
+snt_poll_input_pending(void)
+{
+ int status, input;
+
+ status = ia64_sn_console_check(&input);
+ return !status && input;
+}
+
+/* routines for running the console on the simulator */
+
+/**
+ * snt_sim_puts - send to the console, used in simulator mode
+ * @str: String to send
+ * @count: length of string
+ *
+ */
+static int
+snt_sim_puts(const char *str, int count)
+{
+ int counter = count;
+
+#ifdef FLAG_DIRECT_CONSOLE_WRITES
+ /* Prepend a "[+] " marker to the output so we can tell whether it
+ * was written via the SAL or directly to the UART */
+ writeb('[', master_node_bedrock_address + (UART_TX << 3));
+ writeb('+', master_node_bedrock_address + (UART_TX << 3));
+ writeb(']', master_node_bedrock_address + (UART_TX << 3));
+ writeb(' ', master_node_bedrock_address + (UART_TX << 3));
+#endif /* FLAG_DIRECT_CONSOLE_WRITES */
+ while (counter > 0) {
+ writeb(*str, master_node_bedrock_address + (UART_TX << 3));
+ counter--;
+ str++;
+ }
+ return count;
+}
+
+/**
+ * snt_sim_getc - Get character from console in simulator mode
+ *
+ */
+static int
+snt_sim_getc(void)
+{
+ return readb(master_node_bedrock_address + (UART_RX << 3));
+}
+
+/**
+ * snt_sim_input_pending - Check if there is input pending in simulator mode
+ *
+ */
+static int
+snt_sim_input_pending(void)
+{
+ return readb(master_node_bedrock_address +
+ (UART_LSR << 3)) & UART_LSR_DR;
+}
+
+/* routines for an interrupt driven console (normal) */
+
+/**
+ * snt_intr_getc - Get a character from the console, interrupt mode
+ *
+ */
+static int
+snt_intr_getc(void)
+{
+ return ia64_sn_console_readc();
+}
+
+/**
+ * snt_intr_input_pending - Check if input is pending, interrupt mode
+ *
+ */
+static int
+snt_intr_input_pending(void)
+{
+ return ia64_sn_console_intr_status() & SAL_CONSOLE_INTR_RECV;
+}
+
+/* These functions are used in both polled and interrupt modes */
+
+/**
+ * snt_hw_puts_raw - Send raw string to the console, polled or interrupt mode
+ * @s: String
+ * @len: Length
+ *
+ */
+static int
+snt_hw_puts_raw(const char *s, int len)
+{
+ /* this will call the PROM and not return until this is done */
+ return ia64_sn_console_putb(s, len);
+}
+
+/**
+ * snt_hw_puts_buffered - Send string to console, polled or interrupt mode
+ * @s: String
+ * @len: Length
+ *
+ */
+static int
+snt_hw_puts_buffered(const char *s, int len)
+{
+ /* queue data to the PROM */
+ return ia64_sn_console_xmit_chars((char *)s, len);
+}
+
+/* uart interface structs
+ * These functions are associated with the uart_port that the serial core
+ * infrastructure calls.
+ *
+ * Note: Due to how the console works, many routines are no-ops.
+ */
+
+/**
+ * snp_type - What type of console are we?
+ * @port: Port to operate with (we ignore since we only have one port)
+ *
+ */
+static const char *
+snp_type(struct uart_port *port)
+{
+ return ("SGI SN L1");
+}
+
+/**
+ * snp_tx_empty - Is the transmitter empty? We pretend we're always empty
+ * @port: Port to operate on (we ignore since we only have one port)
+ *
+ */
+static unsigned int
+snp_tx_empty(struct uart_port *port)
+{
+ return 1;
+}
+
+/**
+ * snp_stop_tx - stop the transmitter - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ * @tty_stop: Set to 1 if called via uart_stop
+ *
+ */
+static void
+snp_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+}
+
+/**
+ * snp_release_port - Free i/o and resources for port - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ *
+ */
+static void
+snp_release_port(struct uart_port *port)
+{
+}
+
+/**
+ * snp_enable_ms - Force modem status interrupts on - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ *
+ */
+static void
+snp_enable_ms(struct uart_port *port)
+{
+}
+
+/**
+ * snp_shutdown - shut down the port - free irq and disable - no-op for us
+ * @port: Port to shut down - we ignore
+ *
+ */
+static void
+snp_shutdown(struct uart_port *port)
+{
+}
+
+/**
+ * snp_set_mctrl - set control lines (dtr, rts, etc) - no-op for our console
+ * @port: Port to operate on - we ignore
+ * @mctrl: Lines to set/unset - we ignore
+ *
+ */
+static void
+snp_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+}
+
+/**
+ * snp_get_mctrl - get control line info; we just return a static value
+ * @port: port to operate on - we only have one port so we ignore this
+ *
+ */
+static unsigned int
+snp_get_mctrl(struct uart_port *port)
+{
+ return TIOCM_CAR | TIOCM_RNG | TIOCM_DSR | TIOCM_CTS;
+}
+
+/**
+ * snp_stop_rx - Stop the receiver - we ignore this
+ * @port: Port to operate on - we ignore
+ *
+ */
+static void
+snp_stop_rx(struct uart_port *port)
+{
+}
+
+/**
+ * snp_start_tx - Start transmitter
+ * @port: Port to operate on
+ * @tty_stop: Set to 1 if called via uart_start
+ *
+ */
+static void
+snp_start_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ if (sal_console_port.sc_ops->sal_wakeup_transmit)
+ sal_console_port.sc_ops->sal_wakeup_transmit(&sal_console_port, TRANSMIT_BUFFERED);
+
+}
+
+/**
+ * snp_break_ctl - handle breaks - ignored by us
+ * @port: Port to operate on
+ * @break_state: Break state
+ *
+ */
+static void
+snp_break_ctl(struct uart_port *port, int break_state)
+{
+}
+
+/**
+ * snp_startup - Start up the serial port - always return 0 (We're always on)
+ * @port: Port to operate on
+ *
+ */
+static int
+snp_startup(struct uart_port *port)
+{
+ return 0;
+}
+
+/**
+ * snp_set_termios - set termios stuff - we ignore these
+ * @port: port to operate on
+ * @termios: New settings
+ * @old: Old settings
+ *
+ */
+static void
+snp_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+}
+
+/**
+ * snp_request_port - allocate resources for port - ignored by us
+ * @port: port to operate on
+ *
+ */
+static int
+snp_request_port(struct uart_port *port)
+{
+ return 0;
+}
+
+/**
+ * snp_config_port - allocate resources, set up - we ignore, we're always on
+ * @port: Port to operate on
+ * @flags: flags used for port setup
+ *
+ */
+static void
+snp_config_port(struct uart_port *port, int flags)
+{
+}
+
+/* Associate the uart functions above - given to serial core */
+
+static struct uart_ops sn_console_ops = {
+ .tx_empty = snp_tx_empty,
+ .set_mctrl = snp_set_mctrl,
+ .get_mctrl = snp_get_mctrl,
+ .stop_tx = snp_stop_tx,
+ .start_tx = snp_start_tx,
+ .stop_rx = snp_stop_rx,
+ .enable_ms = snp_enable_ms,
+ .break_ctl = snp_break_ctl,
+ .startup = snp_startup,
+ .shutdown = snp_shutdown,
+ .set_termios = snp_set_termios,
+ .pm = NULL,
+ .type = snp_type,
+ .release_port = snp_release_port,
+ .request_port = snp_request_port,
+ .config_port = snp_config_port,
+ .verify_port = NULL,
+};
+
+/* End of uart struct functions and defines */
+
+#ifdef DEBUG
+
+/**
+ * sn_debug_printf - close to hardware debugging printf
+ * @fmt: printf format
+ *
+ * This is as "close to the metal" as we can get, used when the driver
+ * itself may be broken.
+ *
+ */
+static int
+sn_debug_printf(const char *fmt, ...)
+{
+ static char printk_buf[1024];
+ int printed_len;
+ va_list args;
+
+ va_start(args, fmt);
+ printed_len = vsnprintf(printk_buf, sizeof (printk_buf), fmt, args);
+
+ if (!sal_console_port.sc_ops) {
+ if (IS_RUNNING_ON_SIMULATOR())
+ sal_console_port.sc_ops = &sim_ops;
+ else
+ sal_console_port.sc_ops = &poll_ops;
+
+ early_sn_setup();
+ }
+ sal_console_port.sc_ops->sal_puts_raw(printk_buf, printed_len);
+
+ va_end(args);
+ return printed_len;
+}
+#endif /* DEBUG */
+
+/*
+ * Interrupt handling routines.
+ */
+
+
+/**
+ * sn_receive_chars - Grab characters, pass them to tty layer
+ * @port: Port to operate on
+ * @regs: Saved registers (needed by uart_handle_sysrq_char)
+ *
+ * Note: If we're not registered with the serial core infrastructure yet,
+ * we don't try to send characters to it...
+ *
+ */
+static void
+sn_receive_chars(struct sn_cons_port *port, struct pt_regs *regs)
+{
+ int ch;
+ struct tty_struct *tty;
+
+ if (!port) {
+ printk(KERN_ERR "sn_receive_chars - port NULL so can't receive\n");
+ return;
+ }
+
+ if (!port->sc_ops) {
+ printk(KERN_ERR "sn_receive_chars - port->sc_ops NULL so can't receive\n");
+ return;
+ }
+
+ if (port->sc_port.info) {
+ /* The serial_core structures are initialized; use them */
+ tty = port->sc_port.info->tty;
+ }
+ else {
+ /* Not registered yet - can't pass to tty layer. */
+ tty = NULL;
+ }
+
+ while (port->sc_ops->sal_input_pending()) {
+ ch = port->sc_ops->sal_getc();
+ if (ch < 0) {
+ printk(KERN_ERR "sn_console: An error occurred while "
+ "obtaining data from the console (0x%0x)\n", ch);
+ break;
+ }
+#if defined(CONFIG_SERIAL_SGI_L1_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+ if (uart_handle_sysrq_char(&port->sc_port, ch, regs))
+ continue;
+#endif /* CONFIG_SERIAL_SGI_L1_CONSOLE && CONFIG_MAGIC_SYSRQ */
+
+ /* record the character to pass up to the tty layer */
+ if (tty) {
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = TTY_NORMAL;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ if (tty->flip.count == TTY_FLIPBUF_SIZE)
+ break;
+ }
+ port->sc_port.icount.rx++;
+ }
+
+ if (tty)
+ tty_flip_buffer_push(tty);
+}
+
+/**
+ * sn_transmit_chars - grab characters from serial core, send off
+ * @port: Port to operate on
+ * @raw: Transmit raw or buffered
+ *
+ * Note: If we're early, before we're registered with serial core, the
+ * writes go through sn_sal_console_write because that's how
+ * register_console has been set up. Asynch polls may call this
+ * function due to sn_sal_switch_to_asynch, but we can ignore them
+ * until we register with the serial core.
+ *
+ */
+static void
+sn_transmit_chars(struct sn_cons_port *port, int raw)
+{
+ int xmit_count, tail, head, loops, ii;
+ int result;
+ char *start;
+ struct circ_buf *xmit;
+
+ if (!port)
+ return;
+
+ BUG_ON(!port->sc_is_asynch);
+
+ if (port->sc_port.info) {
+ /* We're initialized, using serial core infrastructure */
+ xmit = &port->sc_port.info->xmit;
+ }
+ else {
+ /* Probably sn_sal_switch_to_asynch has been run but serial core isn't
+ * initialized yet. Just return; writes are going through
+ * sn_sal_console_write (due to register_console) at this time.
+ */
+ return;
+ }
+
+ if (uart_circ_empty(xmit) || uart_tx_stopped(&port->sc_port)) {
+ /* Nothing to do. */
+ return;
+ }
+
+ head = xmit->head;
+ tail = xmit->tail;
+ start = &xmit->buf[tail];
+
+ /* twice around gets the tail to the end of the buffer and
+ * then to the head, if needed */
+ loops = (head < tail) ? 2 : 1;
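+
+ /* e.g. UART_XMIT_SIZE 4096, tail 4000, head 10: pass one sends
+ * bytes 4000..4095 and tail wraps to 0, pass two sends bytes 0..9
+ * (assuming each sal_puts call accepts everything offered) */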
+
+ for (ii = 0; ii < loops; ii++) {
+ xmit_count = (head < tail) ?
+ (UART_XMIT_SIZE - tail) : (head - tail);
+
+ if (xmit_count > 0) {
+ if (raw == TRANSMIT_RAW)
+ result =
+ port->sc_ops->sal_puts_raw(start,
+ xmit_count);
+ else
+ result =
+ port->sc_ops->sal_puts(start, xmit_count);
+#ifdef DEBUG
+ if (!result)
+ DPRINTF("`");
+#endif
+ if (result > 0) {
+ xmit_count -= result;
+ port->sc_port.icount.tx += result;
+ tail += result;
+ tail &= UART_XMIT_SIZE - 1;
+ xmit->tail = tail;
+ start = &xmit->buf[tail];
+ }
+ }
+ }
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&port->sc_port);
+
+ if (uart_circ_empty(xmit))
+ snp_stop_tx(&port->sc_port, 0); /* no-op for us */
+}
+
+/**
+ * sn_sal_interrupt - Handle console interrupts
+ * @irq: irq #, useful for debug statements
+ * @dev_id: our pointer to our port (sn_cons_port which contains the uart port)
+ * @regs: Saved registers, used by sn_receive_chars for uart_handle_sysrq_char
+ *
+ */
+static irqreturn_t
+sn_sal_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct sn_cons_port *port = (struct sn_cons_port *) dev_id;
+ unsigned long flags;
+ int status = ia64_sn_console_intr_status();
+
+ if (!port)
+ return IRQ_NONE;
+
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ if (status & SAL_CONSOLE_INTR_RECV) {
+ sn_receive_chars(port, regs);
+ }
+ if (status & SAL_CONSOLE_INTR_XMIT) {
+ sn_transmit_chars(port, TRANSMIT_BUFFERED);
+ }
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ return IRQ_HANDLED;
+}
+
+/**
+ * sn_sal_connect_interrupt - Request interrupt, handled by sn_sal_interrupt
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * returns the console irq if interrupt is successfully registered, else 0
+ *
+ */
+static int
+sn_sal_connect_interrupt(struct sn_cons_port *port)
+{
+ if (request_irq(SGI_UART_VECTOR, sn_sal_interrupt, SA_INTERRUPT,
+ "SAL console driver", port) >= 0) {
+ return SGI_UART_VECTOR;
+ }
+
+ printk(KERN_INFO "sn_console: console proceeding in polled mode\n");
+ return 0;
+}
+
+/**
+ * sn_sal_timer_poll - this function handles polled console mode
+ * @data: A pointer to our sn_cons_port (which contains the uart port)
+ *
+ * data is the pointer that init_timer will store for us. This function is
+ * associated with init_timer to see if there is any console traffic.
+ * Obviously not used in interrupt mode
+ *
+ */
+static void
+sn_sal_timer_poll(unsigned long data)
+{
+ struct sn_cons_port *port = (struct sn_cons_port *) data;
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ if (!port->sc_port.irq) {
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ sn_receive_chars(port, NULL);
+ sn_transmit_chars(port, TRANSMIT_RAW);
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ mod_timer(&port->sc_timer,
+ jiffies + port->sc_interrupt_timeout);
+ }
+}
+
+/*
+ * Boot-time initialization code
+ */
+
+/**
+ * sn_sal_switch_to_asynch - Switch to async mode (as opposed to synch)
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * So this is used by sn_sal_serial_console_init (early on, before we're
+ * registered with serial core). It's also used by sn_sal_module_init
+ * right after we've registered with serial core. The latter only happens
+ * if we didn't already come through here via sn_sal_serial_console_init.
+ *
+ */
+static void __init
+sn_sal_switch_to_asynch(struct sn_cons_port *port)
+{
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ DPRINTF("sn_console: about to switch to asynchronous console\n");
+
+ /* without early_printk, we may be invoked late enough to race
+ * with other cpus doing console IO at this point, however
+ * console interrupts will never be enabled */
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+
+ /* early_printk invocation may have done this for us */
+ if (!port->sc_ops) {
+ if (IS_RUNNING_ON_SIMULATOR())
+ port->sc_ops = &sim_ops;
+ else
+ port->sc_ops = &poll_ops;
+ }
+
+ /* we can't turn on the console interrupt (as request_irq
+ * calls kmalloc, which isn't set up yet), so we rely on a
+ * timer to poll for input and push data from the console
+ * buffer.
+ */
+ init_timer(&port->sc_timer);
+ port->sc_timer.function = sn_sal_timer_poll;
+ port->sc_timer.data = (unsigned long) port;
+
+ if (IS_RUNNING_ON_SIMULATOR())
+ port->sc_interrupt_timeout = 6;
+ else {
+ /* 960 cps / 16-char FIFO = 60 Hz, so poll at
+ * HZ * SN_SAL_UART_FIFO_DEPTH / SN_SAL_UART_FIFO_SPEED_CPS = HZ/60 */
+ port->sc_interrupt_timeout =
+ HZ * SN_SAL_UART_FIFO_DEPTH / SN_SAL_UART_FIFO_SPEED_CPS;
+ }
+ mod_timer(&port->sc_timer, jiffies + port->sc_interrupt_timeout);
+
+ port->sc_is_asynch = 1;
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+}
+
+/**
+ * sn_sal_switch_to_interrupts - Switch to interrupt driven mode
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * In sn_sal_module_init, after we're registered with serial core and
+ * the port is added, this function is called to switch us to interrupt
+ * mode. We were previously in asynch/polling mode (using init_timer).
+ *
+ * We attempt to switch to interrupt mode here by calling
+ * sn_sal_connect_interrupt. If that works out, we enable receive interrupts.
+ */
+static void __init
+sn_sal_switch_to_interrupts(struct sn_cons_port *port)
+{
+ int irq;
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ DPRINTF("sn_console: switching to interrupt driven console\n");
+
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+
+ irq = sn_sal_connect_interrupt(port);
+
+ if (irq) {
+ port->sc_port.irq = irq;
+ port->sc_ops = &intr_ops;
+
+ /* turn on receive interrupts */
+ ia64_sn_console_intr_enable(SAL_CONSOLE_INTR_RECV);
+ }
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+}
+
+/*
+ * Kernel console definitions
+ */
+
+#ifdef CONFIG_SERIAL_SGI_L1_CONSOLE
+static void sn_sal_console_write(struct console *, const char *, unsigned);
+static int __init sn_sal_console_setup(struct console *, char *);
+static struct uart_driver sal_console_uart;
+extern struct tty_driver *uart_console_device(struct console *, int *);
+
+static struct console sal_console = {
+ .name = DEVICE_NAME,
+ .write = sn_sal_console_write,
+ .device = uart_console_device,
+ .setup = sn_sal_console_setup,
+ .index = -1, /* unspecified */
+ .data = &sal_console_uart,
+};
+
+#define SAL_CONSOLE &sal_console
+#else
+#define SAL_CONSOLE 0
+#endif /* CONFIG_SERIAL_SGI_L1_CONSOLE */
+
+static struct uart_driver sal_console_uart = {
+ .owner = THIS_MODULE,
+ .driver_name = "sn_console",
+ .dev_name = DEVICE_NAME,
+ .major = 0, /* major/minor set at registration time per USE_DYNAMIC_MINOR */
+ .minor = 0,
+ .nr = 1, /* one port */
+ .cons = SAL_CONSOLE,
+};
+
+/**
+ * sn_sal_module_init - When the kernel loads us, get us rolling w/ serial core
+ *
+ * Before this is called, we've been printing kernel messages in a special
+ * early mode not making use of the serial core infrastructure. When our
+ * driver is loaded for real, we register the driver and port with serial
+ * core and try to enable interrupt driven mode.
+ *
+ */
+static int __init
+sn_sal_module_init(void)
+{
+ int retval;
+
+ printk(KERN_INFO "sn_console: Console driver init\n");
+
+ if (!ia64_platform_is("sn2"))
+ return -ENODEV;
+
+ if (USE_DYNAMIC_MINOR == 1) {
+ misc.minor = MISC_DYNAMIC_MINOR;
+ misc.name = DEVICE_NAME_DYNAMIC;
+ retval = misc_register(&misc);
+ if (retval != 0) {
+ printk(KERN_ERR "Failed to register console device using misc_register.\n");
+ return -ENODEV;
+ }
+ sal_console_uart.major = MISC_MAJOR;
+ sal_console_uart.minor = misc.minor;
+ }
+ else {
+ sal_console_uart.major = DEVICE_MAJOR;
+ sal_console_uart.minor = DEVICE_MINOR;
+ }
+
+ /* We register the driver and the port before switching to interrupt
+ * or async mode below so the proper uart structures are populated */
+
+ if (uart_register_driver(&sal_console_uart) < 0) {
+ printk(KERN_ERR "sn_sal_module_init failed uart_register_driver, line %d\n",
+ __LINE__);
+ return -ENODEV;
+ }
+
+ sal_console_port.sc_port.lock = SPIN_LOCK_UNLOCKED;
+
+ /* Setup the port struct with the minimum needed */
+ sal_console_port.sc_port.membase = (char *)1; /* just needs to be non-zero */
+ sal_console_port.sc_port.type = PORT_16550A;
+ sal_console_port.sc_port.fifosize = SN_SAL_MAX_CHARS;
+ sal_console_port.sc_port.ops = &sn_console_ops;
+ sal_console_port.sc_port.line = 0;
+
+ if (uart_add_one_port(&sal_console_uart, &sal_console_port.sc_port) < 0) {
+ /* error - not sure what I'd do - so I'll do nothing */
+ printk(KERN_ERR "%s: unable to add port\n", __FUNCTION__);
+ }
+
+ /* when this driver is compiled in, the console initialization
+ * will have already switched us into asynchronous operation
+ * before we get here through the module initcalls */
+ if (!sal_console_port.sc_is_asynch) {
+ sn_sal_switch_to_asynch(&sal_console_port);
+ }
+
+ /* at this point (module_init) we can try to turn on interrupts */
+ if (!IS_RUNNING_ON_SIMULATOR()) {
+ sn_sal_switch_to_interrupts(&sal_console_port);
+ }
+ return 0;
+}
+
+/**
+ * sn_sal_module_exit - When we're unloaded, remove the driver/port
+ *
+ */
+static void __exit
+sn_sal_module_exit(void)
+{
+ del_timer_sync(&sal_console_port.sc_timer);
+ uart_remove_one_port(&sal_console_uart, &sal_console_port.sc_port);
+ uart_unregister_driver(&sal_console_uart);
+ if (USE_DYNAMIC_MINOR == 1)
+ misc_deregister(&misc); /* only registered when using a dynamic minor */
+}
+
+module_init(sn_sal_module_init);
+module_exit(sn_sal_module_exit);
+
+#ifdef CONFIG_SERIAL_SGI_L1_CONSOLE
+
+/**
+ * puts_raw_fixed - sn_sal_console_write helper for adding \r's as required
+ * @puts_raw: puts function to do the writing
+ * @s: input string
+ * @count: length
+ *
+ * We need a \r ahead of every \n for direct writes through
+ * ia64_sn_console_putb (what sal_puts_raw below actually does).
+ *
+ */
+
+static void puts_raw_fixed(int (*puts_raw) (const char *s, int len), const char *s, int count)
+{
+ const char *s1;
+
+ /* Output '\r' before each '\n' */
+ while ((s1 = memchr(s, '\n', count)) != NULL) {
+ puts_raw(s, s1 - s);
+ puts_raw("\r\n", 2);
+ count -= s1 + 1 - s;
+ s = s1 + 1;
+ }
+ puts_raw(s, count);
+}
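+
+/* For instance, puts_raw_fixed (ops->sal_puts_raw, "a\nb", 3) emits
+ * "a", then "\r\n", then "b", so the console sees "a\r\nb". */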
+
+/**
+ * sn_sal_console_write - Print statements before serial core available
+ * @console: Console to operate on - we ignore since we have just one
+ * @s: String to send
+ * @count: length
+ *
+ * This is referenced in the console struct. It is used for early
+ * console printing before we register with serial core and for things
+ * such as kdb. The console_lock must be held when we get here.
+ *
+ * This function has some code for trying to print output even if the lock
+ * is held. We try to cover the case where a lock holder could have died.
+ * We don't use this special case code if we're not registered with serial
+ * core yet. After we're registered with serial core, the only time this
+ * function would be used is for high level kernel output like magic sys req,
+ * kdb, and printk's.
+ */
+static void
+sn_sal_console_write(struct console *co, const char *s, unsigned count)
+{
+ unsigned long flags = 0;
+ struct sn_cons_port *port = &sal_console_port;
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ static int stole_lock = 0;
+#endif
+
+ BUG_ON(!port->sc_is_asynch);
+
+ /* We can't look at the xmit buffer if we're not registered with serial core
+ * yet. So only do the fancy recovery after registering
+ */
+ if (port->sc_port.info) {
+
+ /* somebody really wants this output, might be an
+ * oops, kdb, panic, etc. make sure they get it. */
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ if (spin_is_locked(&port->sc_port.lock)) {
+ int lhead = port->sc_port.info->xmit.head;
+ int ltail = port->sc_port.info->xmit.tail;
+ int counter, got_lock = 0;
+
+ /*
+ * We attempt to determine if someone has died holding the
+ * lock. We wait roughly 20 seconds (150 iterations of 125ms)
+ * after the head and tail pointers stop moving, then assume
+ * the lock holder is not functional and plow ahead. If the
+ * lock is freed within the timeout period we re-take the lock
+ * and proceed normally. We also remember that we have plowed
+ * ahead so that we don't have to wait out the timeout period
+ * again - the assumption is that we would time out again.
+ */
+
+ for (counter = 0; counter < 150; mdelay(125), counter++) {
+ if (!spin_is_locked(&port->sc_port.lock) || stole_lock) {
+ if (!stole_lock) {
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ got_lock = 1;
+ }
+ break;
+ }
+ else {
+ /* still locked */
+ if ((lhead != port->sc_port.info->xmit.head) || (ltail != port->sc_port.info->xmit.tail)) {
+ lhead = port->sc_port.info->xmit.head;
+ ltail = port->sc_port.info->xmit.tail;
+ counter = 0;
+ }
+ }
+ }
+ /* flush anything in the serial core xmit buffer, raw */
+ sn_transmit_chars(port, TRANSMIT_RAW);
+ if (got_lock) {
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ stole_lock = 0;
+ }
+ else {
+ /* fell thru */
+ stole_lock = 1;
+ }
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+ }
+ else {
+ stole_lock = 0;
+#endif
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ sn_transmit_chars(port, TRANSMIT_RAW);
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+ }
+ }
+ else {
+ /* Not yet registered with serial core - simple case */
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+ }
+}
+
+
+/**
+ * sn_sal_console_setup - Set up console for early printing
+ * @co: Console to work with
+ * @options: Options to set
+ *
+ * Altix console doesn't do anything with baud rates, etc, anyway.
+ *
+ * This isn't required since not providing the setup function in the
+ * console struct is ok. However, other patches like KDB plop something
+ * here so providing it is easier.
+ *
+ */
+static int __init
+sn_sal_console_setup(struct console *co, char *options)
+{
+ return 0;
+}
+
+/**
+ * sn_sal_console_write_early - simple early output routine
+ * @co: console struct
+ * @s: string to print
+ * @count: count
+ *
+ * Simple function to provide early output, before even
+ * sn_sal_serial_console_init is called. Referenced in the
+ * console struct registered in sn_serial_console_early_setup.
+ *
+ */
+static void __init
+sn_sal_console_write_early(struct console *co, const char *s, unsigned count)
+{
+ puts_raw_fixed(sal_console_port.sc_ops->sal_puts_raw, s, count);
+}
+
+/* Used for very early console printing - again, before
+ * sn_sal_serial_console_init is run */
+static struct console sal_console_early __initdata = {
+ .name = "sn_sal",
+ .write = sn_sal_console_write_early,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+/**
+ * sn_serial_console_early_setup - Sets up early console output support
+ *
+ * Register a console early on... This is for output before even
+ * sn_sal_serial_console_init is called. This function is called from
+ * setup.c. This allows us to do really early polled writes. When
+ * sn_sal_serial_console_init is called, this console is unregistered
+ * and a new one registered.
+ */
+int __init
+sn_serial_console_early_setup(void)
+{
+ if (!ia64_platform_is("sn2"))
+ return -1;
+
+ if (IS_RUNNING_ON_SIMULATOR())
+ sal_console_port.sc_ops = &sim_ops;
+ else
+ sal_console_port.sc_ops = &poll_ops;
+
+ early_sn_setup(); /* Find SAL entry points */
+ register_console(&sal_console_early);
+
+ return 0;
+}
+
+
+/**
+ * sn_sal_serial_console_init - Early console output - set up for register
+ *
+ * This function is called when regular console init happens. Because we
+ * support even earlier console output with sn_serial_console_early_setup
+ * (called from setup.c directly), this function unregisters the really
+ * early console.
+ *
+ * Note: Even if setup.c doesn't register sal_console_early, unregistering
+ * it here doesn't hurt anything.
+ *
+ */
+static int __init
+sn_sal_serial_console_init(void)
+{
+ if (ia64_platform_is("sn2")) {
+ sn_sal_switch_to_asynch(&sal_console_port);
+ DPRINTF ("sn_sal_serial_console_init : register console\n");
+ register_console(&sal_console);
+ unregister_console(&sal_console_early);
+ }
+ return 0;
+}
+
+console_initcall(sn_sal_serial_console_init);
+
+#endif /* CONFIG_SERIAL_SGI_L1_CONSOLE */
--- /dev/null
+/*
+ *
+ * Includes for cdc-acm.c
+ *
+ * Mainly taken from usbnet's cdc-ether part
+ *
+ */
+
+/*
+ * CMSPAR: some architectures can't do space and mark parity, so they leave it undefined.
+ */
+
+#ifndef CMSPAR
+#define CMSPAR 0
+#endif
+
+/*
+ * Major and minor numbers.
+ */
+
+#define ACM_TTY_MAJOR 166
+#define ACM_TTY_MINORS 32
+
+/*
+ * Requests.
+ */
+
+#define USB_RT_ACM (USB_TYPE_CLASS | USB_RECIP_INTERFACE)
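+
+/* USB_TYPE_CLASS (0x20) | USB_RECIP_INTERFACE (0x01) == 0x21, i.e. the
+ * bmRequestType for class-specific requests directed at an interface */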
+
+#define ACM_REQ_COMMAND 0x00
+#define ACM_REQ_RESPONSE 0x01
+#define ACM_REQ_SET_FEATURE 0x02
+#define ACM_REQ_GET_FEATURE 0x03
+#define ACM_REQ_CLEAR_FEATURE 0x04
+
+#define ACM_REQ_SET_LINE 0x20
+#define ACM_REQ_GET_LINE 0x21
+#define ACM_REQ_SET_CONTROL 0x22
+#define ACM_REQ_SEND_BREAK 0x23
+
+/*
+ * IRQs.
+ */
+
+#define ACM_IRQ_NETWORK 0x00
+#define ACM_IRQ_LINE_STATE 0x20
+
+/*
+ * Output control lines.
+ */
+
+#define ACM_CTRL_DTR 0x01
+#define ACM_CTRL_RTS 0x02
+
+/*
+ * Input control lines and line errors.
+ */
+
+#define ACM_CTRL_DCD 0x01
+#define ACM_CTRL_DSR 0x02
+#define ACM_CTRL_BRK 0x04
+#define ACM_CTRL_RI 0x08
+
+#define ACM_CTRL_FRAMING 0x10
+#define ACM_CTRL_PARITY 0x20
+#define ACM_CTRL_OVERRUN 0x40
+
+/*
+ * Line speed and character encoding.
+ */
+
+struct acm_line {
+ __u32 speed;
+ __u8 stopbits;
+ __u8 parity;
+ __u8 databits;
+} __attribute__ ((packed));
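+
+/* Per the CDC line-coding encoding, 9600 8N1 would be
+ * { .speed = 9600, .stopbits = 0 (1 stop bit), .parity = 0 (none),
+ * .databits = 8 }. */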
+
+/*
+ * Internal driver structures.
+ */
+
+struct acm {
+ struct usb_device *dev; /* the corresponding usb device */
+ struct usb_interface *control; /* control interface */
+ struct usb_interface *data; /* data interface */
+ struct tty_struct *tty; /* the corresponding tty */
+ struct urb *ctrlurb, *readurb, *writeurb; /* urbs */
+ struct acm_line line; /* line coding (bits, stop, parity) */
+ struct work_struct work; /* work queue entry for line discipline waking up */
+ struct tasklet_struct bh; /* rx processing */
+ unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */
+ unsigned int ctrlout; /* output control lines (DTR, RTS) */
+ unsigned int writesize; /* max packet size for the output bulk endpoint */
+ unsigned int used; /* someone has this acm's device open */
+ unsigned int minor; /* acm minor number */
+ unsigned char throttle; /* throttled by tty layer */
+ unsigned char clocal; /* termios CLOCAL */
+ unsigned char ready_for_write; /* write urb can be used */
+};
+
+/* "Union Functional Descriptor" from CDC spec 5.2.3.X */
+struct union_desc {
+ u8 bLength;
+ u8 bDescriptorType;
+ u8 bDescriptorSubType;
+
+ u8 bMasterInterface0;
+ u8 bSlaveInterface0;
+ /* ... and there could be other slave interfaces */
+} __attribute__ ((packed));
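+
+/* For the common case of one slave interface, bLength is 5 and
+ * bDescriptorSubType is CDC_UNION_TYPE (defined below). */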
+
+#define CDC_UNION_TYPE 0x06
+#define CDC_DATA_INTERFACE_TYPE 0x0a
+
--- /dev/null
+/*
+ * drivers/usb/core/sysfs.c
+ *
+ * (C) Copyright 2002 David Brownell
+ * (C) Copyright 2002 Greg Kroah-Hartman
+ * (C) Copyright 2002 IBM Corp.
+ *
+ * All of the sysfs file attributes for usb devices and interfaces.
+ *
+ */
+
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+
+#ifdef CONFIG_USB_DEBUG
+ #define DEBUG
+#else
+ #undef DEBUG
+#endif
+#include <linux/usb.h>
+
+#include "usb.h"
+
+/* Active configuration fields */
+#define usb_actconfig_show(field, multiplier, format_string) \
+static ssize_t show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ \
+ udev = to_usb_device (dev); \
+ if (udev->actconfig) \
+ return sprintf (buf, format_string, \
+ udev->actconfig->desc.field * multiplier); \
+ else \
+ return 0; \
+}
+
+#define usb_actconfig_attr(field, multiplier, format_string) \
+usb_actconfig_show(field, multiplier, format_string) \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_actconfig_attr (bNumInterfaces, 1, "%2d\n")
+usb_actconfig_attr (bmAttributes, 1, "%2x\n")
+usb_actconfig_attr (bMaxPower, 2, "%3dmA\n")
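+
+/* For example, the bMaxPower line above expands to a show_bMaxPower()
+ * that prints udev->actconfig->desc.bMaxPower * 2 - the descriptor
+ * stores bus power in units of 2mA - plus a read-only DEVICE_ATTR. */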
+
+/* configuration value is always present, and r/w */
+usb_actconfig_show(bConfigurationValue, 1, "%u\n");
+
+static ssize_t
+set_bConfigurationValue (struct device *dev, const char *buf, size_t count)
+{
+ struct usb_device *udev = to_usb_device (dev);
+ int config, value;
+
+ if (sscanf (buf, "%d", &config) != 1 || config < 0 || config > 255)
+ return -EINVAL;
+ down(&udev->serialize);
+ value = usb_set_configuration (udev, config);
+ up(&udev->serialize);
+ return (value < 0) ? value : count;
+}
+
+static DEVICE_ATTR(bConfigurationValue, S_IRUGO | S_IWUSR,
+ show_bConfigurationValue, set_bConfigurationValue);
+
+/* String fields */
+#define usb_string_attr(name, field) \
+static ssize_t show_##name(struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ int len; \
+ \
+ udev = to_usb_device (dev); \
+ len = usb_string(udev, udev->descriptor.field, buf, PAGE_SIZE); \
+ if (len < 0) \
+ return 0; \
+ buf[len] = '\n'; \
+ buf[len+1] = 0; \
+ return len+1; \
+} \
+static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL);
+
+usb_string_attr(product, iProduct);
+usb_string_attr(manufacturer, iManufacturer);
+usb_string_attr(serial, iSerialNumber);
+
+static ssize_t
+show_speed (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+ char *speed;
+
+ udev = to_usb_device (dev);
+
+ switch (udev->speed) {
+ case USB_SPEED_LOW:
+ speed = "1.5";
+ break;
+ case USB_SPEED_UNKNOWN:
+ case USB_SPEED_FULL:
+ speed = "12";
+ break;
+ case USB_SPEED_HIGH:
+ speed = "480";
+ break;
+ default:
+ speed = "unknown";
+ }
+ return sprintf (buf, "%s\n", speed);
+}
+static DEVICE_ATTR(speed, S_IRUGO, show_speed, NULL);
+
+static ssize_t
+show_devnum (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%d\n", udev->devnum);
+}
+static DEVICE_ATTR(devnum, S_IRUGO, show_devnum, NULL);
+
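+/* bcdUSB is binary-coded decimal: e.g. 0x0210 prints as " 2.10" */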
+static ssize_t
+show_version (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%2x.%02x\n", udev->descriptor.bcdUSB >> 8,
+ udev->descriptor.bcdUSB & 0xff);
+}
+static DEVICE_ATTR(version, S_IRUGO, show_version, NULL);
+
+static ssize_t
+show_maxchild (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%d\n", udev->maxchild);
+}
+static DEVICE_ATTR(maxchild, S_IRUGO, show_maxchild, NULL);
+
+/* Descriptor fields */
+#define usb_descriptor_attr(field, format_string) \
+static ssize_t \
+show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ \
+ udev = to_usb_device (dev); \
+ return sprintf (buf, format_string, udev->descriptor.field); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_descriptor_attr (idVendor, "%04x\n")
+usb_descriptor_attr (idProduct, "%04x\n")
+usb_descriptor_attr (bcdDevice, "%04x\n")
+usb_descriptor_attr (bDeviceClass, "%02x\n")
+usb_descriptor_attr (bDeviceSubClass, "%02x\n")
+usb_descriptor_attr (bDeviceProtocol, "%02x\n")
+usb_descriptor_attr (bNumConfigurations, "%d\n")
+
+
+void usb_create_sysfs_dev_files (struct usb_device *udev)
+{
+ struct device *dev = &udev->dev;
+
+ /* current configuration's attributes */
+ device_create_file (dev, &dev_attr_bNumInterfaces);
+ device_create_file (dev, &dev_attr_bConfigurationValue);
+ device_create_file (dev, &dev_attr_bmAttributes);
+ device_create_file (dev, &dev_attr_bMaxPower);
+
+ /* device attributes */
+ device_create_file (dev, &dev_attr_idVendor);
+ device_create_file (dev, &dev_attr_idProduct);
+ device_create_file (dev, &dev_attr_bcdDevice);
+ device_create_file (dev, &dev_attr_bDeviceClass);
+ device_create_file (dev, &dev_attr_bDeviceSubClass);
+ device_create_file (dev, &dev_attr_bDeviceProtocol);
+ device_create_file (dev, &dev_attr_bNumConfigurations);
+
+ /* speed varies depending on how you connect the device */
+ device_create_file (dev, &dev_attr_speed);
+ // FIXME iff there are other speed configs, show how many
+
+ if (udev->descriptor.iManufacturer)
+ device_create_file (dev, &dev_attr_manufacturer);
+ if (udev->descriptor.iProduct)
+ device_create_file (dev, &dev_attr_product);
+ if (udev->descriptor.iSerialNumber)
+ device_create_file (dev, &dev_attr_serial);
+
+ device_create_file (dev, &dev_attr_devnum);
+ device_create_file (dev, &dev_attr_version);
+ device_create_file (dev, &dev_attr_maxchild);
+}
+
+/* Interface fields */
+#define usb_intf_attr(field, format_string) \
+static ssize_t \
+show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ \
+ return sprintf (buf, format_string, intf->cur_altsetting->desc.field); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_intf_attr (bInterfaceNumber, "%02x\n")
+usb_intf_attr (bAlternateSetting, "%2d\n")
+usb_intf_attr (bNumEndpoints, "%02x\n")
+usb_intf_attr (bInterfaceClass, "%02x\n")
+usb_intf_attr (bInterfaceSubClass, "%02x\n")
+usb_intf_attr (bInterfaceProtocol, "%02x\n")
+usb_intf_attr (iInterface, "%02x\n")
+
+void usb_create_sysfs_intf_files (struct usb_interface *intf)
+{
+ device_create_file (&intf->dev, &dev_attr_bInterfaceNumber);
+ device_create_file (&intf->dev, &dev_attr_bAlternateSetting);
+ device_create_file (&intf->dev, &dev_attr_bNumEndpoints);
+ device_create_file (&intf->dev, &dev_attr_bInterfaceClass);
+ device_create_file (&intf->dev, &dev_attr_bInterfaceSubClass);
+ device_create_file (&intf->dev, &dev_attr_bInterfaceProtocol);
+ device_create_file (&intf->dev, &dev_attr_iInterface);
+}
--- /dev/null
+/*
+ * OHCI HCD (Host Controller Driver) for USB.
+ *
+ * (C) Copyright 1999 Roman Weissgaerber <weissg@vienna.at>
+ * (C) Copyright 2000-2002 David Brownell <dbrownell@users.sourceforge.net>
+ * (C) Copyright 2002 Hewlett-Packard Company
+ *
+ * Bus Glue for Sharp LH7A404
+ *
+ * Written by Christopher Hoover <ch@hpl.hp.com>
+ * Based on fragments of previous driver by Russell King et al.
+ *
+ * Modified for LH7A404 from ohci-sa1111.c
+ * by Durgesh Pattamatta <pattamattad@sharpsec.com>
+ *
+ * This file is licenced under the GPL.
+ */
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/arch/hardware.h>
+
+
+extern int usb_disabled(void);
+
+/*-------------------------------------------------------------------------*/
+
+static void lh7a404_start_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": starting LH7A404 OHCI USB Controller\n");
+
+ /*
+ * Now, carefully enable the USB clock, and take
+ * the USB host controller out of reset.
+ */
+ CSC_PWRCNT |= CSC_PWRCNT_USBH_EN; /* Enable clock */
+ udelay(1000);
+ USBH_CMDSTATUS = OHCI_HCR;
+
+ printk(KERN_DEBUG __FILE__
+ ": Clock to USB host has been enabled\n");
+}
+
+static void lh7a404_stop_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": stopping LH7A404 OHCI USB Controller\n");
+
+ CSC_PWRCNT &= ~CSC_PWRCNT_USBH_EN; /* Disable clock */
+}
+
+
+/*-------------------------------------------------------------------------*/
+
+
+static irqreturn_t usb_hcd_lh7a404_hcim_irq (int irq, void *__hcd,
+ struct pt_regs * r)
+{
+ struct usb_hcd *hcd = __hcd;
+
+ return usb_hcd_irq(irq, hcd, r);
+}
+
+/*-------------------------------------------------------------------------*/
+
+void usb_hcd_lh7a404_remove (struct usb_hcd *, struct platform_device *);
+
+/* configure so an HC device and id are always provided */
+/* always called with process context; sleeping is OK */
+
+
+/**
+ * usb_hcd_lh7a404_probe - initialize LH7A404-based HCDs
+ * Context: !in_interrupt()
+ *
+ * Allocates basic resources for this USB host controller, and
+ * then invokes the start() method for the HCD associated with it
+ * through the hotplug entry's driver_data.
+ *
+ */
+int usb_hcd_lh7a404_probe (const struct hc_driver *driver,
+ struct usb_hcd **hcd_out,
+ struct platform_device *dev)
+{
+ int retval;
+ struct usb_hcd *hcd = NULL;
+
+ unsigned int *addr = NULL;
+
+ if (!request_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1, hcd_name)) {
+ pr_debug("request_mem_region failed");
+ return -EBUSY;
+ }
+
+
+ lh7a404_start_hc(dev);
+
+ addr = ioremap(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+ if (!addr) {
+ pr_debug("ioremap failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+
+ /* check the IRQ resource before allocating, so a failure here
+  * cannot leak a freshly allocated hcd */
+ if (!(dev->resource[1].flags & IORESOURCE_IRQ)) {
+ pr_debug ("resource[1] is not IORESOURCE_IRQ");
+ retval = -ENODEV;
+ goto err1;
+ }
+
+ hcd = driver->hcd_alloc ();
+ if (hcd == NULL) {
+ pr_debug ("hcd_alloc failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+ hcd->driver = (struct hc_driver *) driver;
+ hcd->description = driver->description;
+ hcd->irq = dev->resource[1].start;
+ hcd->regs = addr;
+ hcd->self.controller = &dev->dev;
+
+ retval = hcd_buffer_create (hcd);
+ if (retval != 0) {
+ pr_debug ("pool alloc fail");
+ goto err1;
+ }
+
+ retval = request_irq (hcd->irq, usb_hcd_lh7a404_hcim_irq, SA_INTERRUPT,
+ hcd->description, hcd);
+ if (retval != 0) {
+ pr_debug("request_irq failed");
+ retval = -EBUSY;
+ goto err2;
+ }
+
+ pr_debug ("%s (LH7A404) at 0x%p, irq %d",
+ hcd->description, hcd->regs, hcd->irq);
+
+ usb_bus_init (&hcd->self);
+ hcd->self.op = &usb_hcd_operations;
+ hcd->self.hcpriv = (void *) hcd;
+ hcd->self.bus_name = "lh7a404";
+ hcd->product_desc = "LH7A404 OHCI";
+
+ INIT_LIST_HEAD (&hcd->dev_list);
+
+ usb_register_bus (&hcd->self);
+
+ if ((retval = driver->start (hcd)) < 0)
+ {
+ usb_hcd_lh7a404_remove(hcd, dev);
+ return retval;
+ }
+
+ *hcd_out = hcd;
+ return 0;
+
+ err2:
+ hcd_buffer_destroy (hcd);
+ if (hcd)
+ driver->hcd_free(hcd);
+ err1:
+ lh7a404_stop_hc(dev);
+ if (addr)
+ iounmap(addr);
+ release_mem_region(dev->resource[0].start,
+   dev->resource[0].end
+   - dev->resource[0].start + 1);
+ return retval;
+}
+
+
+/* may be called without controller electrically present */
+/* may be called with controller, bus, and devices active */
+
+/**
+ * usb_hcd_lh7a404_remove - shutdown processing for LH7A404-based HCDs
+ * @dev: USB Host Controller being removed
+ * Context: !in_interrupt()
+ *
+ * Reverses the effect of usb_hcd_lh7a404_probe(), first invoking
+ * the HCD's stop() method. It is always called from a thread
+ * context, normally "rmmod", "apmd", or something similar.
+ *
+ */
+void usb_hcd_lh7a404_remove (struct usb_hcd *hcd, struct platform_device *dev)
+{
+ void *base;
+
+ pr_debug ("remove: %s, state %x", hcd->self.bus_name, hcd->state);
+
+ if (in_interrupt ())
+ BUG ();
+
+ hcd->state = USB_STATE_QUIESCING;
+
+ pr_debug ("%s: roothub graceful disconnect", hcd->self.bus_name);
+ usb_disconnect (&hcd->self.root_hub);
+
+ hcd->driver->stop (hcd);
+ hcd->state = USB_STATE_HALT;
+
+ free_irq (hcd->irq, hcd);
+ hcd_buffer_destroy (hcd);
+
+ usb_deregister_bus (&hcd->self);
+
+ base = hcd->regs;
+ hcd->driver->hcd_free (hcd);
+ iounmap (base); /* undo the ioremap() done at probe time */
+
+ lh7a404_stop_hc(dev);
+ release_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+}
+
+/*-------------------------------------------------------------------------*/
+
+static int __devinit
+ohci_lh7a404_start (struct usb_hcd *hcd)
+{
+ struct ohci_hcd *ohci = hcd_to_ohci (hcd);
+ int ret;
+
+ ohci_dbg (ohci, "ohci_lh7a404_start, ohci:%p", ohci);
+
+ ohci->hcca = dma_alloc_coherent (hcd->self.controller,
+ sizeof *ohci->hcca, &ohci->hcca_dma, 0);
+ if (!ohci->hcca)
+ return -ENOMEM;
+
+ ohci_dbg (ohci, "ohci_lh7a404_start, ohci->hcca:%p",
+ ohci->hcca);
+
+ memset (ohci->hcca, 0, sizeof (struct ohci_hcca));
+
+ if ((ret = ohci_mem_init (ohci)) < 0) {
+ ohci_stop (hcd);
+ return ret;
+ }
+ ohci->regs = hcd->regs;
+
+ if (hc_reset (ohci) < 0) {
+ ohci_stop (hcd);
+ return -ENODEV;
+ }
+
+ if (hc_start (ohci) < 0) {
+ err ("can't start %s", ohci->hcd.self.bus_name);
+ ohci_stop (hcd);
+ return -EBUSY;
+ }
+ create_debug_files (ohci);
+
+#ifdef DEBUG
+ ohci_dump (ohci, 1);
+#endif /*DEBUG*/
+ return 0;
+}
+
+/*-------------------------------------------------------------------------*/
+
+static const struct hc_driver ohci_lh7a404_hc_driver = {
+ .description = hcd_name,
+
+ /*
+ * generic hardware linkage
+ */
+ .irq = ohci_irq,
+ .flags = HCD_USB11,
+
+ /*
+ * basic lifecycle operations
+ */
+ .start = ohci_lh7a404_start,
+#ifdef CONFIG_PM
+ /* suspend: ohci_lh7a404_suspend, -- tbd */
+ /* resume: ohci_lh7a404_resume, -- tbd */
+#endif /*CONFIG_PM*/
+ .stop = ohci_stop,
+
+ /*
+ * memory lifecycle (except per-request)
+ */
+ .hcd_alloc = ohci_hcd_alloc,
+ .hcd_free = ohci_hcd_free,
+
+ /*
+ * managing i/o requests and associated device resources
+ */
+ .urb_enqueue = ohci_urb_enqueue,
+ .urb_dequeue = ohci_urb_dequeue,
+ .endpoint_disable = ohci_endpoint_disable,
+
+ /*
+ * scheduling support
+ */
+ .get_frame_number = ohci_get_frame,
+
+ /*
+ * root hub support
+ */
+ .hub_status_data = ohci_hub_status_data,
+ .hub_control = ohci_hub_control,
+};
+
+/*-------------------------------------------------------------------------*/
+
+static int ohci_hcd_lh7a404_drv_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = NULL;
+ int ret;
+
+ pr_debug ("In ohci_hcd_lh7a404_drv_probe");
+
+ if (usb_disabled())
+ return -ENODEV;
+
+ ret = usb_hcd_lh7a404_probe(&ohci_lh7a404_hc_driver, &hcd, pdev);
+
+ if (ret == 0)
+ dev_set_drvdata(dev, hcd);
+
+ return ret;
+}
+
+static int ohci_hcd_lh7a404_drv_remove(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ usb_hcd_lh7a404_remove(hcd, pdev);
+ dev_set_drvdata(dev, NULL);
+ return 0;
+}
+/* TBD */
+/*static int ohci_hcd_lh7a404_drv_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ return 0;
+}
+static int ohci_hcd_lh7a404_drv_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+
+ return 0;
+}
+*/
+
+static struct device_driver ohci_hcd_lh7a404_driver = {
+ .name = "lh7a404-ohci",
+ .bus = &platform_bus_type,
+ .probe = ohci_hcd_lh7a404_drv_probe,
+ .remove = ohci_hcd_lh7a404_drv_remove,
+ /*.suspend = ohci_hcd_lh7a404_drv_suspend, */
+ /*.resume = ohci_hcd_lh7a404_drv_resume, */
+};
+
+static int __init ohci_hcd_lh7a404_init (void)
+{
+ pr_debug (DRIVER_INFO " (LH7A404)");
+ pr_debug ("block sizes: ed %d td %d\n",
+ sizeof (struct ed), sizeof (struct td));
+
+ return driver_register(&ohci_hcd_lh7a404_driver);
+}
+
+static void __exit ohci_hcd_lh7a404_cleanup (void)
+{
+ driver_unregister(&ohci_hcd_lh7a404_driver);
+}
+
+module_init (ohci_hcd_lh7a404_init);
+module_exit (ohci_hcd_lh7a404_cleanup);
--- /dev/null
+/******************************************************************************
+ * touchkitusb.c -- Driver for eGalax TouchKit USB Touchscreens
+ *
+ * Copyright (C) 2004 by Daniel Ritz
+ * Copyright (C) by Todd E. Johnson (mtouchusb.c)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Based upon mtouchusb.c
+ *
+ *****************************************************************************/
+
+//#define DEBUG
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/input.h>
+#include <linux/module.h>
+#include <linux/init.h>
+
+#if !defined(DEBUG) && defined(CONFIG_USB_DEBUG)
+#define DEBUG
+#endif
+#include <linux/usb.h>
+
+
+#define TOUCHKIT_MIN_XC 0x0
+#define TOUCHKIT_MAX_XC 0x07ff
+#define TOUCHKIT_XC_FUZZ 0x0
+#define TOUCHKIT_XC_FLAT 0x0
+#define TOUCHKIT_MIN_YC 0x0
+#define TOUCHKIT_MAX_YC 0x07ff
+#define TOUCHKIT_YC_FUZZ 0x0
+#define TOUCHKIT_YC_FLAT 0x0
+#define TOUCHKIT_REPORT_DATA_SIZE 8
+
+#define TOUCHKIT_DOWN 0x01
+#define TOUCHKIT_POINT_TOUCH 0x81
+#define TOUCHKIT_POINT_NOTOUCH 0x80
+
+#define TOUCHKIT_GET_TOUCHED(dat) ((((dat)[0]) & TOUCHKIT_DOWN) ? 1 : 0)
+#define TOUCHKIT_GET_X(dat) (((dat)[3] << 7) | (dat)[4])
+#define TOUCHKIT_GET_Y(dat) (((dat)[1] << 7) | (dat)[2])
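+
+/*
+ * Worked example on a hypothetical report: with dat[] beginning
+ * {0x81, 0x03, 0x22, 0x01, 0x45} the macros above yield
+ * TOUCHKIT_GET_TOUCHED(dat) = 0x81 & 0x01 = 1,
+ * TOUCHKIT_GET_X(dat) = (0x01 << 7) | 0x45 = 197 and
+ * TOUCHKIT_GET_Y(dat) = (0x03 << 7) | 0x22 = 418,
+ * i.e. 7 significant bits per byte, X in dat[3..4], Y in dat[1..2].
+ */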
+
+#define DRIVER_VERSION "v0.1"
+#define DRIVER_AUTHOR "Daniel Ritz <daniel.ritz@gmx.ch>"
+#define DRIVER_DESC "eGalax TouchKit USB HID Touchscreen Driver"
+
+struct touchkit_usb {
+ unsigned char *data;
+ dma_addr_t data_dma;
+ struct urb *irq;
+ struct usb_device *udev;
+ struct input_dev input;
+ int open;
+ char name[128];
+ char phys[64];
+};
+
+static struct usb_device_id touchkit_devices[] = {
+ {USB_DEVICE(0x3823, 0x0001)},
+ {USB_DEVICE(0x0eef, 0x0001)},
+ {}
+};
+
+static void touchkit_irq(struct urb *urb, struct pt_regs *regs)
+{
+ struct touchkit_usb *touchkit = urb->context;
+ int retval;
+
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ETIMEDOUT:
+ /* this urb is timing out */
+ dbg("%s - urb timed out - was the device unplugged?",
+ __FUNCTION__);
+ return;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ /* this urb is terminated, clean up */
+ dbg("%s - urb shutting down with status: %d",
+ __FUNCTION__, urb->status);
+ return;
+ default:
+ dbg("%s - nonzero urb status received: %d",
+ __FUNCTION__, urb->status);
+ goto exit;
+ }
+
+ input_regs(&touchkit->input, regs);
+ input_report_key(&touchkit->input, BTN_TOUCH,
+ TOUCHKIT_GET_TOUCHED(touchkit->data));
+ input_report_abs(&touchkit->input, ABS_X,
+ TOUCHKIT_GET_X(touchkit->data));
+ input_report_abs(&touchkit->input, ABS_Y,
+ TOUCHKIT_GET_Y(touchkit->data));
+ input_sync(&touchkit->input);
+
+exit:
+ retval = usb_submit_urb(urb, GFP_ATOMIC);
+ if (retval)
+ err("%s - usb_submit_urb failed with result: %d",
+ __FUNCTION__, retval);
+}
+
+static int touchkit_open(struct input_dev *input)
+{
+ struct touchkit_usb *touchkit = input->private;
+
+ if (touchkit->open++)
+ return 0;
+
+ touchkit->irq->dev = touchkit->udev;
+
+ if (usb_submit_urb(touchkit->irq, GFP_ATOMIC)) {
+ touchkit->open--;
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static void touchkit_close(struct input_dev *input)
+{
+ struct touchkit_usb *touchkit = input->private;
+
+ if (!--touchkit->open)
+ usb_unlink_urb(touchkit->irq);
+}
+
+static int touchkit_alloc_buffers(struct usb_device *udev,
+ struct touchkit_usb *touchkit)
+{
+ touchkit->data = usb_buffer_alloc(udev, TOUCHKIT_REPORT_DATA_SIZE,
+ SLAB_ATOMIC, &touchkit->data_dma);
+
+ if (!touchkit->data)
+ return -1;
+
+ return 0;
+}
+
+static void touchkit_free_buffers(struct usb_device *udev,
+ struct touchkit_usb *touchkit)
+{
+ if (touchkit->data)
+ usb_buffer_free(udev, TOUCHKIT_REPORT_DATA_SIZE,
+ touchkit->data, touchkit->data_dma);
+}
+
+static int touchkit_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+{
+ int ret;
+ struct touchkit_usb *touchkit;
+ struct usb_host_interface *interface;
+ struct usb_endpoint_descriptor *endpoint;
+ struct usb_device *udev = interface_to_usbdev(intf);
+ char path[64];
+ char *buf;
+
+ interface = intf->cur_altsetting;
+ endpoint = &interface->endpoint[0].desc;
+
+ touchkit = kmalloc(sizeof(struct touchkit_usb), GFP_KERNEL);
+ if (!touchkit)
+ return -ENOMEM;
+
+ memset(touchkit, 0, sizeof(struct touchkit_usb));
+ touchkit->udev = udev;
+
+ if (touchkit_alloc_buffers(udev, touchkit)) {
+ ret = -ENOMEM;
+ goto out_free;
+ }
+
+ touchkit->input.private = touchkit;
+ touchkit->input.open = touchkit_open;
+ touchkit->input.close = touchkit_close;
+
+ usb_make_path(udev, path, 64);
+ sprintf(touchkit->phys, "%s/input0", path);
+
+ touchkit->input.name = touchkit->name;
+ touchkit->input.phys = touchkit->phys;
+ touchkit->input.id.bustype = BUS_USB;
+ touchkit->input.id.vendor = udev->descriptor.idVendor;
+ touchkit->input.id.product = udev->descriptor.idProduct;
+ touchkit->input.id.version = udev->descriptor.bcdDevice;
+ touchkit->input.dev = &intf->dev;
+
+ touchkit->input.evbit[0] = BIT(EV_KEY) | BIT(EV_ABS);
+ touchkit->input.absbit[0] = BIT(ABS_X) | BIT(ABS_Y);
+ touchkit->input.keybit[LONG(BTN_TOUCH)] = BIT(BTN_TOUCH);
+
+ /* Used to Scale Compensated Data */
+ touchkit->input.absmin[ABS_X] = TOUCHKIT_MIN_XC;
+ touchkit->input.absmax[ABS_X] = TOUCHKIT_MAX_XC;
+ touchkit->input.absfuzz[ABS_X] = TOUCHKIT_XC_FUZZ;
+ touchkit->input.absflat[ABS_X] = TOUCHKIT_XC_FLAT;
+ touchkit->input.absmin[ABS_Y] = TOUCHKIT_MIN_YC;
+ touchkit->input.absmax[ABS_Y] = TOUCHKIT_MAX_YC;
+ touchkit->input.absfuzz[ABS_Y] = TOUCHKIT_YC_FUZZ;
+ touchkit->input.absflat[ABS_Y] = TOUCHKIT_YC_FLAT;
+
+ buf = kmalloc(63, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto out_free_buffers;
+ }
+
+ if (udev->descriptor.iManufacturer &&
+ usb_string(udev, udev->descriptor.iManufacturer, buf, 63) > 0)
+ strcat(touchkit->name, buf);
+ if (udev->descriptor.iProduct &&
+     usb_string(udev, udev->descriptor.iProduct, buf, 63) > 0) {
+ /* append; sprintf()ing a buffer into itself is undefined */
+ strlcat(touchkit->name, " ", sizeof(touchkit->name));
+ strlcat(touchkit->name, buf, sizeof(touchkit->name));
+ }
+
+ if (!strlen(touchkit->name))
+ sprintf(touchkit->name, "USB Touchscreen %04x:%04x",
+ touchkit->input.id.vendor, touchkit->input.id.product);
+
+ kfree(buf);
+
+ touchkit->irq = usb_alloc_urb(0, GFP_KERNEL);
+ if (!touchkit->irq) {
+ dbg("%s - usb_alloc_urb failed: touchkit->irq", __FUNCTION__);
+ ret = -ENOMEM;
+ goto out_free_buffers;
+ }
+
+ usb_fill_int_urb(touchkit->irq, touchkit->udev,
+ usb_rcvintpipe(touchkit->udev, 0x81),
+ touchkit->data, TOUCHKIT_REPORT_DATA_SIZE,
+ touchkit_irq, touchkit, endpoint->bInterval);
+
+ input_register_device(&touchkit->input);
+
+ printk(KERN_INFO "input: %s on %s\n", touchkit->name, path);
+ usb_set_intfdata(intf, touchkit);
+
+ return 0;
+
+out_free_buffers:
+ touchkit_free_buffers(udev, touchkit);
+out_free:
+ kfree(touchkit);
+ return ret;
+}
+
+static void touchkit_disconnect(struct usb_interface *intf)
+{
+ struct touchkit_usb *touchkit = usb_get_intfdata(intf);
+
+ dbg("%s - called", __FUNCTION__);
+
+ if (!touchkit)
+ return;
+
+ dbg("%s - touchkit is initialized, cleaning up", __FUNCTION__);
+ usb_set_intfdata(intf, NULL);
+ input_unregister_device(&touchkit->input);
+ usb_unlink_urb(touchkit->irq);
+ usb_free_urb(touchkit->irq);
+ touchkit_free_buffers(interface_to_usbdev(intf), touchkit);
+ kfree(touchkit);
+}
+
+MODULE_DEVICE_TABLE(usb, touchkit_devices);
+
+static struct usb_driver touchkit_driver = {
+ .owner = THIS_MODULE,
+ .name = "touchkitusb",
+ .probe = touchkit_probe,
+ .disconnect = touchkit_disconnect,
+ .id_table = touchkit_devices,
+};
+
+static int __init touchkit_init(void)
+{
+ return usb_register(&touchkit_driver);
+}
+
+static void __exit touchkit_cleanup(void)
+{
+ usb_deregister(&touchkit_driver);
+}
+
+module_init(touchkit_init);
+module_exit(touchkit_cleanup);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
--- /dev/null
+/***************************************************************************
+ * V4L2 driver for SN9C10[12] PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#ifndef _SN9C102_H_
+#define _SN9C102_H_
+
+#include <linux/version.h>
+#include <linux/usb.h>
+#include <linux/videodev.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/types.h>
+#include <linux/param.h>
+#include <linux/rwsem.h>
+
+#include "sn9c102_sensor.h"
+
+/*****************************************************************************/
+
+#define SN9C102_DEBUG
+#define SN9C102_DEBUG_LEVEL 2
+#define SN9C102_MAX_DEVICES 64
+#define SN9C102_MAX_FRAMES 32
+#define SN9C102_URBS 2
+#define SN9C102_ISO_PACKETS 7
+#define SN9C102_ALTERNATE_SETTING 8
+#define SN9C102_CTRL_TIMEOUT (10*HZ)
+
+/*****************************************************************************/
+
+#define SN9C102_MODULE_NAME "V4L2 driver for SN9C10[12] PC Camera Controllers"
+#define SN9C102_MODULE_AUTHOR "(C) 2004 Luca Risolia"
+#define SN9C102_AUTHOR_EMAIL "<luca.risolia@studio.unibo.it>"
+#define SN9C102_MODULE_LICENSE "GPL"
+#define SN9C102_MODULE_VERSION "1:1.01-beta"
+#define SN9C102_MODULE_VERSION_CODE KERNEL_VERSION(1, 0, 1)
+
+SN9C102_ID_TABLE;
+SN9C102_SENSOR_TABLE;
+
+enum sn9c102_frame_state {
+ F_UNUSED,
+ F_QUEUED,
+ F_GRABBING,
+ F_DONE,
+ F_ERROR,
+};
+
+struct sn9c102_frame_t {
+ void* bufmem;
+ struct v4l2_buffer buf;
+ enum sn9c102_frame_state state;
+ struct list_head frame;
+ unsigned long vma_use_count;
+};
+
+enum sn9c102_dev_state {
+ DEV_INITIALIZED = 0x01,
+ DEV_DISCONNECTED = 0x02,
+ DEV_MISCONFIGURED = 0x04,
+};
+
+enum sn9c102_io_method {
+ IO_NONE,
+ IO_READ,
+ IO_MMAP,
+};
+
+enum sn9c102_stream_state {
+ STREAM_OFF,
+ STREAM_INTERRUPT,
+ STREAM_ON,
+};
+
+struct sn9c102_sysfs_attr {
+ u8 reg, val, i2c_reg, i2c_val;
+};
+
+static DECLARE_MUTEX(sn9c102_sysfs_lock);
+static DECLARE_RWSEM(sn9c102_disconnect);
+
+struct sn9c102_device {
+ struct device dev;
+
+ struct video_device* v4ldev;
+
+ struct sn9c102_sensor* sensor;
+
+ struct usb_device* usbdev;
+ struct urb* urb[SN9C102_URBS];
+ void* transfer_buffer[SN9C102_URBS];
+ u8* control_buffer;
+
+ struct sn9c102_frame_t *frame_current, frame[SN9C102_MAX_FRAMES];
+ struct list_head inqueue, outqueue;
+ u32 frame_count, nbuffers;
+
+ enum sn9c102_io_method io;
+ enum sn9c102_stream_state stream;
+
+ struct sn9c102_sysfs_attr sysfs;
+ u16 reg[32];
+
+ enum sn9c102_dev_state state;
+ u8 users;
+
+ struct semaphore dev_sem, fileop_sem;
+ spinlock_t queue_lock;
+ wait_queue_head_t open, wait_frame, wait_stream;
+};
+
+/*****************************************************************************/
+
+static inline void
+sn9c102_attach_sensor(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ cam->sensor = sensor;
+ cam->sensor->dev = &cam->dev;
+ cam->sensor->usbdev = cam->usbdev;
+}
+
+/*****************************************************************************/
+
+#undef DBG
+#undef KDBG
+#ifdef SN9C102_DEBUG
+# define DBG(level, fmt, args...) \
+{ \
+ if (debug >= (level)) { \
+ if ((level) == 1) \
+ dev_err(&cam->dev, fmt "\n", ## args); \
+ else if ((level) == 2) \
+ dev_info(&cam->dev, fmt "\n", ## args); \
+ else if ((level) >= 3) \
+ dev_info(&cam->dev, "[%s:%d] " fmt "\n", \
+ __FUNCTION__, __LINE__ , ## args); \
+ } \
+}
+# define KDBG(level, fmt, args...) \
+{ \
+ if (debug >= (level)) { \
+ if ((level) == 1 || (level) == 2) \
+ pr_info("sn9c102: " fmt "\n", ## args); \
+ else if ((level) == 3) \
+ pr_debug("sn9c102: [%s:%d] " fmt "\n", __FUNCTION__, \
+ __LINE__ , ## args); \
+ } \
+}
+#else
+# define KDBG(level, fmt, args...) do {;} while(0);
+# define DBG(level, fmt, args...) do {;} while(0);
+#endif
+
+#undef PDBG
+#define PDBG(fmt, args...) \
+dev_info(&cam->dev, "[%s:%d] " fmt "\n", __FUNCTION__, __LINE__ , ## args);
+
+#undef PDBGG
+#define PDBGG(fmt, args...) do {;} while(0); /* placeholder */
+
+#endif /* _SN9C102_H_ */
--- /dev/null
+/***************************************************************************
+ * V4L2 driver for SN9C10[12] PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/stddef.h>
+#include <linux/ioctl.h>
+#include <linux/poll.h>
+#include <linux/stat.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/page-flags.h>
+#include <asm/page.h>
+#include <asm/uaccess.h>
+
+#include "sn9c102.h"
+
+/*****************************************************************************/
+
+MODULE_DEVICE_TABLE(usb, sn9c102_id_table);
+
+MODULE_AUTHOR(SN9C102_MODULE_AUTHOR " " SN9C102_AUTHOR_EMAIL);
+MODULE_DESCRIPTION(SN9C102_MODULE_NAME);
+MODULE_VERSION(SN9C102_MODULE_VERSION);
+MODULE_LICENSE(SN9C102_MODULE_LICENSE);
+
+static short video_nr[] = {[0 ... SN9C102_MAX_DEVICES-1] = -1};
+static unsigned int nv;
+module_param_array(video_nr, short, nv, 0444);
+MODULE_PARM_DESC(video_nr,
+ "\n<-1|n[,...]> Specify V4L2 minor number."
+ "\n -1 = use next available (default)"
+ "\n n = use minor number n (integer >= 0)"
+ "\nYou can specify up to "__MODULE_STRING(SN9C102_MAX_DEVICES)
+ " cameras this way."
+ "\nFor example:"
+ "\nvideo_nr=-1,2,-1 would assign minor number 2 to"
+ "\nthe second camera and use auto for the first"
+ "\none and for every other camera."
+ "\n");
+
+#ifdef SN9C102_DEBUG
+static unsigned short debug = SN9C102_DEBUG_LEVEL;
+module_param(debug, ushort, 0644);
+MODULE_PARM_DESC(debug,
+ "\n<n> Debugging information level, from 0 to 3:"
+ "\n0 = none (use carefully)"
+ "\n1 = critical errors"
+ "\n2 = significant information"
+ "\n3 = more verbose messages"
+ "\nLevel 3 is useful for testing only, when only "
+ "one device is used."
+ "\nDefault value is "__MODULE_STRING(SN9C102_DEBUG_LEVEL)"."
+ "\n");
+#endif
+
+/*****************************************************************************/
+
+typedef char sn9c102_sof_header_t[7];
+typedef char sn9c102_eof_header_t[4];
+
+static sn9c102_sof_header_t sn9c102_sof_header[] = {
+ {0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x00},
+ {0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x01},
+};
+
+/* Number of random bytes that follow the SOF headers above */
+#define SN9C102_SOFLEN 5
+
+static sn9c102_eof_header_t sn9c102_eof_header[] = {
+ {0x00, 0x00, 0x00, 0x00},
+ {0x40, 0x00, 0x00, 0x00},
+ {0x80, 0x00, 0x00, 0x00},
+ {0xc0, 0x00, 0x00, 0x00},
+};
+
+/*****************************************************************************/
+
+static inline unsigned long kvirt_to_pa(unsigned long adr)
+{
+ unsigned long kva, ret;
+
+ kva = (unsigned long)page_address(vmalloc_to_page((void *)adr));
+ kva |= adr & (PAGE_SIZE-1);
+ ret = __pa(kva);
+ return ret;
+}
+
+
+static void* rvmalloc(size_t size)
+{
+ void* mem;
+ unsigned long adr;
+
+ size = PAGE_ALIGN(size);
+
+ mem = vmalloc_32((unsigned long)size);
+ if (!mem)
+ return NULL;
+
+ memset(mem, 0, size);
+
+ adr = (unsigned long)mem;
+ while (size > 0) {
+ SetPageReserved(vmalloc_to_page((void *)adr));
+ adr += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ return mem;
+}
+
+
+static void rvfree(void* mem, size_t size)
+{
+ unsigned long adr;
+
+ if (!mem)
+ return;
+
+ size = PAGE_ALIGN(size);
+
+ adr = (unsigned long)mem;
+ while (size > 0) {
+ ClearPageReserved(vmalloc_to_page((void *)adr));
+ adr += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ vfree(mem);
+}
+
+
+static u32 sn9c102_request_buffers(struct sn9c102_device* cam, u32 count)
+{
+ struct v4l2_pix_format* p = &(cam->sensor->pix_format);
+ const size_t imagesize = (p->width * p->height * p->priv)/8;
+ void* buff = NULL;
+ u32 i;
+
+ if (count > SN9C102_MAX_FRAMES)
+ count = SN9C102_MAX_FRAMES;
+
+ cam->nbuffers = count;
+ while (cam->nbuffers > 0) {
+ if ((buff = rvmalloc(cam->nbuffers * imagesize)))
+ break;
+ cam->nbuffers--;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++) {
+ cam->frame[i].bufmem = buff + i*imagesize;
+ cam->frame[i].buf.index = i;
+ cam->frame[i].buf.m.offset = i*imagesize;
+ cam->frame[i].buf.length = imagesize;
+ cam->frame[i].buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ cam->frame[i].buf.sequence = 0;
+ cam->frame[i].buf.field = V4L2_FIELD_NONE;
+ cam->frame[i].buf.memory = V4L2_MEMORY_MMAP;
+ cam->frame[i].buf.flags = 0;
+ }
+
+ return cam->nbuffers;
+}
+
+
+static void sn9c102_release_buffers(struct sn9c102_device* cam)
+{
+ if (cam->nbuffers) {
+ rvfree(cam->frame[0].bufmem,
+ cam->nbuffers * cam->frame[0].buf.length);
+ cam->nbuffers = 0;
+ }
+}
+
+
+static void sn9c102_empty_framequeues(struct sn9c102_device* cam)
+{
+ u32 i;
+
+ INIT_LIST_HEAD(&cam->inqueue);
+ INIT_LIST_HEAD(&cam->outqueue);
+
+ for (i = 0; i < SN9C102_MAX_FRAMES; i++) {
+ cam->frame[i].state = F_UNUSED;
+ cam->frame[i].buf.bytesused = 0;
+ }
+}
+
+
+static void sn9c102_queue_unusedframes(struct sn9c102_device* cam)
+{
+ unsigned long lock_flags;
+ u32 i;
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].state == F_UNUSED) {
+ cam->frame[i].state = F_QUEUED;
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_add_tail(&cam->frame[i].frame, &cam->inqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+ }
+}
+
+/*****************************************************************************/
+
+int sn9c102_write_reg(struct sn9c102_device* cam, u8 value, u16 index)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* buff = cam->control_buffer;
+ int res;
+
+ if (index == 0x18)
+ value = (value & 0xcf) | (cam->reg[0x18] & 0x30);
+
+ *buff = value;
+
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ index, 0, buff, 1, SN9C102_CTRL_TIMEOUT);
+ if (res < 0) {
+ DBG(3, "Failed to write a register (value 0x%02X, index "
+ "0x%02X, error %d)", value, index, res)
+ return -1;
+ }
+
+ cam->reg[index] = value;
+
+ return 0;
+}
+
+
+/* NOTE: reading some registers always returns 0 */
+static int sn9c102_read_reg(struct sn9c102_device* cam, u16 index)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* buff = cam->control_buffer;
+ int res;
+
+ res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x00, 0xc1,
+ index, 0, buff, 1, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ DBG(3, "Failed to read a register (index 0x%02X, error %d)",
+ index, res)
+
+ return (res >= 0) ? (int)(*buff) : -1;
+}
+
+
+int sn9c102_pread_reg(struct sn9c102_device* cam, u16 index)
+{
+ if (index > 0x1f)
+ return -EINVAL;
+
+ return cam->reg[index];
+}
+
+
+static int
+sn9c102_i2c_wait(struct sn9c102_device* cam, struct sn9c102_sensor* sensor)
+{
+ int i, r;
+
+ for (i = 1; i <= 5; i++) {
+ r = sn9c102_read_reg(cam, 0x08);
+ if (r < 0)
+ return -EIO;
+ if (r & 0x04)
+ return 0;
+ if (sensor->frequency & SN9C102_I2C_400KHZ)
+ udelay(5*8);
+ else
+ udelay(16*8);
+ }
+ return -EBUSY;
+}
+
+
+static int
+sn9c102_i2c_detect_read_error(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ int r;
+ r = sn9c102_read_reg(cam, 0x08);
+ return (r < 0 || !(r & 0x08)) ? -EIO : 0;
+}
+
+
+static int
+sn9c102_i2c_detect_write_error(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ int r;
+ r = sn9c102_read_reg(cam, 0x08);
+ return (r < 0 || (r & 0x08)) ? -EIO : 0;
+}
+
+
+int
+sn9c102_i2c_try_read(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 address)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* data = cam->control_buffer;
+ int err = 0, res;
+
+ /* Write cycle - address */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) | 0x10;
+ data[1] = sensor->slave_write_id;
+ data[2] = address;
+ data[7] = 0x10;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+
+ /* Read cycle - 1 byte */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) |
+ 0x10 | 0x02;
+ data[1] = sensor->slave_read_id;
+ data[7] = 0x10;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+
+ /* The read byte will be placed in data[4] */
+ res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x00, 0xc1,
+ 0x0a, 0, data, 5, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_detect_read_error(cam, sensor);
+
+ if (err)
+ DBG(3, "I2C read failed for %s image sensor", sensor->name)
+
+ PDBGG("I2C read: address 0x%02X, value: 0x%02X", address, data[4])
+
+ return err ? -1 : (int)data[4];
+}
+
+
+int
+sn9c102_i2c_try_raw_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 n, u8 data0,
+ u8 data1, u8 data2, u8 data3, u8 data4, u8 data5)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* data = cam->control_buffer;
+ int err = 0, res;
+
+ /* Write cycle. It usually is address + value */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0)
+ | ((n - 1) << 4);
+ data[1] = data0;
+ data[2] = data1;
+ data[3] = data2;
+ data[4] = data3;
+ data[5] = data4;
+ data[6] = data5;
+ data[7] = 0x10;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+ err += sn9c102_i2c_detect_write_error(cam, sensor);
+
+ if (err)
+ DBG(3, "I2C write failed for %s image sensor", sensor->name)
+
+ PDBGG("I2C write: %u bytes, data0 = 0x%02X, data1 = 0x%02X, "
+ "data2 = 0x%02X, data3 = 0x%02X, data4 = 0x%02X, data5 = 0x%02X",
+ n, data0, data1, data2, data3, data4, data5)
+
+ return err ? -1 : 0;
+}
+
+
+int
+sn9c102_i2c_try_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 address, u8 value)
+{
+ return sn9c102_i2c_try_raw_write(cam, sensor, 3,
+ sensor->slave_write_id, address,
+ value, 0, 0, 0);
+}
+
+
+int sn9c102_i2c_read(struct sn9c102_device* cam, u8 address)
+{
+ if (!cam->sensor)
+ return -1;
+
+ return sn9c102_i2c_try_read(cam, cam->sensor, address);
+}
+
+
+int sn9c102_i2c_write(struct sn9c102_device* cam, u8 address, u8 value)
+{
+ if (!cam->sensor)
+ return -1;
+
+ return sn9c102_i2c_try_write(cam, cam->sensor, address, value);
+}
+
+/*****************************************************************************/
+
+static void* sn9c102_find_sof_header(void* mem, size_t len)
+{
+ size_t soflen=sizeof(sn9c102_sof_header_t), SOFLEN=SN9C102_SOFLEN, i;
+ u8 j, n = sizeof(sn9c102_sof_header) / soflen;
+
+ for (i = 0; (len >= soflen+SOFLEN) && (i <= len-soflen-SOFLEN); i++)
+ for (j = 0; j < n; j++)
+ if (!memcmp(mem + i, sn9c102_sof_header[j], soflen))
+ /* Skips the header */
+ return mem + i + soflen + SOFLEN;
+
+ return NULL;
+}
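+
+/*
+ * Example: if the buffer contains the first header above at offset i,
+ * i.e. mem[i..i+6] == ff ff 00 c4 c4 96 00, the function returns
+ * mem + i + 7 + SN9C102_SOFLEN, skipping both the 7-byte header and
+ * the random filler bytes, so the caller gets the first image byte.
+ */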
+
+
+static void* sn9c102_find_eof_header(void* mem, size_t len)
+{
+ size_t eoflen = sizeof(sn9c102_eof_header_t), i;
+ unsigned j, n = sizeof(sn9c102_eof_header) / eoflen;
+
+ for (i = 0; (len >= eoflen) && (i <= len - eoflen); i++)
+ for (j = 0; j < n; j++)
+ if (!memcmp(mem + i, sn9c102_eof_header[j], eoflen))
+ return mem + i;
+
+ return NULL;
+}
+
+
+static void sn9c102_urb_complete(struct urb *urb, struct pt_regs* regs)
+{
+ struct sn9c102_device* cam = urb->context;
+ struct sn9c102_frame_t** f;
+ unsigned long lock_flags;
+ u8 i;
+ int err = 0;
+
+ if (urb->status == -ENOENT)
+ return;
+
+ f = &cam->frame_current;
+
+ if (cam->stream == STREAM_INTERRUPT) {
+ cam->stream = STREAM_OFF;
+ if ((*f))
+ (*f)->state = F_QUEUED;
+ DBG(3, "Stream interrupted")
+ wake_up_interruptible(&cam->wait_stream);
+ }
+
+ if ((cam->state & DEV_DISCONNECTED)||(cam->state & DEV_MISCONFIGURED))
+ return;
+
+ if (cam->stream == STREAM_OFF || list_empty(&cam->inqueue))
+ goto resubmit_urb;
+
+ if (!(*f))
+ (*f) = list_entry(cam->inqueue.next, struct sn9c102_frame_t,
+ frame);
+
+ for (i = 0; i < urb->number_of_packets; i++) {
+ unsigned int img, len, status;
+ void *pos, *sof, *eof;
+
+ len = urb->iso_frame_desc[i].actual_length;
+ status = urb->iso_frame_desc[i].status;
+ pos = urb->iso_frame_desc[i].offset + urb->transfer_buffer;
+
+ if (status) {
+ DBG(3, "Error in isochronous frame")
+ (*f)->state = F_ERROR;
+ continue;
+ }
+
+ PDBGG("Isochronous frame: length %u, #%u", len, i)
+
+ /* NOTE: It is probably correct to assume that SOF and EOF
+    headers are never split across two consecutive packets;
+    whatever the truth is, this assumption does not
+    introduce bugs. */
+
+redo:
+ sof = sn9c102_find_sof_header(pos, len);
+ if (!sof) {
+ eof = sn9c102_find_eof_header(pos, len);
+ if ((*f)->state == F_GRABBING) {
+end_of_frame:
+ img = len;
+
+ if (eof)
+ img = (eof > pos) ? eof - pos - 1 : 0;
+
+ if ((*f)->buf.bytesused+img>(*f)->buf.length) {
+ u32 b = (*f)->buf.bytesused + img -
+ (*f)->buf.length;
+ img = (*f)->buf.length -
+ (*f)->buf.bytesused;
+ DBG(3, "Expected EOF not found: "
+ "video frame cut")
+ if (eof)
+ DBG(3, "Exceeded limit: +%u "
+ "bytes", (unsigned)(b))
+ }
+
+ memcpy((*f)->bufmem + (*f)->buf.bytesused, pos,
+ img);
+
+ if ((*f)->buf.bytesused == 0)
+ do_gettimeofday(&(*f)->buf.timestamp);
+
+ (*f)->buf.bytesused += img;
+
+ if ((*f)->buf.bytesused == (*f)->buf.length) {
+ u32 b = (*f)->buf.bytesused;
+ (*f)->state = F_DONE;
+ (*f)->buf.sequence= ++cam->frame_count;
+ spin_lock_irqsave(&cam->queue_lock,
+ lock_flags);
+ list_move_tail(&(*f)->frame,
+ &cam->outqueue);
+ if (!list_empty(&cam->inqueue))
+ (*f) = list_entry(
+ cam->inqueue.next,
+ struct sn9c102_frame_t,
+ frame );
+ else
+ (*f) = NULL;
+ spin_unlock_irqrestore(&cam->queue_lock
+ , lock_flags);
+ DBG(3, "Video frame captured: "
+ "%lu bytes", (unsigned long)(b))
+
+ if (!(*f))
+ goto resubmit_urb;
+
+ } else if (eof) {
+ (*f)->state = F_ERROR;
+ DBG(3, "Not expected EOF after %lu "
+ "bytes of image data",
+ (unsigned long)((*f)->buf.bytesused))
+ }
+
+ if (sof) /* (1) */
+ goto start_of_frame;
+
+ } else if (eof) {
+ DBG(3, "EOF without SOF")
+ continue;
+
+ } else {
+ PDBGG("Ignoring pointless isochronous frame")
+ continue;
+ }
+
+ } else if ((*f)->state == F_QUEUED || (*f)->state == F_ERROR) {
+start_of_frame:
+ (*f)->state = F_GRABBING;
+ (*f)->buf.bytesused = 0;
+ len -= (sof - pos);
+ pos = sof;
+ DBG(3, "SOF detected: new video frame")
+ if (len)
+ goto redo;
+
+ } else if ((*f)->state == F_GRABBING) {
+ eof = sn9c102_find_eof_header(pos, len);
+ if (eof && eof < sof)
+ goto end_of_frame; /* (1) */
+ else {
+ DBG(3, "SOF before expected EOF after %lu "
+ "bytes of image data",
+ (unsigned long)((*f)->buf.bytesused))
+ goto start_of_frame;
+ }
+ }
+ }
+
+resubmit_urb:
+ urb->dev = cam->usbdev;
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0 && err != -EPERM) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "usb_submit_urb() failed")
+ }
+
+ wake_up_interruptible(&cam->wait_frame);
+}
+
+
+static int sn9c102_start_transfer(struct sn9c102_device* cam)
+{
+ struct usb_device *udev = cam->usbdev;
+ struct urb* urb;
+ const unsigned int wMaxPacketSize[] = {0, 128, 256, 384, 512,
+ 680, 800, 900, 1023};
+ const unsigned int psz = wMaxPacketSize[SN9C102_ALTERNATE_SETTING];
+ s8 i, j;
+ int err = 0;
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ cam->transfer_buffer[i] = kmalloc(SN9C102_ISO_PACKETS * psz,
+ GFP_KERNEL);
+ if (!cam->transfer_buffer[i]) {
+ err = -ENOMEM;
+ DBG(1, "Not enough memory")
+ goto free_buffers;
+ }
+ }
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ urb = usb_alloc_urb(SN9C102_ISO_PACKETS, GFP_KERNEL);
+ cam->urb[i] = urb;
+ if (!urb) {
+ err = -ENOMEM;
+ DBG(1, "usb_alloc_urb() failed")
+ goto free_urbs;
+ }
+ urb->dev = udev;
+ urb->context = cam;
+ urb->pipe = usb_rcvisocpipe(udev, 1);
+ urb->transfer_flags = URB_ISO_ASAP;
+ urb->number_of_packets = SN9C102_ISO_PACKETS;
+ urb->complete = sn9c102_urb_complete;
+ urb->transfer_buffer = cam->transfer_buffer[i];
+ urb->transfer_buffer_length = psz * SN9C102_ISO_PACKETS;
+ urb->interval = 1;
+ for (j = 0; j < SN9C102_ISO_PACKETS; j++) {
+ urb->iso_frame_desc[j].offset = psz * j;
+ urb->iso_frame_desc[j].length = psz;
+ }
+ }
+
+ /* Enable video */
+ if (!(cam->reg[0x01] & 0x04)) {
+ err = sn9c102_write_reg(cam, cam->reg[0x01] | 0x04, 0x01);
+ if (err) {
+ err = -EIO;
+ DBG(1, "I/O hardware error")
+ goto free_urbs;
+ }
+ }
+
+ err = usb_set_interface(udev, 0, SN9C102_ALTERNATE_SETTING);
+ if (err) {
+ DBG(1, "usb_set_interface() failed")
+ goto free_urbs;
+ }
+
+ cam->frame_current = NULL;
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ err = usb_submit_urb(cam->urb[i], GFP_KERNEL);
+ if (err) {
+ for (j = i-1; j >= 0; j--)
+ usb_kill_urb(cam->urb[j]);
+ DBG(1, "usb_submit_urb() failed, error %d", err)
+ goto free_urbs;
+ }
+ }
+
+ return 0;
+
+free_urbs:
+ for (i = 0; (i < SN9C102_URBS) && cam->urb[i]; i++)
+ usb_free_urb(cam->urb[i]);
+
+free_buffers:
+ for (i = 0; (i < SN9C102_URBS) && cam->transfer_buffer[i]; i++)
+ kfree(cam->transfer_buffer[i]);
+
+ return err;
+}
+
+
+static int sn9c102_stop_transfer(struct sn9c102_device* cam)
+{
+ struct usb_device *udev = cam->usbdev;
+ s8 i;
+ int err = 0;
+
+ if (cam->state & DEV_DISCONNECTED)
+ return 0;
+
+ for (i = SN9C102_URBS-1; i >= 0; i--) {
+ usb_kill_urb(cam->urb[i]);
+ usb_free_urb(cam->urb[i]);
+ kfree(cam->transfer_buffer[i]);
+ }
+
+ err = usb_set_interface(udev, 0, 0); /* 0 Mb/s */
+ if (err)
+ DBG(3, "usb_set_interface() failed")
+
+ return err;
+}
+
+/*****************************************************************************/
+
+static u8 sn9c102_strtou8(const char* buff, size_t len, ssize_t* count)
+{
+ char str[5];
+ char* endp;
+ unsigned long val;
+
+ if (len < 4) {
+ strncpy(str, buff, len);
+ str[len] = '\0';
+ } else {
+ strncpy(str, buff, 4);
+ str[4] = '\0';
+ }
+
+ val = simple_strtoul(str, &endp, 0);
+
+ *count = 0;
+ if (val <= 0xff)
+ *count = (ssize_t)(endp - str);
+ if ((*count) && (len == *count+1) && (buff[*count] == '\n'))
+ *count += 1;
+
+ return (u8)val;
+}
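+
+/*
+ * Example: sn9c102_strtou8("0x1f\n", 5, &count) copies "0x1f" into str,
+ * simple_strtoul() parses it as 31 with endp four bytes in, and the
+ * trailing newline is then consumed as well, so the function returns 31
+ * and sets count = 5 (the whole input was valid).
+ */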
+
+/* NOTE 1: being inside one of the following methods implies that the v4l
+ device exists for sure (see kobjects and reference counters)
+ NOTE 2: buffers are PAGE_SIZE long */
+
+static ssize_t sn9c102_show_reg(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ count = sprintf(buf, "%u\n", cam->sysfs.reg);
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_reg(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 index;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ index = sn9c102_strtou8(buf, len, &count);
+ if (index > 0x1f || !count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ cam->sysfs.reg = index;
+
+ DBG(2, "Moved SN9C10X register index to 0x%02X", cam->sysfs.reg)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_val(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+ int val;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ if ((val = sn9c102_read_reg(cam, cam->sysfs.reg)) < 0) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ count = sprintf(buf, "%d\n", val);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_val(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 value;
+ ssize_t count;
+ int err;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ err = sn9c102_write_reg(cam, value, cam->sysfs.reg);
+ if (err) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ DBG(2, "Written SN9C10X reg. 0x%02X, val. 0x%02X",
+ cam->sysfs.reg, value)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_i2c_reg(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ count = sprintf(buf, "%u\n", cam->sysfs.i2c_reg);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_i2c_reg(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 index;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ index = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ cam->sysfs.i2c_reg = index;
+
+ DBG(2, "Moved sensor register index to 0x%02X", cam->sysfs.i2c_reg)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_i2c_val(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+ int val;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ if ((val = sn9c102_i2c_read(cam, cam->sysfs.i2c_reg)) < 0) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ count = sprintf(buf, "%d\n", val);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_i2c_val(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 value;
+ ssize_t count;
+ int err;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ err = sn9c102_i2c_write(cam, cam->sysfs.i2c_reg, value);
+ if (err) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ DBG(2, "Written sensor reg. 0x%02X, val. 0x%02X",
+ cam->sysfs.i2c_reg, value)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_redblue(struct class_device* cd, const char* buf, size_t len)
+{
+ ssize_t res = 0;
+ u8 value;
+ ssize_t count;
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count)
+ return -EINVAL;
+
+ if ((res = sn9c102_store_reg(cd, "0x10", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+
+ return res;
+}
+
+
+static ssize_t
+sn9c102_store_green(struct class_device* cd, const char* buf, size_t len)
+{
+ ssize_t res = 0;
+ u8 value;
+ ssize_t count;
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count || value > 0x0f)
+ return -EINVAL;
+
+ if ((res = sn9c102_store_reg(cd, "0x11", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+
+ return res;
+}
+
+
+static CLASS_DEVICE_ATTR(reg, S_IRUGO | S_IWUSR,
+ sn9c102_show_reg, sn9c102_store_reg);
+static CLASS_DEVICE_ATTR(val, S_IRUGO | S_IWUSR,
+ sn9c102_show_val, sn9c102_store_val);
+static CLASS_DEVICE_ATTR(i2c_reg, S_IRUGO | S_IWUSR,
+ sn9c102_show_i2c_reg, sn9c102_store_i2c_reg);
+static CLASS_DEVICE_ATTR(i2c_val, S_IRUGO | S_IWUSR,
+ sn9c102_show_i2c_val, sn9c102_store_i2c_val);
+static CLASS_DEVICE_ATTR(redblue, S_IWUGO, NULL, sn9c102_store_redblue);
+static CLASS_DEVICE_ATTR(green, S_IWUGO, NULL, sn9c102_store_green);
+
+
+static void sn9c102_create_sysfs(struct sn9c102_device* cam)
+{
+ struct video_device *v4ldev = cam->v4ldev;
+
+ video_device_create_file(v4ldev, &class_device_attr_reg);
+ video_device_create_file(v4ldev, &class_device_attr_val);
+ video_device_create_file(v4ldev, &class_device_attr_redblue);
+ video_device_create_file(v4ldev, &class_device_attr_green);
+ if (cam->sensor->slave_write_id && cam->sensor->slave_read_id) {
+ video_device_create_file(v4ldev, &class_device_attr_i2c_reg);
+ video_device_create_file(v4ldev, &class_device_attr_i2c_val);
+ }
+}
+
+/*****************************************************************************/
+
+static int sn9c102_set_scale(struct sn9c102_device* cam, u8 scale)
+{
+ u8 r = 0;
+ int err = 0;
+
+ if (scale == 1)
+ r = cam->reg[0x18] & 0xcf;
+ else if (scale == 2) {
+ r = cam->reg[0x18] & 0xcf;
+ r |= 0x10;
+ } else if (scale == 4)
+ r = cam->reg[0x18] | 0x20;
+
+ err += sn9c102_write_reg(cam, r, 0x18);
+ if (err)
+ return -EIO;
+
+ PDBGG("Scaling factor: %u", scale)
+
+ return 0;
+}
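+
+/*
+ * As the code above implies, bits 4 and 5 of register 0x18 select the
+ * hardware downscaling factor: both clear means 1:1, bit 4 (0x10)
+ * means 1:2, and bit 5 (0x20) means 1:4.
+ */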
+
+
+static int sn9c102_set_crop(struct sn9c102_device* cam, struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = cam->sensor;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left),
+ v_start = (u8)(rect->top - s->cropcap.bounds.top),
+ h_size = (u8)(rect->width / 16),
+ v_size = (u8)(rect->height / 16),
+ ae_strx = 0x00,
+ ae_stry = 0x00,
+ ae_endx = h_size / 2,
+ ae_endy = v_size / 2;
+ int err = 0;
+
+ /* These are a sort of stroboscopic signal for some sensors */
+ err += sn9c102_write_reg(cam, h_size, 0x1a);
+ err += sn9c102_write_reg(cam, v_size, 0x1b);
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+ err += sn9c102_write_reg(cam, h_size, 0x15);
+ err += sn9c102_write_reg(cam, v_size, 0x16);
+ err += sn9c102_write_reg(cam, ae_strx, 0x1c);
+ err += sn9c102_write_reg(cam, ae_stry, 0x1d);
+ err += sn9c102_write_reg(cam, ae_endx, 0x1e);
+ err += sn9c102_write_reg(cam, ae_endy, 0x1f);
+ if (err)
+ return -EIO;
+
+ PDBGG("h_start, v_start, h_size, v_size %u %u %u %u",
+       h_start, v_start, h_size, v_size)
+
+ return 0;
+}
+
+
+static int sn9c102_init(struct sn9c102_device* cam)
+{
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ struct v4l2_queryctrl *qctrl;
+ struct v4l2_rect* rect;
+ u8 i = 0, n = 0;
+ int err = 0;
+
+ if (!(cam->state & DEV_INITIALIZED)) {
+ init_waitqueue_head(&cam->open);
+ qctrl = s->qctrl;
+ rect = &(s->cropcap.defrect);
+ } else { /* use current values */
+ qctrl = s->_qctrl;
+ rect = &(s->_rect);
+ }
+
+ err += sn9c102_set_scale(cam, rect->width / s->pix_format.width);
+ err += sn9c102_set_crop(cam, rect);
+ if (err)
+ return err;
+
+ if (s->init) {
+ err = s->init(cam);
+ if (err) {
+ DBG(3, "Sensor initialization failed")
+ return err;
+ }
+ }
+
+ if (s->set_crop)
+ if ((err = s->set_crop(cam, rect))) {
+ DBG(3, "set_crop() failed")
+ return err;
+ }
+
+ if (s->set_ctrl) {
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (s->qctrl[i].id != 0 &&
+ !(s->qctrl[i].flags & V4L2_CTRL_FLAG_DISABLED)) {
+ ctrl.id = s->qctrl[i].id;
+ ctrl.value = qctrl[i].default_value;
+ err = s->set_ctrl(cam, &ctrl);
+ if (err) {
+ DBG(3, "Set control failed")
+ return err;
+ }
+ }
+ }
+
+ if (!(cam->state & DEV_INITIALIZED)) {
+ init_MUTEX(&cam->fileop_sem);
+ spin_lock_init(&cam->queue_lock);
+ init_waitqueue_head(&cam->wait_frame);
+ init_waitqueue_head(&cam->wait_stream);
+ memcpy(s->_qctrl, s->qctrl, sizeof(s->qctrl));
+ memcpy(&(s->_rect), &(s->cropcap.defrect),
+ sizeof(struct v4l2_rect));
+ cam->state |= DEV_INITIALIZED;
+ }
+
+ DBG(2, "Initialization succeeded")
+ return 0;
+}
+
+
+static void sn9c102_release_resources(struct sn9c102_device* cam)
+{
+ down(&sn9c102_sysfs_lock);
+
+ DBG(2, "V4L2 device /dev/video%d deregistered", cam->v4ldev->minor)
+ video_set_drvdata(cam->v4ldev, NULL);
+ video_unregister_device(cam->v4ldev);
+
+ up(&sn9c102_sysfs_lock);
+
+ kfree(cam->control_buffer);
+}
+
+/*****************************************************************************/
+
+static int sn9c102_open(struct inode* inode, struct file* filp)
+{
+ struct sn9c102_device* cam;
+ int err = 0;
+
+ /* This is the only safe way to prevent race conditions with disconnect */
+ if (!down_read_trylock(&sn9c102_disconnect))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(video_devdata(filp));
+
+ if (down_interruptible(&cam->dev_sem)) {
+ up_read(&sn9c102_disconnect);
+ return -ERESTARTSYS;
+ }
+
+ if (cam->users) {
+ DBG(2, "Device /dev/video%d is busy...", cam->v4ldev->minor)
+ if ((filp->f_flags & O_NONBLOCK) ||
+ (filp->f_flags & O_NDELAY)) {
+ err = -EWOULDBLOCK;
+ goto out;
+ }
+ up(&cam->dev_sem);
+ err = wait_event_interruptible_exclusive(cam->open,
+ cam->state & DEV_DISCONNECTED
+ || !cam->users);
+ if (err) {
+ up_read(&sn9c102_disconnect);
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED) {
+ up_read(&sn9c102_disconnect);
+ return -ENODEV;
+ }
+ down(&cam->dev_sem);
+ }
+
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ err = sn9c102_init(cam);
+ if (err) {
+ DBG(1, "Initialization failed again. "
+ "I will retry on next open().")
+ goto out;
+ }
+ cam->state &= ~DEV_MISCONFIGURED;
+ }
+
+ if ((err = sn9c102_start_transfer(cam)))
+ goto out;
+
+ filp->private_data = cam;
+ cam->users++;
+ cam->io = IO_NONE;
+ cam->stream = STREAM_OFF;
+ cam->nbuffers = 0;
+ cam->frame_count = 0;
+ sn9c102_empty_framequeues(cam);
+
+ DBG(3, "Video device /dev/video%d is open", cam->v4ldev->minor)
+
+out:
+ up(&cam->dev_sem);
+ up_read(&sn9c102_disconnect);
+ return err;
+}
+
+
+static int sn9c102_release(struct inode* inode, struct file* filp)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+
+ down(&cam->dev_sem); /* prevent disconnect() to be called */
+
+ sn9c102_stop_transfer(cam);
+
+ sn9c102_release_buffers(cam);
+
+ if (cam->state & DEV_DISCONNECTED) {
+ sn9c102_release_resources(cam);
+ up(&cam->dev_sem);
+ kfree(cam);
+ return 0;
+ }
+
+ cam->users--;
+ wake_up_interruptible_nr(&cam->open, 1);
+
+ DBG(3, "Video device /dev/video%d closed", cam->v4ldev->minor)
+
+ up(&cam->dev_sem);
+
+ return 0;
+}
+
+
+static ssize_t
+sn9c102_read(struct file* filp, char __user * buf, size_t count, loff_t* f_pos)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ struct sn9c102_frame_t* f, * i;
+ unsigned long lock_flags;
+ int err = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ if (cam->io == IO_MMAP) {
+ DBG(3, "Close and open the device again to choose "
+ "the read method")
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
+ if (cam->io == IO_NONE) {
+ if (!sn9c102_request_buffers(cam, 2)) {
+ DBG(1, "read() failed, not enough memory")
+ up(&cam->fileop_sem);
+ return -ENOMEM;
+ }
+ cam->io = IO_READ;
+ cam->stream = STREAM_ON;
+ sn9c102_queue_unusedframes(cam);
+ }
+
+ if (!count) {
+ up(&cam->fileop_sem);
+ return 0;
+ }
+
+ if (list_empty(&cam->outqueue)) {
+ if (filp->f_flags & O_NONBLOCK) {
+ up(&cam->fileop_sem);
+ return -EAGAIN;
+ }
+ err = wait_event_interruptible
+ ( cam->wait_frame,
+ (!list_empty(&cam->outqueue)) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ up(&cam->fileop_sem);
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED) {
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+ }
+
+ f = list_entry(cam->outqueue.prev, struct sn9c102_frame_t, frame);
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_for_each_entry(i, &cam->outqueue, frame)
+ i->state = F_UNUSED;
+ INIT_LIST_HEAD(&cam->outqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ sn9c102_queue_unusedframes(cam);
+
+ if (count > f->buf.length)
+ count = f->buf.length;
+
+ if (copy_to_user(buf, f->bufmem, count)) {
+ up(&cam->fileop_sem);
+ return -EFAULT;
+ }
+ *f_pos += count;
+
+ PDBGG("Frame #%lu, bytes read: %zu", (unsigned long)f->buf.index,count)
+
+ up(&cam->fileop_sem);
+
+ return count;
+}
+
+
+static unsigned int sn9c102_poll(struct file *filp, poll_table *wait)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ unsigned int mask = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return POLLERR;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ goto error;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ goto error;
+ }
+
+ if (cam->io == IO_NONE) {
+ if (!sn9c102_request_buffers(cam, 2)) {
+ DBG(1, "poll() failed, not enough memory")
+ goto error;
+ }
+ cam->io = IO_READ;
+ cam->stream = STREAM_ON;
+ }
+
+ if (cam->io == IO_READ)
+ sn9c102_queue_unusedframes(cam);
+
+ poll_wait(filp, &cam->wait_frame, wait);
+
+ if (!list_empty(&cam->outqueue))
+ mask |= POLLIN | POLLRDNORM;
+
+ up(&cam->fileop_sem);
+
+ return mask;
+
+error:
+ up(&cam->fileop_sem);
+ return POLLERR;
+}
+
+
+static void sn9c102_vm_open(struct vm_area_struct* vma)
+{
+ struct sn9c102_frame_t* f = vma->vm_private_data;
+ f->vma_use_count++;
+}
+
+
+static void sn9c102_vm_close(struct vm_area_struct* vma)
+{
+ /* NOTE: buffers are not freed here */
+ struct sn9c102_frame_t* f = vma->vm_private_data;
+ f->vma_use_count--;
+}
+
+
+static struct vm_operations_struct sn9c102_vm_ops = {
+ .open = sn9c102_vm_open,
+ .close = sn9c102_vm_close,
+};
+
+
+static int sn9c102_mmap(struct file* filp, struct vm_area_struct *vma)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ unsigned long size = vma->vm_end - vma->vm_start,
+ start = vma->vm_start,
+ pos,
+ page;
+ u32 i;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ if (cam->io != IO_MMAP || !(vma->vm_flags & VM_WRITE) ||
+ size != PAGE_ALIGN(cam->frame[0].buf.length)) {
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++) {
+ if ((cam->frame[i].buf.m.offset>>PAGE_SHIFT) == vma->vm_pgoff)
+ break;
+ }
+ if (i == cam->nbuffers) {
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
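+	/* remap the frame buffer into the caller's address space one page at
+	   a time; 'size' was verified to be page-aligned above */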
+ pos = (unsigned long)cam->frame[i].bufmem;
+ while (size > 0) { /* size is page-aligned */
+ page = kvirt_to_pa(pos);
+ if (remap_page_range(vma, start, page, PAGE_SIZE,
+ vma->vm_page_prot)) {
+ up(&cam->fileop_sem);
+ return -EAGAIN;
+ }
+ start += PAGE_SIZE;
+ pos += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ vma->vm_ops = &sn9c102_vm_ops;
+ vma->vm_flags &= ~VM_IO; /* not I/O memory */
+	vma->vm_flags |= VM_RESERVED; /* avoid swapping out this VMA */
+ vma->vm_private_data = &cam->frame[i];
+
+ sn9c102_vm_open(vma);
+
+ up(&cam->fileop_sem);
+
+ return 0;
+}
+
+
+static int sn9c102_v4l2_ioctl(struct inode* inode, struct file* filp,
+ unsigned int cmd, void __user * arg)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+
+ switch (cmd) {
+
+ case VIDIOC_QUERYCAP:
+ {
+ struct v4l2_capability cap = {
+ .driver = "sn9c102",
+ .version = SN9C102_MODULE_VERSION_CODE,
+ .capabilities = V4L2_CAP_VIDEO_CAPTURE |
+ V4L2_CAP_READWRITE |
+ V4L2_CAP_STREAMING,
+ };
+
+ strlcpy(cap.card, cam->v4ldev->name, sizeof(cap.card));
+ strlcpy(cap.bus_info, cam->dev.bus_id, sizeof(cap.bus_info));
+
+ if (copy_to_user(arg, &cap, sizeof(cap)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_ENUMINPUT:
+ {
+ struct v4l2_input i;
+
+ if (copy_from_user(&i, arg, sizeof(i)))
+ return -EFAULT;
+
+ if (i.index)
+ return -EINVAL;
+
+ memset(&i, 0, sizeof(i));
+ strcpy(i.name, "USB");
+
+ if (copy_to_user(arg, &i, sizeof(i)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_INPUT:
+ case VIDIOC_S_INPUT:
+ {
+ int index;
+
+ if (copy_from_user(&index, arg, sizeof(index)))
+ return -EFAULT;
+
+ if (index != 0)
+ return -EINVAL;
+
+ return 0;
+ }
+
+ case VIDIOC_QUERYCTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_queryctrl qc;
+ u8 i, n;
+
+ if (copy_from_user(&qc, arg, sizeof(qc)))
+ return -EFAULT;
+
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (qc.id && qc.id == s->qctrl[i].id) {
+ memcpy(&qc, &(s->qctrl[i]), sizeof(qc));
+ if (copy_to_user(arg, &qc, sizeof(qc)))
+ return -EFAULT;
+ return 0;
+ }
+
+ return -EINVAL;
+ }
+
+ case VIDIOC_G_CTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ int err = 0;
+
+ if (!s->get_ctrl)
+ return -EINVAL;
+
+ if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
+ return -EFAULT;
+
+ err = s->get_ctrl(cam, &ctrl);
+
+ if (copy_to_user(arg, &ctrl, sizeof(ctrl)))
+ return -EFAULT;
+
+ return err;
+ }
+
+ case VIDIOC_S_CTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ u8 i, n;
+ int err = 0;
+
+ if (!s->set_ctrl)
+ return -EINVAL;
+
+ if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
+ return -EFAULT;
+
+ if ((err = s->set_ctrl(cam, &ctrl)))
+ return err;
+
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (ctrl.id == s->qctrl[i].id) {
+ s->_qctrl[i].default_value = ctrl.value;
+ break;
+ }
+
+ return 0;
+ }
+
+ case VIDIOC_CROPCAP:
+ {
+ struct v4l2_cropcap* cc = &(cam->sensor->cropcap);
+
+ cc->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ cc->pixelaspect.numerator = 1;
+ cc->pixelaspect.denominator = 1;
+
+ if (copy_to_user(arg, cc, sizeof(*cc)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_CROP:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_crop crop = {
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
+ };
+
+ memcpy(&(crop.c), &(s->_rect), sizeof(struct v4l2_rect));
+
+ if (copy_to_user(arg, &crop, sizeof(crop)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_S_CROP:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_crop crop;
+ struct v4l2_rect* rect;
+ struct v4l2_rect* bounds = &(s->cropcap.bounds);
+ struct v4l2_pix_format* pix_format = &(s->pix_format);
+ u8 scale;
+ const enum sn9c102_stream_state stream = cam->stream;
+ const u32 nbuffers = cam->nbuffers;
+ u32 i;
+ int err = 0;
+
+ if (copy_from_user(&crop, arg, sizeof(crop)))
+ return -EFAULT;
+
+ rect = &(crop.c);
+
+ if (crop.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_CROP failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
+
+ if (rect->width < 16)
+ rect->width = 16;
+ if (rect->height < 16)
+ rect->height = 16;
+ if (rect->width > bounds->width)
+ rect->width = bounds->width;
+ if (rect->height > bounds->height)
+ rect->height = bounds->height;
+ if (rect->left < bounds->left)
+ rect->left = bounds->left;
+ if (rect->top < bounds->top)
+ rect->top = bounds->top;
+ if (rect->left + rect->width > bounds->left + bounds->width)
+ rect->left = bounds->left+bounds->width - rect->width;
+ if (rect->top + rect->height > bounds->top + bounds->height)
+ rect->top = bounds->top+bounds->height - rect->height;
+
+ rect->width &= ~15L;
+ rect->height &= ~15L;
+
+ { /* calculate the scaling factor */
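+			/* the bridge can only downscale by 1, 2 or 4, so the
+			   crop/output area ratio a/b is mapped onto the
+			   nearest supported factor */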
+ u32 a, b;
+ a = rect->width * rect->height;
+ b = pix_format->width * pix_format->height;
+ scale = b ? (u8)((a / b) <= 1 ? 1 : ((a / b) == 3 ? 2 :
+ ((a / b) > 4 ? 4 : (a / b)))) : 1;
+ }
+
+ if (cam->stream == STREAM_ON) {
+ cam->stream = STREAM_INTERRUPT;
+ err = wait_event_interruptible
+ ( cam->wait_stream,
+ (cam->stream == STREAM_OFF) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "The camera is misconfigured. To use "
+ "it, close and open /dev/video%d "
+ "again.", cam->v4ldev->minor)
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ if (copy_to_user(arg, &crop, sizeof(crop))) {
+ cam->stream = stream;
+ return -EFAULT;
+ }
+
+ sn9c102_release_buffers(cam);
+
+ err = sn9c102_set_crop(cam, rect);
+ if (s->set_crop)
+ err += s->set_crop(cam, rect);
+ err += sn9c102_set_scale(cam, scale);
+
+ if (err) { /* atomic, no rollback in ioctl() */
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_CROP failed because of hardware "
+ "problems. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return err;
+ }
+
+ s->pix_format.width = rect->width/scale;
+ s->pix_format.height = rect->height/scale;
+ memcpy(&(s->_rect), rect, sizeof(*rect));
+
+ if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_CROP failed because of not enough "
+ "memory. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -ENOMEM;
+ }
+
+ cam->stream = stream;
+
+ return 0;
+ }
+
+ case VIDIOC_ENUM_FMT:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_fmtdesc fmtd;
+
+ if (copy_from_user(&fmtd, arg, sizeof(fmtd)))
+ return -EFAULT;
+
+ if (fmtd.index != 0)
+ return -EINVAL;
+
+ memset(&fmtd, 0, sizeof(fmtd));
+
+ fmtd.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ strcpy(fmtd.description, "bayer rgb");
+ fmtd.pixelformat = s->pix_format.pixelformat;
+
+ if (copy_to_user(arg, &fmtd, sizeof(fmtd)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_FMT:
+ {
+ struct v4l2_format format;
+ struct v4l2_pix_format* pfmt = &(cam->sensor->pix_format);
+
+ if (copy_from_user(&format, arg, sizeof(format)))
+ return -EFAULT;
+
+ if (format.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ pfmt->bytesperline = (pfmt->width * pfmt->priv) / 8;
+ pfmt->sizeimage = pfmt->height * pfmt->bytesperline;
+ pfmt->field = V4L2_FIELD_NONE;
+ memcpy(&(format.fmt.pix), pfmt, sizeof(*pfmt));
+
+ if (copy_to_user(arg, &format, sizeof(format)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_TRY_FMT:
+ case VIDIOC_S_FMT:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_format format;
+ struct v4l2_pix_format* pix;
+ struct v4l2_pix_format* pfmt = &(s->pix_format);
+ struct v4l2_rect* bounds = &(s->cropcap.bounds);
+ struct v4l2_rect rect;
+ u8 scale;
+ const enum sn9c102_stream_state stream = cam->stream;
+ const u32 nbuffers = cam->nbuffers;
+ u32 i;
+ int err = 0;
+
+ if (copy_from_user(&format, arg, sizeof(format)))
+ return -EFAULT;
+
+ pix = &(format.fmt.pix);
+
+ if (format.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ memcpy(&rect, &(s->_rect), sizeof(rect));
+
+ { /* calculate the scaling factor */
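+			/* same 1/2/4 rounding as in VIDIOC_S_CROP above */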
+ u32 a, b;
+ a = rect.width * rect.height;
+ b = pix->width * pix->height;
+ scale = b ? (u8)((a / b) <= 1 ? 1 : ((a / b) == 3 ? 2 :
+ ((a / b) > 4 ? 4 : (a / b)))) : 1;
+ }
+
+ rect.width = scale * pix->width;
+ rect.height = scale * pix->height;
+
+ if (rect.width < 16)
+ rect.width = 16;
+ if (rect.height < 16)
+ rect.height = 16;
+ if (rect.width > bounds->left + bounds->width - rect.left)
+ rect.width = bounds->left+bounds->width - rect.left;
+ if (rect.height > bounds->top + bounds->height - rect.top)
+ rect.height = bounds->top + bounds->height - rect.top;
+
+ rect.width &= ~15L;
+ rect.height &= ~15L;
+
+ pix->width = rect.width / scale;
+ pix->height = rect.height / scale;
+
+ pix->pixelformat = pfmt->pixelformat;
+ pix->priv = pfmt->priv; /* bpp */
+ pix->colorspace = pfmt->colorspace;
+ pix->bytesperline = (pix->width * pix->priv) / 8;
+ pix->sizeimage = pix->height * pix->bytesperline;
+ pix->field = V4L2_FIELD_NONE;
+
+ if (cmd == VIDIOC_TRY_FMT)
+ return 0;
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_FMT failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
+
+ if (cam->stream == STREAM_ON) {
+ cam->stream = STREAM_INTERRUPT;
+ err = wait_event_interruptible
+ ( cam->wait_stream,
+ (cam->stream == STREAM_OFF) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "The camera is misconfigured. To use "
+ "it, close and open /dev/video%d "
+ "again.", cam->v4ldev->minor)
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ if (copy_to_user(arg, &format, sizeof(format))) {
+ cam->stream = stream;
+ return -EFAULT;
+ }
+
+ sn9c102_release_buffers(cam);
+
+ err = sn9c102_set_crop(cam, &rect);
+ if (s->set_crop)
+ err += s->set_crop(cam, &rect);
+ err += sn9c102_set_scale(cam, scale);
+
+ if (err) { /* atomic, no rollback in ioctl() */
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_FMT failed because of hardware "
+ "problems. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return err;
+ }
+
+ memcpy(pfmt, pix, sizeof(*pix));
+ memcpy(&(s->_rect), &rect, sizeof(rect));
+
+ if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_FMT failed because of not enough "
+ "memory. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -ENOMEM;
+ }
+
+ cam->stream = stream;
+
+ return 0;
+ }
+
+ case VIDIOC_REQBUFS:
+ {
+ struct v4l2_requestbuffers rb;
+ u32 i;
+ int err;
+
+ if (copy_from_user(&rb, arg, sizeof(rb)))
+ return -EFAULT;
+
+ if (rb.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ rb.memory != V4L2_MEMORY_MMAP)
+ return -EINVAL;
+
+ if (cam->io == IO_READ) {
+ DBG(3, "Close and open the device again to choose "
+ "the mmap I/O method")
+ return -EINVAL;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_REQBUFS failed. "
+ "Previous buffers are still mapped.")
+ return -EINVAL;
+ }
+
+ if (cam->stream == STREAM_ON) {
+ cam->stream = STREAM_INTERRUPT;
+ err = wait_event_interruptible
+ ( cam->wait_stream,
+ (cam->stream == STREAM_OFF) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "The camera is misconfigured. To use "
+ "it, close and open /dev/video%d "
+ "again.", cam->v4ldev->minor)
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ sn9c102_empty_framequeues(cam);
+
+ sn9c102_release_buffers(cam);
+ if (rb.count)
+ rb.count = sn9c102_request_buffers(cam, rb.count);
+
+ if (copy_to_user(arg, &rb, sizeof(rb))) {
+ sn9c102_release_buffers(cam);
+ cam->io = IO_NONE;
+ return -EFAULT;
+ }
+
+ cam->io = rb.count ? IO_MMAP : IO_NONE;
+
+ return 0;
+ }
+
+ case VIDIOC_QUERYBUF:
+ {
+ struct v4l2_buffer b;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ b.index >= cam->nbuffers || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ memcpy(&b, &cam->frame[b.index].buf, sizeof(b));
+
+ if (cam->frame[b.index].vma_use_count)
+ b.flags |= V4L2_BUF_FLAG_MAPPED;
+
+ if (cam->frame[b.index].state == F_DONE)
+ b.flags |= V4L2_BUF_FLAG_DONE;
+ else if (cam->frame[b.index].state != F_UNUSED)
+ b.flags |= V4L2_BUF_FLAG_QUEUED;
+
+ if (copy_to_user(arg, &b, sizeof(b)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_QBUF:
+ {
+ struct v4l2_buffer b;
+ unsigned long lock_flags;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ b.index >= cam->nbuffers || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (cam->frame[b.index].state != F_UNUSED)
+ return -EINVAL;
+
+ cam->frame[b.index].state = F_QUEUED;
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_add_tail(&cam->frame[b.index].frame, &cam->inqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ PDBGG("Frame #%lu queued", (unsigned long)b.index)
+
+ return 0;
+ }
+
+ case VIDIOC_DQBUF:
+ {
+ struct v4l2_buffer b;
+ struct sn9c102_frame_t *f;
+ unsigned long lock_flags;
+ int err = 0;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io!= IO_MMAP)
+ return -EINVAL;
+
+ if (list_empty(&cam->outqueue)) {
+ if (cam->stream == STREAM_OFF)
+ return -EINVAL;
+ if (filp->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+ err = wait_event_interruptible
+ ( cam->wait_frame,
+ (!list_empty(&cam->outqueue)) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err)
+ return err;
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ f = list_entry(cam->outqueue.next, struct sn9c102_frame_t,
+ frame);
+		list_del(cam->outqueue.next);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ f->state = F_UNUSED;
+
+ memcpy(&b, &f->buf, sizeof(b));
+ if (f->vma_use_count)
+ b.flags |= V4L2_BUF_FLAG_MAPPED;
+
+ if (copy_to_user(arg, &b, sizeof(b)))
+ return -EFAULT;
+
+ PDBGG("Frame #%lu dequeued", (unsigned long)f->buf.index)
+
+ return 0;
+ }
+
+ case VIDIOC_STREAMON:
+ {
+ int type;
+
+ if (copy_from_user(&type, arg, sizeof(type)))
+ return -EFAULT;
+
+ if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (list_empty(&cam->inqueue))
+ return -EINVAL;
+
+ cam->stream = STREAM_ON;
+
+ DBG(3, "Stream on")
+
+ return 0;
+ }
+
+ case VIDIOC_STREAMOFF:
+ {
+ int type, err;
+
+ if (copy_from_user(&type, arg, sizeof(type)))
+ return -EFAULT;
+
+ if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (cam->stream == STREAM_ON) {
+ cam->stream = STREAM_INTERRUPT;
+ err = wait_event_interruptible
+ ( cam->wait_stream,
+ (cam->stream == STREAM_OFF) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "The camera is misconfigured. To use "
+ "it, close and open /dev/video%d "
+ "again.", cam->v4ldev->minor)
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ sn9c102_empty_framequeues(cam);
+
+ DBG(3, "Stream off")
+
+ return 0;
+ }
+
+ case VIDIOC_G_STD:
+ case VIDIOC_S_STD:
+ case VIDIOC_QUERYSTD:
+ case VIDIOC_ENUMSTD:
+ case VIDIOC_QUERYMENU:
+ case VIDIOC_G_PARM:
+ case VIDIOC_S_PARM:
+ return -EINVAL;
+
+ default:
+ return -EINVAL;
+
+ }
+}
+
+
+static int sn9c102_ioctl(struct inode* inode, struct file* filp,
+ unsigned int cmd, unsigned long arg)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ int err = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ err = sn9c102_v4l2_ioctl(inode, filp, cmd, (void __user *)arg);
+
+ up(&cam->fileop_sem);
+
+ return err;
+}
+
+
+static struct file_operations sn9c102_fops = {
+ .owner = THIS_MODULE,
+ .open = sn9c102_open,
+ .release = sn9c102_release,
+ .ioctl = sn9c102_ioctl,
+ .read = sn9c102_read,
+ .poll = sn9c102_poll,
+ .mmap = sn9c102_mmap,
+ .llseek = no_llseek,
+};
+
+/*****************************************************************************/
+
+/* There exists a single interface only; we do not need to validate anything. */
+static int
+sn9c102_usb_probe(struct usb_interface* intf, const struct usb_device_id* id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct sn9c102_device* cam;
+ static unsigned int dev_nr = 0;
+ unsigned int i, n;
+ int err = 0, r;
+
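+	/* the last entry of sn9c102_id_table is a terminator, hence n-1 */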
+ n = sizeof(sn9c102_id_table)/sizeof(sn9c102_id_table[0]);
+ for (i = 0; i < n-1; i++)
+ if (udev->descriptor.idVendor==sn9c102_id_table[i].idVendor &&
+ udev->descriptor.idProduct==sn9c102_id_table[i].idProduct)
+ break;
+ if (i == n-1)
+ return -ENODEV;
+
+ if (!(cam = kmalloc(sizeof(struct sn9c102_device), GFP_KERNEL)))
+ return -ENOMEM;
+ memset(cam, 0, sizeof(*cam));
+
+ cam->usbdev = udev;
+
+ memcpy(&cam->dev, &udev->dev, sizeof(struct device));
+
+ if (!(cam->control_buffer = kmalloc(8, GFP_KERNEL))) {
+ DBG(1, "kmalloc() failed")
+ err = -ENOMEM;
+ goto fail;
+ }
+ memset(cam->control_buffer, 0, 8);
+
+ if (!(cam->v4ldev = video_device_alloc())) {
+ DBG(1, "video_device_alloc() failed")
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ init_MUTEX(&cam->dev_sem);
+
+ r = sn9c102_read_reg(cam, 0x00);
+ if (r < 0 || r != 0x10) {
+ DBG(1, "Sorry, this is not a SN9C10[12] based camera "
+ "(vid/pid 0x%04X/0x%04X)",
+ sn9c102_id_table[i].idVendor,sn9c102_id_table[i].idProduct)
+ err = -ENODEV;
+ goto fail;
+ }
+
+ DBG(2, "SN9C10[12] PC Camera Controller detected "
+ "(vid/pid 0x%04X/0x%04X)",
+ sn9c102_id_table[i].idVendor, sn9c102_id_table[i].idProduct)
+
+ for (i = 0; sn9c102_sensor_table[i]; i++) {
+ err = sn9c102_sensor_table[i](cam);
+ if (!err)
+ break;
+ }
+
+ if (!err && cam->sensor) {
+ DBG(2, "%s image sensor detected", cam->sensor->name)
+ DBG(3, "Support for %s maintained by %s",
+ cam->sensor->name, cam->sensor->maintainer)
+ } else {
+ DBG(1, "No supported image sensor detected")
+ err = -ENODEV;
+ goto fail;
+ }
+
+ if (sn9c102_init(cam)) {
+ DBG(1, "Initialization failed. I will retry on open().")
+ cam->state |= DEV_MISCONFIGURED;
+ }
+
+ strcpy(cam->v4ldev->name, "SN9C10[12] PC Camera");
+ cam->v4ldev->owner = THIS_MODULE;
+ cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES;
+ cam->v4ldev->hardware = VID_HARDWARE_SN9C102;
+ cam->v4ldev->fops = &sn9c102_fops;
+ cam->v4ldev->minor = video_nr[dev_nr];
+ cam->v4ldev->release = video_device_release;
+ video_set_drvdata(cam->v4ldev, cam);
+
+ down(&cam->dev_sem);
+
+ err = video_register_device(cam->v4ldev, VFL_TYPE_GRABBER,
+ video_nr[dev_nr]);
+ if (err) {
+ DBG(1, "V4L2 device registration failed")
+ if (err == -ENFILE && video_nr[dev_nr] == -1)
+ DBG(1, "Free /dev/videoX node not found")
+ video_nr[dev_nr] = -1;
+ dev_nr = (dev_nr < SN9C102_MAX_DEVICES-1) ? dev_nr+1 : 0;
+ up(&cam->dev_sem);
+ goto fail;
+ }
+
+ DBG(2, "V4L2 device registered as /dev/video%d", cam->v4ldev->minor)
+
+ sn9c102_create_sysfs(cam);
+
+ usb_set_intfdata(intf, cam);
+
+ up(&cam->dev_sem);
+
+ return 0;
+
+fail:
+ if (cam) {
+ kfree(cam->control_buffer);
+ if (cam->v4ldev)
+ video_device_release(cam->v4ldev);
+ kfree(cam);
+ }
+ return err;
+}
+
+
+static void sn9c102_usb_disconnect(struct usb_interface* intf)
+{
+ struct sn9c102_device* cam = usb_get_intfdata(intf);
+
+ if (!cam)
+ return;
+
+ down_write(&sn9c102_disconnect);
+
+ down(&cam->dev_sem);
+
+ DBG(2, "Disconnecting %s...", cam->v4ldev->name)
+
+ wake_up_interruptible_all(&cam->open);
+
+ if (cam->users) {
+		DBG(2, "Device /dev/video%d is open! Deregistration and "
+		       "memory deallocation are deferred until close.",
+		       cam->v4ldev->minor)
+ cam->state |= DEV_MISCONFIGURED;
+ sn9c102_stop_transfer(cam);
+ cam->state |= DEV_DISCONNECTED;
+ wake_up_interruptible(&cam->wait_frame);
+ wake_up_interruptible(&cam->wait_stream);
+ } else {
+ cam->state |= DEV_DISCONNECTED;
+ sn9c102_release_resources(cam);
+ }
+
+ up(&cam->dev_sem);
+
+ if (!cam->users)
+ kfree(cam);
+
+ up_write(&sn9c102_disconnect);
+}
+
+
+static struct usb_driver sn9c102_usb_driver = {
+ .owner = THIS_MODULE,
+ .name = "sn9c102",
+ .id_table = sn9c102_id_table,
+ .probe = sn9c102_usb_probe,
+ .disconnect = sn9c102_usb_disconnect,
+};
+
+/*****************************************************************************/
+
+static int __init sn9c102_module_init(void)
+{
+ int err = 0;
+
+ KDBG(2, SN9C102_MODULE_NAME " v" SN9C102_MODULE_VERSION)
+ KDBG(3, SN9C102_MODULE_AUTHOR)
+
+ if ((err = usb_register(&sn9c102_usb_driver)))
+ KDBG(1, "usb_register() failed")
+
+ return err;
+}
+
+
+static void __exit sn9c102_module_exit(void)
+{
+ usb_deregister(&sn9c102_usb_driver);
+}
+
+
+module_init(sn9c102_module_init);
+module_exit(sn9c102_module_exit);
--- /dev/null
+/***************************************************************************
+ * Driver for PAS106B image sensor connected to the SN9C10[12] PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include <linux/delay.h>
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor pas106b;
+
+
+static int pas106b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+ err += sn9c102_write_reg(cam, 0x20, 0x19);
+ err += sn9c102_write_reg(cam, 0x09, 0x18);
+
+ err += sn9c102_i2c_write(cam, 0x02, 0x0c);
+ err += sn9c102_i2c_write(cam, 0x03, 0x12);
+ err += sn9c102_i2c_write(cam, 0x04, 0x05);
+ err += sn9c102_i2c_write(cam, 0x05, 0x22);
+ err += sn9c102_i2c_write(cam, 0x06, 0xac);
+ err += sn9c102_i2c_write(cam, 0x07, 0x00);
+ err += sn9c102_i2c_write(cam, 0x08, 0x01);
+ err += sn9c102_i2c_write(cam, 0x0a, 0x00);
+ err += sn9c102_i2c_write(cam, 0x0b, 0x00);
+ err += sn9c102_i2c_write(cam, 0x0d, 0x00);
+ err += sn9c102_i2c_write(cam, 0x10, 0x06);
+ err += sn9c102_i2c_write(cam, 0x11, 0x06);
+ err += sn9c102_i2c_write(cam, 0x12, 0x00);
+ err += sn9c102_i2c_write(cam, 0x14, 0x02);
+ err += sn9c102_i2c_write(cam, 0x13, 0x01);
+
+ msleep(400);
+
+ return err;
+}
+
+
+static int pas106b_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_RED_BALANCE:
+ return (ctrl->value = sn9c102_i2c_read(cam, 0x0c))<0 ? -EIO:0;
+ case V4L2_CID_BLUE_BALANCE:
+ return (ctrl->value = sn9c102_i2c_read(cam, 0x09))<0 ? -EIO:0;
+ case V4L2_CID_BRIGHTNESS:
+ return (ctrl->value = sn9c102_i2c_read(cam, 0x0e))<0 ? -EIO:0;
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int pas106b_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_RED_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x0c, ctrl->value & 0x1f);
+ break;
+ case V4L2_CID_BLUE_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x09, ctrl->value & 0x1f);
+ break;
+ case V4L2_CID_BRIGHTNESS:
+ err += sn9c102_i2c_write(cam, 0x0e, ctrl->value & 0x1f);
+ break;
+ default:
+ return -EINVAL;
+ }
+ err += sn9c102_i2c_write(cam, 0x13, 0x01);
+
+ return err;
+}
+
+
+static int pas106b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &pas106b;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 4,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 3;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor pas106b = {
+ .name = "PAS106B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_400KHZ | SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_2WIRES,
+ .slave_read_id = 0x40,
+ .slave_write_id = 0x40,
+ .init = &pas106b_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_RED_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "red balance",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x03,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_BLUE_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "blue balance",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x02,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_BRIGHTNESS,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "brightness",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x06,
+ .flags = 0,
+ },
+ },
+ .get_ctrl = &pas106b_get_ctrl,
+ .set_ctrl = &pas106b_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ },
+ .set_crop = &pas106b_set_crop,
+ .pix_format = {
+ .width = 352,
+ .height = 288,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8, /* we use this field as 'bits per pixel' */
+ }
+};
+
+
+int sn9c102_probe_pas106b(struct sn9c102_device* cam)
+{
+ int r0 = 0, r1 = 0, err = 0;
+ unsigned int pid = 0;
+
+ /* Minimal initialization to enable the I2C communication
+ NOTE: do NOT change the values! */
+ err += sn9c102_write_reg(cam, 0x01, 0x01); /* sensor power down */
+ err += sn9c102_write_reg(cam, 0x00, 0x01); /* sensor power on */
+ err += sn9c102_write_reg(cam, 0x28, 0x17); /* sensor clock at 48 MHz */
+ if (err)
+ return -EIO;
+
+ r0 = sn9c102_i2c_try_read(cam, &pas106b, 0x00);
+ r1 = sn9c102_i2c_try_read(cam, &pas106b, 0x01);
+
+ if (r0 < 0 || r1 < 0)
+ return -EIO;
+
+ pid = (r0 << 11) | ((r1 & 0xf0) >> 4);
+ if (pid != 0x007)
+ return -ENODEV;
+
+ sn9c102_attach_sensor(cam, &pas106b);
+
+ return 0;
+}
--- /dev/null
+/***************************************************************************
+ * API for image sensors connected to the SN9C10[12] PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#ifndef _SN9C102_SENSOR_H_
+#define _SN9C102_SENSOR_H_
+
+#include <linux/usb.h>
+#include <linux/videodev.h>
+#include <linux/device.h>
+#include <linux/stddef.h>
+#include <linux/errno.h>
+#include <asm/types.h>
+
+struct sn9c102_device;
+struct sn9c102_sensor;
+
+/*****************************************************************************/
+
+/* OVERVIEW.
+ This is a small interface that allows you to add support for any CCD/CMOS
+ image sensors connected to the SN9C10X bridges. The entire API is documented
+ below. In the most general case, to support a sensor there are three steps
+ you have to follow:
+ 1) define the main "sn9c102_sensor" structure by setting the basic fields;
+ 2) write a probing function to be called by the core module when the USB
+ camera is recognized, then add both the USB ids and the name of that
+ function to the two corresponding tables SENSOR_TABLE and ID_TABLE (see
+ below);
+ 3) implement the methods that you want/need (and fill the rest of the main
+ structure accordingly).
+ "sn9c102_pas106b.c" is an example of all this stuff. Remember that you do
+ NOT need to touch the source code of the core module for the things to work
+ properly, unless you find bugs or flaws in it. Finally, do not forget to
+ read the V4L2 API for completeness. */
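+
+/* A minimal sketch of the three steps for a hypothetical sensor "foo"
+   (the names and values below are illustrative only, not a real supported
+   sensor):
+
+	static struct sn9c102_sensor foo = {
+		.name = "FOO",
+		.maintainer = "you <you@example.com>",
+		.pix_format = { .width = 352, .height = 288,
+				.pixelformat = V4L2_PIX_FMT_SBGGR8, .priv = 8 },
+		.cropcap = { .bounds = { 0, 0, 352, 288 },
+			     .defrect = { 0, 0, 352, 288 } },
+	};
+
+	int sn9c102_probe_foo(struct sn9c102_device* cam)
+	{
+		sn9c102_attach_sensor(cam, &foo);
+		return 0;
+	}
+
+   then add &sn9c102_probe_foo to SN9C102_SENSOR_TABLE and the camera's USB
+   ids to SN9C102_ID_TABLE, both defined below. */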
+
+/*****************************************************************************/
+
+/* Probing functions: on success, you must attach the sensor to the camera
+ by calling sn9c102_attach_sensor() provided below.
+ To enable the I2C communication, you might need to perform a really basic
+ initialization of the SN9C10X chip by using the write function declared
+ ahead.
+ Functions must return 0 on success, the appropriate error otherwise. */
+extern int sn9c102_probe_pas106b(struct sn9c102_device* cam);
+extern int sn9c102_probe_tas5110c1b(struct sn9c102_device* cam);
+extern int sn9c102_probe_tas5130d1b(struct sn9c102_device* cam);
+
+/* Add the above entries to this table. Be sure to add the entry in the right
+ place, since, on failure, the next probing routine is called according to
+ the order of the list below, from top to bottom */
+#define SN9C102_SENSOR_TABLE \
+static int (*sn9c102_sensor_table[])(struct sn9c102_device*) = { \
+ &sn9c102_probe_pas106b, /* strong detection based on SENSOR vid/pid */\
+ &sn9c102_probe_tas5110c1b, /* detection based on USB pid/vid */ \
+ &sn9c102_probe_tas5130d1b, /* detection based on USB pid/vid */ \
+ NULL, \
+};
+
+/* Attach a probed sensor to the camera. */
+extern void
+sn9c102_attach_sensor(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor);
+
+/* Each SN9C10X camera has its own PID/VID identifiers. Add new ones here. */
+#define SN9C102_ID_TABLE \
+static const struct usb_device_id sn9c102_id_table[] = { \
+ { USB_DEVICE(0xc45, 0x6001), }, \
+ { USB_DEVICE(0xc45, 0x6005), }, /* TAS5110C1B */ \
+ { USB_DEVICE(0xc45, 0x6009), }, /* PAS106B */ \
+ { USB_DEVICE(0xc45, 0x600d), }, /* PAS106B */ \
+ { USB_DEVICE(0xc45, 0x6024), }, \
+ { USB_DEVICE(0xc45, 0x6025), }, /* TAS5130D1B Maybe also TAS5110C1B */\
+ { USB_DEVICE(0xc45, 0x6028), }, /* Maybe PAS202B */ \
+ { USB_DEVICE(0xc45, 0x6029), }, \
+ { USB_DEVICE(0xc45, 0x602a), }, /* Maybe HV7131[D|E1] */ \
+ { USB_DEVICE(0xc45, 0x602c), }, /* Maybe OV7620 */ \
+ { USB_DEVICE(0xc45, 0x6030), }, /* Maybe MI03 */ \
+ { USB_DEVICE(0xc45, 0x8001), }, \
+ { } \
+};
+
+/*****************************************************************************/
+
+/* Read/write routines: they always return -1 on error, 0 or the read value
+ otherwise. NOTE that a real read operation is not supported by the SN9C10X
+ chip for some of its registers. To work around this problem, a pseudo-read
+   call is provided instead: it returns the last value successfully written
+   to the register (0 if it has never been written), or the usual -1 on error. */
+
+/* The "try" I2C I/O versions are used when probing the sensor */
+extern int sn9c102_i2c_try_write(struct sn9c102_device*,struct sn9c102_sensor*,
+ u8 address, u8 value);
+extern int sn9c102_i2c_try_read(struct sn9c102_device*,struct sn9c102_sensor*,
+ u8 address);
+
+/* This must be used if and only if the sensor doesn't implement the standard
+   I2C protocol, like the TASC sensors. There are a number of good reasons why
+   you should use the single-byte versions of this function: do not abuse it.
+   It writes n bytes, from data0 to datan, into registers 0x09 - 0x09+n of the
+   SN9C10X chip. */
+extern int sn9c102_i2c_try_raw_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 n,
+ u8 data0, u8 data1, u8 data2, u8 data3,
+ u8 data4, u8 data5);
+
+/* To be used after the sensor struct has been attached to the camera struct */
+extern int sn9c102_i2c_write(struct sn9c102_device*, u8 address, u8 value);
+extern int sn9c102_i2c_read(struct sn9c102_device*, u8 address);
+
+/* I/O on registers in the bridge. Could be used by the sensor methods too */
+extern int sn9c102_write_reg(struct sn9c102_device*, u8 value, u16 index);
+extern int sn9c102_pread_reg(struct sn9c102_device*, u16 index);
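+
+/* For instance (illustrative), for a register whose real read operation is
+   not supported by the chip, the pseudo-read returns the last value written:
+
+	sn9c102_write_reg(cam, 0x20, 0x17);
+	r = sn9c102_pread_reg(cam, 0x17);    r is now 0x20, or -1 on error
+*/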
+
+/* NOTE: there are no debugging functions here. To keep the output uniform,
+   use the dev_info()/dev_warn()/dev_err() macros defined in device.h (already
+   included here), passing the struct device 'dev' of the sensor structure as
+   the argument. Do NOT use these macros before the sensor is attached, or the
+   kernel will crash! However, you should not need to notify the user about
+   common errors or other messages, since this is done by the master module. */
+
+/*****************************************************************************/
+
+enum sn9c102_i2c_frequency { /* sensors may support both the frequencies */
+ SN9C102_I2C_100KHZ = 0x01,
+ SN9C102_I2C_400KHZ = 0x02,
+};
+
+enum sn9c102_i2c_interface {
+ SN9C102_I2C_2WIRES,
+ SN9C102_I2C_3WIRES,
+};
+
+struct sn9c102_sensor {
+ char name[32], /* sensor name */
+	     maintainer[64]; /* name of the maintainer <email> */
+
+ /* These sensor capabilities must be provided if the SN9C10X controller
+ needs to communicate through the sensor serial interface by using
+ at least one of the i2c functions available */
+ enum sn9c102_i2c_frequency frequency;
+ enum sn9c102_i2c_interface interface;
+
+ /* These identifiers must be provided if the image sensor implements
+ the standard I2C protocol. TASC sensors don't, although they have a
+ serial interface: so this is a case where the "raw" I2C version
+ could be helpful. */
+ u8 slave_read_id, slave_write_id; /* reg. 0x09 */
+
+	/* NOTE: Unless otherwise noted, the functions below are not mandatory.
+	         Set to null if you do not implement them. If implemented,
+	         they must return 0 on success, the proper error otherwise. */
+
+ int (*init)(struct sn9c102_device* cam);
+ /* This function is called after the sensor has been attached.
+ It should be used to initialize the sensor only, but may also
+ configure part of the SN9C10X chip if necessary. You don't need to
+	   set up picture settings like brightness, contrast, etc. here, if
+	   the corresponding controls are implemented (see below), since
+ they are adjusted in the core driver by calling the set_ctrl()
+ method after init(), where the arguments are the default values
+ specified in the v4l2_queryctrl list of supported controls;
+ Same suggestions apply for other settings, _if_ the corresponding
+ methods are present; if not, the initialization must configure the
+ sensor according to the default configuration structures below. */
+
+ struct v4l2_queryctrl qctrl[V4L2_CID_LASTP1-V4L2_CID_BASE];
+ /* Optional list of default controls, defined as indicated in the
+ V4L2 API. Menu type controls are not handled by this interface. */
+
+ int (*get_ctrl)(struct sn9c102_device* cam, struct v4l2_control* ctrl);
+ int (*set_ctrl)(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl);
+ /* You must implement at least the set_ctrl method if you have defined
+ the list above. The returned value must follow the V4L2
+	   specifications for the VIDIOC_G|S_CTRL ioctls. V4L2_CID_H|VCENTER
+ are not supported by this driver, so do not implement them. Also,
+ passed values are NOT checked to see if they are out of bounds. */
+
+ struct v4l2_cropcap cropcap;
+	/* Think of the image sensor as a grid of R,G,B monochromatic pixels
+ disposed according to a particular Bayer pattern, which describes
+ the complete array of pixels, from (0,0) to (xmax, ymax). We will
+ use this coordinate system from now on. It is assumed the sensor
+ chip can be programmed to capture/transmit a subsection of that
+ array of pixels: we will call this subsection "active window".
+ It is not always true that the largest achievable active window can
+ cover the whole array of pixels. The V4L2 API defines another
+ area called "source rectangle", which, in turn, is a subrectangle of
+ the active window. The SN9C10X chip is always programmed to read the
+ source rectangle.
+ The bounds of both the active window and the source rectangle are
+ specified in the cropcap substructures 'bounds' and 'defrect'.
+ By default, the source rectangle should cover the largest possible
+ area. Again, it is not always true that the largest source rectangle
+ can cover the entire active window, although it is a rare case for
+ the hardware we have. The bounds of the source rectangle _must_ be
+	   multiples of 16 and must use the same coordinate system as indicated
+ before; their centers shall align initially.
+ If necessary, the sensor chip must be initialized during init() to
+ set the bounds of the active sensor window; however, by default, it
+ usually covers the largest achievable area (maxwidth x maxheight)
+ of pixels, so no particular initialization is needed, if you have
+ defined the correct default bounds in the structures.
+ See the V4L2 API for further details.
+ NOTE: once you have defined the bounds of the active window
+	         (struct cropcap.bounds) you must not change them anymore.
+ Only 'bounds' and 'defrect' fields are mandatory, other fields
+ will be ignored. */
+
+ int (*set_crop)(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect);
+	/* To be called on VIDIOC_S_CROP. The core module always calls a
+ default routine which configures the appropriate SN9C10X regs (also
+ scaling), but you may need to override/adjust specific stuff.
+ 'rect' contains width and height values that are multiple of 16: in
+ case you override the default function, you always have to program
+ the chip to match those values; on error return the corresponding
+ error code without rolling back.
+	   NOTE: if necessary, you must program the SN9C10X chip to get rid of
+	         blank pixels or blank lines at the _start_ of each line or
+	         frame after each HSYNC or VSYNC, so that the image starts with
+	         real RGB data (see regs 0x12, 0x13) (having set H_SIZE and
+	         V_SIZE you don't have to care about blank pixels or blank
+	         lines at the end of each line or frame). */
+
+ struct v4l2_pix_format pix_format;
+ /* What you have to define here are: initial 'width' and 'height' of
+	   the target rectangle, the bayer 'pixelformat' and 'priv', which will
+	   be used to indicate the number of bits per pixel, 8 or 9.
+ Nothing more.
+ NOTE 1: both 'width' and 'height' _must_ be either 1/1 or 1/2 or 1/4
+ of cropcap.defrect.width and cropcap.defrect.height. I
+ suggest 1/1.
+ NOTE 2: as said above, you have to program the SN9C10X chip to get
+ rid of any blank pixels, so that the output of the sensor
+ matches the RGB bayer sequence (i.e. BGBGBG...GRGRGR). */
+
+ const struct device* dev;
+ /* This is the argument for dev_err(), dev_info() and dev_warn(). It
+ is used for debugging purposes. You must not access the struct
+ before the sensor is attached. */
+
+ const struct usb_device* usbdev;
+ /* Points to the usb_device struct after the sensor is attached.
+ Do not touch unless you know what you are doing. */
+
+	/* Do NOT write to the data below, it's READ ONLY. It is used by the
+	   core module to store successfully updated values of the above
+	   settings, for rollbacks etc. in case of errors during atomic I/O */
+ struct v4l2_queryctrl _qctrl[V4L2_CID_LASTP1-V4L2_CID_BASE];
+ struct v4l2_rect _rect;
+};
+
+#endif /* _SN9C102_SENSOR_H_ */
--- /dev/null
+/***************************************************************************
+ * Driver for TAS5110C1B image sensor connected to the SN9C10[12] PC *
+ * Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor tas5110c1b;
+
+
+static int tas5110c1b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x44, 0x01);
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x60, 0x17);
+ err += sn9c102_write_reg(cam, 0x06, 0x18);
+ err += sn9c102_write_reg(cam, 0xcb, 0x19);
+
+ return err;
+}
+
+
+static int tas5110c1b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &tas5110c1b;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 69,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 9;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor tas5110c1b = {
+ .name = "TAS5110C1B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .init = &tas5110c1b_init,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ },
+ .set_crop = &tas5110c1b_set_crop,
+ .pix_format = {
+ .width = 352,
+ .height = 288,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ }
+};
+
+
+int sn9c102_probe_tas5110c1b(struct sn9c102_device* cam)
+{
+ /* This sensor has no identifiers, so let's attach it anyway */
+ sn9c102_attach_sensor(cam, &tas5110c1b);
+
+ /* At the moment, only devices whose PID is 0x6005 have this sensor */
+ if (tas5110c1b.usbdev->descriptor.idProduct != 0x6005)
+ return -ENODEV;
+
+ return 0;
+}
--- /dev/null
+/***************************************************************************
+ * Driver for TAS5130D1B image sensor connected to the SN9C10[12] PC *
+ * Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor tas5130d1b;
+
+
+static int tas5130d1b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+ err += sn9c102_write_reg(cam, 0x04, 0x01);
+ err += sn9c102_write_reg(cam, 0x01, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x60, 0x17);
+ err += sn9c102_write_reg(cam, 0x07, 0x18);
+ err += sn9c102_write_reg(cam, 0x33, 0x19);
+
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x00, 0x40,
+ 0x47, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x02, 0x20,
+ 0xa9, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x00, 0xc0,
+ 0x49, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x02, 0x20,
+ 0x6c, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x00, 0xc0,
+ 0x08, 0, 0);
+ err += sn9c102_i2c_try_raw_write(cam, &tas5130d1b, 4, 0x11, 0x00, 0x20,
+ 0x00, 0, 0);
+
+ err += sn9c102_write_reg(cam, 0x63, 0x19);
+
+ return err;
+}
+
+
+static int tas5130d1b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &tas5130d1b;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 104,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 12;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor tas5130d1b = {
+ .name = "TAS5130D1B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_3WIRES,
+ .init = &tas5130d1b_init,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ },
+ .set_crop = &tas5130d1b_set_crop,
+ .pix_format = {
+ .width = 640,
+ .height = 480,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ }
+};
+
+
+int sn9c102_probe_tas5130d1b(struct sn9c102_device* cam)
+{
+ /* This sensor has no identifiers, so let's attach it anyway */
+ sn9c102_attach_sensor(cam, &tas5130d1b);
+
+ /* At the moment, only devices whose PID is 0x6025 have this sensor */
+ if (tas5130d1b.usbdev->descriptor.idProduct != 0x6025)
+ return -ENODEV;
+
+ dev_info(tas5130d1b.dev, "TAS5130D1B detected, but the support for it "
+ "is disabled at the moment - needs further "
+ "testing -\n");
+
+ return -ENODEV;
+}
--- /dev/null
+/***************************************************************************
+ * Interface for video post-processing functions for the W996[87]CF driver *
+ * for Linux. *
+ * *
+ * Copyright (C) 2002-2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#ifndef _W9968CF_VPP_H_
+#define _W9968CF_VPP_H_
+
+#include <linux/module.h>
+#include <asm/types.h>
+
+struct w9968cf_vpp_t {
+ struct module* owner;
+ int (*check_headers)(const unsigned char*, const unsigned long);
+ int (*decode)(const char*, const unsigned long, const unsigned,
+ const unsigned, char*);
+ void (*swap_yuvbytes)(void*, unsigned long);
+ void (*uyvy_to_rgbx)(u8*, unsigned long, u8*, u16, u8);
+ void (*scale_up)(u8*, u8*, u16, u16, u16, u16, u16);
+
+ u8 busy; /* read-only flag: module is/is not in use */
+};
+
+extern int w9968cf_vppmod_register(struct w9968cf_vpp_t*);
+extern int w9968cf_vppmod_deregister(struct w9968cf_vpp_t*);
+
+#endif /* _W9968CF_VPP_H_ */
--- /dev/null
+/*
+ * USB PhidgetServo driver 1.0
+ *
+ * Copyright (C) 2004 Sean Young <sean@mess.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This is a driver for the USB PhidgetServo version 2.0 and 3.0 servo
+ * controllers available at: http://www.phidgets.com/
+ *
+ * Note that the driver takes input as: degrees.minutes
+ *   -23 <= degrees <= 203
+ *     0 <= minutes <= 59
+ *
+ * CAUTION: Generally you should use 0 < degrees < 180 as anything else
+ * is probably beyond the range of your servo and may damage it.
+ */
+
+#include <linux/config.h>
+#ifdef CONFIG_USB_DEBUG
+#define DEBUG 1
+#endif
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+
+#define DRIVER_AUTHOR "Sean Young <sean@mess.org>"
+#define DRIVER_DESC "USB PhidgetServo Driver"
+
+#define VENDOR_ID_GLAB 0x06c2
+#define DEVICE_ID_4MOTOR_SERVO_30 0x0038
+#define DEVICE_ID_1MOTOR_SERVO_30 0x0039
+
+#define VENDOR_ID_WISEGROUP 0x0925
+#define DEVICE_ID_1MOTOR_SERVO_20 0x8101
+#define DEVICE_ID_4MOTOR_SERVO_20 0x8104
+
+static struct usb_device_id id_table[] = {
+ {USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_4MOTOR_SERVO_30)},
+ {USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_1MOTOR_SERVO_30)},
+ {USB_DEVICE(VENDOR_ID_WISEGROUP, DEVICE_ID_4MOTOR_SERVO_20)},
+ {USB_DEVICE(VENDOR_ID_WISEGROUP, DEVICE_ID_1MOTOR_SERVO_20)},
+ {}
+};
+
+MODULE_DEVICE_TABLE(usb, id_table);
+
+struct phidget_servo {
+ struct usb_device *udev;
+ int version;
+ int quad_servo;
+ int pulse[4];
+ int degrees[4];
+ int minutes[4];
+};
+
+static void
+change_position_v30(struct phidget_servo *servo, int servo_no, int degrees,
+ int minutes)
+{
+ int retval;
+ unsigned char *buffer;
+
+ buffer = kmalloc(6, GFP_KERNEL);
+ if (!buffer) {
+ dev_err(&servo->udev->dev, "%s - out of memory\n",
+ __FUNCTION__);
+ return;
+ }
+
+ /*
+ * pulse = 0 - 4095
+ * angle = 0 - 180 degrees
+ *
+ * pulse = angle * 10.6 + 243.8
+ */
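+	/* integer form of the formula above: the angle is carried in minutes
+	 * (degrees*60 + minutes) and both sides are scaled by 600, avoiding
+	 * floating point */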
+ servo->pulse[servo_no] = ((degrees*60 + minutes)*106 + 2438*60)/600;
+ servo->degrees[servo_no]= degrees;
+ servo->minutes[servo_no]= minutes;
+
+ /*
+ * The PhidgetServo v3.0 is controlled by sending 6 bytes,
+ * 4 * 12 bits for each servo.
+ *
+ * low = lower 8 bits pulse
+ * high = higher 4 bits pulse
+ *
+ * offset bits
+ * +---+-----------------+
+ * | 0 | low 0 |
+ * +---+--------+--------+
+ * | 1 | high 1 | high 0 |
+ * +---+--------+--------+
+ * | 2 | low 1 |
+ * +---+-----------------+
+ * | 3 | low 2 |
+ * +---+--------+--------+
+ * | 4 | high 3 | high 2 |
+ * +---+--------+--------+
+ * | 5 | low 3 |
+ * +---+-----------------+
+ */
+
+ buffer[0] = servo->pulse[0] & 0xff;
+ buffer[1] = (servo->pulse[0] >> 8 & 0x0f)
+ | (servo->pulse[1] >> 4 & 0xf0);
+ buffer[2] = servo->pulse[1] & 0xff;
+ buffer[3] = servo->pulse[2] & 0xff;
+ buffer[4] = (servo->pulse[2] >> 8 & 0x0f)
+ | (servo->pulse[3] >> 4 & 0xf0);
+ buffer[5] = servo->pulse[3] & 0xff;
+
+ dev_dbg(&servo->udev->dev,
+ "data: %02x %02x %02x %02x %02x %02x\n",
+ buffer[0], buffer[1], buffer[2],
+ buffer[3], buffer[4], buffer[5]);
+
+ retval = usb_control_msg(servo->udev,
+ usb_sndctrlpipe(servo->udev, 0),
+ 0x09, 0x21, 0x0200, 0x0000, buffer, 6, 2 * HZ);
+ if (retval != 6)
+ dev_err(&servo->udev->dev, "retval = %d\n", retval);
+ kfree(buffer);
+}
+
+static void
+change_position_v20(struct phidget_servo *servo, int servo_no, int degrees,
+ int minutes)
+{
+ int retval;
+ unsigned char *buffer;
+
+ buffer = kmalloc(2, GFP_KERNEL);
+ if (!buffer) {
+ dev_err(&servo->udev->dev, "%s - out of memory\n",
+ __FUNCTION__);
+ return;
+ }
+
+ /*
+ * angle = 0 - 180 degrees
+ * pulse = angle + 23
+ */
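+	/* the one-byte pulse has whole-degree resolution, so the minutes
+	   argument is discarded on v2.0 devices */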
+ servo->pulse[servo_no]= degrees + 23;
+ servo->degrees[servo_no]= degrees;
+ servo->minutes[servo_no]= 0;
+
+ /*
+ * The PhidgetServo v2.0 is controlled by sending two bytes. The
+ * first byte is the servo number xor'ed with 2:
+ *
+ * servo 0 = 2
+ * servo 1 = 3
+ * servo 2 = 0
+ * servo 3 = 1
+ *
+ * The second byte is the position.
+ */
+
+ buffer[0] = servo_no ^ 2;
+ buffer[1] = servo->pulse[servo_no];
+
+ dev_dbg(&servo->udev->dev, "data: %02x %02x\n", buffer[0], buffer[1]);
+
+ retval = usb_control_msg(servo->udev,
+ usb_sndctrlpipe(servo->udev, 0),
+ 0x09, 0x21, 0x0200, 0x0000, buffer, 2, 2 * HZ);
+ if (retval != 2)
+ dev_err(&servo->udev->dev, "retval = %d\n", retval);
+ kfree(buffer);
+}
+
+#define show_set(value) \
+static ssize_t set_servo##value (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ int degrees, minutes; \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ struct phidget_servo *servo = usb_get_intfdata (intf); \
+ \
+ minutes = 0; \
+ /* must at least convert degrees */ \
+	if (sscanf (buf, "%d.%d", &degrees, &minutes) < 1) {		\
+ return -EINVAL; \
+ } \
+ \
+ if (degrees < -23 || degrees > (180 + 23) || \
+ minutes < 0 || minutes > 59) { \
+ return -EINVAL; \
+ } \
+ \
+ if (servo->version >= 3) \
+ change_position_v30 (servo, value, degrees, minutes); \
+ else \
+ change_position_v20 (servo, value, degrees, minutes); \
+ \
+ return count; \
+} \
+ \
+static ssize_t show_servo##value (struct device *dev, char *buf) \
+{ \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ struct phidget_servo *servo = usb_get_intfdata (intf); \
+ \
+ return sprintf (buf, "%d.%02d\n", servo->degrees[value], \
+ servo->minutes[value]); \
+} \
+static DEVICE_ATTR(servo##value, S_IWUGO | S_IRUGO, \
+ show_servo##value, set_servo##value);
+
+show_set(0);
+show_set(1);
+show_set(2);
+show_set(3);
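+
+/* Example of use from user space (the sysfs path is illustrative and depends
+ * on where the device is attached):
+ *
+ *	echo "90.30" > /sys/bus/usb/devices/.../servo0	  90 degrees, 30 minutes
+ *	cat /sys/bus/usb/devices/.../servo0		  prints "90.30"
+ */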
+
+static int
+servo_probe(struct usb_interface *interface, const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(interface);
+ struct phidget_servo *dev = NULL;
+
+ dev = kmalloc(sizeof (struct phidget_servo), GFP_KERNEL);
+ if (dev == NULL) {
+ dev_err(&interface->dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ memset(dev, 0x00, sizeof (*dev));
+
+ dev->udev = usb_get_dev(udev);
+ switch (udev->descriptor.idVendor) {
+ case VENDOR_ID_WISEGROUP:
+ dev->version = 2;
+ break;
+ case VENDOR_ID_GLAB:
+ dev->version = 3;
+ break;
+ }
+ switch (udev->descriptor.idProduct) {
+ case DEVICE_ID_4MOTOR_SERVO_20:
+ case DEVICE_ID_4MOTOR_SERVO_30:
+ dev->quad_servo = 1;
+ break;
+ case DEVICE_ID_1MOTOR_SERVO_20:
+ case DEVICE_ID_1MOTOR_SERVO_30:
+ dev->quad_servo = 0;
+ break;
+ }
+
+ usb_set_intfdata(interface, dev);
+
+ device_create_file(&interface->dev, &dev_attr_servo0);
+ if (dev->quad_servo) {
+ device_create_file(&interface->dev, &dev_attr_servo1);
+ device_create_file(&interface->dev, &dev_attr_servo2);
+ device_create_file(&interface->dev, &dev_attr_servo3);
+ }
+
+ dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 attached\n",
+ dev->quad_servo ? 4 : 1, dev->version);
+ if (dev->version == 2)
+ dev_info(&interface->dev,
+ "WARNING: v2.0 not tested! Please report if it works.\n");
+
+ return 0;
+}
+
+static void
+servo_disconnect(struct usb_interface *interface)
+{
+ struct phidget_servo *dev;
+
+ dev = usb_get_intfdata(interface);
+ usb_set_intfdata(interface, NULL);
+
+ device_remove_file(&interface->dev, &dev_attr_servo0);
+ if (dev->quad_servo) {
+ device_remove_file(&interface->dev, &dev_attr_servo1);
+ device_remove_file(&interface->dev, &dev_attr_servo2);
+ device_remove_file(&interface->dev, &dev_attr_servo3);
+ }
+
+	usb_put_dev(dev->udev);
+
+	dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 detached\n",
+		dev->quad_servo ? 4 : 1, dev->version);
+
+	/* free the per-device state only after its last use above */
+	kfree(dev);
+}
+
+static struct usb_driver servo_driver = {
+ .owner = THIS_MODULE,
+ .name = "phidgetservo",
+ .probe = servo_probe,
+ .disconnect = servo_disconnect,
+ .id_table = id_table
+};
+
+static int __init
+phidget_servo_init(void)
+{
+ int retval = 0;
+
+ retval = usb_register(&servo_driver);
+ if (retval)
+ err("usb_register failed. Error number %d", retval);
+
+ return retval;
+}
+
+static void __exit
+phidget_servo_exit(void)
+{
+ usb_deregister(&servo_driver);
+}
+
+module_init(phidget_servo_init);
+module_exit(phidget_servo_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * drivers/video/asiliantfb.c
+ * frame buffer driver for Asiliant 69000 chip
+ * Copyright (C) 2001-2003 Saito.K & Jeanne
+ *
+ * from driver/video/chipsfb.c and,
+ *
+ * drivers/video/asiliantfb.c -- frame buffer device for
+ * Asiliant 69030 chip (formerly Intel, formerly Chips & Technologies)
+ * Author: apc@agelectronics.co.uk
+ * Copyright (C) 2000 AG Electronics
+ * Note: the data sheets don't seem to be available from Asiliant.
+ * They are available by searching developer.intel.com, but are not otherwise
+ * linked to.
+ *
+ * This driver should be portable with minimal effort to the 69000 display
+ * chip, and to the twin-display mode of the 69030.
+ * Contains code from Thomas Höhenleitner <th@visuelle-maschinen.de> (thanks)
+ *
+ * Derived from the CT65550 driver chipsfb.c:
+ * Copyright (C) 1998 Paul Mackerras
+ * ...which was derived from the Powermac "chips" driver:
+ * Copyright (C) 1997 Fabio Riccardi.
+ * And from the frame buffer device for Open Firmware-initialized devices:
+ * Copyright (C) 1997 Geert Uytterhoeven.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/tty.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+/* Built in clock of the 69030 */
+const unsigned Fref = 14318180;
+
+#define mmio_base (p->screen_base + 0x400000)
+
+#define mm_write_ind(num, val, ap, dp) do { \
+ writeb((num), mmio_base + (ap)); writeb((val), mmio_base + (dp)); \
+} while (0)
+
+static void mm_write_xr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7ac, 0x7ad);
+}
+#define write_xr(num, val) mm_write_xr(p, num, val)
+
+static void mm_write_fr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7a0, 0x7a1);
+}
+#define write_fr(num, val) mm_write_fr(p, num, val)
+
+static void mm_write_cr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7a8, 0x7a9);
+}
+#define write_cr(num, val) mm_write_cr(p, num, val)
+
+static void mm_write_gr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x79c, 0x79d);
+}
+#define write_gr(num, val) mm_write_gr(p, num, val)
+
+static void mm_write_sr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x788, 0x789);
+}
+#define write_sr(num, val) mm_write_sr(p, num, val)
+
+static void mm_write_ar(struct fb_info *p, u8 reg, u8 data)
+{
+ readb(mmio_base + 0x7b4);
+ mm_write_ind(reg, data, 0x780, 0x780);
+}
+#define write_ar(num, val) mm_write_ar(p, num, val)
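+
+/*
+ * The helpers above implement the usual VGA-style indexed access: the
+ * register number goes to an index port and the value to the data port
+ * next to it. The MMIO offsets are the classic VGA port numbers times
+ * two (0x7a8/0x7a9 = CRT 0x3d4/0x3d5, 0x7ac/0x7ad = extension
+ * 0x3d6/0x3d7, and so on). The attribute controller at 0x780 uses a
+ * single port for both index and data, so mm_write_ar() first reads
+ * 0x7b4 (input status 1, VGA 0x3da) to reset its index/data flip-flop.
+ * For example, write_xr(0x81, 0x16) selects extension register 0x81
+ * through 0x7ac and writes 0x16 through 0x7ad.
+ */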
+
+/*
+ * Exported functions
+ */
+int asiliantfb_init(void);
+
+static int asiliantfb_pci_init(struct pci_dev *dp, const struct pci_device_id *);
+static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ struct fb_info *info);
+static int asiliantfb_set_par(struct fb_info *info);
+static int asiliantfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int transp, struct fb_info *info);
+
+static struct fb_ops asiliantfb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = asiliantfb_check_var,
+ .fb_set_par = asiliantfb_set_par,
+ .fb_setcolreg = asiliantfb_setcolreg,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_cursor = soft_cursor,
+};
+
+/* Calculate the ratios for the dot clocks without using a single long long
+ * value */
+static void asiliant_calc_dclk2(u32 *ppixclock, u8 *dclk2_m, u8 *dclk2_n, u8 *dclk2_div)
+{
+ unsigned pixclock = *ppixclock;
+	unsigned Ftarget;
+ unsigned n;
+ unsigned best_error = 0xffffffff;
+ unsigned best_m = 0xffffffff,
+ best_n = 0xffffffff;
+ unsigned ratio;
+ unsigned remainder;
+ unsigned char divisor = 0;
+
+ /* Calculate the frequency required. This is hard enough. */
+ ratio = 1000000 / pixclock;
+ remainder = 1000000 % pixclock;
+ Ftarget = 1000000 * ratio + (1000000 * remainder) / pixclock;
+
+ while (Ftarget < 100000000) {
+ divisor += 0x10;
+ Ftarget <<= 1;
+ }
+
+ ratio = Ftarget / Fref;
+ remainder = Ftarget % Fref;
+
+	/* This expresses the constraint that 150kHz <= Fref/n <= 5MHz,
+ * together with 3 <= n <= 257. */
+ for (n = 3; n <= 257; n++) {
+ unsigned m = n * ratio + (n * remainder) / Fref;
+
+ /* 3 <= m <= 257 */
+ if (m >= 3 && m <= 257) {
+			/* unsigned arithmetic: compare before subtracting */
+			unsigned new_error = (Ftarget * n) >= (Fref * m) ?
+				((Ftarget * n) - (Fref * m)) : ((Fref * m) - (Ftarget * n));
+ if (new_error < best_error) {
+ best_n = n;
+ best_m = m;
+ best_error = new_error;
+ }
+ }
+ /* But if VLD = 4, then 4m <= 1028 */
+ else if (m <= 1028) {
+			/* remember there are still only 8 bits of precision in m, so
+			 * avoid over-optimistic error calculations */
+			unsigned new_error = (Ftarget * n) >= (Fref * (m & ~3)) ?
+				((Ftarget * n) - (Fref * (m & ~3))) : ((Fref * (m & ~3)) - (Ftarget * n));
+ if (new_error < best_error) {
+ best_n = n;
+ best_m = m;
+ best_error = new_error;
+ }
+ }
+ }
+ if (best_m > 257)
+ best_m >>= 2; /* divide m by 4, and leave VCO loop divide at 4 */
+ else
+ divisor |= 4; /* or set VCO loop divide to 1 */
+ *dclk2_m = best_m - 2;
+ *dclk2_n = best_n - 2;
+ *dclk2_div = divisor;
+ *ppixclock = pixclock;
+ return;
+}
+
+static void asiliant_set_timing(struct fb_info *p)
+{
+ unsigned hd = p->var.xres / 8;
+ unsigned hs = (p->var.xres + p->var.right_margin) / 8;
+ unsigned he = (p->var.xres + p->var.right_margin + p->var.hsync_len) / 8;
+ unsigned ht = (p->var.left_margin + p->var.xres + p->var.right_margin + p->var.hsync_len) / 8;
+ unsigned vd = p->var.yres;
+ unsigned vs = p->var.yres + p->var.lower_margin;
+ unsigned ve = p->var.yres + p->var.lower_margin + p->var.vsync_len;
+ unsigned vt = p->var.upper_margin + p->var.yres + p->var.lower_margin + p->var.vsync_len;
+ unsigned wd = (p->var.xres_virtual * ((p->var.bits_per_pixel+7)/8)) / 8;
+
+ if ((p->var.xres == 640) && (p->var.yres == 480) && (p->var.pixclock == 39722)) {
+ write_fr(0x01, 0x02); /* LCD */
+ } else {
+ write_fr(0x01, 0x01); /* CRT */
+ }
+
+ write_cr(0x11, (ve - 1) & 0x0f);
+ write_cr(0x00, (ht - 5) & 0xff);
+ write_cr(0x01, hd - 1);
+ write_cr(0x02, hd);
+ write_cr(0x03, ((ht - 1) & 0x1f) | 0x80);
+ write_cr(0x04, hs);
+ write_cr(0x05, (((ht - 1) & 0x20) <<2) | (he & 0x1f));
+ write_cr(0x3c, (ht - 1) & 0xc0);
+ write_cr(0x06, (vt - 2) & 0xff);
+ write_cr(0x30, (vt - 2) >> 8);
+ write_cr(0x07, 0x00);
+ write_cr(0x08, 0x00);
+ write_cr(0x09, 0x00);
+ write_cr(0x10, (vs - 1) & 0xff);
+ write_cr(0x32, ((vs - 1) >> 8) & 0xf);
+ write_cr(0x11, ((ve - 1) & 0x0f) | 0x80);
+ write_cr(0x12, (vd - 1) & 0xff);
+ write_cr(0x31, ((vd - 1) & 0xf00) >> 8);
+ write_cr(0x13, wd & 0xff);
+ write_cr(0x41, (wd & 0xf00) >> 8);
+ write_cr(0x15, (vs - 1) & 0xff);
+ write_cr(0x33, ((vs - 1) >> 8) & 0xf);
+ write_cr(0x38, ((ht - 5) & 0x100) >> 8);
+ write_cr(0x16, (vt - 1) & 0xff);
+ write_cr(0x18, 0x00);
+
+ if (p->var.xres == 640) {
+ writeb(0xc7, mmio_base + 0x784); /* set misc output reg */
+ } else {
+ writeb(0x07, mmio_base + 0x784); /* set misc output reg */
+ }
+}
+
+static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ struct fb_info *p)
+{
+ unsigned long Ftarget, ratio, remainder;
+
+ ratio = 1000000 / var->pixclock;
+ remainder = 1000000 % var->pixclock;
+ Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock;
+
+ /* First check the constraint that the maximum post-VCO divisor is 32,
+ * and the maximum Fvco is 220MHz */
+ if (Ftarget > 220000000 || Ftarget < 3125000) {
+ printk(KERN_ERR "asiliantfb dotclock must be between 3.125 and 220MHz\n");
+ return -ENXIO;
+ }
+ var->xres_virtual = var->xres;
+ var->yres_virtual = var->yres;
+
+ if (var->bits_per_pixel == 24) {
+ var->red.offset = 16;
+ var->green.offset = 8;
+ var->blue.offset = 0;
+ var->red.length = var->blue.length = var->green.length = 8;
+ } else if (var->bits_per_pixel == 16) {
+ switch (var->red.offset) {
+ case 11:
+ var->green.length = 6;
+ break;
+ case 10:
+ var->green.length = 5;
+ break;
+ default:
+ return -EINVAL;
+ }
+ var->green.offset = 5;
+ var->blue.offset = 0;
+ var->red.length = var->blue.length = 5;
+ } else if (var->bits_per_pixel == 8) {
+ var->red.offset = var->green.offset = var->blue.offset = 0;
+ var->red.length = var->green.length = var->blue.length = 8;
+ }
+ return 0;
+}
+
+static int asiliantfb_set_par(struct fb_info *p)
+{
+ u8 dclk2_m; /* Holds m-2 value for register */
+ u8 dclk2_n; /* Holds n-2 value for register */
+ u8 dclk2_div; /* Holds divisor bitmask */
+
+ /* Set pixclock */
+ asiliant_calc_dclk2(&p->var.pixclock, &dclk2_m, &dclk2_n, &dclk2_div);
+
+ /* Set color depth */
+ if (p->var.bits_per_pixel == 24) {
+ write_xr(0x81, 0x16); /* 24 bit packed color mode */
+ write_xr(0x82, 0x00); /* Disable palettes */
+ write_xr(0x20, 0x20); /* 24 bit blitter mode */
+ } else if (p->var.bits_per_pixel == 16) {
+ if (p->var.red.offset == 11)
+ write_xr(0x81, 0x15); /* 16 bit color mode */
+ else
+ write_xr(0x81, 0x14); /* 15 bit color mode */
+ write_xr(0x82, 0x00); /* Disable palettes */
+ write_xr(0x20, 0x10); /* 16 bit blitter mode */
+ } else if (p->var.bits_per_pixel == 8) {
+ write_xr(0x0a, 0x02); /* Linear */
+ write_xr(0x81, 0x12); /* 8 bit color mode */
+ write_xr(0x82, 0x00); /* Graphics gamma enable */
+ write_xr(0x20, 0x00); /* 8 bit blitter mode */
+ }
+ p->fix.line_length = p->var.xres * (p->var.bits_per_pixel >> 3);
+ p->fix.visual = (p->var.bits_per_pixel == 8) ? FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
+ write_xr(0xc4, dclk2_m);
+ write_xr(0xc5, dclk2_n);
+ write_xr(0xc7, dclk2_div);
+ /* Set up the CR registers */
+ asiliant_set_timing(p);
+ return 0;
+}
+
+static int asiliantfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int transp, struct fb_info *p)
+{
+ if (regno > 255)
+ return 1;
+ red >>= 8;
+ green >>= 8;
+ blue >>= 8;
+
+	/* Set hardware palette */
+ writeb(regno, mmio_base + 0x790);
+ udelay(1);
+ writeb(red, mmio_base + 0x791);
+ writeb(green, mmio_base + 0x791);
+ writeb(blue, mmio_base + 0x791);
+
+ switch(p->var.bits_per_pixel) {
+ case 15:
+ if (regno < 16) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ ((red & 0xf8) << 7) |
+ ((green & 0xf8) << 2) |
+ ((blue & 0xf8) >> 3);
+ }
+ break;
+ case 16:
+ if (regno < 16) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ ((red & 0xf8) << 8) |
+ ((green & 0xfc) << 3) |
+ ((blue & 0xf8) >> 3);
+ }
+ break;
+ case 24:
+ if (regno < 24) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ (red << 16) |
+ (green << 8) |
+ (blue);
+ }
+ break;
+ }
+ return 0;
+}
+
+struct chips_init_reg {
+ unsigned char addr;
+ unsigned char data;
+};
+
+#define N_ELTS(x) (sizeof(x) / sizeof(x[0]))
+
+static struct chips_init_reg chips_init_sr[] =
+{
+ {0x00, 0x03}, /* Reset register */
+ {0x01, 0x01}, /* Clocking mode */
+ {0x02, 0x0f}, /* Plane mask */
+ {0x04, 0x0e} /* Memory mode */
+};
+
+static struct chips_init_reg chips_init_gr[] =
+{
+ {0x03, 0x00}, /* Data rotate */
+ {0x05, 0x00}, /* Graphics mode */
+ {0x06, 0x01}, /* Miscellaneous */
+ {0x08, 0x00} /* Bit mask */
+};
+
+static struct chips_init_reg chips_init_ar[] =
+{
+ {0x10, 0x01}, /* Mode control */
+ {0x11, 0x00}, /* Overscan */
+ {0x12, 0x0f}, /* Memory plane enable */
+ {0x13, 0x00} /* Horizontal pixel panning */
+};
+
+static struct chips_init_reg chips_init_cr[] =
+{
+ {0x0c, 0x00}, /* Start address high */
+ {0x0d, 0x00}, /* Start address low */
+ {0x40, 0x00}, /* Extended Start Address */
+ {0x41, 0x00}, /* Extended Start Address */
+ {0x14, 0x00}, /* Underline location */
+ {0x17, 0xe3}, /* CRT mode control */
+ {0x70, 0x00} /* Interlace control */
+};
+
+
+static struct chips_init_reg chips_init_fr[] =
+{
+ {0x01, 0x02},
+ {0x03, 0x08},
+ {0x08, 0xcc},
+ {0x0a, 0x08},
+ {0x18, 0x00},
+ {0x1e, 0x80},
+ {0x40, 0x83},
+ {0x41, 0x00},
+ {0x48, 0x13},
+ {0x4d, 0x60},
+ {0x4e, 0x0f},
+
+ {0x0b, 0x01},
+
+ {0x21, 0x51},
+ {0x22, 0x1d},
+ {0x23, 0x5f},
+ {0x20, 0x4f},
+ {0x34, 0x00},
+ {0x24, 0x51},
+ {0x25, 0x00},
+ {0x27, 0x0b},
+ {0x26, 0x00},
+ {0x37, 0x80},
+ {0x33, 0x0b},
+ {0x35, 0x11},
+ {0x36, 0x02},
+ {0x31, 0xea},
+ {0x32, 0x0c},
+ {0x30, 0xdf},
+ {0x10, 0x0c},
+ {0x11, 0xe0},
+ {0x12, 0x50},
+ {0x13, 0x00},
+ {0x16, 0x03},
+ {0x17, 0xbd},
+ {0x1a, 0x00},
+};
+
+
+static struct chips_init_reg chips_init_xr[] =
+{
+ {0xce, 0x00}, /* set default memory clock */
+ {0xcc, 200 }, /* MCLK ratio M */
+ {0xcd, 18 }, /* MCLK ratio N */
+ {0xce, 0x90}, /* MCLK divisor = 2 */
+
+ {0xc4, 209 },
+ {0xc5, 118 },
+ {0xc7, 32 },
+ {0xcf, 0x06},
+ {0x09, 0x01}, /* IO Control - CRT controller extensions */
+ {0x0a, 0x02}, /* Frame buffer mapping */
+ {0x0b, 0x01}, /* PCI burst write */
+ {0x40, 0x03}, /* Memory access control */
+ {0x80, 0x82}, /* Pixel pipeline configuration 0 */
+ {0x81, 0x12}, /* Pixel pipeline configuration 1 */
+ {0x82, 0x08}, /* Pixel pipeline configuration 2 */
+
+ {0xd0, 0x0f},
+ {0xd1, 0x01},
+};
+
+static void __init chips_hw_init(struct fb_info *p)
+{
+ int i;
+
+ for (i = 0; i < N_ELTS(chips_init_xr); ++i)
+ write_xr(chips_init_xr[i].addr, chips_init_xr[i].data);
+ write_xr(0x81, 0x12);
+ write_xr(0x82, 0x08);
+ write_xr(0x20, 0x00);
+ for (i = 0; i < N_ELTS(chips_init_sr); ++i)
+ write_sr(chips_init_sr[i].addr, chips_init_sr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_gr); ++i)
+ write_gr(chips_init_gr[i].addr, chips_init_gr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_ar); ++i)
+ write_ar(chips_init_ar[i].addr, chips_init_ar[i].data);
+ /* Enable video output in attribute index register */
+ writeb(0x20, mmio_base + 0x780);
+ for (i = 0; i < N_ELTS(chips_init_cr); ++i)
+ write_cr(chips_init_cr[i].addr, chips_init_cr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_fr); ++i)
+ write_fr(chips_init_fr[i].addr, chips_init_fr[i].data);
+}
+
+static struct fb_fix_screeninfo asiliantfb_fix __initdata = {
+ .id = "Asiliant 69000",
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_PSEUDOCOLOR,
+ .accel = FB_ACCEL_NONE,
+ .line_length = 640,
+ .smem_len = 0x200000, /* 2MB */
+};
+
+static struct fb_var_screeninfo asiliantfb_var __initdata = {
+ .xres = 640,
+ .yres = 480,
+ .xres_virtual = 640,
+ .yres_virtual = 480,
+ .bits_per_pixel = 8,
+ .red = { .length = 8 },
+ .green = { .length = 8 },
+ .blue = { .length = 8 },
+ .height = -1,
+ .width = -1,
+ .vmode = FB_VMODE_NONINTERLACED,
+ .pixclock = 39722,
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+};
+
+static void __init init_asiliant(struct fb_info *p, unsigned long addr)
+{
+ p->fix = asiliantfb_fix;
+ p->fix.smem_start = addr;
+ p->var = asiliantfb_var;
+ p->fbops = &asiliantfb_ops;
+ p->flags = FBINFO_FLAG_DEFAULT;
+
+ fb_alloc_cmap(&p->cmap, 256, 0);
+
+ if (register_framebuffer(p) < 0) {
+ printk(KERN_ERR "C&T 69000 framebuffer failed to register\n");
+ return;
+ }
+
+ printk(KERN_INFO "fb%d: Asiliant 69000 frame buffer (%dK RAM detected)\n",
+ p->node, p->fix.smem_len / 1024);
+
+ writeb(0xff, mmio_base + 0x78c);
+ chips_hw_init(p);
+}
+
+static int __devinit
+asiliantfb_pci_init(struct pci_dev *dp, const struct pci_device_id *ent)
+{
+ unsigned long addr, size;
+ struct fb_info *p;
+
+ if ((dp->resource[0].flags & IORESOURCE_MEM) == 0)
+ return -ENODEV;
+ addr = pci_resource_start(dp, 0);
+ size = pci_resource_len(dp, 0);
+ if (addr == 0)
+ return -ENODEV;
+ if (!request_mem_region(addr, size, "asiliantfb"))
+ return -EBUSY;
+
+ p = framebuffer_alloc(sizeof(u32) * 256, &dp->dev);
+ if (!p) {
+ release_mem_region(addr, size);
+ return -ENOMEM;
+ }
+ p->pseudo_palette = p->par;
+ p->par = NULL;
+
+ p->screen_base = ioremap(addr, 0x800000);
+ if (p->screen_base == NULL) {
+ release_mem_region(addr, size);
+ framebuffer_release(p);
+ return -ENOMEM;
+ }
+
+ pci_write_config_dword(dp, 4, 0x02800083);
+	writeb(3, p->screen_base + 0x400784);	/* use the ioremap()ed base, not the bus address */
+
+ init_asiliant(p, addr);
+
+ /* Clear the entire framebuffer */
+ memset(p->screen_base, 0, 0x200000);
+
+ pci_set_drvdata(dp, p);
+ return 0;
+}
+
+static void __devexit asiliantfb_remove(struct pci_dev *dp)
+{
+ struct fb_info *p = pci_get_drvdata(dp);
+
+ unregister_framebuffer(p);
+ iounmap(p->screen_base);
+ release_mem_region(pci_resource_start(dp, 0), pci_resource_len(dp, 0));
+ pci_set_drvdata(dp, NULL);
+ framebuffer_release(p);
+}
+
+static struct pci_device_id asiliantfb_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_CT, PCI_DEVICE_ID_CT_69000, PCI_ANY_ID, PCI_ANY_ID },
+ { 0 }
+};
+
+MODULE_DEVICE_TABLE(pci, asiliantfb_pci_tbl);
+
+static struct pci_driver asiliantfb_driver = {
+ .name = "asiliantfb",
+ .id_table = asiliantfb_pci_tbl,
+ .probe = asiliantfb_pci_init,
+ .remove = __devexit_p(asiliantfb_remove),
+};
+
+int __init asiliantfb_init(void)
+{
+ return pci_module_init(&asiliantfb_driver);
+}
+
+static void __exit asiliantfb_exit(void)
+{
+ pci_unregister_driver(&asiliantfb_driver);
+}
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * SGI GBE frame buffer driver
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc. - Jeffrey Newquist
+ * Copyright (C) 2002 Vivien Chappelier <vivien.chappelier@linux-mips.org>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/errno.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#ifdef CONFIG_X86
+#include <asm/mtrr.h>
+#endif
+#ifdef CONFIG_MIPS
+#include <asm/addrspace.h>
+#endif
+#include <asm/byteorder.h>
+#include <asm/io.h>
+#include <asm/tlbflush.h>
+
+#include <video/gbe.h>
+
+static struct sgi_gbe *gbe;
+
+struct gbefb_par {
+ struct fb_var_screeninfo var;
+ struct gbe_timing_info timing;
+ int valid;
+};
+
+#ifdef CONFIG_SGI_IP32
+#define GBE_BASE 0x16000000 /* SGI O2 */
+#endif
+
+#ifdef CONFIG_X86_VISWS
+#define GBE_BASE 0xd0000000 /* SGI Visual Workstation */
+#endif
+
+/* macro for fastest write-through access to the framebuffer */
+#ifdef CONFIG_MIPS
+#ifdef CONFIG_CPU_R10000
+#define pgprot_fb(_prot) (((_prot) & (~_CACHE_MASK)) | _CACHE_UNCACHED_ACCELERATED)
+#else
+#define pgprot_fb(_prot) (((_prot) & (~_CACHE_MASK)) | _CACHE_CACHABLE_NO_WA)
+#endif
+#endif
+#ifdef CONFIG_X86
+#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
+#endif
+
+/*
+ * RAM we reserve for the frame buffer. This defines the maximum screen
+ * size
+ */
+#if CONFIG_FB_GBE_MEM > 8
+#error GBE Framebuffer cannot use more than 8MB of memory
+#endif
+
+#define TILE_SHIFT 16
+#define TILE_SIZE (1 << TILE_SHIFT)
+#define TILE_MASK (TILE_SIZE - 1)
+
+static unsigned int gbe_mem_size = CONFIG_FB_GBE_MEM * 1024*1024;
+static void *gbe_mem;
+static dma_addr_t gbe_dma_addr;
+unsigned long gbe_mem_phys;
+
+static struct {
+ uint16_t *cpu;
+ dma_addr_t dma;
+} gbe_tiles;
+
+static int gbe_revision;
+
+static struct fb_info fb_info;
+static int ypan, ywrap;
+
+static uint32_t pseudo_palette[256];
+
+static char *mode_option __initdata = NULL;
+
+/* default CRT mode */
+static struct fb_var_screeninfo default_var_CRT __initdata = {
+ /* 640x480, 60 Hz, Non-Interlaced (25.175 MHz dotclock) */
+ .xres = 640,
+ .yres = 480,
+ .xres_virtual = 640,
+ .yres_virtual = 480,
+ .xoffset = 0,
+ .yoffset = 0,
+ .bits_per_pixel = 8,
+ .grayscale = 0,
+ .red = { 0, 8, 0 },
+ .green = { 0, 8, 0 },
+ .blue = { 0, 8, 0 },
+ .transp = { 0, 0, 0 },
+ .nonstd = 0,
+ .activate = 0,
+ .height = -1,
+ .width = -1,
+ .accel_flags = 0,
+ .pixclock = 39722, /* picoseconds */
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+
+/* default LCD mode */
+static struct fb_var_screeninfo default_var_LCD __initdata = {
+ /* 1600x1024, 8 bpp */
+ .xres = 1600,
+ .yres = 1024,
+ .xres_virtual = 1600,
+ .yres_virtual = 1024,
+ .xoffset = 0,
+ .yoffset = 0,
+ .bits_per_pixel = 8,
+ .grayscale = 0,
+ .red = { 0, 8, 0 },
+ .green = { 0, 8, 0 },
+ .blue = { 0, 8, 0 },
+ .transp = { 0, 0, 0 },
+ .nonstd = 0,
+ .activate = 0,
+ .height = -1,
+ .width = -1,
+ .accel_flags = 0,
+ .pixclock = 9353,
+ .left_margin = 20,
+ .right_margin = 30,
+ .upper_margin = 37,
+ .lower_margin = 3,
+ .hsync_len = 20,
+ .vsync_len = 3,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED
+};
+
+/* default modedb mode */
+/* 640x480, 60 Hz, Non-Interlaced (25.175 MHz dotclock) */
+static struct fb_videomode default_mode_CRT __initdata = {
+ .refresh = 60,
+ .xres = 640,
+ .yres = 480,
+ .pixclock = 39722,
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+/* 1600x1024 SGI flatpanel 1600sw */
+static struct fb_videomode default_mode_LCD __initdata = {
+ /* 1600x1024, 8 bpp */
+ .xres = 1600,
+ .yres = 1024,
+ .pixclock = 9353,
+ .left_margin = 20,
+ .right_margin = 30,
+ .upper_margin = 37,
+ .lower_margin = 3,
+ .hsync_len = 20,
+ .vsync_len = 3,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+
+struct fb_videomode *default_mode = &default_mode_CRT;
+struct fb_var_screeninfo *default_var = &default_var_CRT;
+
+static int flat_panel_enabled = 0;
+
+static struct gbefb_par par_current;
+
+static void gbe_reset(void)
+{
+ /* Turn on dotclock PLL */
+ gbe->ctrlstat = 0x300aa000;
+}
+
+
+/*
+ * Function: gbe_turn_off
+ * Parameters: (None)
+ * Description: This should turn off the monitor and gbe. This is used
+ * when switching between the serial console and the graphics
+ * console.
+ */
+
+void gbe_turn_off(void)
+{
+ int i;
+ unsigned int val, x, y, vpixen_off;
+
+ /* check if pixel counter is on */
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) == 1)
+ return;
+
+ /* turn off DMA */
+ val = gbe->ovr_control;
+ SET_GBE_FIELD(OVR_CONTROL, OVR_DMA_ENABLE, val, 0);
+ gbe->ovr_control = val;
+ udelay(1000);
+ val = gbe->frm_control;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 0);
+ gbe->frm_control = val;
+ udelay(1000);
+ val = gbe->did_control;
+ SET_GBE_FIELD(DID_CONTROL, DID_DMA_ENABLE, val, 0);
+ gbe->did_control = val;
+ udelay(1000);
+
+ /* We have to wait through two vertical retrace periods before
+ * the pixel DMA is turned off for sure. */
+ for (i = 0; i < 10000; i++) {
+ val = gbe->frm_inhwctrl;
+ if (GET_GBE_FIELD(FRM_INHWCTRL, FRM_DMA_ENABLE, val)) {
+ udelay(10);
+ } else {
+ val = gbe->ovr_inhwctrl;
+ if (GET_GBE_FIELD(OVR_INHWCTRL, OVR_DMA_ENABLE, val)) {
+ udelay(10);
+ } else {
+ val = gbe->did_inhwctrl;
+ if (GET_GBE_FIELD(DID_INHWCTRL, DID_DMA_ENABLE, val)) {
+ udelay(10);
+ } else
+ break;
+ }
+ }
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off DMA timed out\n");
+
+ /* wait for vpixen_off */
+ val = gbe->vt_vpixen;
+ vpixen_off = GET_GBE_FIELD(VT_VPIXEN, VPIXEN_OFF, val);
+
+ for (i = 0; i < 100000; i++) {
+ val = gbe->vt_xy;
+ x = GET_GBE_FIELD(VT_XY, X, val);
+ y = GET_GBE_FIELD(VT_XY, Y, val);
+ if (y < vpixen_off)
+ break;
+ udelay(1);
+ }
+ if (i == 100000)
+ printk(KERN_ERR
+ "gbefb: wait for vpixen_off timed out\n");
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ x = GET_GBE_FIELD(VT_XY, X, val);
+ y = GET_GBE_FIELD(VT_XY, Y, val);
+ if (y > vpixen_off)
+ break;
+ udelay(1);
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: wait for vpixen_off timed out\n");
+
+ /* turn off pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XY, FREEZE, val, 1);
+ gbe->vt_xy = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off pixel clock timed out\n");
+
+ /* turn off dot clock */
+ val = gbe->dotclock;
+ SET_GBE_FIELD(DOTCLK, RUN, val, 0);
+ gbe->dotclock = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->dotclock;
+ if (GET_GBE_FIELD(DOTCLK, RUN, val))
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off dotclock timed out\n");
+
+ /* reset the frame DMA FIFO */
+ val = gbe->frm_size_tile;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_FIFO_RESET, val, 1);
+ gbe->frm_size_tile = val;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_FIFO_RESET, val, 0);
+ gbe->frm_size_tile = val;
+}
+
+static void gbe_turn_on(void)
+{
+ unsigned int val, i;
+
+ /*
+	 * Check if the pixel counter is off; for an unknown reason this
+	 * code hangs Visual Workstations
+ */
+ if (gbe_revision < 2) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) == 0)
+ return;
+ }
+
+ /* turn on dot clock */
+ val = gbe->dotclock;
+ SET_GBE_FIELD(DOTCLK, RUN, val, 1);
+ gbe->dotclock = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->dotclock;
+ if (GET_GBE_FIELD(DOTCLK, RUN, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on dotclock timed out\n");
+
+ /* turn on pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XY, FREEZE, val, 0);
+ gbe->vt_xy = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val))
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on pixel clock timed out\n");
+
+ /* turn on DMA */
+ val = gbe->frm_control;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 1);
+ gbe->frm_control = val;
+ udelay(1000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->frm_inhwctrl;
+ if (GET_GBE_FIELD(FRM_INHWCTRL, FRM_DMA_ENABLE, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on DMA timed out\n");
+}
+
+/*
+ * Blank the display.
+ */
+static int gbefb_blank(int blank, struct fb_info *info)
+{
+ /* 0 unblank, 1 blank, 2 no vsync, 3 no hsync, 4 off */
+ switch (blank) {
+ case 0: /* unblank */
+ gbe_turn_on();
+ break;
+
+ case 1: /* blank */
+ gbe_turn_off();
+ break;
+
+ default:
+ /* Nothing */
+ break;
+ }
+ return 0;
+}
+
+/*
+ * Setup flatpanel related registers.
+ */
+static void gbefb_setup_flatpanel(struct gbe_timing_info *timing)
+{
+ int fp_wid, fp_hgt, fp_vbs, fp_vbe;
+ u32 outputVal = 0;
+
+ SET_GBE_FIELD(VT_FLAGS, HDRV_INVERT, outputVal,
+ (timing->flags & FB_SYNC_HOR_HIGH_ACT) ? 0 : 1);
+ SET_GBE_FIELD(VT_FLAGS, VDRV_INVERT, outputVal,
+ (timing->flags & FB_SYNC_VERT_HIGH_ACT) ? 0 : 1);
+ gbe->vt_flags = outputVal;
+
+ /* Turn on the flat panel */
+ fp_wid = 1600;
+ fp_hgt = 1024;
+ fp_vbs = 0;
+ fp_vbe = 1600;
+ timing->pll_m = 4;
+ timing->pll_n = 1;
+ timing->pll_p = 0;
+
+ outputVal = 0;
+ SET_GBE_FIELD(FP_DE, ON, outputVal, fp_vbs);
+ SET_GBE_FIELD(FP_DE, OFF, outputVal, fp_vbe);
+ gbe->fp_de = outputVal;
+ outputVal = 0;
+ SET_GBE_FIELD(FP_HDRV, OFF, outputVal, fp_wid);
+ gbe->fp_hdrv = outputVal;
+ outputVal = 0;
+ SET_GBE_FIELD(FP_VDRV, ON, outputVal, 1);
+ SET_GBE_FIELD(FP_VDRV, OFF, outputVal, fp_hgt + 1);
+ gbe->fp_vdrv = outputVal;
+}
+
+struct gbe_pll_info {
+ int clock_rate;
+ int fvco_min;
+ int fvco_max;
+};
+
+static struct gbe_pll_info gbe_pll_table[2] = {
+ { 20, 80, 220 },
+ { 27, 80, 220 },
+};
+
+static int compute_gbe_timing(struct fb_var_screeninfo *var,
+ struct gbe_timing_info *timing)
+{
+ int pll_m, pll_n, pll_p, error, best_m, best_n, best_p, best_error;
+ int pixclock;
+ struct gbe_pll_info *gbe_pll;
+
+ if (gbe_revision < 2)
+ gbe_pll = &gbe_pll_table[0];
+ else
+ gbe_pll = &gbe_pll_table[1];
+
+ /* Determine valid resolution and timing
+	 * The GBE crystal runs at 20MHz or 27MHz.
+	 * pll_m, pll_n, pll_p define the following frequencies:
+	 * fvco = pll_m * crystal / pll_n
+	 * fout = fvco / (2**pll_p) */
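+	/*
+	 * Worked example with the 27MHz crystal: pll_m = 14, pll_n = 2,
+	 * pll_p = 1 gives fvco = 14 * 27 / 2 = 189MHz (inside the 80-220MHz
+	 * table limits) and fout = 189MHz / 2 = 94.5MHz, i.e. a pixclock of
+	 * (1000000 / 27) * (2 << 1) / 14 ~= 10582 picoseconds, exactly as
+	 * computed in the loop below.
+	 */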
+ best_error = 1000000000;
+ best_n = best_m = best_p = 0;
+ for (pll_p = 0; pll_p < 4; pll_p++)
+ for (pll_m = 1; pll_m < 256; pll_m++)
+ for (pll_n = 1; pll_n < 64; pll_n++) {
+ pixclock = (1000000 / gbe_pll->clock_rate) *
+ (pll_n << pll_p) / pll_m;
+
+ error = var->pixclock - pixclock;
+
+ if (error < 0)
+ error = -error;
+
+ if (error < best_error &&
+ pll_m / pll_n >
+ gbe_pll->fvco_min / gbe_pll->clock_rate &&
+ pll_m / pll_n <
+ gbe_pll->fvco_max / gbe_pll->clock_rate) {
+ best_error = error;
+ best_m = pll_m;
+ best_n = pll_n;
+ best_p = pll_p;
+ }
+ }
+
+ if (!best_n || !best_m)
+		return -EINVAL;	/* Resolution too high */
+
+ pixclock = (1000000 / gbe_pll->clock_rate) *
+ (best_n << best_p) / best_m;
+
+ /* set video timing information */
+ if (timing) {
+ timing->width = var->xres;
+ timing->height = var->yres;
+ timing->pll_m = best_m;
+ timing->pll_n = best_n;
+ timing->pll_p = best_p;
+ timing->cfreq = gbe_pll->clock_rate * 1000 * timing->pll_m /
+ (timing->pll_n << timing->pll_p);
+ timing->htotal = var->left_margin + var->xres +
+ var->right_margin + var->hsync_len;
+ timing->vtotal = var->upper_margin + var->yres +
+ var->lower_margin + var->vsync_len;
+ timing->fields_sec = 1000 * timing->cfreq / timing->htotal *
+ 1000 / timing->vtotal;
+ timing->hblank_start = var->xres;
+ timing->vblank_start = var->yres;
+ timing->hblank_end = timing->htotal;
+ timing->hsync_start = var->xres + var->right_margin + 1;
+ timing->hsync_end = timing->hsync_start + var->hsync_len;
+ timing->vblank_end = timing->vtotal;
+ timing->vsync_start = var->yres + var->lower_margin + 1;
+ timing->vsync_end = timing->vsync_start + var->vsync_len;
+ }
+
+ return pixclock;
+}
+
+static void gbe_set_timing_info(struct gbe_timing_info *timing)
+{
+ int temp;
+ unsigned int val;
+
+ /* setup dot clock PLL */
+ val = 0;
+ SET_GBE_FIELD(DOTCLK, M, val, timing->pll_m - 1);
+ SET_GBE_FIELD(DOTCLK, N, val, timing->pll_n - 1);
+ SET_GBE_FIELD(DOTCLK, P, val, timing->pll_p);
+ SET_GBE_FIELD(DOTCLK, RUN, val, 0); /* do not start yet */
+ gbe->dotclock = val;
+ udelay(10000);
+
+ /* setup pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XYMAX, MAXX, val, timing->htotal);
+ SET_GBE_FIELD(VT_XYMAX, MAXY, val, timing->vtotal);
+ gbe->vt_xymax = val;
+
+ /* setup video timing signals */
+ val = 0;
+ SET_GBE_FIELD(VT_VSYNC, VSYNC_ON, val, timing->vsync_start);
+ SET_GBE_FIELD(VT_VSYNC, VSYNC_OFF, val, timing->vsync_end);
+ gbe->vt_vsync = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HSYNC, HSYNC_ON, val, timing->hsync_start);
+ SET_GBE_FIELD(VT_HSYNC, HSYNC_OFF, val, timing->hsync_end);
+ gbe->vt_hsync = val;
+ val = 0;
+ SET_GBE_FIELD(VT_VBLANK, VBLANK_ON, val, timing->vblank_start);
+ SET_GBE_FIELD(VT_VBLANK, VBLANK_OFF, val, timing->vblank_end);
+ gbe->vt_vblank = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HBLANK, HBLANK_ON, val,
+ timing->hblank_start - 5);
+ SET_GBE_FIELD(VT_HBLANK, HBLANK_OFF, val,
+ timing->hblank_end - 3);
+ gbe->vt_hblank = val;
+
+ /* setup internal timing signals */
+ val = 0;
+ SET_GBE_FIELD(VT_VCMAP, VCMAP_ON, val, timing->vblank_start);
+ SET_GBE_FIELD(VT_VCMAP, VCMAP_OFF, val, timing->vblank_end);
+ gbe->vt_vcmap = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HCMAP, HCMAP_ON, val, timing->hblank_start);
+ SET_GBE_FIELD(VT_HCMAP, HCMAP_OFF, val, timing->hblank_end);
+ gbe->vt_hcmap = val;
+
+ val = 0;
+ temp = timing->vblank_start - timing->vblank_end - 1;
+ if (temp > 0)
+ temp = -temp;
+
+ if (flat_panel_enabled)
+ gbefb_setup_flatpanel(timing);
+
+ SET_GBE_FIELD(DID_START_XY, DID_STARTY, val, (u32) temp);
+ if (timing->hblank_end >= 20)
+ SET_GBE_FIELD(DID_START_XY, DID_STARTX, val,
+ timing->hblank_end - 20);
+ else
+ SET_GBE_FIELD(DID_START_XY, DID_STARTX, val,
+ timing->htotal - (20 - timing->hblank_end));
+ gbe->did_start_xy = val;
+
+ val = 0;
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTY, val, (u32) (temp + 1));
+ if (timing->hblank_end >= GBE_CRS_MAGIC)
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTX, val,
+ timing->hblank_end - GBE_CRS_MAGIC);
+ else
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTX, val,
+ timing->htotal - (GBE_CRS_MAGIC -
+ timing->hblank_end));
+ gbe->crs_start_xy = val;
+
+ val = 0;
+ SET_GBE_FIELD(VC_START_XY, VC_STARTY, val, (u32) temp);
+ SET_GBE_FIELD(VC_START_XY, VC_STARTX, val, timing->hblank_end - 4);
+ gbe->vc_start_xy = val;
+
+ val = 0;
+ temp = timing->hblank_end - GBE_PIXEN_MAGIC_ON;
+ if (temp < 0)
+ temp += timing->htotal; /* allow blank to wrap around */
+
+ SET_GBE_FIELD(VT_HPIXEN, HPIXEN_ON, val, temp);
+ SET_GBE_FIELD(VT_HPIXEN, HPIXEN_OFF, val,
+ ((temp + timing->width -
+ GBE_PIXEN_MAGIC_OFF) % timing->htotal));
+ gbe->vt_hpixen = val;
+
+ val = 0;
+ SET_GBE_FIELD(VT_VPIXEN, VPIXEN_ON, val, timing->vblank_end);
+ SET_GBE_FIELD(VT_VPIXEN, VPIXEN_OFF, val, timing->vblank_start);
+ gbe->vt_vpixen = val;
+
+ /* turn off sync on green */
+ val = 0;
+ SET_GBE_FIELD(VT_FLAGS, SYNC_LOW, val, 1);
+ gbe->vt_flags = val;
+}
+
+/*
+ * Set the hardware according to 'par'.
+ */
+
+static int gbefb_set_par(struct fb_info *info)
+{
+ int i;
+ unsigned int val;
+ int wholeTilesX, partTilesX, maxPixelsPerTileX;
+ int height_pix;
+ int xpmax, ypmax; /* Monitor resolution */
+ int bytesPerPixel; /* Bytes per pixel */
+ struct gbefb_par *par = (struct gbefb_par *) info->par;
+
+ compute_gbe_timing(&info->var, &par->timing);
+
+ bytesPerPixel = info->var.bits_per_pixel / 8;
+ info->fix.line_length = info->var.xres_virtual * bytesPerPixel;
+ xpmax = par->timing.width;
+ ypmax = par->timing.height;
+
+ /* turn off GBE */
+ gbe_turn_off();
+
+ /* set timing info */
+ gbe_set_timing_info(&par->timing);
+
+ /* initialize DIDs */
+ val = 0;
+ switch (bytesPerPixel) {
+ case 1:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_I8);
+ break;
+ case 2:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_ARGB5);
+ break;
+ case 4:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_RGB8);
+ break;
+ }
+ SET_GBE_FIELD(WID, BUF, val, GBE_BMODE_BOTH);
+
+ for (i = 0; i < 32; i++)
+ gbe->mode_regs[i] = val;
+
+ /* Initialize interrupts */
+ gbe->vt_intr01 = 0xffffffff;
+ gbe->vt_intr23 = 0xffffffff;
+
+ /* HACK:
+ The GBE hardware uses a tiled memory to screen mapping. Tiles are
+ blocks of 512x128, 256x128 or 128x128 pixels, respectively for 8bit,
+ 16bit and 32 bit modes (64 kB). They cover the screen with partial
+ tiles on the right and/or bottom of the screen if needed.
+	   For example, in 640x480 8 bit mode the mapping is:
+
+ <-------- 640 ----->
+ <---- 512 ----><128|384 offscreen>
+ ^ ^
+ | 128 [tile 0] [tile 1]
+ | v
+ ^
+ 4 128 [tile 2] [tile 3]
+ 8 v
+ 0 ^
+ 128 [tile 4] [tile 5]
+ | v
+ | ^
+ v 96 [tile 6] [tile 7]
+ 32 offscreen
+
+ Tiles have the advantage that they can be allocated individually in
+ memory. However, this mapping is not linear at all, which is not
+	   really convenient. In order to support linear addressing, the GBE
+ DMA hardware is fooled into thinking the screen is only one tile
+	   large but has a greater height, so that the DMA transfer covers
+ the same region.
+ Tiles are still allocated as independent chunks of 64KB of
+	   contiguous physical memory and remapped so that the kernel sees the
+	   framebuffer as one contiguous virtual region. The GBE tile table is
+ set up so that each tile references one of these 64k blocks:
+
+ GBE -> tile list framebuffer TLB <------------ CPU
+ [ tile 0 ] -> [ 64KB ] <- [ 16x 4KB page entries ] ^
+ ... ... ... linear virtual FB
+ [ tile n ] -> [ 64KB ] <- [ 16x 4KB page entries ] v
+
+
+ The GBE hardware is then told that the buffer is 512*tweaked_height,
+ with tweaked_height = real_width*real_height/pixels_per_tile.
+	   Thus the GBE hardware will scan the first tile, filling the first 64k
+ covered region of the screen, and then will proceed to the next
+ tile, until the whole screen is covered.
+
+ Here is what would happen at 640x480 8bit:
+
+ normal tiling linear
+ ^ 11111111111111112222 11111111111111111111 ^
+ 128 11111111111111112222 11111111111111111111 102 lines
+ 11111111111111112222 11111111111111111111 v
+ V 11111111111111112222 11111111222222222222
+ 33333333333333334444 22222222222222222222
+ 33333333333333334444 22222222222222222222
+ < 512 > < 256 > 102*640+256 = 64k
+
+ NOTE: The only mode for which this is not working is 800x600 8bit,
+	   as 800*600/512 = 937.5 which is not an integer and thus causes
+ flickering.
+ I guess this is not so important as one can use 640x480 8bit or
+ 800x600 16bit anyway.
+ */
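+
+	/*
+	 * Worked example at 640x480, 8bpp: one tile line holds 512 pixels
+	 * (maxPixelsPerTileX below), so tweaked_height =
+	 * 640 * 480 / 512 = 600, and 600 lines of 512 pixels cover the same
+	 * 307200 pixels as the real 640x480 screen.
+	 */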
+
+ /* Tell gbe about the tiles table location */
+ /* tile_ptr -> [ tile 1 ] -> FB mem */
+ /* [ tile 2 ] -> FB mem */
+ /* ... */
+ val = 0;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_TILE_PTR, val, gbe_tiles.dma >> 9);
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 0); /* do not start */
+ SET_GBE_FIELD(FRM_CONTROL, FRM_LINEAR, val, 0);
+ gbe->frm_control = val;
+
+ maxPixelsPerTileX = 512 / bytesPerPixel;
+ wholeTilesX = 1;
+ partTilesX = 0;
+
+ /* Initialize the framebuffer */
+ val = 0;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_WIDTH_TILE, val, wholeTilesX);
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_RHS, val, partTilesX);
+
+ switch (bytesPerPixel) {
+ case 1:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_8);
+ break;
+ case 2:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_16);
+ break;
+ case 4:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_32);
+ break;
+ }
+ gbe->frm_size_tile = val;
+
+ /* compute tweaked height */
+ height_pix = xpmax * ypmax / maxPixelsPerTileX;
+
+ val = 0;
+ SET_GBE_FIELD(FRM_SIZE_PIXEL, FB_HEIGHT_PIX, val, height_pix);
+ gbe->frm_size_pixel = val;
+
+ /* turn off DID and overlay DMA */
+ gbe->did_control = 0;
+ gbe->ovr_width_tile = 0;
+
+ /* Turn off mouse cursor */
+ gbe->crs_ctl = 0;
+
+ /* Turn on GBE */
+ gbe_turn_on();
+
+ /* Initialize the gamma map */
+ udelay(10);
+ for (i = 0; i < 256; i++)
+ gbe->gmap[i] = (i << 24) | (i << 16) | (i << 8);
+
+ /* Initialize the color map */
+ for (i = 0; i < 256; i++) {
+ int j;
+
+ for (j = 0; j < 1000 && gbe->cm_fifo >= 63; j++)
+ udelay(10);
+ if (j == 1000)
+ printk(KERN_ERR "gbefb: cmap FIFO timeout\n");
+
+ gbe->cmap[i] = (i << 8) | (i << 16) | (i << 24);
+ }
+
+ return 0;
+}
+
+static void gbefb_encode_fix(struct fb_fix_screeninfo *fix,
+ struct fb_var_screeninfo *var)
+{
+ memset(fix, 0, sizeof(struct fb_fix_screeninfo));
+ strcpy(fix->id, "SGI GBE");
+ fix->smem_start = (unsigned long) gbe_mem;
+ fix->smem_len = gbe_mem_size;
+ fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->type_aux = 0;
+ fix->accel = FB_ACCEL_NONE;
+ switch (var->bits_per_pixel) {
+ case 8:
+ fix->visual = FB_VISUAL_PSEUDOCOLOR;
+ break;
+ default:
+ fix->visual = FB_VISUAL_TRUECOLOR;
+ break;
+ }
+ fix->ywrapstep = 0;
+ fix->xpanstep = 0;
+ fix->ypanstep = 0;
+ fix->line_length = var->xres_virtual * var->bits_per_pixel / 8;
+ fix->mmio_start = GBE_BASE;
+ fix->mmio_len = sizeof(struct sgi_gbe);
+}
+
+/*
+ * Set a single color register. The values supplied are already
+ * rounded down to the hardware's capabilities (according to the
+ * entries in the var structure). Return != 0 for invalid regno.
+ */
+
+static int gbefb_setcolreg(unsigned regno, unsigned red, unsigned green,
+ unsigned blue, unsigned transp,
+ struct fb_info *info)
+{
+ int i;
+
+ if (regno > 255)
+ return 1;
+ red >>= 8;
+ green >>= 8;
+ blue >>= 8;
+
+ switch (info->var.bits_per_pixel) {
+ case 8:
+ /* wait for the color map FIFO to have a free entry */
+ for (i = 0; i < 1000 && gbe->cm_fifo >= 63; i++)
+ udelay(10);
+ if (i == 1000) {
+ printk(KERN_ERR "gbefb: cmap FIFO timeout\n");
+ return 1;
+ }
+ gbe->cmap[regno] = (red << 24) | (green << 16) | (blue << 8);
+ break;
+ case 15:
+ case 16:
+ red >>= 3;
+ green >>= 3;
+ blue >>= 3;
+ pseudo_palette[regno] =
+ (red << info->var.red.offset) |
+ (green << info->var.green.offset) |
+ (blue << info->var.blue.offset);
+ break;
+ case 32:
+ pseudo_palette[regno] =
+ (red << info->var.red.offset) |
+ (green << info->var.green.offset) |
+ (blue << info->var.blue.offset);
+ break;
+ }
+
+ return 0;
+}
+
+/*
+ * Check video mode validity, possibly adjusting var to the closest match.
+ */
+static int gbefb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+	unsigned int line_length;
+	int pixclock;
+	struct gbe_timing_info timing;
+
+ /* Limit bpp to 8, 16, and 32 */
+ if (var->bits_per_pixel <= 8)
+ var->bits_per_pixel = 8;
+ else if (var->bits_per_pixel <= 16)
+ var->bits_per_pixel = 16;
+ else if (var->bits_per_pixel <= 32)
+ var->bits_per_pixel = 32;
+ else
+ return -EINVAL;
+
+ /* Check the mode can be mapped linearly with the tile table trick. */
+ /* This requires width x height x bytes/pixel be a multiple of 512 */
+ if ((var->xres * var->yres * var->bits_per_pixel) & 4095)
+ return -EINVAL;
+
+ var->grayscale = 0; /* No grayscale for now */
+
+	/* compute_gbe_timing() returns a negative value on failure; keep the
+	 * result in a signed variable, since var->pixclock is unsigned */
+	pixclock = compute_gbe_timing(var, &timing);
+	if (pixclock < 0)
+		return -EINVAL;
+	var->pixclock = pixclock;
+
+ /* Adjust virtual resolution, if necessary */
+ if (var->xres > var->xres_virtual || (!ywrap && !ypan))
+ var->xres_virtual = var->xres;
+ if (var->yres > var->yres_virtual || (!ywrap && !ypan))
+ var->yres_virtual = var->yres;
+
+ if (var->vmode & FB_VMODE_CONUPDATE) {
+ var->vmode |= FB_VMODE_YWRAP;
+ var->xoffset = info->var.xoffset;
+ var->yoffset = info->var.yoffset;
+ }
+
+ /* Memory limit */
+ line_length = var->xres_virtual * var->bits_per_pixel / 8;
+ if (line_length * var->yres_virtual > gbe_mem_size)
+ return -ENOMEM; /* Virtual resolution too high */
+
+ switch (var->bits_per_pixel) {
+ case 8:
+ var->red.offset = 0;
+ var->red.length = 8;
+ var->green.offset = 0;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case 16: /* RGB 1555 */
+ var->red.offset = 10;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 5;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case 32: /* RGB 8888 */
+ var->red.offset = 24;
+ var->red.length = 8;
+ var->green.offset = 16;
+ var->green.length = 8;
+ var->blue.offset = 8;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 8;
+ break;
+ }
+ var->red.msb_right = 0;
+ var->green.msb_right = 0;
+ var->blue.msb_right = 0;
+ var->transp.msb_right = 0;
+
+ var->left_margin = timing.htotal - timing.hsync_end;
+ var->right_margin = timing.hsync_start - timing.width;
+ var->upper_margin = timing.vtotal - timing.vsync_end;
+ var->lower_margin = timing.vsync_start - timing.height;
+ var->hsync_len = timing.hsync_end - timing.hsync_start;
+ var->vsync_len = timing.vsync_end - timing.vsync_start;
+
+ return 0;
+}
+
+static int gbefb_mmap(struct fb_info *info, struct file *file,
+ struct vm_area_struct *vma)
+{
+ unsigned long size = vma->vm_end - vma->vm_start;
+ unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+ unsigned long addr;
+ unsigned long phys_addr, phys_size;
+ u16 *tile;
+
+ /* check range */
+ if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT))
+ return -EINVAL;
+ if (offset + size > gbe_mem_size)
+ return -EINVAL;
+
+ /* remap using the fastest write-through mode on architecture */
+ /* try not polluting the cache when possible */
+ pgprot_val(vma->vm_page_prot) =
+ pgprot_fb(pgprot_val(vma->vm_page_prot));
+
+ vma->vm_flags |= VM_IO | VM_RESERVED;
+ vma->vm_file = file;
+
+ /* look for the starting tile */
+ tile = &gbe_tiles.cpu[offset >> TILE_SHIFT];
+ addr = vma->vm_start;
+ offset &= TILE_MASK;
+
+ /* remap each tile separately */
+ do {
+ phys_addr = (((unsigned long) (*tile)) << TILE_SHIFT) + offset;
+ if ((offset + size) < TILE_SIZE)
+ phys_size = size;
+ else
+ phys_size = TILE_SIZE - offset;
+
+ if (remap_page_range
+ (vma, addr, phys_addr, phys_size, vma->vm_page_prot))
+ return -EAGAIN;
+
+ offset = 0;
+ size -= phys_size;
+ addr += phys_size;
+ tile++;
+ } while (size);
+
+ return 0;
+}
+
+static struct fb_ops gbefb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = gbefb_check_var,
+ .fb_set_par = gbefb_set_par,
+ .fb_setcolreg = gbefb_setcolreg,
+ .fb_mmap = gbefb_mmap,
+ .fb_blank = gbefb_blank,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_cursor = soft_cursor,
+};
+
+/*
+ * Initialization
+ */
+
+int __init gbefb_setup(char *options)
+{
+ char *this_opt;
+
+ if (!options || !*options)
+ return 0;
+
+ while ((this_opt = strsep(&options, ",")) != NULL) {
+ if (!strncmp(this_opt, "monitor:", 8)) {
+ if (!strncmp(this_opt + 8, "crt", 3)) {
+ flat_panel_enabled = 0;
+ default_var = &default_var_CRT;
+ default_mode = &default_mode_CRT;
+ } else if (!strncmp(this_opt + 8, "1600sw", 6) ||
+ !strncmp(this_opt + 8, "lcd", 3)) {
+ flat_panel_enabled = 1;
+ default_var = &default_var_LCD;
+ default_mode = &default_mode_LCD;
+ }
+ } else if (!strncmp(this_opt, "mem:", 4)) {
+ gbe_mem_size = memparse(this_opt + 4, &this_opt);
+ if (gbe_mem_size > CONFIG_FB_GBE_MEM * 1024 * 1024)
+ gbe_mem_size = CONFIG_FB_GBE_MEM * 1024 * 1024;
+ if (gbe_mem_size < TILE_SIZE)
+ gbe_mem_size = TILE_SIZE;
+ } else
+ mode_option = this_opt;
+ }
+ return 0;
+}
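+
+/*
+ * Options are comma separated, e.g. (illustrative only; the exact prefix
+ * depends on how the platform routes fb options to gbefb_setup()):
+ *
+ *	video=gbefb:mem:4M,monitor:lcd
+ *
+ * "mem:" is clamped below to [TILE_SIZE, CONFIG_FB_GBE_MEM MB] and
+ * "monitor:" selects the CRT or 1600SW flat panel defaults; anything
+ * else is taken as a video mode string.
+ */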
+
+int __init gbefb_init(void)
+{
+ int i, ret = 0;
+
+ if (!request_mem_region(GBE_BASE, sizeof(struct sgi_gbe), "GBE")) {
+ printk(KERN_ERR "gbefb: couldn't reserve mmio region\n");
+ return -EBUSY;
+ }
+
+ gbe = (struct sgi_gbe *) ioremap(GBE_BASE, sizeof(struct sgi_gbe));
+ if (!gbe) {
+ printk(KERN_ERR "gbefb: couldn't map mmio region\n");
+ ret = -ENXIO;
+ goto out_release_mem_region;
+ }
+ gbe_revision = gbe->ctrlstat & 15;
+
+ gbe_tiles.cpu =
+ dma_alloc_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ &gbe_tiles.dma, GFP_KERNEL);
+ if (!gbe_tiles.cpu) {
+ printk(KERN_ERR "gbefb: couldn't allocate tiles table\n");
+ ret = -ENOMEM;
+ goto out_unmap;
+ }
+
+
+ if (gbe_mem_phys) {
+ /* memory was allocated at boot time */
+ gbe_mem = ioremap_nocache(gbe_mem_phys, gbe_mem_size);
+ gbe_dma_addr = 0;
+ } else {
+		/* try to allocate memory with the classical allocator;
+		 * this has a high chance of failing on low-memory machines */
+ gbe_mem = dma_alloc_coherent(NULL, gbe_mem_size, &gbe_dma_addr,
+ GFP_KERNEL);
+ gbe_mem_phys = (unsigned long) gbe_dma_addr;
+ }
+
+#ifdef CONFIG_X86
+ mtrr_add(gbe_mem_phys, gbe_mem_size, MTRR_TYPE_WRCOMB, 1);
+#endif
+
+ if (!gbe_mem) {
+ printk(KERN_ERR "gbefb: couldn't map framebuffer\n");
+ ret = -ENXIO;
+ goto out_tiles_free;
+ }
+
+ /* map framebuffer memory into tiles table */
+ for (i = 0; i < (gbe_mem_size >> TILE_SHIFT); i++)
+ gbe_tiles.cpu[i] = (gbe_mem_phys >> TILE_SHIFT) + i;
+
+ fb_info.currcon = -1;
+ fb_info.fbops = &gbefb_ops;
+ fb_info.pseudo_palette = pseudo_palette;
+ fb_info.flags = FBINFO_FLAG_DEFAULT;
+ fb_info.screen_base = gbe_mem;
+ fb_alloc_cmap(&fb_info.cmap, 256, 0);
+
+ /* reset GBE */
+ gbe_reset();
+
+ /* turn on default video mode */
+ if (fb_find_mode(&par_current.var, &fb_info, mode_option, NULL, 0,
+ default_mode, 8) == 0)
+ par_current.var = *default_var;
+ fb_info.var = par_current.var;
+ gbefb_check_var(&par_current.var, &fb_info);
+ gbefb_encode_fix(&fb_info.fix, &fb_info.var);
+ fb_info.par = &par_current;
+
+ if (register_framebuffer(&fb_info) < 0) {
+ ret = -ENXIO;
+ printk(KERN_ERR "gbefb: couldn't register framebuffer\n");
+ goto out_gbe_unmap;
+ }
+
+ printk(KERN_INFO "fb%d: %s rev %d @ 0x%08x using %dkB memory\n",
+ fb_info.node, fb_info.fix.id, gbe_revision, (unsigned) GBE_BASE,
+ gbe_mem_size >> 10);
+
+ return 0;
+
+out_gbe_unmap:
+ if (gbe_dma_addr)
+ dma_free_coherent(NULL, gbe_mem_size, gbe_mem, gbe_mem_phys);
+ else
+ iounmap(gbe_mem);
+out_tiles_free:
+ dma_free_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ (void *)gbe_tiles.cpu, gbe_tiles.dma);
+out_unmap:
+ iounmap(gbe);
+out_release_mem_region:
+ release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
+ return ret;
+}
+
+void __exit gbefb_exit(void)
+{
+ unregister_framebuffer(&fb_info);
+ gbe_turn_off();
+ if (gbe_dma_addr)
+ dma_free_coherent(NULL, gbe_mem_size, gbe_mem, gbe_mem_phys);
+ else
+ iounmap(gbe_mem);
+ dma_free_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ (void *)gbe_tiles.cpu, gbe_tiles.dma);
+ release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
+ iounmap(gbe);
+}
+
+#ifdef MODULE
+module_init(gbefb_init);
+module_exit(gbefb_exit);
+#endif
+
+MODULE_LICENSE("GPL");
--- /dev/null
+#ifndef __PXAFB_H__
+#define __PXAFB_H__
+
+/*
+ * linux/drivers/video/pxafb.h
+ * -- Intel PXA250/210 LCD Controller Frame Buffer Device
+ *
+ * Copyright (C) 1999 Eric A. Thomas.
+ * Copyright (C) 2004 Jean-Frederic Clere.
+ * Copyright (C) 2004 Ian Campbell.
+ * Copyright (C) 2004 Jeff Lackey.
+ * Based on sa1100fb.c Copyright (C) 1999 Eric A. Thomas
+ * which in turn is
+ * Based on acornfb.c Copyright (C) Russell King.
+ *
+ * 2001-08-03: Cliff Brake <cbrake@acclent.com>
+ * - ported SA1100 code to PXA
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive
+ * for more details.
+ */
+
+/* Shadows for LCD controller registers */
+struct pxafb_lcd_reg {
+ unsigned int lccr0;
+ unsigned int lccr1;
+ unsigned int lccr2;
+ unsigned int lccr3;
+};
+
+/* PXA LCD DMA descriptor */
+struct pxafb_dma_descriptor {
+	unsigned int fdadr;	/* frame descriptor address: next descriptor */
+	unsigned int fsadr;	/* frame source address: start of frame data */
+	unsigned int fidr;	/* frame ID */
+	unsigned int ldcmd;	/* LCD command: transfer length and flags */
+};
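+
+/*
+ * The controller fetches these descriptors by DMA: fdadr points at the
+ * next descriptor, so the palette and frame descriptors declared below
+ * are chained into a small ring that the hardware walks once per frame.
+ */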
+
+struct pxafb_info {
+ struct fb_info fb;
+ struct device *dev;
+
+ u_int max_bpp;
+ u_int max_xres;
+ u_int max_yres;
+
+ /*
+ * These are the addresses we mapped
+ * the framebuffer memory region to.
+ */
+ /* raw memory addresses */
+ dma_addr_t map_dma; /* physical */
+ u_char * map_cpu; /* virtual */
+ u_int map_size;
+
+ /* addresses of pieces placed in raw buffer */
+ u_char * screen_cpu; /* virtual address of frame buffer */
+ dma_addr_t screen_dma; /* physical address of frame buffer */
+ u16 * palette_cpu; /* virtual address of palette memory */
+ dma_addr_t palette_dma; /* physical address of palette memory */
+ u_int palette_size;
+
+ /* DMA descriptors */
+ struct pxafb_dma_descriptor * dmadesc_fblow_cpu;
+ dma_addr_t dmadesc_fblow_dma;
+ struct pxafb_dma_descriptor * dmadesc_fbhigh_cpu;
+ dma_addr_t dmadesc_fbhigh_dma;
+ struct pxafb_dma_descriptor * dmadesc_palette_cpu;
+ dma_addr_t dmadesc_palette_dma;
+
+ dma_addr_t fdadr0;
+ dma_addr_t fdadr1;
+
+ u_int lccr0;
+ u_int lccr3;
+ u_int cmap_inverse:1,
+ cmap_static:1,
+ unused:30;
+
+ u_int reg_lccr0;
+ u_int reg_lccr1;
+ u_int reg_lccr2;
+ u_int reg_lccr3;
+
+ volatile u_char state;
+ volatile u_char task_state;
+ struct semaphore ctrlr_sem;
+ wait_queue_head_t ctrlr_wait;
+ struct work_struct task;
+
+#ifdef CONFIG_CPU_FREQ
+ struct notifier_block freq_transition;
+ struct notifier_block freq_policy;
+#endif
+};
+
+#define TO_INF(ptr,member) container_of(ptr,struct pxafb_info,member)
+
+/*
+ * These are the actions for set_ctrlr_state
+ */
+#define C_DISABLE (0)
+#define C_ENABLE (1)
+#define C_DISABLE_CLKCHANGE (2)
+#define C_ENABLE_CLKCHANGE (3)
+#define C_REENABLE (4)
+#define C_DISABLE_PM (5)
+#define C_ENABLE_PM (6)
+#define C_STARTUP (7)
+
+#define PXA_NAME "PXA"
+
+/*
+ * Debug macros
+ */
+#if DEBUG
+# define DPRINTK(fmt, args...) printk("%s: " fmt, __FUNCTION__ , ## args)
+#else
+# define DPRINTK(fmt, args...)
+#endif
+
+/*
+ * Minimum X and Y resolutions
+ */
+#define MIN_XRES 64
+#define MIN_YRES 64
+
+#endif /* __PXAFB_H__ */
--- /dev/null
+/*
+ * linux/drivers/video/riva/fbdev-i2c.c - nVidia i2c
+ *
+ * Maintained by Ani Joshi <ajoshi@shell.unixbox.com>
+ *
+ * Copyright 2004 Antonino A. Daplas <adaplas @pol.net>
+ *
+ * Based on radeonfb-i2c.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/fb.h>
+
+#include <asm/io.h>
+
+#include "rivafb.h"
+#include "../edid.h"
+
+#define RIVA_DDC 0x50
+
+static void riva_gpio_setscl(void* data, int state)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ val = VGA_RD08(par->riva.PCIO, 0x3d5) & 0xf0;
+
+ if (state)
+ val |= 0x20;
+ else
+ val &= ~0x20;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ VGA_WR08(par->riva.PCIO, 0x3d5, val | 0x1);
+}
+
+static void riva_gpio_setsda(void* data, int state)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ val = VGA_RD08(par->riva.PCIO, 0x3d5) & 0xf0;
+
+ if (state)
+ val |= 0x10;
+ else
+ val &= ~0x10;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ VGA_WR08(par->riva.PCIO, 0x3d5, val | 0x1);
+}
+
+static int riva_gpio_getscl(void* data)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val = 0;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base);
+ if (VGA_RD08(par->riva.PCIO, 0x3d5) & 0x04)
+ val = 1;
+
+ return val;
+}
+
+static int riva_gpio_getsda(void* data)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val = 0;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base);
+ if (VGA_RD08(par->riva.PCIO, 0x3d5) & 0x08)
+ val = 1;
+
+ return val;
+}
+
+#define I2C_ALGO_RIVA 0x0e0000
+static int riva_setup_i2c_bus(struct riva_i2c_chan *chan, const char *name)
+{
+ int rc;
+
+ strcpy(chan->adapter.name, name);
+ chan->adapter.owner = THIS_MODULE;
+ chan->adapter.id = I2C_ALGO_RIVA;
+ chan->adapter.algo_data = &chan->algo;
+ chan->adapter.dev.parent = &chan->par->pdev->dev;
+ chan->algo.setsda = riva_gpio_setsda;
+ chan->algo.setscl = riva_gpio_setscl;
+ chan->algo.getsda = riva_gpio_getsda;
+ chan->algo.getscl = riva_gpio_getscl;
+ chan->algo.udelay = 40;
+ chan->algo.timeout = 20;
+ chan->algo.data = chan;
+
+ i2c_set_adapdata(&chan->adapter, chan);
+
+ /* Raise SCL and SDA */
+ riva_gpio_setsda(chan, 1);
+ riva_gpio_setscl(chan, 1);
+ udelay(20);
+
+ rc = i2c_bit_add_bus(&chan->adapter);
+ if (rc == 0)
+ dev_dbg(&chan->par->pdev->dev, "I2C bus %s registered.\n", name);
+ else
+ dev_warn(&chan->par->pdev->dev, "Failed to register I2C bus %s.\n", name);
+ return rc;
+}
+
+void riva_create_i2c_busses(struct riva_par *par)
+{
+ par->chan[0].par = par;
+ par->chan[1].par = par;
+ par->chan[2].par = par;
+
+ switch (par->riva.Architecture) {
+#if 0 /* no support yet for other nVidia chipsets */
+ par->chan[2].ddc_base = 0x50;
+ riva_setup_i2c_bus(&par->chan[2], "BUS2");
+#endif
+ case NV_ARCH_10:
+ case NV_ARCH_20:
+ case NV_ARCH_04:
+ par->chan[1].ddc_base = 0x36;
+ riva_setup_i2c_bus(&par->chan[1], "BUS1");
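+		/* fall through: these chips also have the BUS0 channel */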
+ case NV_ARCH_03:
+ par->chan[0].ddc_base = 0x3e;
+ riva_setup_i2c_bus(&par->chan[0], "BUS0");
+ }
+}
+
+void riva_delete_i2c_busses(struct riva_par *par)
+{
+ if (par->chan[0].par)
+ i2c_bit_del_bus(&par->chan[0].adapter);
+ par->chan[0].par = NULL;
+
+ if (par->chan[1].par)
+ i2c_bit_del_bus(&par->chan[1].adapter);
+ par->chan[1].par = NULL;
+}
+
+static u8 *riva_do_probe_i2c_edid(struct riva_i2c_chan *chan)
+{
+ u8 start = 0x0;
+ struct i2c_msg msgs[] = {
+ {
+ .addr = RIVA_DDC,
+ .len = 1,
+ .buf = &start,
+ }, {
+ .addr = RIVA_DDC,
+ .flags = I2C_M_RD,
+ .len = EDID_LENGTH,
+ },
+ };
+ u8 *buf;
+
+ buf = kmalloc(EDID_LENGTH, GFP_KERNEL);
+ if (!buf) {
+ dev_warn(&chan->par->pdev->dev, "Out of memory!\n");
+ return NULL;
+ }
+ msgs[1].buf = buf;
+
+ if (i2c_transfer(&chan->adapter, msgs, 2) == 2)
+ return buf;
+ dev_dbg(&chan->par->pdev->dev, "Unable to read EDID block.\n");
+ kfree(buf);
+ return NULL;
+}
+
+int riva_probe_i2c_connector(struct riva_par *par, int conn, u8 **out_edid)
+{
+ u8 *edid = NULL;
+ int i;
+
+ for (i = 0; i < 3; i++) {
+ /* Do the real work */
+ edid = riva_do_probe_i2c_edid(&par->chan[conn-1]);
+ if (edid)
+ break;
+ }
+ if (out_edid)
+ *out_edid = edid;
+ if (!edid)
+ return 1;
+
+ return 0;
+}
+
--- /dev/null
+menu "Dallas's 1-wire bus"
+
+config W1
+ tristate "Dallas's 1-wire support"
+ ---help---
+ Dallas's 1-wire bus is useful for connecting slow 1-pin devices
+ such as iButtons and thermal sensors.
+
+ If you want W1 support, you should say Y here.
+
+ This W1 support can also be built as a module. If so, the module
+ will be called wire.ko.
+
+config W1_MATROX
+ tristate "Matrox G400 transport layer for 1-wire"
+ depends on W1
+ help
+ Say Y here if you want to communicate with your 1-wire devices
+ using Matrox's G400 GPIO pins.
+
+ This support is also available as a module. If so, the module
+ will be called matrox_w1.ko.
+
+config W1_THERM
+ tristate "Thermal family implementation"
+ depends on W1
+ help
+ Say Y here if you want to connect 1-wire thermal sensors to your
+ 1-wire bus.
+
+endmenu
--- /dev/null
+#
+# Makefile for the Dallas's 1-wire bus.
+#
+
+obj-$(CONFIG_W1) += wire.o
+wire-objs := w1.o w1_int.o w1_family.o w1_netlink.o w1_io.o
+
+obj-$(CONFIG_W1_MATROX) += matrox_w1.o
+obj-$(CONFIG_W1_THERM) += w1_therm.o
--- /dev/null
+/*
+ * matrox_w1.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/atomic.h>
+#include <asm/types.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/pci_ids.h>
+#include <linux/pci.h>
+
+#include "w1.h"
+#include "w1_int.h"
+#include "w1_log.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for Dallas 1-wire protocol transport over VGA DDC (Matrox G400 GPIO).");
+
+static struct pci_device_id matrox_w1_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G400) },
+ { },
+};
+MODULE_DEVICE_TABLE(pci, matrox_w1_tbl);
+
+static int __devinit matrox_w1_probe(struct pci_dev *, const struct pci_device_id *);
+static void __devexit matrox_w1_remove(struct pci_dev *);
+
+static struct pci_driver matrox_w1_pci_driver = {
+ .name = "matrox_w1",
+ .id_table = matrox_w1_tbl,
+ .probe = matrox_w1_probe,
+ .remove = __devexit_p(matrox_w1_remove),
+};
+
+/*
+ * Matrox G400 DDC registers.
+ */
+
+#define MATROX_G400_DDC_CLK (1<<4)
+#define MATROX_G400_DDC_DATA (1<<1)
+
+#define MATROX_BASE 0x3C00
+#define MATROX_STATUS 0x1e14
+
+#define MATROX_PORT_INDEX_OFFSET 0x00
+#define MATROX_PORT_DATA_OFFSET 0x0A
+
+#define MATROX_GET_CONTROL 0x2A
+#define MATROX_GET_DATA 0x2B
+#define MATROX_CURSOR_CTL 0x06
+
+struct matrox_device
+{
+ unsigned long base_addr;
+ unsigned long port_index, port_data;
+ u8 data_mask;
+
+ unsigned long phys_addr, virt_addr;
+ unsigned long found;
+
+ struct w1_bus_master *bus_master;
+};
+
+static u8 matrox_w1_read_ddc_bit(unsigned long);
+static void matrox_w1_write_ddc_bit(unsigned long, u8);
+
+/*
+ * These functions read and write the DDC data bit.
+ *
+ * We use tristate pins, since I couldn't find any open-drain pin on
+ * the whole motherboard.  Unfortunately we can't connect to Intel's
+ * 82801xx I/O controller, which probably has unused GPIOs, since we
+ * don't know the motherboard schematics.
+ *
+ * I've heard that the PIIX also has an open-drain pin.
+ *
+ * The helpers below map all accesses through the G400's index/data
+ * port pair.
+ */
+static __inline__ u8 matrox_w1_read_reg(struct matrox_device *dev, u8 reg)
+{
+ u8 ret;
+
+ writeb(reg, dev->port_index);
+ ret = readb(dev->port_data);
+ barrier();
+
+ return ret;
+}
+
+static __inline__ void matrox_w1_write_reg(struct matrox_device *dev, u8 reg, u8 val)
+{
+ writeb(reg, dev->port_index);
+ writeb(val, dev->port_data);
+ wmb();
+}
+
+static void matrox_w1_write_ddc_bit(unsigned long data, u8 bit)
+{
+ u8 ret;
+ struct matrox_device *dev = (struct matrox_device *) data;
+
+ if (bit)
+ bit = 0;
+ else
+ bit = dev->data_mask;
+
+ ret = matrox_w1_read_reg(dev, MATROX_GET_CONTROL);
+ matrox_w1_write_reg(dev, MATROX_GET_CONTROL, ((ret & ~dev->data_mask) | bit));
+ matrox_w1_write_reg(dev, MATROX_GET_DATA, 0x00);
+}
+
+static u8 matrox_w1_read_ddc_bit(unsigned long data)
+{
+ u8 ret;
+ struct matrox_device *dev = (struct matrox_device *) data;
+
+ ret = matrox_w1_read_reg(dev, MATROX_GET_DATA);
+
+ return ret;
+}
+
+static void matrox_w1_hw_init(struct matrox_device *dev)
+{
+ matrox_w1_write_reg(dev, MATROX_GET_DATA, 0xFF);
+ matrox_w1_write_reg(dev, MATROX_GET_CONTROL, 0x00);
+}
+
+static int __devinit matrox_w1_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ struct matrox_device *dev;
+ int err;
+
+ assert(pdev != NULL);
+ assert(ent != NULL);
+
+ if (pdev->vendor != PCI_VENDOR_ID_MATROX || pdev->device != PCI_DEVICE_ID_MATROX_G400)
+ return -ENODEV;
+
+ dev = kmalloc(sizeof(struct matrox_device) +
+ sizeof(struct w1_bus_master), GFP_KERNEL);
+ if (!dev) {
+ dev_err(&pdev->dev,
+ "%s: Failed to create new matrox_device object.\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ memset(dev, 0, sizeof(struct matrox_device) + sizeof(struct w1_bus_master));
+
+ dev->bus_master = (struct w1_bus_master *)(dev + 1);
+
+ /*
+ * True for G400, for some other we need resource 0, see drivers/video/matrox/matroxfb_base.c
+ */
+
+ dev->phys_addr = pci_resource_start(pdev, 1);
+
+ dev->virt_addr =
+ (unsigned long) ioremap_nocache(dev->phys_addr, 16384);
+ if (!dev->virt_addr) {
+ dev_err(&pdev->dev, "%s: failed to ioremap(0x%lx, %d).\n",
+ __func__, dev->phys_addr, 16384);
+ err = -EIO;
+ goto err_out_free_device;
+ }
+
+ dev->base_addr = dev->virt_addr + MATROX_BASE;
+ dev->port_index = dev->base_addr + MATROX_PORT_INDEX_OFFSET;
+ dev->port_data = dev->base_addr + MATROX_PORT_DATA_OFFSET;
+ dev->data_mask = (MATROX_G400_DDC_DATA);
+
+ matrox_w1_hw_init(dev);
+
+ dev->bus_master->data = (unsigned long) dev;
+ dev->bus_master->read_bit = &matrox_w1_read_ddc_bit;
+ dev->bus_master->write_bit = &matrox_w1_write_ddc_bit;
+
+ err = w1_add_master_device(dev->bus_master);
+ if (err)
+ goto err_out_iounmap;
+
+ pci_set_drvdata(pdev, dev);
+
+ dev->found = 1;
+
+ dev_info(&pdev->dev, "Matrox G400 GPIO transport layer for 1-wire.\n");
+
+ return 0;
+
+err_out_iounmap:
+ iounmap((void *) dev->virt_addr);
+
+err_out_free_device:
+ kfree(dev);
+
+ return err;
+}
+
+static void __devexit matrox_w1_remove(struct pci_dev *pdev)
+{
+ struct matrox_device *dev = pci_get_drvdata(pdev);
+
+ assert(dev != NULL);
+
+ if (dev->found) {
+ w1_remove_master_device(dev->bus_master);
+ iounmap((void *) dev->virt_addr);
+ }
+ kfree(dev);
+}
+
+static int __init matrox_w1_init(void)
+{
+ return pci_module_init(&matrox_w1_pci_driver);
+}
+
+static void __exit matrox_w1_fini(void)
+{
+ pci_unregister_driver(&matrox_w1_pci_driver);
+}
+
+module_init(matrox_w1_init);
+module_exit(matrox_w1_fini);
--- /dev/null
+/*
+ * w1.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/atomic.h>
+#include <asm/delay.h>
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/suspend.h>
+
+#include "w1.h"
+#include "w1_io.h"
+#include "w1_log.h"
+#include "w1_int.h"
+#include "w1_family.h"
+#include "w1_netlink.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for 1-wire Dallas network protocol.");
+
+static int w1_timeout = 5 * HZ;
+int w1_max_slave_count = 10;
+
+module_param_named(timeout, w1_timeout, int, 0);
+module_param_named(max_slave_count, w1_max_slave_count, int, 0);
+
+spinlock_t w1_mlock = SPIN_LOCK_UNLOCKED;
+LIST_HEAD(w1_masters);
+
+static pid_t control_thread;
+static int control_needs_exit;
+static DECLARE_COMPLETION(w1_control_complete);
+static DECLARE_WAIT_QUEUE_HEAD(w1_control_wait);
+
+static int w1_master_match(struct device *dev, struct device_driver *drv)
+{
+ return 1;
+}
+
+static int w1_master_probe(struct device *dev)
+{
+ return -ENODEV;
+}
+
+static int w1_master_remove(struct device *dev)
+{
+ return 0;
+}
+
+static void w1_master_release(struct device *dev)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+
+ complete(&md->dev_released);
+}
+
+static void w1_slave_release(struct device *dev)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+
+ complete(&sl->dev_released);
+}
+
+static ssize_t w1_default_read_name(struct device *dev, char *buf)
+{
+ return sprintf(buf, "No family registered.\n");
+}
+
+static ssize_t w1_default_read_bin(struct kobject *kobj, char *buf, loff_t off,
+ size_t count)
+{
+ return sprintf(buf, "No family registered.\n");
+}
+
+struct bus_type w1_bus_type = {
+ .name = "w1",
+ .match = w1_master_match,
+};
+
+struct device_driver w1_driver = {
+ .name = "w1_driver",
+ .bus = &w1_bus_type,
+ .probe = w1_master_probe,
+ .remove = w1_master_remove,
+};
+
+struct device w1_device = {
+ .parent = NULL,
+ .bus = &w1_bus_type,
+ .bus_id = "w1 bus master",
+ .driver = &w1_driver,
+ .release = &w1_master_release
+};
+
+static struct device_attribute w1_slave_attribute = {
+ .attr = {
+ .name = "name",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_default_read_name,
+};
+
+static struct device_attribute w1_slave_attribute_val = {
+ .attr = {
+ .name = "value",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_default_read_name,
+};
+
+static ssize_t w1_master_attribute_show(struct device *dev, char *buf)
+{
+ return sprintf(buf, "please fix me\n");
+#if 0
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ int c = PAGE_SIZE;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ c -= snprintf(buf + PAGE_SIZE - c, c, "%s\n", md->name);
+ c -= snprintf(buf + PAGE_SIZE - c, c,
+ "bus_master=0x%p, timeout=%d, max_slave_count=%d, attempts=%lu\n",
+ md->bus_master, w1_timeout, md->max_slave_count,
+ md->attempts);
+ c -= snprintf(buf + PAGE_SIZE - c, c, "%d slaves: ",
+ md->slave_count);
+ if (md->slave_count == 0)
+ c -= snprintf(buf + PAGE_SIZE - c, c, "no.\n");
+ else {
+ struct list_head *ent, *n;
+ struct w1_slave *sl;
+
+ list_for_each_safe(ent, n, &md->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ c -= snprintf(buf + PAGE_SIZE - c, c, "%s[%p] ",
+ sl->name, sl);
+ }
+ c -= snprintf(buf + PAGE_SIZE - c, c, "\n");
+ }
+
+ up(&md->mutex);
+
+ return PAGE_SIZE - c;
+#endif
+}
+
+struct device_attribute w1_master_attribute = {
+ .attr = {
+ .name = "w1_master_stats",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE,
+ },
+ .show = &w1_master_attribute_show,
+};
+
+static struct bin_attribute w1_slave_bin_attribute = {
+ .attr = {
+ .name = "w1_slave",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE,
+ },
+ .size = W1_SLAVE_DATA_SIZE,
+ .read = &w1_default_read_bin,
+};
+
+static int __w1_attach_slave_device(struct w1_slave *sl)
+{
+ int err;
+
+ sl->dev.parent = &sl->master->dev;
+ sl->dev.driver = sl->master->driver;
+ sl->dev.bus = &w1_bus_type;
+ sl->dev.release = &w1_slave_release;
+
+ snprintf(&sl->dev.bus_id[0], sizeof(sl->dev.bus_id),
+ "%x-%llx",
+ (unsigned int) sl->reg_num.family,
+ (unsigned long long) sl->reg_num.id);
+ snprintf (&sl->name[0], sizeof(sl->name),
+ "%x-%llx",
+ (unsigned int) sl->reg_num.family,
+ (unsigned long long) sl->reg_num.id);
+
+ dev_dbg(&sl->dev, "%s: registering %s.\n", __func__,
+ &sl->dev.bus_id[0]);
+
+ err = device_register(&sl->dev);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "Device registration [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ return err;
+ }
+
+ w1_slave_bin_attribute.read = sl->family->fops->rbin;
+ w1_slave_attribute.show = sl->family->fops->rname;
+ w1_slave_attribute_val.show = sl->family->fops->rval;
+ w1_slave_attribute_val.attr.name = sl->family->fops->rvalname;
+
+ err = device_create_file(&sl->dev, &w1_slave_attribute);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ err = device_create_file(&sl->dev, &w1_slave_attribute_val);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_remove_file(&sl->dev, &w1_slave_attribute);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ err = sysfs_create_bin_file(&sl->dev.kobj, &w1_slave_bin_attribute);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_remove_file(&sl->dev, &w1_slave_attribute);
+ device_remove_file(&sl->dev, &w1_slave_attribute_val);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ list_add_tail(&sl->w1_slave_entry, &sl->master->slist);
+
+ return 0;
+}
+
+static int w1_attach_slave_device(struct w1_master *dev, struct w1_reg_num *rn)
+{
+ struct w1_slave *sl;
+ struct w1_family *f;
+ int err;
+
+ sl = kmalloc(sizeof(struct w1_slave), GFP_KERNEL);
+ if (!sl) {
+ dev_err(&dev->dev,
+ "%s: failed to allocate new slave device.\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ memset(sl, 0, sizeof(*sl));
+
+ sl->owner = THIS_MODULE;
+ sl->master = dev;
+
+ memcpy(&sl->reg_num, rn, sizeof(sl->reg_num));
+ atomic_set(&sl->refcnt, 0);
+ init_completion(&sl->dev_released);
+
+ spin_lock(&w1_flock);
+ f = w1_family_registered(rn->family);
+ if (!f) {
+ spin_unlock(&w1_flock);
+ dev_info(&dev->dev, "Family %x is not registered.\n",
+ rn->family);
+ kfree(sl);
+ return -ENODEV;
+ }
+ __w1_family_get(f);
+ spin_unlock(&w1_flock);
+
+ sl->family = f;
+
+
+ err = __w1_attach_slave_device(sl);
+ if (err < 0) {
+ dev_err(&dev->dev, "%s: Attaching %s failed.\n", __func__,
+ sl->name);
+ w1_family_put(sl->family);
+ kfree(sl);
+ return err;
+ }
+
+ dev->slave_count++;
+
+ return 0;
+}
+
+static void w1_slave_detach(struct w1_slave *sl)
+{
+ dev_info(&sl->dev, "%s: detaching %s.\n", __func__, sl->name);
+
+ while (atomic_read(&sl->refcnt))
+ schedule_timeout(10);
+
+ sysfs_remove_bin_file(&sl->dev.kobj, &w1_slave_bin_attribute);
+ device_remove_file(&sl->dev, &w1_slave_attribute);
+ device_remove_file(&sl->dev, &w1_slave_attribute_val);
+ device_unregister(&sl->dev);
+ w1_family_put(sl->family);
+}
+
+static void w1_search(struct w1_master *dev)
+{
+ u64 last, rn, tmp;
+ int i, count = 0, slave_count;
+ int last_family_desc, last_zero, last_device;
+ int search_bit, id_bit, comp_bit, desc_bit;
+ struct list_head *ent;
+ struct w1_slave *sl;
+ int family_found = 0;
+ struct w1_netlink_msg msg;
+
+ dev->attempts++;
+
+ memset(&msg, 0, sizeof(msg));
+
+ search_bit = id_bit = comp_bit = 0;
+ rn = tmp = last = 0;
+ last_device = last_zero = last_family_desc = 0;
+
+ desc_bit = 64;
+
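+ /*
+ * Classic Dallas ROM search: walk all 64 id bits, reading each bit
+ * and its complement as ANDed by every non-sleeping slave.  On a
+ * discrepancy (both reads 0) we branch, and last_zero remembers the
+ * highest position where the 0 branch was taken so the next pass
+ * can explore the 1 branch; the search ends once the last
+ * discrepancy is resolved (desc_bit == last_zero).
+ */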
+ while (!(id_bit && comp_bit) && !last_device
+ && count++ < dev->max_slave_count) {
+ last = rn;
+ rn = 0;
+
+ last_family_desc = 0;
+
+ /*
+ * Reset bus and all 1-wire device state machines
+ * so they can respond to our requests.
+ *
+ * Return 0 - device(s) present, 1 - no devices present.
+ */
+ if (w1_reset_bus(dev)) {
+ dev_info(&dev->dev, "No devices present on the wire.\n");
+ break;
+ }
+
+#if 1
+ memset(&msg, 0, sizeof(msg));
+
+ w1_write_8(dev, W1_SEARCH);
+ for (i = 0; i < 64; ++i) {
+ /*
+ * Read 2 bits from bus.
+ * All who don't sleep must send ID bit and COMPLEMENT ID bit.
+ * They actually are ANDed between all senders.
+ */
+ id_bit = w1_read_bit(dev);
+ comp_bit = w1_read_bit(dev);
+
+ if (id_bit && comp_bit)
+ break;
+
+ if (id_bit == 0 && comp_bit == 0) {
+ if (i == desc_bit)
+ search_bit = 1;
+ else if (i > desc_bit)
+ search_bit = 0;
+ else
+ search_bit = ((last >> i) & 0x1);
+
+ if (search_bit == 0) {
+ last_zero = i;
+ if (last_zero < 9)
+ last_family_desc = last_zero;
+ }
+
+ }
+ else
+ search_bit = id_bit;
+
+ tmp = search_bit;
+ rn |= (tmp << i);
+
+ /*
+ * Write 1 bit to bus
+ * and make all who don't have "search_bit" in "i"'th position
+ * in it's registration number sleep.
+ */
+ w1_write_bit(dev, search_bit);
+
+ }
+#endif
+ msg.id.w1_id = rn;
+ msg.val = w1_calc_crc8((u8 *) & rn, 7);
+ w1_netlink_send(dev, &msg);
+
+ if (desc_bit == last_zero)
+ last_device = 1;
+
+ desc_bit = last_zero;
+
+ slave_count = 0;
+ list_for_each(ent, &dev->slist) {
+ struct w1_reg_num *tmp;
+
+ tmp = (struct w1_reg_num *) &rn;
+
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ if (sl->reg_num.family == tmp->family &&
+ sl->reg_num.id == tmp->id &&
+ sl->reg_num.crc == tmp->crc)
+ break;
+ else if (sl->reg_num.family == tmp->family) {
+ family_found = 1;
+ break;
+ }
+
+ slave_count++;
+ }
+
+ if (slave_count == dev->slave_count &&
+ msg.val && (*((__u8 *) & msg.val) == msg.id.id.crc)) {
+ w1_attach_slave_device(dev, (struct w1_reg_num *) &rn);
+ }
+ }
+}
+
+int w1_control(void *data)
+{
+ struct w1_slave *sl;
+ struct w1_master *dev;
+ struct list_head *ent, *ment, *n, *mn;
+ int err, have_to_wait = 0, timeout;
+
+ daemonize("w1_control");
+ allow_signal(SIGTERM);
+
+ while (!control_needs_exit || have_to_wait) {
+ have_to_wait = 0;
+
+ timeout = w1_timeout;
+ do {
+ timeout = interruptible_sleep_on_timeout(&w1_control_wait, timeout);
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+ } while (!signal_pending(current) && (timeout > 0));
+
+ if (signal_pending(current))
+ flush_signals(current);
+
+ list_for_each_safe(ment, mn, &w1_masters) {
+ dev = list_entry(ment, struct w1_master, w1_master_entry);
+
+ if (!control_needs_exit && !dev->need_exit)
+ continue;
+ /*
+ * Little race: we can create thread but not set the flag.
+ * Get a chance for external process to set flag up.
+ */
+ if (!dev->initialized) {
+ have_to_wait = 1;
+ continue;
+ }
+
+ spin_lock(&w1_mlock);
+ list_del(&dev->w1_master_entry);
+ spin_unlock(&w1_mlock);
+
+ if (control_needs_exit) {
+ dev->need_exit = 1;
+
+ err = kill_proc(dev->kpid, SIGTERM, 1);
+ if (err)
+ dev_err(&dev->dev,
+ "Failed to send signal to w1 kernel thread %d.\n",
+ dev->kpid);
+ }
+
+ wait_for_completion(&dev->dev_exited);
+
+ list_for_each_safe(ent, n, &dev->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ if (!sl)
+ dev_warn(&dev->dev,
+ "%s: slave entry is NULL.\n",
+ __func__);
+ else {
+ list_del(&sl->w1_slave_entry);
+
+ w1_slave_detach(sl);
+ kfree(sl);
+ }
+ }
+ device_remove_file(&dev->dev, &w1_master_attribute);
+ atomic_dec(&dev->refcnt);
+ }
+ }
+
+ complete_and_exit(&w1_control_complete, 0);
+}
+
+int w1_process(void *data)
+{
+ struct w1_master *dev = (struct w1_master *) data;
+ unsigned long timeout;
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (!dev->need_exit) {
+ timeout = w1_timeout;
+ do {
+ timeout = interruptible_sleep_on_timeout(&dev->kwait, timeout);
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+ } while (!signal_pending(current) && (timeout > 0));
+
+ if (signal_pending(current))
+ flush_signals(current);
+
+ if (dev->need_exit)
+ break;
+
+ if (!dev->initialized)
+ continue;
+
+ if (down_interruptible(&dev->mutex))
+ continue;
+ w1_search(dev);
+ up(&dev->mutex);
+ }
+
+ atomic_dec(&dev->refcnt);
+ complete_and_exit(&dev->dev_exited, 0);
+
+ return 0;
+}
+
+int w1_init(void)
+{
+ int retval;
+
+ printk(KERN_INFO "Driver for 1-wire Dallas network protocol.\n");
+
+ retval = bus_register(&w1_bus_type);
+ if (retval) {
+ printk(KERN_ERR "Failed to register bus. err=%d.\n", retval);
+ goto err_out_exit_init;
+ }
+
+ retval = driver_register(&w1_driver);
+ if (retval) {
+ printk(KERN_ERR
+ "Failed to register master driver. err=%d.\n",
+ retval);
+ goto err_out_bus_unregister;
+ }
+
+ control_thread = kernel_thread(&w1_control, NULL, 0);
+ if (control_thread < 0) {
+ printk(KERN_ERR "Failed to create control thread. err=%d\n",
+ control_thread);
+ retval = control_thread;
+ goto err_out_driver_unregister;
+ }
+
+ return 0;
+
+err_out_driver_unregister:
+ driver_unregister(&w1_driver);
+
+err_out_bus_unregister:
+ bus_unregister(&w1_bus_type);
+
+err_out_exit_init:
+ return retval;
+}
+
+void w1_fini(void)
+{
+ struct w1_master *dev;
+ struct list_head *ent, *n;
+
+ list_for_each_safe(ent, n, &w1_masters) {
+ dev = list_entry(ent, struct w1_master, w1_master_entry);
+ __w1_remove_master_device(dev);
+ }
+
+ control_needs_exit = 1;
+
+ wait_for_completion(&w1_control_complete);
+
+ driver_unregister(&w1_driver);
+ bus_unregister(&w1_bus_type);
+}
+
+module_init(w1_init);
+module_exit(w1_fini);
--- /dev/null
+/*
+ * w1.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_H
+#define __W1_H
+
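+/*
+ * 64-bit 1-wire registration number as it appears on the wire, least
+ * significant byte first: an 8-bit family code, a 48-bit unique serial
+ * number and an 8-bit CRC over the preceding seven bytes.  A DS18S20
+ * thermometer, for example, carries family code 0x10.
+ */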
+struct w1_reg_num
+{
+ __u64 family:8,
+ id:48,
+ crc:8;
+};
+
+#ifdef __KERNEL__
+
+#include <linux/completion.h>
+#include <linux/device.h>
+
+#include <net/sock.h>
+
+#include <asm/semaphore.h>
+
+#include "w1_family.h"
+
+#define W1_MAXNAMELEN 32
+#define W1_SLAVE_DATA_SIZE 128
+
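+/* ROM-level and DS18S20 function commands, per the Dallas datasheets. */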
+#define W1_SEARCH 0xF0
+#define W1_CONDITIONAL_SEARCH 0xEC
+#define W1_CONVERT_TEMP 0x44
+#define W1_SKIP_ROM 0xCC
+#define W1_READ_SCRATCHPAD 0xBE
+#define W1_READ_ROM 0x33
+#define W1_READ_PSUPPLY 0xB4
+#define W1_MATCH_ROM 0x55
+
+struct w1_slave
+{
+ struct module *owner;
+ unsigned char name[W1_MAXNAMELEN];
+ struct list_head w1_slave_entry;
+ struct w1_reg_num reg_num;
+ atomic_t refcnt;
+ u8 rom[9];
+
+ struct w1_master *master;
+ struct w1_family *family;
+ struct device dev;
+ struct completion dev_released;
+};
+
+struct w1_bus_master
+{
+ unsigned long data;
+
+ u8 (*read_bit)(unsigned long);
+ void (*write_bit)(unsigned long, u8);
+};
+
+struct w1_master
+{
+ struct list_head w1_master_entry;
+ struct module *owner;
+ unsigned char name[W1_MAXNAMELEN];
+ struct list_head slist;
+ int max_slave_count, slave_count;
+ unsigned long attempts;
+ int initialized;
+ u32 id;
+
+ atomic_t refcnt;
+
+ void *priv;
+ int priv_size;
+
+ int need_exit;
+ pid_t kpid;
+ wait_queue_head_t kwait;
+ struct semaphore mutex;
+
+ struct device_driver *driver;
+ struct device dev;
+ struct completion dev_released;
+ struct completion dev_exited;
+
+ struct w1_bus_master *bus_master;
+
+ u32 seq, groups;
+ struct sock *nls;
+};
+
+#endif /* __KERNEL__ */
+
+#endif /* __W1_H */
--- /dev/null
+/*
+ * w1_family.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+
+#include "w1_family.h"
+
+spinlock_t w1_flock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(w1_families);
+
+int w1_register_family(struct w1_family *newf)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f;
+ int ret = 0;
+
+ spin_lock(&w1_flock);
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == newf->fid) {
+ ret = -EEXIST;
+ break;
+ }
+ }
+
+ if (!ret) {
+ atomic_set(&newf->refcnt, 0);
+ newf->need_exit = 0;
+ list_add_tail(&newf->family_entry, &w1_families);
+ }
+
+ spin_unlock(&w1_flock);
+
+ return ret;
+}
+
+void w1_unregister_family(struct w1_family *fent)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f;
+
+ spin_lock(&w1_flock);
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == fent->fid) {
+ list_del(&fent->family_entry);
+ break;
+ }
+ }
+
+ fent->need_exit = 1;
+
+ spin_unlock(&w1_flock);
+
+ while (atomic_read(&fent->refcnt))
+ schedule_timeout(10);
+}
+
+/*
+ * Should be called under w1_flock held.
+ */
+struct w1_family * w1_family_registered(u8 fid)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f = NULL;
+ int ret = 0;
+
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == fid) {
+ ret = 1;
+ break;
+ }
+ }
+
+ return (ret) ? f : NULL;
+}
+
+void w1_family_put(struct w1_family *f)
+{
+ spin_lock(&w1_flock);
+ __w1_family_put(f);
+ spin_unlock(&w1_flock);
+}
+
+void __w1_family_put(struct w1_family *f)
+{
+ if (atomic_dec_and_test(&f->refcnt))
+ f->need_exit = 1;
+}
+
+void w1_family_get(struct w1_family *f)
+{
+ spin_lock(&w1_flock);
+ __w1_family_get(f);
+ spin_unlock(&w1_flock);
+
+}
+
+void __w1_family_get(struct w1_family *f)
+{
+ atomic_inc(&f->refcnt);
+}
+
+EXPORT_SYMBOL(w1_family_get);
+EXPORT_SYMBOL(w1_family_put);
+EXPORT_SYMBOL(__w1_family_get);
+EXPORT_SYMBOL(__w1_family_put);
+EXPORT_SYMBOL(w1_family_registered);
+EXPORT_SYMBOL(w1_unregister_family);
+EXPORT_SYMBOL(w1_register_family);
--- /dev/null
+/*
+ * w1_family.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_FAMILY_H
+#define __W1_FAMILY_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <asm/atomic.h>
+
+#define W1_FAMILY_DEFAULT 0
+#define W1_FAMILY_THERM 0x10
+#define W1_FAMILY_IBUT 0xff /* ? */
+
+#define MAXNAMELEN 32
+
+struct w1_family_ops
+{
+ ssize_t (* rname)(struct device *, char *);
+ ssize_t (* rbin)(struct kobject *, char *, loff_t, size_t);
+
+ ssize_t (* rval)(struct device *, char *);
+ unsigned char rvalname[MAXNAMELEN];
+};
+
+struct w1_family
+{
+ struct list_head family_entry;
+ u8 fid;
+
+ struct w1_family_ops *fops;
+
+ atomic_t refcnt;
+ u8 need_exit;
+};
+
+extern spinlock_t w1_flock;
+
+void w1_family_get(struct w1_family *);
+void w1_family_put(struct w1_family *);
+void __w1_family_get(struct w1_family *);
+void __w1_family_put(struct w1_family *);
+struct w1_family * w1_family_registered(u8);
+void w1_unregister_family(struct w1_family *);
+int w1_register_family(struct w1_family *);
+
+#endif /* __W1_FAMILY_H */
--- /dev/null
+/*
+ * w1_int.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+
+#include "w1.h"
+#include "w1_log.h"
+
+static u32 w1_ids = 1;
+
+extern struct device_driver w1_driver;
+extern struct bus_type w1_bus_type;
+extern struct device w1_device;
+extern struct device_attribute w1_master_attribute;
+extern int w1_max_slave_count;
+extern struct list_head w1_masters;
+extern spinlock_t w1_mlock;
+
+extern int w1_process(void *);
+
+struct w1_master * w1_alloc_dev(u32 id, int slave_count,
+ struct device_driver *driver, struct device *device)
+{
+ struct w1_master *dev;
+ int err;
+
+ /*
+ * We are in process context(kernel thread), so can sleep.
+ */
+ dev = kmalloc(sizeof(struct w1_master) + sizeof(struct w1_bus_master), GFP_KERNEL);
+ if (!dev) {
+ printk(KERN_ERR
+ "Failed to allocate %zu bytes for new w1 device.\n",
+ sizeof(struct w1_master));
+ return NULL;
+ }
+
+ memset(dev, 0, sizeof(struct w1_master) + sizeof(struct w1_bus_master));
+
+ dev->bus_master = (struct w1_bus_master *)(dev + 1);
+
+ dev->owner = THIS_MODULE;
+ dev->max_slave_count = slave_count;
+ dev->slave_count = 0;
+ dev->attempts = 0;
+ dev->kpid = -1;
+ dev->initialized = 0;
+ dev->id = id;
+
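+ /*
+ * Two references: one is dropped by the per-master kernel thread
+ * when it exits, the other by the control thread during teardown.
+ */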
+ atomic_set(&dev->refcnt, 2);
+
+ INIT_LIST_HEAD(&dev->slist);
+ init_MUTEX(&dev->mutex);
+
+ init_waitqueue_head(&dev->kwait);
+ init_completion(&dev->dev_released);
+ init_completion(&dev->dev_exited);
+
+ memcpy(&dev->dev, device, sizeof(struct device));
+ snprintf(dev->dev.bus_id, sizeof(dev->dev.bus_id),
+ "w1_bus_master%u", dev->id);
+ snprintf(dev->name, sizeof(dev->name), "w1_bus_master%u", dev->id);
+
+ dev->driver = driver;
+
+ dev->groups = 23;
+ dev->seq = 1;
+ dev->nls = netlink_kernel_create(NETLINK_NFLOG, NULL);
+ if (!dev->nls) {
+ printk(KERN_ERR "Failed to create new netlink socket(%u).\n",
+ NETLINK_NFLOG);
+ memset(dev, 0, sizeof(struct w1_master));
+ kfree(dev);
+ return NULL;
+ }
+
+ err = device_register(&dev->dev);
+ if (err) {
+ printk(KERN_ERR "Failed to register master device. err=%d\n", err);
+ if (dev->nls->sk_socket)
+ sock_release(dev->nls->sk_socket);
+ memset(dev, 0, sizeof(struct w1_master));
+ kfree(dev);
+ dev = NULL;
+ }
+
+ return dev;
+}
+
+void w1_free_dev(struct w1_master *dev)
+{
+ device_unregister(&dev->dev);
+ if (dev->nls->sk_socket)
+ sock_release(dev->nls->sk_socket);
+ memset(dev, 0, sizeof(struct w1_master) + sizeof(struct w1_bus_master));
+ kfree(dev);
+}
+
+int w1_add_master_device(struct w1_bus_master *master)
+{
+ struct w1_master *dev;
+ int retval = 0;
+
+ dev = w1_alloc_dev(w1_ids++, w1_max_slave_count, &w1_driver, &w1_device);
+ if (!dev)
+ return -ENOMEM;
+
+ dev->kpid = kernel_thread(&w1_process, dev, 0);
+ if (dev->kpid < 0) {
+ dev_err(&dev->dev,
+ "Failed to create new kernel thread. err=%d\n",
+ dev->kpid);
+ retval = dev->kpid;
+ goto err_out_free_dev;
+ }
+
+ retval = device_create_file(&dev->dev, &w1_master_attribute);
+ if (retval)
+ goto err_out_kill_thread;
+
+ memcpy(dev->bus_master, master, sizeof(struct w1_bus_master));
+
+ dev->initialized = 1;
+
+ spin_lock(&w1_mlock);
+ list_add(&dev->w1_master_entry, &w1_masters);
+ spin_unlock(&w1_mlock);
+
+ return 0;
+
+err_out_kill_thread:
+ dev->need_exit = 1;
+ if (kill_proc(dev->kpid, SIGTERM, 1))
+ dev_err(&dev->dev,
+ "Failed to send signal to w1 kernel thread %d.\n",
+ dev->kpid);
+ wait_for_completion(&dev->dev_exited);
+
+err_out_free_dev:
+ w1_free_dev(dev);
+
+ return retval;
+}
+
+void __w1_remove_master_device(struct w1_master *dev)
+{
+ int err;
+
+ dev->need_exit = 1;
+ err = kill_proc(dev->kpid, SIGTERM, 1);
+ if (err)
+ dev_err(&dev->dev,
+ "%s: Failed to send signal to w1 kernel thread %d.\n",
+ __func__, dev->kpid);
+
+ while (atomic_read(&dev->refcnt))
+ schedule_timeout(10);
+
+ w1_free_dev(dev);
+}
+
+void w1_remove_master_device(struct w1_bus_master *bm)
+{
+ struct w1_master *dev = NULL, *tmp;
+ struct list_head *ent, *n;
+
+ list_for_each_safe(ent, n, &w1_masters) {
+ tmp = list_entry(ent, struct w1_master, w1_master_entry);
+ if (!tmp->initialized)
+ continue;
+
+ if (tmp->bus_master->data == bm->data) {
+ dev = tmp;
+ break;
+ }
+ }
+
+ if (!dev) {
+ printk(KERN_ERR "Device doesn't exist.\n");
+ return;
+ }
+
+ __w1_remove_master_device(dev);
+}
+
+EXPORT_SYMBOL(w1_alloc_dev);
+EXPORT_SYMBOL(w1_free_dev);
+EXPORT_SYMBOL(w1_add_master_device);
+EXPORT_SYMBOL(w1_remove_master_device);
+EXPORT_SYMBOL(__w1_remove_master_device);
--- /dev/null
+/*
+ * w1_int.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_INT_H
+#define __W1_INT_H
+
+#include <linux/kernel.h>
+#include <linux/device.h>
+
+#include "w1.h"
+
+struct w1_master * w1_alloc_dev(u32, int, struct device_driver *, struct device *);
+void w1_free_dev(struct w1_master *dev);
+int w1_add_master_device(struct w1_bus_master *);
+void w1_remove_master_device(struct w1_bus_master *);
+void __w1_remove_master_device(struct w1_master *);
+
+#endif /* __W1_INT_H */
--- /dev/null
+/*
+ * w1_io.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/io.h>
+#include <asm/delay.h>
+
+#include <linux/moduleparam.h>
+
+#include "w1.h"
+#include "w1_log.h"
+#include "w1_io.h"
+
+int w1_delay_parm = 1;
+module_param_named(delay_coef, w1_delay_parm, int, 0);
+
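+/*
+ * Lookup table for the Dallas CRC8 (polynomial X^8 + X^5 + X^4 + 1),
+ * processed least significant bit first.  Running a valid registration
+ * number's CRC byte through the table after its first seven bytes
+ * yields zero.
+ */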
+static u8 w1_crc8_table[] = {
+ 0, 94, 188, 226, 97, 63, 221, 131, 194, 156, 126, 32, 163, 253, 31, 65,
+ 157, 195, 33, 127, 252, 162, 64, 30, 95, 1, 227, 189, 62, 96, 130, 220,
+ 35, 125, 159, 193, 66, 28, 254, 160, 225, 191, 93, 3, 128, 222, 60, 98,
+ 190, 224, 2, 92, 223, 129, 99, 61, 124, 34, 192, 158, 29, 67, 161, 255,
+ 70, 24, 250, 164, 39, 121, 155, 197, 132, 218, 56, 102, 229, 187, 89, 7,
+ 219, 133, 103, 57, 186, 228, 6, 88, 25, 71, 165, 251, 120, 38, 196, 154,
+ 101, 59, 217, 135, 4, 90, 184, 230, 167, 249, 27, 69, 198, 152, 122, 36,
+ 248, 166, 68, 26, 153, 199, 37, 123, 58, 100, 134, 216, 91, 5, 231, 185,
+ 140, 210, 48, 110, 237, 179, 81, 15, 78, 16, 242, 172, 47, 113, 147, 205,
+ 17, 79, 173, 243, 112, 46, 204, 146, 211, 141, 111, 49, 178, 236, 14, 80,
+ 175, 241, 19, 77, 206, 144, 114, 44, 109, 51, 209, 143, 12, 82, 176, 238,
+ 50, 108, 142, 208, 83, 13, 239, 177, 240, 174, 76, 18, 145, 207, 45, 115,
+ 202, 148, 118, 40, 171, 245, 23, 73, 8, 86, 180, 234, 105, 55, 213, 139,
+ 87, 9, 235, 181, 54, 104, 138, 212, 149, 203, 41, 119, 244, 170, 72, 22,
+ 233, 183, 85, 11, 136, 214, 52, 106, 43, 117, 151, 201, 74, 20, 246, 168,
+ 116, 42, 200, 150, 21, 75, 169, 247, 182, 232, 10, 84, 215, 137, 107, 53
+};
+
+void w1_delay(unsigned long tm)
+{
+ udelay(tm * w1_delay_parm);
+}
+
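+/*
+ * The bit I/O below follows the standard Dallas timing slots, in
+ * microseconds scaled by the delay_coef parameter: a 1 is a 6us low
+ * pulse with a 64us recovery, a 0 holds the line low for 60us with a
+ * 10us recovery, and reads sample the line roughly 15us after pulling
+ * it low.
+ */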
+void w1_write_bit(struct w1_master *dev, int bit)
+{
+ if (bit) {
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(6);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(64);
+ } else {
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(60);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(10);
+ }
+}
+
+void w1_write_8(struct w1_master *dev, u8 byte)
+{
+ int i;
+
+ for (i = 0; i < 8; ++i)
+ w1_write_bit(dev, (byte >> i) & 0x1);
+}
+
+u8 w1_read_bit(struct w1_master *dev)
+{
+ int result;
+
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(6);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(9);
+
+ result = dev->bus_master->read_bit(dev->bus_master->data);
+ w1_delay(55);
+
+ return result & 0x1;
+}
+
+u8 w1_read_8(struct w1_master * dev)
+{
+ int i;
+ u8 res = 0;
+
+ for (i = 0; i < 8; ++i)
+ res |= (w1_read_bit(dev) << i);
+
+ return res;
+}
+
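+/*
+ * Reset pulse: hold the line low for 480us, release it and sample the
+ * presence pulse about 70us later.  Returns 0 if at least one slave
+ * answered, 1 if the bus is empty.
+ */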
+int w1_reset_bus(struct w1_master *dev)
+{
+ int result;
+
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(480);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(70);
+
+ result = dev->bus_master->read_bit(dev->bus_master->data) & 0x1;
+ w1_delay(410);
+
+ return result;
+}
+
+u8 w1_calc_crc8(u8 * data, int len)
+{
+ u8 crc = 0;
+
+ while (len--)
+ crc = w1_crc8_table[crc ^ *data++];
+
+ return crc;
+}
+
+EXPORT_SYMBOL(w1_write_bit);
+EXPORT_SYMBOL(w1_write_8);
+EXPORT_SYMBOL(w1_read_bit);
+EXPORT_SYMBOL(w1_read_8);
+EXPORT_SYMBOL(w1_reset_bus);
+EXPORT_SYMBOL(w1_calc_crc8);
+EXPORT_SYMBOL(w1_delay);
--- /dev/null
+/*
+ * w1_io.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_IO_H
+#define __W1_IO_H
+
+#include "w1.h"
+
+void w1_delay(unsigned long);
+void w1_write_bit(struct w1_master *, int);
+void w1_write_8(struct w1_master *, u8);
+u8 w1_read_bit(struct w1_master *);
+u8 w1_read_8(struct w1_master *);
+int w1_reset_bus(struct w1_master *);
+u8 w1_calc_crc8(u8 *, int);
+
+#endif /* __W1_IO_H */
--- /dev/null
+/*
+ * w1_log.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_LOG_H
+#define __W1_LOG_H
+
+#define DEBUG
+
+#ifdef W1_DEBUG
+# define assert(expr) do {} while (0)
+#else
+# define assert(expr) \
+ if(unlikely(!(expr))) { \
+ printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ }
+#endif
+
+#endif /* __W1_LOG_H */
+
--- /dev/null
+/*
+ * w1_netlink.c
+ *
+ * Copyright (c) 2003 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+
+#include "w1.h"
+#include "w1_log.h"
+#include "w1_netlink.h"
+
+void w1_netlink_send(struct w1_master *dev, struct w1_netlink_msg *msg)
+{
+ unsigned int size;
+ struct sk_buff *skb;
+ struct w1_netlink_msg *data;
+ struct nlmsghdr *nlh;
+
+ size = NLMSG_SPACE(sizeof(struct w1_netlink_msg));
+
+ skb = alloc_skb(size, GFP_ATOMIC);
+ if (!skb) {
+ dev_err(&dev->dev, "alloc_skb() failed.\n");
+ return;
+ }
+
+ nlh = NLMSG_PUT(skb, 0, dev->seq++, NLMSG_DONE, size - sizeof(*nlh));
+
+ data = (struct w1_netlink_msg *)NLMSG_DATA(nlh);
+
+ memcpy(data, msg, sizeof(struct w1_netlink_msg));
+
+ NETLINK_CB(skb).dst_groups = dev->groups;
+ netlink_broadcast(dev->nls, skb, 0, dev->groups, GFP_ATOMIC);
+ return;
+
+nlmsg_failure:
+ kfree_skb(skb);
+}
--- /dev/null
+/*
+ * w1_netlink.h
+ *
+ * Copyright (c) 2003 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_NETLINK_H
+#define __W1_NETLINK_H
+
+#include <asm/types.h>
+
+#include "w1.h"
+
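+/*
+ * Message broadcast to the netlink group whenever the bus search finds
+ * a slave: id carries the 64-bit registration number, val the CRC8 of
+ * its first seven bytes.
+ */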
+struct w1_netlink_msg
+{
+ union
+ {
+ struct w1_reg_num id;
+ __u64 w1_id;
+ } id;
+ __u64 val;
+};
+
+#ifdef __KERNEL__
+
+void w1_netlink_send(struct w1_master *, struct w1_netlink_msg *);
+
+#endif /* __KERNEL__ */
+#endif /* __W1_NETLINK_H */
--- /dev/null
+/*
+ * w1_therm.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/types.h>
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/device.h>
+#include <linux/types.h>
+
+#include "w1.h"
+#include "w1_io.h"
+#include "w1_int.h"
+#include "w1_family.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for 1-wire Dallas network protocol, temperature family.");
+
+static ssize_t w1_therm_read_name(struct device *, char *);
+static ssize_t w1_therm_read_temp(struct device *, char *);
+static ssize_t w1_therm_read_bin(struct kobject *, char *, loff_t, size_t);
+
+static struct w1_family_ops w1_therm_fops = {
+ .rname = &w1_therm_read_name,
+ .rbin = &w1_therm_read_bin,
+ .rval = &w1_therm_read_temp,
+ .rvalname = "temp1_input",
+};
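+
+/*
+ * Each slave of this family exposes three sysfs files: "name",
+ * "temp1_input" (millidegrees, following the lm_sensors naming
+ * convention) and the raw "w1_slave" scratchpad dump implemented
+ * below.
+ */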
+
+static ssize_t w1_therm_read_name(struct device *dev, char *buf)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+
+ return sprintf(buf, "%s\n", sl->name);
+}
+
+static ssize_t w1_therm_read_temp(struct device *dev, char *buf)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+ s16 temp;
+
+ /*
+ * The DS18S20 reports a 9-bit signed value in 0.5 degC steps:
+ * rom[0] is the low byte, rom[1] its sign extension.  Report the
+ * temperature in millidegrees.
+ */
+ temp = (sl->rom[1] << 8) | sl->rom[0];
+
+ return sprintf(buf, "%d\n", temp * 500);
+}
+
+static ssize_t w1_therm_read_bin(struct kobject *kobj, char *buf, loff_t off, size_t count)
+{
+ struct w1_slave *sl = container_of(container_of(kobj, struct device, kobj),
+ struct w1_slave, dev);
+ struct w1_master *dev = sl->master;
+ u8 rom[9], crc, verdict;
+ size_t icount;
+ int i;
+ u16 temp;
+
+ atomic_inc(&sl->refcnt);
+ if (down_interruptible(&sl->master->mutex)) {
+ count = 0;
+ goto out_dec;
+ }
+
+ if (off > W1_SLAVE_DATA_SIZE) {
+ count = 0;
+ goto out;
+ }
+ if (off + count > W1_SLAVE_DATA_SIZE)
+ count = W1_SLAVE_DATA_SIZE - off;
+
+ icount = count;
+
+ memset(buf, 0, count);
+ memset(rom, 0, sizeof(rom));
+
+ count = 0;
+ verdict = 0;
+ crc = 0;
+ if (!w1_reset_bus(dev)) {
+ u64 id = *(u64 *) &sl->reg_num;
+ int tries = 0;
+
+ /* Address this slave, then start a temperature conversion. */
+ w1_write_8(dev, W1_MATCH_ROM);
+ for (i = 0; i < 8; ++i)
+ w1_write_8(dev, (id >> i * 8) & 0xff);
+
+ w1_write_8(dev, W1_CONVERT_TEMP);
+
+ /* Poll until the conversion finishes or we give up. */
+ while (dev->bus_master->read_bit(dev->bus_master->data) == 0
+ && tries < 10) {
+ w1_delay(1);
+ tries++;
+ }
+
+ if (tries < 10) {
+ if (!w1_reset_bus(dev)) {
+ w1_write_8(dev, W1_MATCH_ROM);
+ for (i = 0; i < 8; ++i)
+ w1_write_8(dev,
+ (id >> i * 8) & 0xff);
+
+ w1_write_8(dev, W1_READ_SCRATCHPAD);
+ for (i = 0; i < 9; ++i)
+ rom[i] = w1_read_8(dev);
+
+ crc = w1_calc_crc8(rom, 8);
+
+ if (rom[8] == crc && rom[0])
+ verdict = 1;
+ }
+ }
+ else
+ dev_warn(&dev->dev,
+ "18S20 doesn't respond to CONVERT_TEMP.\n");
+ }
+
+ for (i = 0; i < 9; ++i)
+ count += snprintf(buf + count, icount - count, "%02x ", rom[i]);
+ count += snprintf(buf + count, icount - count, ": crc=%02x %s\n",
+ crc, (verdict) ? "YES" : "NO");
+ if (verdict)
+ memcpy(sl->rom, rom, sizeof(sl->rom));
+ for (i = 0; i < 9; ++i)
+ count += snprintf(buf + count, icount - count, "%02x ", sl->rom[i]);
+ /* Same 9-bit, 0.5 degC format as w1_therm_read_temp(). */
+ temp = (sl->rom[1] << 8) | sl->rom[0];
+ count += snprintf(buf + count, icount - count, "t=%d\n",
+ ((s16) temp) * 500);
+out:
+ up(&dev->mutex);
+out_dec:
+ atomic_dec(&sl->refcnt);
+
+ return count;
+}
+
+static struct w1_family w1_therm_family = {
+ .fid = W1_FAMILY_THERM,
+ .fops = &w1_therm_fops,
+};
+
+static int __init w1_therm_init(void)
+{
+ return w1_register_family(&w1_therm_family);
+}
+
+static void __exit w1_therm_fini(void)
+{
+ w1_unregister_family(&w1_therm_family);
+}
+
+module_init(w1_therm_init);
+module_exit(w1_therm_fini);
--- /dev/null
+#
+# Makefile for the Linux filesystems.
+#
+# 14 Sep 2000, Christoph Hellwig <hch@infradead.org>
+# Rewritten to use lists instead of if-statements.
+#
+
+obj-y := open.o read_write.o file_table.o buffer.o \
+ bio.o super.o block_dev.o char_dev.o stat.o exec.o pipe.o \
+ namei.o fcntl.o ioctl.o readdir.o select.o fifo.o locks.o \
+ dcache.o inode.o attr.o bad_inode.o file.o dnotify.o \
+ filesystems.o namespace.o seq_file.o xattr.o libfs.o \
+ fs-writeback.o mpage.o direct-io.o aio.o
+
+obj-$(CONFIG_EPOLL) += eventpoll.o
+obj-$(CONFIG_COMPAT) += compat.o
+
+nfsd-$(CONFIG_NFSD) := nfsctl.o
+obj-y += $(nfsd-y) $(nfsd-m)
+
+obj-$(CONFIG_BINFMT_AOUT) += binfmt_aout.o
+obj-$(CONFIG_BINFMT_EM86) += binfmt_em86.o
+obj-$(CONFIG_BINFMT_MISC) += binfmt_misc.o
+
+# binfmt_script is always there
+obj-y += binfmt_script.o
+
+obj-$(CONFIG_BINFMT_ELF) += binfmt_elf.o
+obj-$(CONFIG_BINFMT_SOM) += binfmt_som.o
+obj-$(CONFIG_BINFMT_FLAT) += binfmt_flat.o
+
+obj-$(CONFIG_FS_MBCACHE) += mbcache.o
+obj-$(CONFIG_FS_POSIX_ACL) += posix_acl.o xattr_acl.o
+
+obj-$(CONFIG_QUOTA) += dquot.o
+obj-$(CONFIG_QFMT_V1) += quota_v1.o
+obj-$(CONFIG_QFMT_V2) += quota_v2.o
+obj-$(CONFIG_QUOTACTL) += quota.o
+
+obj-$(CONFIG_PROC_FS) += proc/
+obj-y += partitions/
+obj-$(CONFIG_SYSFS) += sysfs/
+obj-y += devpts/
+
+obj-$(CONFIG_PROFILING) += dcookies.o
+
+# Do not add any filesystems before this line
+obj-$(CONFIG_REISERFS_FS) += reiserfs/
+obj-$(CONFIG_EXT3_FS) += ext3/ # Before ext2 so root fs can be ext3
+obj-$(CONFIG_JBD) += jbd/
+obj-$(CONFIG_EXT2_FS) += ext2/
+obj-$(CONFIG_CRAMFS) += cramfs/
+obj-$(CONFIG_RAMFS) += ramfs/
+obj-$(CONFIG_HUGETLBFS) += hugetlbfs/
+obj-$(CONFIG_CODA_FS) += coda/
+obj-$(CONFIG_INTERMEZZO_FS) += intermezzo/
+obj-$(CONFIG_MINIX_FS) += minix/
+obj-$(CONFIG_FAT_FS) += fat/
+obj-$(CONFIG_UMSDOS_FS) += umsdos/
+obj-$(CONFIG_MSDOS_FS) += msdos/
+obj-$(CONFIG_VFAT_FS) += vfat/
+obj-$(CONFIG_BFS_FS) += bfs/
+obj-$(CONFIG_ISO9660_FS) += isofs/
+obj-$(CONFIG_DEVFS_FS) += devfs/
+obj-$(CONFIG_HFSPLUS_FS) += hfsplus/ # Before hfs to find wrapped HFS+
+obj-$(CONFIG_HFS_FS) += hfs/
+obj-$(CONFIG_VXFS_FS) += freevxfs/
+obj-$(CONFIG_NFS_FS) += nfs/
+obj-$(CONFIG_EXPORTFS) += exportfs/
+obj-$(CONFIG_NFSD) += nfsd/
+obj-$(CONFIG_LOCKD) += lockd/
+obj-$(CONFIG_NLS) += nls/
+obj-$(CONFIG_SYSV_FS) += sysv/
+obj-$(CONFIG_SMB_FS) += smbfs/
+obj-$(CONFIG_CIFS) += cifs/
+obj-$(CONFIG_NCP_FS) += ncpfs/
+obj-$(CONFIG_HPFS_FS) += hpfs/
+obj-$(CONFIG_NTFS_FS) += ntfs/
+obj-$(CONFIG_UFS_FS) += ufs/
+obj-$(CONFIG_EFS_FS) += efs/
+obj-$(CONFIG_JFFS_FS) += jffs/
+obj-$(CONFIG_JFFS2_FS) += jffs2/
+obj-$(CONFIG_AFFS_FS) += affs/
+obj-$(CONFIG_ROMFS_FS) += romfs/
+obj-$(CONFIG_QNX4FS_FS) += qnx4/
+obj-$(CONFIG_AUTOFS_FS) += autofs/
+obj-$(CONFIG_AUTOFS4_FS) += autofs4/
+obj-$(CONFIG_ADFS_FS) += adfs/
+obj-$(CONFIG_UDF_FS) += udf/
+obj-$(CONFIG_SUN_OPENPROMFS) += openpromfs/
+obj-$(CONFIG_JFS_FS) += jfs/
+obj-$(CONFIG_XFS_FS) += xfs/
+obj-$(CONFIG_AFS_FS) += afs/
+obj-$(CONFIG_BEFS_FS) += befs/
+++ /dev/null
-/*
- * linux/fs/ext3/resize.c
- *
- * Support for resizing an ext3 filesystem while it is mounted.
- *
- * Copyright (C) 2001, 2002 Andreas Dilger <adilger@clusterfs.com>
- *
- * This could probably be made into a module, because it is not often in use.
- */
-
-#include <linux/config.h>
-
-#define EXT3FS_DEBUG
-
-#include <linux/sched.h>
-#include <linux/smp_lock.h>
-#include <linux/ext3_jbd.h>
-
-#include <linux/errno.h>
-#include <linux/slab.h>
-
-
-#define outside(b, first, last) ((b) < (first) || (b) >= (last))
-#define inside(b, first, last) ((b) >= (first) && (b) < (last))
-
-static int verify_group_input(struct super_block *sb,
- struct ext3_new_group_data *input)
-{
- struct ext3_sb_info *sbi = EXT3_SB(sb);
- struct ext3_super_block *es = sbi->s_es;
- unsigned start = le32_to_cpu(es->s_blocks_count);
- unsigned end = start + input->blocks_count;
- unsigned group = input->group;
- unsigned itend = input->inode_table + EXT3_SB(sb)->s_itb_per_group;
- unsigned overhead = ext3_bg_has_super(sb, group) ?
- (1 + ext3_bg_num_gdb(sb, group) +
- le16_to_cpu(es->s_reserved_gdt_blocks)) : 0;
- unsigned metaend = start + overhead;
- struct buffer_head *bh;
- int free_blocks_count;
- int err = -EINVAL;
-
- input->free_blocks_count = free_blocks_count =
- input->blocks_count - 2 - overhead - sbi->s_itb_per_group;
-
- if (test_opt(sb, DEBUG))
- printk("EXT3-fs: adding %s group %u: %u blocks "
- "(%d free, %u reserved)\n",
- ext3_bg_has_super(sb, input->group) ? "normal" :
- "no-super", input->group, input->blocks_count,
- free_blocks_count, input->reserved_blocks);
-
- if (group != sbi->s_groups_count)
- ext3_warning(sb, __FUNCTION__,
- "Cannot add at group %u (only %lu groups)",
- input->group, sbi->s_groups_count);
- else if ((start - le32_to_cpu(es->s_first_data_block)) %
- EXT3_BLOCKS_PER_GROUP(sb))
- ext3_warning(sb, __FUNCTION__, "Last group not full");
- else if (input->reserved_blocks > input->blocks_count / 5)
- ext3_warning(sb, __FUNCTION__, "Reserved blocks too high (%u)",
- input->reserved_blocks);
- else if (free_blocks_count < 0)
- ext3_warning(sb, __FUNCTION__, "Bad blocks count %u",
- input->blocks_count);
- else if (!(bh = sb_bread(sb, end - 1)))
- ext3_warning(sb, __FUNCTION__, "Cannot read last block (%u)",
- end - 1);
- else if (outside(input->block_bitmap, start, end))
- ext3_warning(sb, __FUNCTION__,
- "Block bitmap not in group (block %u)",
- input->block_bitmap);
- else if (outside(input->inode_bitmap, start, end))
- ext3_warning(sb, __FUNCTION__,
- "Inode bitmap not in group (block %u)",
- input->inode_bitmap);
- else if (outside(input->inode_table, start, end) ||
- outside(itend - 1, start, end))
- ext3_warning(sb, __FUNCTION__,
- "Inode table not in group (blocks %u-%u)",
- input->inode_table, itend - 1);
- else if (input->inode_bitmap == input->block_bitmap)
- ext3_warning(sb, __FUNCTION__,
- "Block bitmap same as inode bitmap (%u)",
- input->block_bitmap);
- else if (inside(input->block_bitmap, input->inode_table, itend))
- ext3_warning(sb, __FUNCTION__,
- "Block bitmap (%u) in inode table (%u-%u)",
- input->block_bitmap, input->inode_table, itend-1);
- else if (inside(input->inode_bitmap, input->inode_table, itend))
- ext3_warning(sb, __FUNCTION__,
- "Inode bitmap (%u) in inode table (%u-%u)",
- input->inode_bitmap, input->inode_table, itend-1);
- else if (inside(input->block_bitmap, start, metaend))
- ext3_warning(sb, __FUNCTION__,
- "Block bitmap (%u) in GDT table (%u-%u)",
- input->block_bitmap, start, metaend - 1);
- else if (inside(input->inode_bitmap, start, metaend))
- ext3_warning(sb, __FUNCTION__,
- "Inode bitmap (%u) in GDT table (%u-%u)",
- input->inode_bitmap, start, metaend - 1);
- else if (inside(input->inode_table, start, metaend) ||
- inside(itend - 1, start, metaend))
- ext3_warning(sb, __FUNCTION__,
- "Inode table (%u-%u) overlaps GDT table (%u-%u)",
- input->inode_table, itend - 1, start, metaend - 1);
- else {
- brelse(bh);
- err = 0;
- }
-
- return err;
-}
-
-static struct buffer_head *bclean(handle_t *handle, struct super_block *sb,
- unsigned long blk)
-{
- struct buffer_head *bh;
- int err;
-
- bh = sb_getblk(sb, blk);
- set_buffer_uptodate(bh);
- if ((err = ext3_journal_get_write_access(handle, bh))) {
- brelse(bh);
- bh = ERR_PTR(err);
- } else
- memset(bh->b_data, 0, sb->s_blocksize);
-
- return bh;
-}
-
-/*
- * To avoid calling the atomic setbit hundreds or thousands of times, we only
- * need to use it within a single byte (to ensure we get endianness right).
- * We can use memset for the rest of the bitmap as there are no other users.
- */
-static void mark_bitmap_end(int start_bit, int end_bit, char *bitmap)
-{
- int i;
-
- if (start_bit >= end_bit)
- return;
-
- ext3_debug("mark end bits +%d through +%d used\n", start_bit, end_bit);
- for (i = start_bit; i < ((start_bit + 7) & ~7UL); i++)
- ext3_set_bit(i, bitmap);
- if (i < end_bit)
- memset(bitmap + (i >> 3), 0xff, (end_bit - i) >> 3);
-}
-
-/*
- * Set up the block and inode bitmaps, and the inode table for the new group.
- * This doesn't need to be part of the main transaction, since we are only
- * changing blocks outside the actual filesystem. We still do journaling to
- * ensure the recovery is correct in case of a failure just after resize.
- * If any part of this fails, we simply abort the resize.
- *
- * We only pass inode because of the ext3 journal wrappers.
- */
-static int setup_new_group_blocks(struct super_block *sb, struct inode *inode,
- struct ext3_new_group_data *input)
-{
- struct ext3_sb_info *sbi = EXT3_SB(sb);
- unsigned long start = input->group * sbi->s_blocks_per_group +
- le32_to_cpu(sbi->s_es->s_first_data_block);
- int reserved_gdb = ext3_bg_has_super(sb, input->group) ?
- le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks) : 0;
- unsigned long gdblocks = ext3_bg_num_gdb(sb, input->group);
- struct buffer_head *bh;
- handle_t *handle;
- unsigned long block;
- int bit;
- int i;
- int err = 0, err2;
-
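-	/* Credits: the block and inode bitmaps, one per backup GDT block
-	 * copied into this group, one per reserved GDT block cleared, and
-	 * one per inode table block zeroed.
-	 */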
- handle = ext3_journal_start(inode, reserved_gdb + gdblocks +
- 2 + sbi->s_itb_per_group);
- if (IS_ERR(handle))
- return PTR_ERR(handle);
-
- lock_super(sb);
- if (input->group != sbi->s_groups_count) {
- err = -EBUSY;
- goto exit_journal;
- }
-
- if (IS_ERR(bh = bclean(handle, sb, input->block_bitmap))) {
- err = PTR_ERR(bh);
- goto exit_journal;
- }
-
- if (ext3_bg_has_super(sb, input->group)) {
- ext3_debug("mark backup superblock %#04lx (+0)\n", start);
- ext3_set_bit(0, bh->b_data);
- }
-
- /* Copy all of the GDT blocks into the backup in this group */
- for (i = 0, bit = 1, block = start + 1;
- i < gdblocks; i++, block++, bit++) {
- struct buffer_head *gdb;
-
- ext3_debug("update backup group %#04lx (+%d)\n", block, bit);
-
- gdb = sb_getblk(sb, block);
- set_buffer_uptodate(gdb);
- if ((err = ext3_journal_get_write_access(handle, gdb))) {
- brelse(gdb);
- goto exit_bh;
- }
- memcpy(gdb->b_data, sbi->s_group_desc[i], bh->b_size);
- ext3_journal_dirty_metadata(handle, gdb);
- ext3_set_bit(bit, bh->b_data);
- brelse(gdb);
- }
-
- /* Zero out all of the reserved backup group descriptor table blocks */
- for (i = 0, bit = gdblocks + 1, block = start + bit;
- i < reserved_gdb; i++, block++, bit++) {
- struct buffer_head *gdb;
-
- ext3_debug("clear reserved block %#04lx (+%d)\n", block, bit);
-
- if (IS_ERR(gdb = bclean(handle, sb, block))) {
-			err = PTR_ERR(gdb);
- goto exit_bh;
- }
- ext3_journal_dirty_metadata(handle, gdb);
- ext3_set_bit(bit, bh->b_data);
- brelse(gdb);
- }
- ext3_debug("mark block bitmap %#04x (+%ld)\n", input->block_bitmap,
- input->block_bitmap - start);
- ext3_set_bit(input->block_bitmap - start, bh->b_data);
- ext3_debug("mark inode bitmap %#04x (+%ld)\n", input->inode_bitmap,
- input->inode_bitmap - start);
- ext3_set_bit(input->inode_bitmap - start, bh->b_data);
-
- /* Zero out all of the inode table blocks */
- for (i = 0, block = input->inode_table, bit = block - start;
- i < sbi->s_itb_per_group; i++, bit++, block++) {
- struct buffer_head *it;
-
- ext3_debug("clear inode block %#04x (+%ld)\n", block, bit);
- if (IS_ERR(it = bclean(handle, sb, block))) {
- err = PTR_ERR(it);
- goto exit_bh;
- }
- ext3_journal_dirty_metadata(handle, it);
- brelse(it);
- ext3_set_bit(bit, bh->b_data);
- }
- mark_bitmap_end(input->blocks_count, EXT3_BLOCKS_PER_GROUP(sb),
- bh->b_data);
- ext3_journal_dirty_metadata(handle, bh);
- brelse(bh);
-
- /* Mark unused entries in inode bitmap used */
- ext3_debug("clear inode bitmap %#04x (+%ld)\n",
- input->inode_bitmap, input->inode_bitmap - start);
- if (IS_ERR(bh = bclean(handle, sb, input->inode_bitmap))) {
- err = PTR_ERR(bh);
- goto exit_journal;
- }
-
- mark_bitmap_end(EXT3_INODES_PER_GROUP(sb), EXT3_BLOCKS_PER_GROUP(sb),
- bh->b_data);
- ext3_journal_dirty_metadata(handle, bh);
-exit_bh:
- brelse(bh);
-
-exit_journal:
- unlock_super(sb);
- if ((err2 = ext3_journal_stop(handle)) && !err)
- err = err2;
-
- return err;
-}
-
-/*
- * Iterate through the groups which hold BACKUP superblock/GDT copies in an
- * ext3 filesystem. The counters should be initialized to 1, 5, and 7 before
- * calling this for the first time. In a sparse filesystem it will be the
- * sequence of powers of 3, 5, and 7: 1, 3, 5, 7, 9, 25, 27, 49, 81, ...
- * For a non-sparse filesystem it will be every group: 1, 2, 3, 4, ...
- */
-unsigned ext3_list_backups(struct super_block *sb, unsigned *three,
- unsigned *five, unsigned *seven)
-{
- unsigned *min = three;
- int mult = 3;
- unsigned ret;
-
- if (!EXT3_HAS_RO_COMPAT_FEATURE(sb,
- EXT3_FEATURE_RO_COMPAT_SPARSE_SUPER)) {
- ret = *min;
- *min += 1;
- return ret;
- }
-
- if (*five < *min) {
- min = five;
- mult = 5;
- }
- if (*seven < *min) {
- min = seven;
- mult = 7;
- }
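-	/* Return the smallest of the three counters and advance that
-	 * sequence to its next power.
-	 */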
-
- ret = *min;
- *min *= mult;
-
- return ret;
-}
-
-/*
- * Check that all of the backup GDT blocks are held in the primary GDT block.
- * It is assumed that they are stored in group order. Returns the number of
- * groups in current filesystem that have BACKUPS, or -ve error code.
- */
-static int verify_reserved_gdb(struct super_block *sb,
- struct buffer_head *primary)
-{
- const unsigned long blk = primary->b_blocknr;
- const unsigned long end = EXT3_SB(sb)->s_groups_count;
- unsigned three = 1;
- unsigned five = 5;
- unsigned seven = 7;
- unsigned grp;
- __u32 *p = (__u32 *)primary->b_data;
- int gdbackups = 0;
-
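-	/* Slot g of a reserved GDT block holds the block number of its
-	 * backup in backup group g, i.e. grp * blocks_per_group + blk.
-	 * Walk the backup groups and check each slot in turn.
-	 */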
- while ((grp = ext3_list_backups(sb, &three, &five, &seven)) < end) {
- if (le32_to_cpu(*p++) != grp * EXT3_BLOCKS_PER_GROUP(sb) + blk){
- ext3_warning(sb, __FUNCTION__,
- "reserved GDT %ld missing grp %d (%ld)\n",
- blk, grp,
- grp * EXT3_BLOCKS_PER_GROUP(sb) + blk);
- return -EINVAL;
- }
- if (++gdbackups > EXT3_ADDR_PER_BLOCK(sb))
- return -EFBIG;
- }
-
- return gdbackups;
-}
-
-/*
- * Called when we need to bring a reserved group descriptor table block into
- * use from the resize inode. The primary copy of the new GDT block currently
- * is an indirect block (under the double indirect block in the resize inode).
- * The new backup GDT blocks will be stored as leaf blocks in this indirect
- * block, in group order. Even though we know all the block numbers we need,
- * we check to ensure that the resize inode has actually reserved these blocks.
- *
- * Don't need to update the block bitmaps because the blocks are still in use.
- *
- * We get all of the error cases out of the way, so that we are sure to not
- * fail once we start modifying the data on disk, because JBD has no rollback.
- */
-static int add_new_gdb(handle_t *handle, struct inode *inode,
- struct ext3_new_group_data *input,
- struct buffer_head **primary)
-{
- struct super_block *sb = inode->i_sb;
- struct ext3_super_block *es = EXT3_SB(sb)->s_es;
- unsigned long gdb_num = input->group / EXT3_DESC_PER_BLOCK(sb);
- unsigned long gdb_off = input->group % EXT3_DESC_PER_BLOCK(sb);
- unsigned long gdblock = EXT3_SB(sb)->s_sbh->b_blocknr + 1 + gdb_num;
- struct buffer_head **o_group_desc, **n_group_desc;
- struct buffer_head *dind;
- int gdbackups;
- struct ext3_iloc iloc;
- __u32 *data;
- int err;
-
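-	/* The primary GDT blocks immediately follow the superblock, so the
-	 * new group's GDT block number is known in advance; the checks
-	 * below only verify that the resize inode really reserved it.
-	 */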
- if (test_opt(sb, DEBUG))
- printk("EXT3-fs: ext3_add_new_gdb: adding group block %lu\n",
- gdb_num);
-
- /*
- * If we are not using the primary superblock/GDT copy don't resize,
- * because the user tools have no way of handling this. Probably a
- * bad time to do it anyway.
- */
- if (EXT3_SB(sb)->s_sbh->b_blocknr !=
- le32_to_cpu(EXT3_SB(sb)->s_es->s_first_data_block)) {
- ext3_warning(sb, __FUNCTION__,
- "won't resize using backup superblock at %lu\n",
- EXT3_SB(sb)->s_sbh->b_blocknr);
- return -EPERM;
- }
-
- *primary = sb_bread(sb, gdblock);
- if (!*primary)
- return -EIO;
-
- if ((gdbackups = verify_reserved_gdb(sb, *primary)) < 0) {
- err = gdbackups;
- goto exit_bh;
- }
-
- data = EXT3_I(inode)->i_data + EXT3_DIND_BLOCK;
- dind = sb_bread(sb, le32_to_cpu(*data));
- if (!dind) {
- err = -EIO;
- goto exit_bh;
- }
-
- data = (__u32 *)dind->b_data;
- if (le32_to_cpu(data[gdb_num % EXT3_ADDR_PER_BLOCK(sb)]) != gdblock) {
- ext3_warning(sb, __FUNCTION__,
- "new group %u GDT block %lu not reserved\n",
- input->group, gdblock);
- err = -EINVAL;
- goto exit_dind;
- }
-
- if ((err = ext3_journal_get_write_access(handle, EXT3_SB(sb)->s_sbh)))
- goto exit_dind;
-
- if ((err = ext3_journal_get_write_access(handle, *primary)))
- goto exit_sbh;
-
- if ((err = ext3_journal_get_write_access(handle, dind)))
- goto exit_primary;
-
- /* ext3_reserve_inode_write() gets a reference on the iloc */
- if ((err = ext3_reserve_inode_write(handle, inode, &iloc)))
- goto exit_dindj;
-
- n_group_desc = (struct buffer_head **)kmalloc((gdb_num + 1) *
- sizeof(struct buffer_head *), GFP_KERNEL);
- if (!n_group_desc) {
- err = -ENOMEM;
- ext3_warning (sb, __FUNCTION__,
- "not enough memory for %lu groups", gdb_num + 1);
- goto exit_inode;
- }
-
- /*
- * Finally, we have all of the possible failures behind us...
- *
- * Remove new GDT block from inode double-indirect block and clear out
- * the new GDT block for use (which also "frees" the backup GDT blocks
- * from the reserved inode). We don't need to change the bitmaps for
- * these blocks, because they are marked as in-use from being in the
- * reserved inode, and will become GDT blocks (primary and backup).
- */
- /*
- printk("removing block %d = %ld from dindir %ld[%ld]\n",
- ((__u32 *)(dind->b_data))[gdb_off], gdblock, dind->b_blocknr,
- gdb_num); */
- data[gdb_num % EXT3_ADDR_PER_BLOCK(sb)] = 0;
- ext3_journal_dirty_metadata(handle, dind);
- brelse(dind);
- inode->i_blocks -= (gdbackups + 1) * sb->s_blocksize >> 9;
- ext3_mark_iloc_dirty(handle, inode, &iloc);
- memset((*primary)->b_data, 0, sb->s_blocksize);
- ext3_journal_dirty_metadata(handle, *primary);
-
- o_group_desc = EXT3_SB(sb)->s_group_desc;
- memcpy(n_group_desc, o_group_desc,
- EXT3_SB(sb)->s_gdb_count * sizeof(struct buffer_head *));
- n_group_desc[gdb_num] = *primary;
- EXT3_SB(sb)->s_group_desc = n_group_desc;
- EXT3_SB(sb)->s_gdb_count++;
- kfree(o_group_desc);
-
- es->s_reserved_gdt_blocks =
- cpu_to_le16(le16_to_cpu(es->s_reserved_gdt_blocks) - 1);
- ext3_journal_dirty_metadata(handle, EXT3_SB(sb)->s_sbh);
-
- return 0;
-
-exit_inode:
- //ext3_journal_release_buffer(handle, iloc.bh);
- brelse(iloc.bh);
-exit_dindj:
- //ext3_journal_release_buffer(handle, dind);
-exit_primary:
- //ext3_journal_release_buffer(handle, *primary);
-exit_sbh:
- //ext3_journal_release_buffer(handle, *primary);
-exit_dind:
- brelse(dind);
-exit_bh:
- brelse(*primary);
-
- ext3_debug("leaving with error %d\n", err);
- return err;
-}
-
-/*
- * Called when we are adding a new group which has a backup copy of each of
- * the GDT blocks (i.e. sparse group) and there are reserved GDT blocks.
- * We need to add these reserved backup GDT blocks to the resize inode, so
- * that they are kept for future resizing and not allocated to files.
- *
- * Each reserved backup GDT block will go into a different indirect block.
- * The indirect blocks are actually the primary reserved GDT blocks,
- * so we know in advance what their block numbers are. We only get the
- * double-indirect block to verify it is pointing to the primary reserved
- * GDT blocks so we don't overwrite a data block by accident. The reserved
- * backup GDT blocks are stored in their reserved primary GDT block.
- */
-static int reserve_backup_gdb(handle_t *handle, struct inode *inode,
- struct ext3_new_group_data *input)
-{
- struct super_block *sb = inode->i_sb;
-	int reserved_gdb =
-		le16_to_cpu(EXT3_SB(sb)->s_es->s_reserved_gdt_blocks);
- struct buffer_head **primary;
- struct buffer_head *dind;
- struct ext3_iloc iloc;
- unsigned long blk;
- __u32 *data, *end;
- int gdbackups = 0;
- int res, i;
- int err;
-
- primary = kmalloc(reserved_gdb * sizeof(*primary), GFP_KERNEL);
- if (!primary)
- return -ENOMEM;
-
- data = EXT3_I(inode)->i_data + EXT3_DIND_BLOCK;
- dind = sb_bread(sb, le32_to_cpu(*data));
- if (!dind) {
- err = -EIO;
- goto exit_free;
- }
-
- blk = EXT3_SB(sb)->s_sbh->b_blocknr + 1 + EXT3_SB(sb)->s_gdb_count;
- data = (__u32 *)dind->b_data + EXT3_SB(sb)->s_gdb_count;
- end = (__u32 *)dind->b_data + EXT3_ADDR_PER_BLOCK(sb);
-
- /* Get each reserved primary GDT block and verify it holds backups */
- for (res = 0; res < reserved_gdb; res++, blk++) {
- if (le32_to_cpu(*data) != blk) {
- ext3_warning(sb, __FUNCTION__,
- "reserved block %lu not at offset %ld\n",
- blk, (long)(data - (__u32 *)dind->b_data));
- err = -EINVAL;
- goto exit_bh;
- }
- primary[res] = sb_bread(sb, blk);
- if (!primary[res]) {
- err = -EIO;
- goto exit_bh;
- }
- if ((gdbackups = verify_reserved_gdb(sb, primary[res])) < 0) {
- brelse(primary[res]);
- err = gdbackups;
- goto exit_bh;
- }
- if (++data >= end)
- data = (__u32 *)dind->b_data;
- }
-
- for (i = 0; i < reserved_gdb; i++) {
- if ((err = ext3_journal_get_write_access(handle, primary[i]))) {
- /*
- int j;
- for (j = 0; j < i; j++)
- ext3_journal_release_buffer(handle, primary[j]);
- */
- goto exit_bh;
- }
- }
-
- if ((err = ext3_reserve_inode_write(handle, inode, &iloc)))
- goto exit_bh;
-
- /*
- * Finally we can add each of the reserved backup GDT blocks from
- * the new group to its reserved primary GDT block.
- */
- blk = input->group * EXT3_BLOCKS_PER_GROUP(sb);
- for (i = 0; i < reserved_gdb; i++) {
- int err2;
- data = (__u32 *)primary[i]->b_data;
- /* printk("reserving backup %lu[%u] = %lu\n",
- primary[i]->b_blocknr, gdbackups,
- blk + primary[i]->b_blocknr); */
- data[gdbackups] = cpu_to_le32(blk + primary[i]->b_blocknr);
- err2 = ext3_journal_dirty_metadata(handle, primary[i]);
- if (!err)
- err = err2;
- }
- inode->i_blocks += reserved_gdb * sb->s_blocksize >> 9;
- ext3_mark_iloc_dirty(handle, inode, &iloc);
-
-exit_bh:
- while (--res >= 0)
- brelse(primary[res]);
- brelse(dind);
-
-exit_free:
- kfree(primary);
-
- return err;
-}
-
-/*
- * Update the backup copies of the ext3 metadata. These don't need to be part
- * of the main resize transaction, because e2fsck will re-write them if there
- * is a problem (basically only OOM will cause a problem). However, we
- * _should_ update the backups if possible, in case the primary gets trashed
- * for some reason and we need to run e2fsck from a backup superblock. The
- * important part is that the new block and inode counts are in the backup
- * superblocks, and the location of the new group metadata in the GDT backups.
- *
- * We do not need lock_super() for this, because these blocks are not
- * otherwise touched by the filesystem code when it is mounted. We don't
- * need to worry about last changing from sbi->s_groups_count, because the
- * worst that can happen is that we do not copy the full number of backups
- * at this time. The resize which changed s_groups_count will backup again.
- *
- * We only pass inode because of the ext3 journal wrappers.
- */
-static void update_backups(struct super_block *sb, struct inode *inode,
- int blk_off, char *data, int size)
-{
- struct ext3_sb_info *sbi = EXT3_SB(sb);
- const unsigned long last = sbi->s_groups_count;
- const int bpg = EXT3_BLOCKS_PER_GROUP(sb);
- unsigned three = 1;
- unsigned five = 5;
- unsigned seven = 7;
- unsigned group;
- int rest = sb->s_blocksize - size;
- handle_t *handle;
- int err = 0, err2;
-
- handle = ext3_journal_start(inode, EXT3_MAX_TRANS_DATA);
- if (IS_ERR(handle)) {
- group = 1;
- err = PTR_ERR(handle);
- goto exit_err;
- }
-
- while ((group = ext3_list_backups(sb, &three, &five, &seven)) < last) {
- struct buffer_head *bh;
-
- /* Out of journal space, and can't get more - abort - so sad */
- if (handle->h_buffer_credits == 0 &&
- ext3_journal_extend(handle, EXT3_MAX_TRANS_DATA) &&
- (err = ext3_journal_restart(handle, EXT3_MAX_TRANS_DATA)))
- break;
-
- bh = sb_getblk(sb, group * bpg + blk_off);
- set_buffer_uptodate(bh);
-		ext3_debug("update metadata backup %#04lx\n",
-			   (unsigned long)bh->b_blocknr);
- if ((err = ext3_journal_get_write_access(handle, bh)))
- break;
- memcpy(bh->b_data, data, size);
- if (rest)
- memset(bh->b_data + size, 0, rest);
- ext3_journal_dirty_metadata(handle, bh);
- brelse(bh);
- }
- if ((err2 = ext3_journal_stop(handle)) && !err)
- err = err2;
-
- /*
- * Ugh! Need to have e2fsck write the backup copies. It is too
- * late to revert the resize, we shouldn't fail just because of
- * the backup copies (they are only needed in case of corruption).
- *
- * However, if we got here we have a journal problem too, so we
- * can't really start a transaction to mark the superblock.
- * Chicken out and just set the flag on the hope it will be written
- * to disk, and if not - we will simply wait until next fsck.
- */
-exit_err:
- if (err) {
- ext3_warning(sb, __FUNCTION__,
- "can't update backup for group %d (err %d), "
- "forcing fsck on next reboot\n", group, err);
- sbi->s_mount_state &= ~EXT3_VALID_FS;
- sbi->s_es->s_state &= ~cpu_to_le16(EXT3_VALID_FS);
- mark_buffer_dirty(sbi->s_sbh);
- }
-}
-
-/* Add group descriptor data to an existing or new group descriptor block.
- * Ensure we handle all possible error conditions _before_ we start modifying
- * the filesystem, because we cannot abort the transaction and not have it
- * write the data to disk.
- *
- * If we are on a GDT block boundary, we need to get the reserved GDT block.
- * Otherwise, we may need to add backup GDT blocks for a sparse group.
- *
- * We only need to hold the superblock lock while we are actually adding
- * in the new group's counts to the superblock. Prior to that we have
- * not really "added" the group at all. We re-check that we are still
- * adding in the last group in case things have changed since verifying.
- */
-int ext3_group_add(struct super_block *sb, struct ext3_new_group_data *input)
-{
- struct ext3_sb_info *sbi = EXT3_SB(sb);
- struct ext3_super_block *es = sbi->s_es;
- int reserved_gdb = ext3_bg_has_super(sb, input->group) ?
- le16_to_cpu(es->s_reserved_gdt_blocks) : 0;
- struct buffer_head *primary = NULL;
- struct ext3_group_desc *gdp;
- struct inode *inode = NULL;
- struct inode bogus;
- handle_t *handle;
- int gdb_off, gdb_num;
- int err, err2;
-
- gdb_num = input->group / EXT3_DESC_PER_BLOCK(sb);
- gdb_off = input->group % EXT3_DESC_PER_BLOCK(sb);
-
- if (gdb_off == 0 && !EXT3_HAS_RO_COMPAT_FEATURE(sb,
- EXT3_FEATURE_RO_COMPAT_SPARSE_SUPER)) {
- ext3_warning(sb, __FUNCTION__,
- "Can't resize non-sparse filesystem further\n");
- return -EPERM;
- }
-
- if (reserved_gdb || gdb_off == 0) {
- if (!EXT3_HAS_COMPAT_FEATURE(sb,
- EXT3_FEATURE_COMPAT_RESIZE_INODE)){
- ext3_warning(sb, __FUNCTION__,
- "No reserved GDT blocks, can't resize\n");
- return -EPERM;
- }
- inode = iget(sb, EXT3_RESIZE_INO);
- if (!inode || is_bad_inode(inode)) {
- ext3_warning(sb, __FUNCTION__,
- "Error opening resize inode\n");
- iput(inode);
- return -ENOENT;
- }
- } else {
- /* Used only for ext3 journal wrapper functions to get sb */
- inode = &bogus;
- bogus.i_sb = sb;
- }
-
- if ((err = verify_group_input(sb, input)))
- goto exit_put;
-
- if ((err = setup_new_group_blocks(sb, inode, input)))
- goto exit_put;
-
- /*
- * We will always be modifying at least the superblock and a GDT
- * block. If we are adding a group past the last current GDT block,
- * we will also modify the inode and the dindirect block. If we
- * are adding a group with superblock/GDT backups we will also
- * modify each of the reserved GDT dindirect blocks.
- */
- handle = ext3_journal_start(inode, ext3_bg_has_super(sb, input->group) ?
- 3 + reserved_gdb : 4);
- if (IS_ERR(handle)) {
- err = PTR_ERR(handle);
- goto exit_put;
- }
-
- lock_super(sb);
- if (input->group != EXT3_SB(sb)->s_groups_count) {
- ext3_warning(sb, __FUNCTION__,
- "multiple resizers run on filesystem!\n");
-		err = -EBUSY;
-		goto exit_journal;
- }
-
- if ((err = ext3_journal_get_write_access(handle, sbi->s_sbh)))
- goto exit_journal;
-
- /*
- * We will only either add reserved group blocks to a backup group
- * or remove reserved blocks for the first group in a new group block.
- * Doing both would mean more complex code, and sane people don't
- * use non-sparse filesystems anymore. This is already checked above.
- */
- if (gdb_off) {
- primary = sbi->s_group_desc[gdb_num];
- if ((err = ext3_journal_get_write_access(handle, primary)))
- goto exit_journal;
-
- if (reserved_gdb && ext3_bg_num_gdb(sb, input->group) &&
- (err = reserve_backup_gdb(handle, inode, input)))
- goto exit_journal;
- } else if ((err = add_new_gdb(handle, inode, input, &primary)))
- goto exit_journal;
-
- /* Finally update group descriptor block for new group */
- gdp = (struct ext3_group_desc *)primary->b_data + gdb_off;
-
- gdp->bg_block_bitmap = cpu_to_le32(input->block_bitmap);
- gdp->bg_inode_bitmap = cpu_to_le32(input->inode_bitmap);
- gdp->bg_inode_table = cpu_to_le32(input->inode_table);
- gdp->bg_free_blocks_count = cpu_to_le16(input->free_blocks_count);
- gdp->bg_free_inodes_count = cpu_to_le16(EXT3_INODES_PER_GROUP(sb));
-
- EXT3_SB(sb)->s_groups_count++;
- ext3_journal_dirty_metadata(handle, primary);
-
- /* Update superblock with new block counts */
- es->s_blocks_count = cpu_to_le32(le32_to_cpu(es->s_blocks_count) +
- input->blocks_count);
- es->s_free_blocks_count =
- cpu_to_le32(le32_to_cpu(es->s_free_blocks_count) +
- input->free_blocks_count);
- es->s_r_blocks_count = cpu_to_le32(le32_to_cpu(es->s_r_blocks_count) +
- input->reserved_blocks);
- es->s_inodes_count = cpu_to_le32(le32_to_cpu(es->s_inodes_count) +
- EXT3_INODES_PER_GROUP(sb));
- es->s_free_inodes_count =
- cpu_to_le32(le32_to_cpu(es->s_free_inodes_count) +
- EXT3_INODES_PER_GROUP(sb));
- ext3_journal_dirty_metadata(handle, EXT3_SB(sb)->s_sbh);
- sb->s_dirt = 1;
-
-exit_journal:
- unlock_super(sb);
- handle->h_sync = 1;
- if ((err2 = ext3_journal_stop(handle)) && !err)
- err = err2;
- if (!err) {
- update_backups(sb, inode, sbi->s_sbh->b_blocknr, (char *)es,
- sizeof(struct ext3_super_block));
- update_backups(sb, inode, primary->b_blocknr, primary->b_data,
- primary->b_size);
- }
-exit_put:
- if (inode != &bogus)
- iput(inode);
- return err;
-} /* ext3_group_add */
-
-/* Extend the filesystem to the new number of blocks specified. This entry
- * point is only used to extend the current filesystem to the end of the last
- * existing group. It can be accessed via ioctl, or by "remount,resize=<size>"
- * for emergencies (because it has no dependencies on reserved blocks).
- *
- * If we _really_ wanted, we could use default values to call ext3_group_add()
- * to allow the "remount" trick to work for arbitrary resizing, assuming
- * GDT blocks are reserved to grow to the desired size.
- */
-int ext3_group_extend(struct super_block *sb, struct ext3_super_block *es,
- unsigned long n_blocks_count)
-{
- unsigned long o_blocks_count;
- unsigned long o_groups_count;
- unsigned long last;
- int add;
- struct inode *inode;
- struct buffer_head * bh;
- handle_t *handle;
- int err;
-
- o_blocks_count = le32_to_cpu(es->s_blocks_count);
- o_groups_count = EXT3_SB(sb)->s_groups_count;
-
- if (test_opt(sb, DEBUG))
- printk("EXT3-fs: extending last group from %lu to %lu blocks\n",
- o_blocks_count, n_blocks_count);
-
- if (n_blocks_count == 0 || n_blocks_count == o_blocks_count)
- return 0;
-
- if (n_blocks_count < o_blocks_count) {
- ext3_warning(sb, __FUNCTION__,
- "can't shrink FS - resize aborted");
- return -EBUSY;
- }
-
- /* Handle the remaining blocks in the last group only. */
- last = (o_blocks_count - le32_to_cpu(es->s_first_data_block)) %
- EXT3_BLOCKS_PER_GROUP(sb);
-
- if (last == 0) {
- ext3_warning(sb, __FUNCTION__,
- "need to use ext2online to resize further\n");
- return -EPERM;
- }
-
- add = EXT3_BLOCKS_PER_GROUP(sb) - last;
-
- if (o_blocks_count + add > n_blocks_count)
- add = n_blocks_count - o_blocks_count;
-
- if (o_blocks_count + add < n_blocks_count)
- ext3_warning(sb, __FUNCTION__,
- "will only finish group (%lu blocks, %u new)",
- o_blocks_count + add, add);
-
- /* See if the device is actually as big as what was requested */
-	bh = sb_bread(sb, o_blocks_count + add - 1);
- if (!bh) {
- ext3_warning(sb, __FUNCTION__,
- "can't read last block, resize aborted");
- return -ENOSPC;
- }
- brelse(bh);
-
- /* Get a bogus inode to "free" the new blocks in this group. */
- if (!(inode = new_inode(sb))) {
- ext3_warning(sb, __FUNCTION__,
- "error getting dummy resize inode");
- return -ENOMEM;
- }
- inode->i_ino = 0;
-
- EXT3_I(inode)->i_state = EXT3_STATE_RESIZE;
-
- /* We will update the superblock, one block bitmap, and
- * one group descriptor via ext3_free_blocks().
- */
- handle = ext3_journal_start(inode, 3);
- if (IS_ERR(handle)) {
- err = PTR_ERR(handle);
- ext3_warning(sb, __FUNCTION__, "error %d on journal start",err);
- goto exit_put;
- }
-
- lock_super(sb);
- if (o_blocks_count != le32_to_cpu(es->s_blocks_count)) {
- ext3_warning(sb, __FUNCTION__,
- "multiple resizers run on filesystem!\n");
-		err = -EBUSY;
-		unlock_super(sb);
-		ext3_journal_stop(handle);
-		goto exit_put;
- }
-
- if ((err = ext3_journal_get_write_access(handle,
- EXT3_SB(sb)->s_sbh))) {
- ext3_warning(sb, __FUNCTION__,
- "error %d on journal write access", err);
- unlock_super(sb);
- ext3_journal_stop(handle);
- goto exit_put;
- }
- es->s_blocks_count = cpu_to_le32(o_blocks_count + add);
- ext3_journal_dirty_metadata(handle, EXT3_SB(sb)->s_sbh);
- sb->s_dirt = 1;
- unlock_super(sb);
- ext3_debug("freeing blocks %ld through %ld\n", o_blocks_count,
- o_blocks_count + add);
- ext3_free_blocks(handle, inode, o_blocks_count, add);
- ext3_debug("freed blocks %ld through %ld\n", o_blocks_count,
- o_blocks_count + add);
- if ((err = ext3_journal_stop(handle)))
- goto exit_put;
- if (test_opt(sb, DEBUG))
- printk("EXT3-fs: extended group to %u blocks\n",
- le32_to_cpu(es->s_blocks_count));
- update_backups(sb, inode, EXT3_SB(sb)->s_sbh->b_blocknr, (char *)es,
- sizeof(struct ext3_super_block));
-exit_put:
- iput(inode);
-
- return err;
-} /* ext3_group_extend */
--- /dev/null
+/*
+ * Copyright (C) 2000 - 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include <linux/stddef.h>
+#include <linux/fs.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include <linux/blkdev.h>
+#include <linux/statfs.h>
+#include <asm/uaccess.h>
+#include "hostfs.h"
+#include "kern_util.h"
+#include "kern.h"
+#include "user_util.h"
+#include "2_5compat.h"
+#include "mem.h"
+#include "filehandle.h"
+
+struct externfs {
+ struct list_head list;
+ struct externfs_mount_ops *mount_ops;
+ struct file_system_type type;
+};
+
+static inline struct externfs_inode *EXTERNFS_I(struct inode *inode)
+{
+ return(container_of(inode, struct externfs_inode, vfs_inode));
+}
+
+#define file_externfs_i(file) EXTERNFS_I((file)->f_dentry->d_inode)
+
+int externfs_d_delete(struct dentry *dentry)
+{
+ return(1);
+}
+
+struct dentry_operations externfs_dentry_ops = {
+};
+
+#define EXTERNFS_SUPER_MAGIC 0x00c0ffee
+
+static struct inode_operations externfs_iops;
+static struct inode_operations externfs_dir_iops;
+static struct address_space_operations externfs_link_aops;
+
+static char *dentry_name(struct dentry *dentry, int extra)
+{
+ struct dentry *parent;
+ char *name;
+ int len;
+
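+	/* Two passes: walk up to the root once to size the buffer, then
+	 * fill the path in backwards from the leaf.
+	 */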
+ len = 0;
+ parent = dentry;
+ while(parent->d_parent != parent){
+ len += parent->d_name.len + 1;
+ parent = parent->d_parent;
+ }
+
+ name = kmalloc(len + extra + 1, GFP_KERNEL);
+ if(name == NULL) return(NULL);
+
+ name[len] = '\0';
+ parent = dentry;
+ while(parent->d_parent != parent){
+ len -= parent->d_name.len + 1;
+ name[len] = '/';
+ strncpy(&name[len + 1], parent->d_name.name,
+ parent->d_name.len);
+ parent = parent->d_parent;
+ }
+
+ return(name);
+}
+
+char *inode_name(struct inode *ino, int extra)
+{
+ struct dentry *dentry;
+
+ dentry = list_entry(ino->i_dentry.next, struct dentry, d_alias);
+ return(dentry_name(dentry, extra));
+}
+
+char *inode_name_prefix(struct inode *inode, char *prefix)
+{
+ int len;
+ char *name;
+
+ len = strlen(prefix);
+ name = inode_name(inode, len);
+ if(name == NULL)
+ return(name);
+
+ memmove(&name[len], name, strlen(name) + 1);
+ memcpy(name, prefix, strlen(prefix));
+ return(name);
+}
+
+static int read_name(struct inode *ino, char *name)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ /* The non-int inode fields are copied into ints by stat_file and
+ * then copied into the inode because passing the actual pointers
+ * in and having them treated as int * breaks on big-endian machines
+ */
+ dev_t i_rdev;
+ int err;
+ int i_mode, i_nlink, i_blksize;
+ unsigned long atime, mtime, ctime;
+ unsigned long long i_size;
+ unsigned long long i_ino;
+ unsigned long long i_blocks;
+
+ err = (*ops->stat_file)(name, ino->i_sb->s_fs_info, &i_rdev, &i_ino,
+ &i_mode, &i_nlink, &ino->i_uid, &ino->i_gid,
+ &i_size, &atime, &mtime, &ctime, &i_blksize,
+ &i_blocks);
+ if(err) return(err);
+
+ ino->i_atime.tv_sec = atime;
+ ino->i_atime.tv_nsec = 0;
+
+ ino->i_ctime.tv_sec = ctime;
+ ino->i_ctime.tv_nsec = 0;
+
+ ino->i_mtime.tv_sec = mtime;
+ ino->i_mtime.tv_nsec = 0;
+
+ ino->i_ino = i_ino;
+ ino->i_rdev = i_rdev;
+ ino->i_mode = i_mode;
+ ino->i_nlink = i_nlink;
+ ino->i_size = i_size;
+ ino->i_blksize = i_blksize;
+ ino->i_blocks = i_blocks;
+ return(0);
+}
+
+static char *follow_link(char *link,
+ int (*do_read_link)(char *path, int uid, int gid,
+ char *buf, int size,
+ struct externfs_data *ed),
+ int uid, int gid, struct externfs_data *ed)
+{
+ int len, n;
+ char *name, *resolved, *end;
+
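+	/* Read the link target into a buffer, doubling its size until the
+	 * whole target fits.
+	 */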
+ len = 64;
+ while(1){
+ n = -ENOMEM;
+ name = kmalloc(len, GFP_KERNEL);
+ if(name == NULL)
+ goto out;
+
+ n = (*do_read_link)(link, uid, gid, name, len, ed);
+ if(n < len)
+ break;
+ len *= 2;
+ kfree(name);
+ }
+ if(n < 0)
+ goto out_free;
+
+ if(*name == '/')
+ return(name);
+
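+	/* A relative target is interpreted against the directory holding
+	 * the link, so splice it onto the link's own path up to the last
+	 * '/'.
+	 */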
+ end = strrchr(link, '/');
+ if(end == NULL)
+ return(name);
+
+ *(end + 1) = '\0';
+ len = strlen(link) + strlen(name) + 1;
+
+ resolved = kmalloc(len, GFP_KERNEL);
+ if(resolved == NULL){
+ n = -ENOMEM;
+ goto out_free;
+ }
+
+ sprintf(resolved, "%s%s", link, name);
+ kfree(name);
+ return(resolved);
+
+ out_free:
+ kfree(name);
+ out:
+ return(ERR_PTR(n));
+}
+
+static int read_inode(struct inode *ino)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *name, *new;
+ int err = 0, type;
+
+ /* Unfortunately, we are called from iget() when we don't have a dentry
+ * allocated yet.
+ */
+ if(list_empty(&ino->i_dentry))
+ goto out;
+
+ err = -ENOMEM;
+ name = inode_name(ino, 0);
+ if(name == NULL)
+ goto out;
+
+ type = (*ops->file_type)(name, NULL, ed);
+ if(type < 0){
+ err = type;
+ goto out_free;
+ }
+
+ if(type == OS_TYPE_SYMLINK){
+ new = follow_link(name, ops->read_link, current->fsuid,
+ current->fsgid, ed);
+ if(IS_ERR(new)){
+ err = PTR_ERR(new);
+ goto out_free;
+ }
+ kfree(name);
+ name = new;
+ }
+
+ err = read_name(ino, name);
+ out_free:
+ kfree(name);
+ out:
+ return(err);
+}
+
+int externfs_statfs(struct super_block *sb, struct kstatfs *sf)
+{
+ /* do_statfs uses struct statfs64 internally, but the linux kernel
+ * struct statfs still has 32-bit versions for most of these fields,
+ * so we convert them here
+ */
+ int err;
+ long long f_blocks;
+ long long f_bfree;
+ long long f_bavail;
+ long long f_files;
+ long long f_ffree;
+ struct externfs_data *ed = sb->s_fs_info;
+
+ err = (*ed->file_ops->statfs)(&sf->f_bsize, &f_blocks, &f_bfree,
+ &f_bavail, &f_files, &f_ffree,
+ &sf->f_fsid, sizeof(sf->f_fsid),
+ &sf->f_namelen, sf->f_spare, ed);
+ if(err)
+ return(err);
+
+ sf->f_blocks = f_blocks;
+ sf->f_bfree = f_bfree;
+ sf->f_bavail = f_bavail;
+ sf->f_files = f_files;
+ sf->f_ffree = f_ffree;
+ sf->f_type = EXTERNFS_SUPER_MAGIC;
+ return(0);
+}
+
+static struct inode *externfs_alloc_inode(struct super_block *sb)
+{
+ struct externfs_data *ed = sb->s_fs_info;
+ struct externfs_inode *ext;
+
+ ext = (*ed->mount_ops->init_file)(ed);
+ if(ext == NULL)
+ return(NULL);
+
+ *ext = ((struct externfs_inode) { .ops = ed->file_ops });
+
+ inode_init_once(&ext->vfs_inode);
+ return(&ext->vfs_inode);
+}
+
+static void externfs_destroy_inode(struct inode *inode)
+{
+ struct externfs_inode *ext = EXTERNFS_I(inode);
+
+ (*ext->ops->close_file)(ext, inode->i_size);
+}
+
+static void externfs_read_inode(struct inode *inode)
+{
+ read_inode(inode);
+}
+
+static struct super_operations externfs_sbops = {
+ .alloc_inode = externfs_alloc_inode,
+ .destroy_inode = externfs_destroy_inode,
+ .read_inode = externfs_read_inode,
+ .statfs = externfs_statfs,
+};
+
+int externfs_readdir(struct file *file, void *ent, filldir_t filldir)
+{
+ void *dir;
+ char *name;
+ unsigned long long next, ino;
+ int error, len;
+ struct externfs_file_ops *ops = file_externfs_i(file)->ops;
+ struct externfs_data *ed = file->f_dentry->d_inode->i_sb->s_fs_info;
+
+ name = dentry_name(file->f_dentry, 0);
+ if(name == NULL)
+ return(-ENOMEM);
+
+ dir = (*ops->open_dir)(name, current->fsuid, current->fsgid, ed);
+ kfree(name);
+ if(IS_ERR(dir))
+ return(PTR_ERR(dir));
+
+ next = file->f_pos;
+ while((name = (*ops->read_dir)(dir, &next, &ino, &len, ed)) != NULL){
+ error = (*filldir)(ent, name, len, file->f_pos, ino,
+ DT_UNKNOWN);
+ if(error)
+ break;
+ file->f_pos = next;
+ }
+ (*ops->close_dir)(dir, ed);
+ return(0);
+}
+
+int externfs_file_open(struct inode *ino, struct file *file)
+{
+ ino->i_nlink++;
+ return(0);
+}
+
+int externfs_fsync(struct file *file, struct dentry *dentry, int datasync)
+{
+ struct externfs_file_ops *ops = file_externfs_i(file)->ops;
+ struct inode *inode = dentry->d_inode;
+ struct externfs_data *ed = inode->i_sb->s_fs_info;
+
+ return((*ops->truncate_file)(EXTERNFS_I(inode), inode->i_size, ed));
+}
+
+static struct file_operations externfs_file_fops = {
+ .llseek = generic_file_llseek,
+ .read = generic_file_read,
+ .write = generic_file_write,
+ .mmap = generic_file_mmap,
+ .open = externfs_file_open,
+ .release = NULL,
+ .fsync = externfs_fsync,
+};
+
+static struct file_operations externfs_dir_fops = {
+ .readdir = externfs_readdir,
+ .read = generic_read_dir,
+};
+
+struct wp_info {
+ struct page *page;
+ int count;
+ unsigned long long start;
+ unsigned long long size;
+ int (*truncate)(struct externfs_inode *ext, __u64 size,
+ struct externfs_data *ed);
+ struct externfs_inode *ei;
+ struct externfs_data *ed;
+};
+
+static void externfs_finish_writepage(char *buffer, int res, void *arg)
+{
+ struct wp_info *wp = arg;
+
+ if(res == wp->count){
+ ClearPageError(wp->page);
+ if(wp->start + res > wp->size)
+ (*wp->truncate)(wp->ei, wp->size, wp->ed);
+ }
+ else {
+ SetPageError(wp->page);
+ ClearPageUptodate(wp->page);
+ }
+
+ kunmap(wp->page);
+ unlock_page(wp->page);
+ kfree(wp);
+}
+
+int externfs_writepage(struct page *page, struct writeback_control *wbc)
+{
+ struct address_space *mapping = page->mapping;
+ struct inode *inode = mapping->host;
+ struct externfs_file_ops *ops = EXTERNFS_I(inode)->ops;
+ struct wp_info *wp;
+ struct externfs_data *ed = inode->i_sb->s_fs_info;
+ char *buffer;
+ unsigned long long base;
+ int count = PAGE_CACHE_SIZE;
+ int end_index = inode->i_size >> PAGE_CACHE_SHIFT;
+ int err, offset;
+
+ base = ((unsigned long long) page->index) << PAGE_CACHE_SHIFT;
+
+ /* If we are entirely outside the file, then return an error */
+ err = -EIO;
+ offset = inode->i_size & (PAGE_CACHE_SIZE-1);
+ if (page->index > end_index ||
+ ((page->index == end_index) && !offset))
+ goto out_unlock;
+
+ err = -ENOMEM;
+ wp = kmalloc(sizeof(*wp), GFP_KERNEL);
+ if(wp == NULL)
+ goto out_unlock;
+
+ *wp = ((struct wp_info) { .page = page,
+ .count = count,
+ .start = base,
+ .size = inode->i_size,
+ .truncate = ops->truncate_file,
+ .ei = EXTERNFS_I(inode),
+ .ed = ed });
+
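+	/* The write is asynchronous - externfs_finish_writepage() unmaps
+	 * and unlocks the page when the host I/O completes.
+	 */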
+ buffer = kmap(page);
+ err = (*ops->write_file)(EXTERNFS_I(inode), base, buffer, 0,
+ count, externfs_finish_writepage, wp, ed);
+
+	return(err);
+
+ out_unlock:
+ unlock_page(page);
+ return(err);
+}
+
+static void externfs_finish_readpage(char *buffer, int res, void *arg)
+{
+ struct page *page = arg;
+ struct inode *inode;
+
+ if(res < 0){
+ SetPageError(page);
+ goto out;
+ }
+
+ inode = page->mapping->host;
+ if(inode->i_size >> PAGE_CACHE_SHIFT == page->index)
+ res = inode->i_size % PAGE_CACHE_SIZE;
+
+ memset(&buffer[res], 0, PAGE_CACHE_SIZE - res);
+
+ flush_dcache_page(page);
+ SetPageUptodate(page);
+ if (PageError(page))
+ ClearPageError(page);
+ out:
+ kunmap(page);
+ unlock_page(page);
+}
+
+static int externfs_readpage(struct file *file, struct page *page)
+{
+ struct inode *ino = page->mapping->host;
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *buffer;
+ long long start;
+ int err = 0;
+
+ start = (long long) page->index << PAGE_CACHE_SHIFT;
+ buffer = kmap(page);
+
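+	/* A filesystem that can map host pages fills the page
+	 * synchronously; otherwise fall back to an asynchronous read with
+	 * externfs_finish_readpage() as the completion handler.
+	 */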
+ if(ops->map_file_page != NULL){
+ /* XXX What happens when PAGE_SIZE != PAGE_CACHE_SIZE? */
+ err = (*ops->map_file_page)(file_externfs_i(file), start,
+ buffer, file->f_mode & FMODE_WRITE,
+ ed);
+ if(!err)
+ err = PAGE_CACHE_SIZE;
+ }
+ else err = (*ops->read_file)(file_externfs_i(file), start, buffer,
+ PAGE_CACHE_SIZE, 0, 0,
+ externfs_finish_readpage, page, ed);
+
+ if(err > 0)
+ err = 0;
+ return(err);
+}
+
+struct writepage_info {
+ struct semaphore sem;
+ int res;
+};
+
+static void externfs_finish_prepare(char *buffer, int res, void *arg)
+{
+ struct writepage_info *wp = arg;
+
+ wp->res = res;
+ up(&wp->sem);
+}
+
+int externfs_prepare_write(struct file *file, struct page *page,
+ unsigned int from, unsigned int to)
+{
+ struct address_space *mapping = page->mapping;
+ struct inode *inode = mapping->host;
+ struct externfs_file_ops *ops = EXTERNFS_I(inode)->ops;
+ struct externfs_data *ed = inode->i_sb->s_fs_info;
+ char *buffer;
+ long long start;
+ int err;
+ struct writepage_info wp;
+
+ if(PageUptodate(page))
+ return(0);
+
+ start = (long long) page->index << PAGE_CACHE_SHIFT;
+ buffer = kmap(page);
+
+ if(ops->map_file_page != NULL){
+ err = (*ops->map_file_page)(file_externfs_i(file), start,
+ buffer, file->f_mode & FMODE_WRITE,
+ ed);
+ goto out;
+
+ }
+
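+	/* prepare_write must return with the page current, so wait on a
+	 * semaphore until the asynchronous read has completed.
+	 */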
+ init_MUTEX_LOCKED(&wp.sem);
+ err = (*ops->read_file)(file_externfs_i(file), start, buffer,
+ PAGE_CACHE_SIZE, from, to,
+ externfs_finish_prepare, &wp, ed);
+ down(&wp.sem);
+ if(err < 0)
+ goto out;
+
+ err = wp.res;
+ if(err < 0)
+ goto out;
+
+ if(from > 0)
+ memset(buffer, 0, from);
+ if(to < PAGE_CACHE_SIZE)
+ memset(buffer + to, 0, PAGE_CACHE_SIZE - to);
+
+ SetPageUptodate(page);
+ err = 0;
+ out:
+ kunmap(page);
+ return(err);
+}
+
+static int externfs_commit_write(struct file *file, struct page *page,
+ unsigned from, unsigned to)
+{
+ struct address_space *mapping = page->mapping;
+ struct inode *inode = mapping->host;
+ struct externfs_file_ops *ops = EXTERNFS_I(inode)->ops;
+ unsigned long long size;
+ long long start;
+ int err;
+
+	start = ((long long) page->index) << PAGE_CACHE_SHIFT;
+
+ if(ops->map_file_page != NULL)
+ err = to - from;
+ else {
+ size = start + to;
+ if(size > inode->i_size){
+ inode->i_size = size;
+ mark_inode_dirty(inode);
+ }
+ }
+
+ set_page_dirty(page);
+ return(to - from);
+}
+
+static int externfs_removepage(struct page *page, int gfpmask)
+{
+ physmem_remove_mapping(page_address(page));
+ return(0);
+}
+
+static struct address_space_operations externfs_aops = {
+ .writepage = externfs_writepage,
+ .readpage = externfs_readpage,
+ .releasepage = externfs_removepage,
+/* .set_page_dirty = __set_page_dirty_nobuffers, */
+ .prepare_write = externfs_prepare_write,
+ .commit_write = externfs_commit_write
+};
+
+static int init_inode(struct inode *inode, struct dentry *dentry)
+{
+ char *name = NULL;
+ int type, err = -ENOMEM, rdev;
+ struct externfs_inode *ext = EXTERNFS_I(inode);
+ struct externfs_file_ops *ops = ext->ops;
+ struct externfs_data *ed = inode->i_sb->s_fs_info;
+
+ if(dentry){
+ name = dentry_name(dentry, 0);
+ if(name == NULL)
+ goto out;
+ type = (*ops->file_type)(name, &rdev, ed);
+ }
+ else type = OS_TYPE_DIR;
+
+ err = 0;
+ if(type == OS_TYPE_SYMLINK)
+ inode->i_op = &page_symlink_inode_operations;
+ else if(type == OS_TYPE_DIR)
+ inode->i_op = &externfs_dir_iops;
+ else inode->i_op = &externfs_iops;
+
+ if(type == OS_TYPE_DIR) inode->i_fop = &externfs_dir_fops;
+ else inode->i_fop = &externfs_file_fops;
+
+ if(type == OS_TYPE_SYMLINK)
+ inode->i_mapping->a_ops = &externfs_link_aops;
+ else inode->i_mapping->a_ops = &externfs_aops;
+
+ switch (type) {
+ case OS_TYPE_CHARDEV:
+ init_special_inode(inode, S_IFCHR, rdev);
+ break;
+ case OS_TYPE_BLOCKDEV:
+ init_special_inode(inode, S_IFBLK, rdev);
+ break;
+ case OS_TYPE_FIFO:
+ init_special_inode(inode, S_IFIFO, 0);
+ break;
+ case OS_TYPE_SOCK:
+ init_special_inode(inode, S_IFSOCK, 0);
+ break;
+ case OS_TYPE_SYMLINK:
+ inode->i_mode = S_IFLNK | S_IRWXUGO;
+ }
+
+ err = (*ops->open_file)(ext, name, current->fsuid, current->fsgid,
+ inode, ed);
+ if((err != -EISDIR) && (err != -ENOENT) && (err != -ENXIO))
+ goto out_put;
+
+ err = 0;
+
+ out_free:
+ kfree(name);
+ out:
+ return(err);
+
+ out_put:
+ iput(inode);
+ goto out_free;
+}
+
+int externfs_create(struct inode *dir, struct dentry *dentry, int mode,
+ struct nameidata *nd)
+{
+ struct externfs_inode *ext = EXTERNFS_I(dir);
+ struct externfs_file_ops *ops = ext->ops;
+ struct inode *inode;
+ struct externfs_data *ed = dir->i_sb->s_fs_info;
+ char *name;
+ int err = -ENOMEM;
+
+ inode = iget(dir->i_sb, 0);
+ if(inode == NULL)
+ goto out;
+
+ err = init_inode(inode, dentry);
+ if(err)
+ goto out_put;
+
+ err = -ENOMEM;
+ name = dentry_name(dentry, 0);
+ if(name == NULL)
+ goto out_put;
+
+ err = (*ops->create_file)(ext, name, mode, current->fsuid,
+				  current->fsgid, inode, ed);
+ if(err)
+ goto out_free;
+
+ err = read_name(inode, name);
+ if(err)
+ goto out_rm;
+
+ inode->i_nlink++;
+ d_instantiate(dentry, inode);
+ kfree(name);
+ out:
+ return(err);
+
+ out_rm:
+ (*ops->unlink_file)(name, ed);
+ out_free:
+ kfree(name);
+ out_put:
+ inode->i_nlink = 0;
+ iput(inode);
+ goto out;
+}
+
+struct dentry *externfs_lookup(struct inode *ino, struct dentry *dentry,
+ struct nameidata *nd)
+{
+ struct inode *inode;
+ char *name;
+ int err = -ENOMEM;
+
+ inode = iget(ino->i_sb, 0);
+ if(inode == NULL)
+ goto out;
+
+ err = init_inode(inode, dentry);
+ if(err)
+ goto out_put;
+
+ err = -ENOMEM;
+ name = dentry_name(dentry, 0);
+ if(name == NULL)
+ goto out_put;
+
+ err = read_name(inode, name);
+ kfree(name);
+ if(err){
+ if(err != -ENOENT)
+ goto out_put;
+
+ inode->i_nlink = 0;
+ iput(inode);
+ inode = NULL;
+ }
+ d_add(dentry, inode);
+ dentry->d_op = &externfs_dentry_ops;
+ return(NULL);
+
+ out_put:
+ inode->i_nlink = 0;
+ iput(inode);
+ out:
+ return(ERR_PTR(err));
+}
+
+static char *inode_dentry_name(struct inode *ino, struct dentry *dentry)
+{
+ char *file;
+ int len;
+
+ file = inode_name(ino, dentry->d_name.len + 1);
+ if(file == NULL) return(NULL);
+ strcat(file, "/");
+ len = strlen(file);
+ strncat(file, dentry->d_name.name, dentry->d_name.len);
+ file[len + dentry->d_name.len] = '\0';
+ return(file);
+}
+
+int externfs_link(struct dentry *to, struct inode *ino, struct dentry *from)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *from_name, *to_name;
+ int err = -ENOMEM;
+
+ from_name = inode_dentry_name(ino, from);
+ if(from_name == NULL)
+ goto out;
+
+ to_name = dentry_name(to, 0);
+ if(to_name == NULL)
+ goto out_free_from;
+
+ err = (*ops->link_file)(to_name, from_name, current->fsuid,
+ current->fsgid, ed);
+ if(err)
+ goto out_free_to;
+
+ d_instantiate(from, to->d_inode);
+ to->d_inode->i_nlink++;
+ atomic_inc(&to->d_inode->i_count);
+
+ out_free_to:
+ kfree(to_name);
+ out_free_from:
+ kfree(from_name);
+ out:
+ return(err);
+}
+
+int externfs_unlink(struct inode *ino, struct dentry *dentry)
+{
+ struct inode *inode;
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *file;
+ int err;
+
+ file = inode_dentry_name(ino, dentry);
+ if(file == NULL)
+ return(-ENOMEM);
+
+ inode = dentry->d_inode;
+ if((inode->i_nlink == 1) && (ops->invisible != NULL))
+ (*ops->invisible)(EXTERNFS_I(inode));
+
+ err = (*ops->unlink_file)(file, ed);
+ kfree(file);
+
+ inode->i_nlink--;
+
+ return(err);
+}
+
+int externfs_symlink(struct inode *ino, struct dentry *dentry, const char *to)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct inode *inode;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *file;
+ int err;
+
+ file = inode_dentry_name(ino, dentry);
+ if(file == NULL)
+ return(-ENOMEM);
+ err = (*ops->make_symlink)(file, to, current->fsuid, current->fsgid,
+ ed);
+ kfree(file);
+ if(err)
+ goto out;
+
+ err = -ENOMEM;
+ inode = iget(ino->i_sb, 0);
+ if(inode == NULL)
+ goto out;
+
+ err = init_inode(inode, dentry);
+ if(err)
+ goto out_put;
+
+ d_instantiate(dentry, inode);
+ inode->i_nlink++;
+ out:
+ return(err);
+
+ out_put:
+ iput(inode);
+ goto out;
+}
+
+int externfs_make_dir(struct inode *ino, struct dentry *dentry, int mode)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct inode *inode;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *file;
+ int err = -ENOMEM;
+
+ file = inode_dentry_name(ino, dentry);
+ if(file == NULL)
+ goto out;
+ err = (*ops->make_dir)(file, mode, current->fsuid, current->fsgid, ed);
+	if(err)
+		goto out_free;
+
+	err = -ENOMEM;
+ inode = iget(ino->i_sb, 0);
+ if(inode == NULL)
+ goto out_free;
+
+ err = init_inode(inode, dentry);
+ if(err)
+ goto out_put;
+
+ err = read_name(inode, file);
+ if(err)
+ goto out_put;
+
+ kfree(file);
+ d_instantiate(dentry, inode);
+ inode->i_nlink = 2;
+ inode->i_mode = S_IFDIR | mode;
+
+ ino->i_nlink++;
+ out:
+ return(err);
+ out_put:
+ inode->i_nlink = 0;
+ iput(inode);
+ out_free:
+ kfree(file);
+ goto out;
+}
+
+int externfs_remove_dir(struct inode *ino, struct dentry *dentry)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *file;
+ int err;
+
+ file = inode_dentry_name(ino, dentry);
+ if(file == NULL)
+ return(-ENOMEM);
+ err = (*ops->remove_dir)(file, current->fsuid, current->fsgid, ed);
+ kfree(file);
+
+ dentry->d_inode->i_nlink = 0;
+ ino->i_nlink--;
+ return(err);
+}
+
+int externfs_make_node(struct inode *dir, struct dentry *dentry, int mode,
+ dev_t dev)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(dir)->ops;
+ struct externfs_data *ed = dir->i_sb->s_fs_info;
+ struct inode *inode;
+ char *name;
+ int err = -ENOMEM;
+
+ inode = iget(dir->i_sb, 0);
+ if(inode == NULL)
+ goto out;
+
+ err = init_inode(inode, dentry);
+ if(err)
+ goto out_put;
+
+ err = -ENOMEM;
+ name = dentry_name(dentry, 0);
+ if(name == NULL)
+ goto out_put;
+
+ init_special_inode(inode, mode, dev);
+ err = (*ops->make_node)(name, mode & S_IRWXUGO, current->fsuid,
+ current->fsgid, mode & S_IFMT, MAJOR(dev),
+ MINOR(dev), ed);
+ if(err)
+ goto out_free;
+
+ err = read_name(inode, name);
+ if(err)
+ goto out_rm;
+
+ inode->i_nlink++;
+ d_instantiate(dentry, inode);
+ kfree(name);
+ out:
+ return(err);
+
+ out_rm:
+ (*ops->unlink_file)(name, ed);
+ out_free:
+ kfree(name);
+ out_put:
+ inode->i_nlink = 0;
+ iput(inode);
+ goto out;
+}
+
+int externfs_rename(struct inode *from_ino, struct dentry *from,
+ struct inode *to_ino, struct dentry *to)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(from_ino)->ops;
+ struct externfs_data *ed = from_ino->i_sb->s_fs_info;
+ char *from_name, *to_name;
+ int err;
+
+ from_name = inode_dentry_name(from_ino, from);
+ if(from_name == NULL)
+ return(-ENOMEM);
+ to_name = inode_dentry_name(to_ino, to);
+ if(to_name == NULL){
+ kfree(from_name);
+ return(-ENOMEM);
+ }
+ err = (*ops->rename_file)(from_name, to_name, ed);
+ kfree(from_name);
+ kfree(to_name);
+
+ from_ino->i_nlink--;
+ to_ino->i_nlink++;
+ return(err);
+}
+
+void externfs_truncate(struct inode *ino)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+
+ (*ops->truncate_file)(EXTERNFS_I(ino), ino->i_size, ed);
+}
+
+int externfs_permission(struct inode *ino, int desired, struct nameidata *nd)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *name;
+ int r = 0, w = 0, x = 0, err;
+
+ if(ops->access_file == NULL)
+ return(vfs_permission(ino, desired));
+
+ if(desired & MAY_READ) r = 1;
+ if(desired & MAY_WRITE) w = 1;
+ if(desired & MAY_EXEC) x = 1;
+ name = inode_name(ino, 0);
+ if(name == NULL)
+ return(-ENOMEM);
+
+ err = (*ops->access_file)(name, r, w, x, current->fsuid,
+ current->fsgid, ed);
+ kfree(name);
+
+ if(!err)
+ err = vfs_permission(ino, desired);
+ return(err);
+}
+
+int externfs_setattr(struct dentry *dentry, struct iattr *attr)
+{
+ struct externfs_file_ops *ops = EXTERNFS_I(dentry->d_inode)->ops;
+ struct externfs_data *ed = dentry->d_inode->i_sb->s_fs_info;
+ struct externfs_iattr attrs;
+ char *name;
+ int err;
+
+ attrs.ia_valid = 0;
+ if(attr->ia_valid & ATTR_MODE){
+ attrs.ia_valid |= EXTERNFS_ATTR_MODE;
+ attrs.ia_mode = attr->ia_mode;
+ }
+ if(attr->ia_valid & ATTR_UID){
+ attrs.ia_valid |= EXTERNFS_ATTR_UID;
+ attrs.ia_uid = attr->ia_uid;
+ }
+ if(attr->ia_valid & ATTR_GID){
+ attrs.ia_valid |= EXTERNFS_ATTR_GID;
+ attrs.ia_gid = attr->ia_gid;
+ }
+ if(attr->ia_valid & ATTR_SIZE){
+ attrs.ia_valid |= EXTERNFS_ATTR_SIZE;
+ attrs.ia_size = attr->ia_size;
+ }
+ if(attr->ia_valid & ATTR_ATIME){
+ attrs.ia_valid |= EXTERNFS_ATTR_ATIME;
+ attrs.ia_atime = attr->ia_atime.tv_sec;
+ }
+ if(attr->ia_valid & ATTR_MTIME){
+ attrs.ia_valid |= EXTERNFS_ATTR_MTIME;
+ attrs.ia_mtime = attr->ia_mtime.tv_sec;
+ }
+ if(attr->ia_valid & ATTR_CTIME){
+ attrs.ia_valid |= EXTERNFS_ATTR_CTIME;
+ attrs.ia_ctime = attr->ia_ctime.tv_sec;
+ }
+ if(attr->ia_valid & ATTR_ATIME_SET){
+ attrs.ia_valid |= EXTERNFS_ATTR_ATIME_SET;
+ attrs.ia_atime = attr->ia_atime.tv_sec;
+ }
+ if(attr->ia_valid & ATTR_MTIME_SET){
+ attrs.ia_valid |= EXTERNFS_ATTR_MTIME_SET;
+ }
+ name = dentry_name(dentry, 0);
+ if(name == NULL)
+ return(-ENOMEM);
+ err = (*ops->set_attr)(name, &attrs, ed);
+ kfree(name);
+ if(err)
+ return(err);
+
+ return(inode_setattr(dentry->d_inode, attr));
+}
+
+int externfs_getattr(struct vfsmount *mnt, struct dentry *dentry,
+ struct kstat *stat)
+{
+ generic_fillattr(dentry->d_inode, stat);
+ return(0);
+}
+
+static struct inode_operations externfs_iops = {
+ .create = externfs_create,
+ .link = externfs_link,
+ .unlink = externfs_unlink,
+ .symlink = externfs_symlink,
+ .mkdir = externfs_make_dir,
+ .rmdir = externfs_remove_dir,
+ .mknod = externfs_make_node,
+ .rename = externfs_rename,
+ .truncate = externfs_truncate,
+ .permission = externfs_permission,
+ .setattr = externfs_setattr,
+ .getattr = externfs_getattr,
+};
+
+static struct inode_operations externfs_dir_iops = {
+ .create = externfs_create,
+ .lookup = externfs_lookup,
+ .link = externfs_link,
+ .unlink = externfs_unlink,
+ .symlink = externfs_symlink,
+ .mkdir = externfs_make_dir,
+ .rmdir = externfs_remove_dir,
+ .mknod = externfs_make_node,
+ .rename = externfs_rename,
+ .truncate = externfs_truncate,
+ .permission = externfs_permission,
+ .setattr = externfs_setattr,
+ .getattr = externfs_getattr,
+};
+
+int externfs_link_readpage(struct file *file, struct page *page)
+{
+ struct inode *ino = page->mapping->host;
+ struct externfs_file_ops *ops = EXTERNFS_I(ino)->ops;
+ struct externfs_data *ed = ino->i_sb->s_fs_info;
+ char *buffer, *name;
+ int err;
+
+ buffer = kmap(page);
+ name = inode_name(ino, 0);
+	if(name == NULL){
+		kunmap(page);
+		unlock_page(page);
+		return(-ENOMEM);
+	}
+
+ err = (*ops->read_link)(name, current->fsuid, current->fsgid, buffer,
+ PAGE_CACHE_SIZE, ed);
+
+ kfree(name);
+ if(err == PAGE_CACHE_SIZE)
+ err = -E2BIG;
+ else if(err > 0){
+ flush_dcache_page(page);
+ SetPageUptodate(page);
+ if (PageError(page)) ClearPageError(page);
+ err = 0;
+ }
+ kunmap(page);
+ unlock_page(page);
+ return(err);
+}
+
+static int externfs_flushpage(struct page *page, unsigned long offset)
+{
+ return(externfs_writepage(page, NULL));
+}
+
+struct externfs_data *inode_externfs_info(struct inode *inode)
+{
+ return(inode->i_sb->s_fs_info);
+}
+
+static struct address_space_operations externfs_link_aops = {
+ .readpage = externfs_link_readpage,
+ .releasepage = externfs_removepage,
+ .invalidatepage = externfs_flushpage,
+};
+
+DECLARE_MUTEX(externfs_sem);
+struct list_head externfses = LIST_HEAD_INIT(externfses);
+
+static struct externfs *find_externfs(struct file_system_type *type)
+{
+ struct list_head *ele;
+ struct externfs *fs;
+
+ down(&externfs_sem);
+ list_for_each(ele, &externfses){
+ fs = list_entry(ele, struct externfs, list);
+ if(&fs->type == type)
+ goto out;
+ }
+ fs = NULL;
+ out:
+ up(&externfs_sem);
+ return(fs);
+}
+
+#define DEFAULT_ROOT "/"
+
+char *host_root_filename(char *mount_arg)
+{
+ char *root = DEFAULT_ROOT;
+
+ if((mount_arg != NULL) && (*mount_arg != '\0'))
+ root = mount_arg;
+
+ return(uml_strdup(root));
+}
+
+static int externfs_fill_sb(struct super_block *sb, void *data, int silent)
+{
+ struct externfs *fs;
+ struct inode *root_inode;
+ struct externfs_data *sb_data;
+ int err = -EINVAL;
+
+ sb->s_blocksize = 1024;
+ sb->s_blocksize_bits = 10;
+ sb->s_magic = EXTERNFS_SUPER_MAGIC;
+ sb->s_op = &externfs_sbops;
+
+ fs = find_externfs(sb->s_type);
+ if(fs == NULL){
+ printk("Couldn't find externfs for filesystem '%s'\n",
+ sb->s_type->name);
+ goto out;
+ }
+
+ sb_data = (*fs->mount_ops->mount)(data);
+ if(IS_ERR(sb_data)){
+ err = PTR_ERR(sb_data);
+ goto out;
+ }
+
+ sb->s_fs_info = sb_data;
+ sb_data->mount_ops = fs->mount_ops;
+
+ root_inode = iget(sb, 0);
+ if(root_inode == NULL)
+ goto out;
+
+ err = init_inode(root_inode, NULL);
+ if(err)
+ goto out_put;
+
+ err = -ENOMEM;
+ sb->s_root = d_alloc_root(root_inode);
+ if(sb->s_root == NULL)
+ goto out_put;
+
+ err = read_inode(root_inode);
+ if(err)
+ goto out_put;
+
+ out:
+ return(err);
+
+ out_put:
+ iput(root_inode);
+ goto out;
+}
+
+struct super_block *externfs_read_super(struct file_system_type *type,
+ int flags, const char *dev_name,
+ void *data)
+{
+ return(get_sb_nodev(type, flags, data, externfs_fill_sb));
+}
+
+void init_externfs(struct externfs_data *ed, struct externfs_file_ops *ops)
+{
+ ed->file_ops = ops;
+}
+
+int register_externfs(char *name, struct externfs_mount_ops *mount_ops)
+{
+ struct externfs *new;
+ int err = -ENOMEM;
+
+ new = kmalloc(sizeof(*new), GFP_KERNEL);
+ if(new == NULL)
+ goto out;
+
+ memset(new, 0, sizeof(*new));
+ *new = ((struct externfs) { .list = LIST_HEAD_INIT(new->list),
+ .mount_ops = mount_ops,
+ .type = { .name = name,
+ .get_sb = externfs_read_super,
+ .kill_sb = kill_anon_super,
+ .fs_flags = 0,
+ .owner = THIS_MODULE } });
+ list_add(&new->list, &externfses);
+
+ err = register_filesystem(&new->type);
+ if(err)
+ goto out_del;
+ return(0);
+
+ out_del:
+ list_del(&new->list);
+ kfree(new);
+ out:
+ return(err);
+}
+
+void unregister_externfs(char *name)
+{
+ struct list_head *ele;
+ struct externfs *fs;
+
+ down(&externfs_sem);
+ list_for_each(ele, &externfses){
+ fs = list_entry(ele, struct externfs, list);
+ if(!strcmp(fs->type.name, name)){
+ list_del(ele);
+ up(&externfs_sem);
+ return;
+ }
+ }
+ up(&externfs_sem);
+ printk("Unregister_externfs - filesystem '%s' not found\n", name);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/stddef.h"
+#include "linux/string.h"
+#include "linux/errno.h"
+#include "linux/types.h"
+#include "linux/slab.h"
+#include "linux/fs.h"
+#include "asm/fcntl.h"
+#include "hostfs.h"
+#include "filehandle.h"
+
+extern int append;
+
+char *get_path(const char *path[], char *buf, int size)
+{
+ const char **s;
+ char *p;
+ int new = 1;
+
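+	/* Size the joined path first ('new' starts at 1 for the trailing
+	 * NUL, plus one per missing '/' separator), and fall back to
+	 * kmalloc() if the caller's buffer is too small.
+	 */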
+ for(s = path; *s != NULL; s++){
+ new += strlen(*s);
+ if((*(s + 1) != NULL) && (strlen(*s) > 0) &&
+ ((*s)[strlen(*s) - 1] != '/'))
+ new++;
+ }
+
+ if(new > size){
+ buf = kmalloc(new, GFP_KERNEL);
+ if(buf == NULL)
+ return(NULL);
+ }
+
+ p = buf;
+ for(s = path; *s != NULL; s++){
+ strcpy(p, *s);
+ p += strlen(*s);
+ if((*(s + 1) != NULL) && (strlen(*s) > 0) &&
+ ((*s)[strlen(*s) - 1] != '/'))
+ strcpy(p++, "/");
+ }
+
+ return(buf);
+}
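+
+/* A usage sketch: get_path() on { "/jail", "mnt", "etc/passwd", NULL }
+ * yields "/jail/mnt/etc/passwd", inserting a '/' only after elements
+ * that don't already end in one. The caller's buffer is used when the
+ * result fits; otherwise a larger one is kmalloc()ed, so results must
+ * always be released with free_path() below.
+ */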
+
+void free_path(const char *buf, char *tmp)
+{
+ if((buf != tmp) && (buf != NULL))
+ kfree((char *) buf);
+}
+
+int host_open_file(const char *path[], int r, int w, struct file_handle *fh)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int mode = 0, err;
+ struct openflags flags = OPENFLAGS();
+
+ if (r)
+ flags = of_read(flags);
+ if (w)
+ flags = of_write(flags);
+ if(append)
+ flags = of_append(flags);
+
+ err = -ENOMEM;
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = open_filehandle(file, flags, mode, fh);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+void *host_open_dir(const char *path[])
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ void *dir = ERR_PTR(-ENOMEM);
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ dir = open_dir(file);
+ out:
+ free_path(file, tmp);
+ return(dir);
+}
+
+char *host_read_dir(void *stream, unsigned long long *pos,
+ unsigned long long *ino_out, int *len_out)
+{
+ int err;
+ char *name;
+
+ err = os_seek_dir(stream, *pos);
+ if(err)
+ return(ERR_PTR(err));
+
+ err = os_read_dir(stream, ino_out, &name);
+ if(err)
+ return(ERR_PTR(err));
+
+ if(name == NULL)
+ return(NULL);
+
+ *len_out = strlen(name);
+ *pos = os_tell_dir(stream);
+ return(name);
+}
+
+int host_file_type(const char *path[], int *rdev)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ struct uml_stat buf;
+ int ret;
+
+ ret = -ENOMEM;
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ if(rdev != NULL){
+ ret = os_lstat_file(file, &buf);
+ if(ret)
+ goto out;
+ *rdev = MKDEV(buf.ust_rmajor, buf.ust_rminor);
+ }
+
+ ret = os_file_type(file);
+ out:
+ free_path(file, tmp);
+ return(ret);
+}
+
+int host_create_file(const char *path[], int mode, struct file_handle *fh)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = open_filehandle(file, of_create(of_rdwr(OPENFLAGS())), mode, fh);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+static int do_stat_file(const char *path, int *dev_out,
+ unsigned long long *inode_out, int *mode_out,
+ int *nlink_out, int *uid_out, int *gid_out,
+ unsigned long long *size_out, unsigned long *atime_out,
+ unsigned long *mtime_out, unsigned long *ctime_out,
+ int *blksize_out, unsigned long long *blocks_out)
+{
+ struct uml_stat buf;
+ int err;
+
+ err = os_lstat_file(path, &buf);
+ if(err < 0)
+ return(err);
+
+ if(dev_out != NULL) *dev_out = MKDEV(buf.ust_major, buf.ust_minor);
+ if(inode_out != NULL) *inode_out = buf.ust_ino;
+ if(mode_out != NULL) *mode_out = buf.ust_mode;
+ if(nlink_out != NULL) *nlink_out = buf.ust_nlink;
+ if(uid_out != NULL) *uid_out = buf.ust_uid;
+ if(gid_out != NULL) *gid_out = buf.ust_gid;
+ if(size_out != NULL) *size_out = buf.ust_size;
+ if(atime_out != NULL) *atime_out = buf.ust_atime;
+ if(mtime_out != NULL) *mtime_out = buf.ust_mtime;
+ if(ctime_out != NULL) *ctime_out = buf.ust_ctime;
+ if(blksize_out != NULL) *blksize_out = buf.ust_blksize;
+ if(blocks_out != NULL) *blocks_out = buf.ust_blocks;
+
+ return(0);
+}
+
+int host_stat_file(const char *path[], int *dev_out,
+ unsigned long long *inode_out, int *mode_out,
+ int *nlink_out, int *uid_out, int *gid_out,
+ unsigned long long *size_out, unsigned long *atime_out,
+ unsigned long *mtime_out, unsigned long *ctime_out,
+ int *blksize_out, unsigned long long *blocks_out)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err;
+
+ err = -ENOMEM;
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = do_stat_file(file, dev_out, inode_out, mode_out, nlink_out,
+ uid_out, gid_out, size_out, atime_out, mtime_out,
+ ctime_out, blksize_out, blocks_out);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_set_attr(const char *path[], struct externfs_iattr *attrs)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ unsigned long time;
+ int err = 0, ma;
+
+ if(append && (attrs->ia_valid & EXTERNFS_ATTR_SIZE))
+ return(-EPERM);
+
+ err = -ENOMEM;
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ if(attrs->ia_valid & EXTERNFS_ATTR_MODE){
+ err = os_set_file_perms(file, attrs->ia_mode);
+ if(err < 0)
+ goto out;
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_UID){
+ err = os_set_file_owner(file, attrs->ia_uid, -1);
+ if(err < 0)
+ goto out;
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_GID){
+ err = os_set_file_owner(file, -1, attrs->ia_gid);
+ if(err < 0)
+ goto out;
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_SIZE){
+ err = os_truncate_file(file, attrs->ia_size);
+ if(err < 0)
+ goto out;
+ }
+ ma = EXTERNFS_ATTR_ATIME_SET | EXTERNFS_ATTR_MTIME_SET;
+ if((attrs->ia_valid & ma) == ma){
+ err = os_set_file_time(file, attrs->ia_atime, attrs->ia_mtime);
+ if(err)
+ goto out;
+ }
+ else {
+ if(attrs->ia_valid & EXTERNFS_ATTR_ATIME_SET){
+ err = do_stat_file(file, NULL, NULL, NULL, NULL, NULL,
+ NULL, NULL, NULL, &time,
+ NULL, NULL, NULL);
+ if(err != 0)
+ goto out;
+
+ err = os_set_file_time(file, attrs->ia_atime, time);
+ if(err)
+ goto out;
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_MTIME_SET){
+ err = do_stat_file(file, NULL, NULL, NULL, NULL, NULL,
+ NULL, NULL, &time, NULL,
+ NULL, NULL, NULL);
+ if(err != 0)
+ goto out;
+
+ err = os_set_file_time(file, time, attrs->ia_mtime);
+ if(err)
+ goto out;
+ }
+ }
+ /* ctime can't be set directly on the host, so it is silently ignored */
+ if(attrs->ia_valid & EXTERNFS_ATTR_CTIME) ;
+ if(attrs->ia_valid & (EXTERNFS_ATTR_ATIME | EXTERNFS_ATTR_MTIME)){
+ err = do_stat_file(file, NULL, NULL, NULL, NULL, NULL,
+ NULL, NULL, &attrs->ia_atime,
+ &attrs->ia_mtime, NULL, NULL, NULL);
+ if(err != 0)
+ goto out;
+ }
+
+ err = 0;
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_make_symlink(const char *from[], const char *to)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(from, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_make_symlink(to, file);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_unlink_file(const char *path[])
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ if(append)
+ return(-EPERM);
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_remove_file(file);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_make_dir(const char *path[], int mode)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_make_dir(file, mode);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_remove_dir(const char *path[])
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_remove_dir(file);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+int host_link_file(const char *to[], const char *from[])
+{
+ char from_tmp[HOSTFS_BUFSIZE], *f, to_tmp[HOSTFS_BUFSIZE], *t;
+ int err = -ENOMEM;
+
+ f = get_path(from, from_tmp, sizeof(from_tmp));
+ t = get_path(to, to_tmp, sizeof(to_tmp));
+ if((f == NULL) || (t == NULL))
+ goto out;
+
+ err = os_link_file(t, f);
+ out:
+ free_path(f, from_tmp);
+ free_path(t, to_tmp);
+ return(err);
+}
+
+int host_read_link(const char *path[], char *buf, int size)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int n = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ n = os_read_symlink(file, buf, size);
+ if(n < 0)
+ goto out;
+
+ if(n < size)
+ buf[n] = '\0';
+ out:
+ free_path(file, tmp);
+ return(n);
+}
+
+int host_rename_file(const char *from[], const char *to[])
+{
+ char from_tmp[HOSTFS_BUFSIZE], *f, to_tmp[HOSTFS_BUFSIZE], *t;
+ int err = -ENOMEM;
+
+ f = get_path(from, from_tmp, sizeof(from_tmp));
+ t = get_path(to, to_tmp, sizeof(to_tmp));
+ if((f == NULL) || (t == NULL))
+ goto out;
+
+ err = os_move_file(f, t);
+ out:
+ free_path(f, from_tmp);
+ free_path(t, to_tmp);
+ return(err);
+}
+
+int host_stat_fs(const char *path[], long *bsize_out, long long *blocks_out,
+ long long *bfree_out, long long *bavail_out,
+ long long *files_out, long long *ffree_out, void *fsid_out,
+ int fsid_size, long *namelen_out, long *spare_out)
+{
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_stat_filesystem(file, bsize_out, blocks_out, bfree_out,
+ bavail_out, files_out, ffree_out, fsid_out,
+ fsid_size, namelen_out, spare_out);
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+char *generic_host_read_dir(void *stream, unsigned long long *pos,
+ unsigned long long *ino_out, int *len_out,
+ void *mount)
+{
+ return(host_read_dir(stream, pos, ino_out, len_out));
+}
+
+int generic_host_truncate_file(struct file_handle *fh, __u64 size, void *m)
+{
+ return(truncate_file(fh, size));
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2000 - 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/stddef.h"
+#include "linux/string.h"
+#include "linux/types.h"
+#include "linux/errno.h"
+#include "linux/slab.h"
+#include "linux/init.h"
+#include "linux/fs.h"
+#include "linux/stat.h"
+#include "hostfs.h"
+#include "kern.h"
+#include "init.h"
+#include "kern_util.h"
+#include "filehandle.h"
+#include "os.h"
+
+/* Changed in hostfs_args before the kernel starts running */
+static char *jail_dir = "/";
+int append = 0;
+
+static int __init hostfs_args(char *options, int *add)
+{
+ char *ptr;
+
+ ptr = strchr(options, ',');
+ if(ptr != NULL)
+ *ptr++ = '\0';
+ if(*options != '\0')
+ jail_dir = options;
+
+ options = ptr;
+ while(options){
+ ptr = strchr(options, ',');
+ if(ptr != NULL)
+ *ptr++ = '\0';
+ if(*options != '\0'){
+ if(!strcmp(options, "append"))
+ append = 1;
+ else printf("hostfs_args - unsupported option - %s\n",
+ options);
+ }
+ options = ptr;
+ }
+ return(0);
+}
+
+__uml_setup("hostfs=", hostfs_args,
+"hostfs=<root dir>,<flags>,...\n"
+" This is used to set hostfs parameters. The root directory argument\n"
+" is used to confine all hostfs mounts to within the specified directory\n"
+" tree on the host. If this isn't specified, then a user inside UML can\n"
+" mount anything on the host that's accessible to the user that's running\n"
+" it.\n"
+" The only flag currently supported is 'append', which specifies that all\n"
+" files opened by hostfs will be opened in append mode.\n\n"
+);
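+
+/* For example (paths are illustrative), booting UML with
+ *
+ * hostfs=/home/user/jail,append
+ *
+ * confines all hostfs mounts to /home/user/jail on the host and forces
+ * append-only opens. Inside UML, a host directory is then mounted with
+ * something like
+ *
+ * mount none /host -t hostfs -o /subdir
+ */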
+
+struct hostfs_data {
+ struct externfs_data ext;
+ char *mount;
+};
+
+struct hostfs_file {
+ struct externfs_inode ext;
+ struct file_handle fh;
+};
+
+static int hostfs_access_file(char *file, int uid, int w, int x, int gid,
+ int r, struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+ char tmp[HOSTFS_BUFSIZE];
+ int err, mode = 0;
+
+ if(r) mode = OS_ACC_R_OK;
+ if(w) mode |= OS_ACC_W_OK;
+ if(x) mode |= OS_ACC_X_OK;
+
+ err = -ENOMEM;
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_access(file, mode);
+ free_path(file, tmp);
+ out:
+ return(err);
+}
+
+static int hostfs_make_node(const char *file, int mode, int uid, int gid,
+ int type, int major, int minor,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+ char tmp[HOSTFS_BUFSIZE];
+ int err = -ENOMEM;
+
+ file = get_path(path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ /* XXX Pass type in an OS-independent way */
+ mode |= type;
+
+ err = os_make_dev(file, mode, major, minor);
+ free_path(file, tmp);
+ out:
+ return(err);
+}
+
+static int hostfs_stat_file(const char *file, struct externfs_data *ed,
+ dev_t *dev_out, unsigned long long *inode_out,
+ int *mode_out, int *nlink_out, int *uid_out,
+ int *gid_out, unsigned long long *size_out,
+ unsigned long *atime_out, unsigned long *mtime_out,
+ unsigned long *ctime_out, int *blksize_out,
+ unsigned long long *blocks_out)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ /* XXX Why pretend everything is owned by root? */
+ *uid_out = 0;
+ *gid_out = 0;
+ return(host_stat_file(path, dev_out, inode_out, mode_out, nlink_out,
+ NULL, NULL, size_out, atime_out, mtime_out,
+ ctime_out, blksize_out, blocks_out));
+}
+
+static int hostfs_file_type(const char *file, int *rdev,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_file_type(path, rdev));
+}
+
+static char *hostfs_name(struct inode *inode)
+{
+ struct externfs_data *ed = inode_externfs_info(inode);
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+
+ return(inode_name_prefix(inode, mount));
+}
+
+static struct externfs_inode *hostfs_init_file(struct externfs_data *ed)
+{
+ struct hostfs_file *hf;
+
+ hf = kmalloc(sizeof(*hf), GFP_KERNEL);
+ if(hf == NULL)
+ return(NULL);
+
+ hf->fh.fd = -1;
+ return(&hf->ext);
+}
+
+static int hostfs_open_file(struct externfs_inode *ext, char *file,
+ int uid, int gid, struct inode *inode,
+ struct externfs_data *ed)
+{
+ struct hostfs_file *hf = container_of(ext, struct hostfs_file, ext);
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+ int err;
+
+ err = host_open_file(path, 1, 1, &hf->fh);
+ if(err == -EISDIR)
+ goto out;
+
+ if(err == -EACCES)
+ err = host_open_file(path, 1, 0, &hf->fh);
+
+ if(err)
+ goto out;
+
+ is_reclaimable(&hf->fh, hostfs_name, inode);
+ out:
+ return(err);
+}
+
+static void *hostfs_open_dir(char *file, int uid, int gid,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_open_dir(path));
+}
+
+static void hostfs_close_dir(void *stream, struct externfs_data *ed)
+{
+ os_close_dir(stream);
+}
+
+static char *hostfs_read_dir(void *stream, unsigned long long *pos,
+ unsigned long long *ino_out, int *len_out,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+
+ return(generic_host_read_dir(stream, pos, ino_out, len_out, mount));
+}
+
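+/* Read the requested page except for the [ignore_start, ignore_end)
+ * byte range, which the caller already has. Host file I/O is
+ * synchronous here, so the completion callback is invoked before
+ * returning.
+ */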
+static int hostfs_read_file(struct externfs_inode *ext,
+ unsigned long long offset, char *buf, int len,
+ int ignore_start, int ignore_end,
+ void (*completion)(char *, int, void *), void *arg,
+ struct externfs_data *ed)
+{
+ struct hostfs_file *hf = container_of(ext, struct hostfs_file, ext);
+ int err = 0;
+
+ if(ignore_start != 0){
+ err = read_file(&hf->fh, offset, buf, ignore_start);
+ if(err < 0)
+ goto out;
+ }
+
+ if(ignore_end != len)
+ err = read_file(&hf->fh, offset + ignore_end, buf + ignore_end,
+ len - ignore_end);
+
+ out:
+
+ (*completion)(buf, err, arg);
+ if (err > 0)
+ err = 0;
+ return(err);
+}
+
+static int hostfs_write_file(struct externfs_inode *ext,
+ unsigned long long offset, const char *buf,
+ int start, int len,
+ void (*completion)(char *, int, void *),
+ void *arg, struct externfs_data *ed)
+{
+ struct file_handle *fh;
+ int err;
+
+ fh = &container_of(ext, struct hostfs_file, ext)->fh;
+ err = write_file(fh, offset + start, buf + start, len);
+
+ (*completion)((char *) buf, err, arg);
+ if (err > 0)
+ err = 0;
+
+ return(err);
+}
+
+static int hostfs_create_file(struct externfs_inode *ext, char *file, int mode,
+ int uid, int gid, struct inode *inode,
+ struct externfs_data *ed)
+{
+ struct hostfs_file *hf = container_of(ext, struct hostfs_file,
+ ext);
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+ int err;
+
+ err = host_create_file(path, mode, &hf->fh);
+ if(err)
+ goto out;
+
+ is_reclaimable(&hf->fh, hostfs_name, inode);
+ out:
+ return(err);
+}
+
+static int hostfs_set_attr(const char *file, struct externfs_iattr *attrs,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_set_attr(path, attrs));
+}
+
+static int hostfs_make_symlink(const char *from, const char *to, int uid,
+ int gid, struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, from, NULL };
+
+ return(host_make_symlink(path, to));
+}
+
+static int hostfs_link_file(const char *to, const char *from, int uid, int gid,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *to_path[] = { jail_dir, mount, to, NULL };
+ const char *from_path[] = { jail_dir, mount, from, NULL };
+
+ return(host_link_file(to_path, from_path));
+}
+
+static int hostfs_unlink_file(const char *file, struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_unlink_file(path));
+}
+
+static int hostfs_make_dir(const char *file, int mode, int uid, int gid,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_make_dir(path, mode));
+}
+
+static int hostfs_remove_dir(const char *file, int uid, int gid,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_remove_dir(path));
+}
+
+static int hostfs_read_link(char *file, int uid, int gid, char *buf, int size,
+ struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, file, NULL };
+
+ return(host_read_link(path, buf, size));
+}
+
+static int hostfs_rename_file(char *from, char *to, struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *to_path[] = { jail_dir, mount, to, NULL };
+ const char *from_path[] = { jail_dir, mount, from, NULL };
+
+ return(host_rename_file(from_path, to_path));
+}
+
+static int hostfs_stat_fs(long *bsize_out, long long *blocks_out,
+ long long *bfree_out, long long *bavail_out,
+ long long *files_out, long long *ffree_out,
+ void *fsid_out, int fsid_size, long *namelen_out,
+ long *spare_out, struct externfs_data *ed)
+{
+ char *mount = container_of(ed, struct hostfs_data, ext)->mount;
+ const char *path[] = { jail_dir, mount, NULL };
+
+ return(host_stat_fs(path, bsize_out, blocks_out, bfree_out, bavail_out,
+ files_out, ffree_out, fsid_out, fsid_size,
+ namelen_out, spare_out));
+}
+
+void hostfs_close_file(struct externfs_inode *ext,
+ unsigned long long size)
+{
+ struct hostfs_file *hf = container_of(ext, struct hostfs_file, ext);
+
+ if(hf->fh.fd != -1){
+ truncate_file(&hf->fh, size);
+ close_file(&hf->fh);
+ }
+
+ kfree(hf);
+}
+
+int hostfs_truncate_file(struct externfs_inode *ext, __u64 size,
+ struct externfs_data *ed)
+{
+ struct hostfs_file *hf = container_of(ext, struct hostfs_file, ext);
+
+ return(truncate_file(&hf->fh, size));
+}
+
+static struct externfs_file_ops hostfs_file_ops = {
+ .stat_file = hostfs_stat_file,
+ .file_type = hostfs_file_type,
+ .access_file = hostfs_access_file,
+ .open_file = hostfs_open_file,
+ .open_dir = hostfs_open_dir,
+ .read_dir = hostfs_read_dir,
+ .read_file = hostfs_read_file,
+ .write_file = hostfs_write_file,
+ .map_file_page = NULL,
+ .close_file = hostfs_close_file,
+ .close_dir = hostfs_close_dir,
+ .invisible = NULL,
+ .create_file = hostfs_create_file,
+ .set_attr = hostfs_set_attr,
+ .make_symlink = hostfs_make_symlink,
+ .unlink_file = hostfs_unlink_file,
+ .make_dir = hostfs_make_dir,
+ .remove_dir = hostfs_remove_dir,
+ .make_node = hostfs_make_node,
+ .link_file = hostfs_link_file,
+ .read_link = hostfs_read_link,
+ .rename_file = hostfs_rename_file,
+ .statfs = hostfs_stat_fs,
+ .truncate_file = hostfs_truncate_file
+};
+
+static struct externfs_data *mount_fs(char *mount_arg)
+{
+ struct hostfs_data *hd;
+ int err = -ENOMEM;
+
+ hd = kmalloc(sizeof(*hd), GFP_KERNEL);
+ if(hd == NULL)
+ goto out;
+
+ hd->mount = host_root_filename(mount_arg);
+ if(hd->mount == NULL)
+ goto out_free;
+
+ init_externfs(&hd->ext, &hostfs_file_ops);
+
+ return(&hd->ext);
+
+ out_free:
+ kfree(hd);
+ out:
+ return(ERR_PTR(err));
+}
+
+static struct externfs_mount_ops hostfs_mount_ops = {
+ .init_file = hostfs_init_file,
+ .mount = mount_fs,
+};
+
+static int __init init_hostfs(void)
+{
+ return(register_externfs("hostfs", &hostfs_mount_ops));
+}
+
+static void __exit exit_hostfs(void)
+{
+ unregister_externfs("hostfs");
+}
+
+__initcall(init_hostfs);
+__exitcall(exit_hostfs);
+
+#if 0
+module_init(init_hostfs)
+module_exit(exit_hostfs)
+MODULE_LICENSE("GPL");
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/stat.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/kdev_t.h>
+#include "linux/init.h"
+#include "linux/workqueue.h"
+#include <asm/irq.h>
+#include "hostfs.h"
+#include "mem.h"
+#include "os.h"
+#include "mode.h"
+#include "aio.h"
+#include "irq_user.h"
+#include "irq_kern.h"
+#include "filehandle.h"
+#include "metadata.h"
+
+#define HUMFS_VERSION 2
+
+static int humfs_stat_file(const char *path, struct externfs_data *ed,
+ dev_t *dev_out, unsigned long long *inode_out,
+ int *mode_out, int *nlink_out, int *uid_out,
+ int *gid_out, unsigned long long *size_out,
+ unsigned long *atime_out, unsigned long *mtime_out,
+ unsigned long *ctime_out, int *blksize_out,
+ unsigned long long *blocks_out)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err, mode, perms, major, minor;
+ char type;
+
+ err = host_stat_file(data_path, NULL, inode_out, mode_out,
+ nlink_out, NULL, NULL, size_out, atime_out,
+ mtime_out, ctime_out, blksize_out, blocks_out);
+ if(err)
+ return(err);
+
+ err = (*mount->meta->ownerships)(path, &perms, uid_out, gid_out,
+ &type, &major, &minor, mount);
+ if(err)
+ return(err);
+
+ *mode_out = (*mode_out & ~S_IRWXUGO) | perms;
+
+ mode = 0;
+ switch(type){
+ case 'c':
+ mode = S_IFCHR;
+ *dev_out = MKDEV(major, minor);
+ break;
+ case 'b':
+ mode = S_IFBLK;
+ *dev_out = MKDEV(major, minor);
+ break;
+ case 's':
+ mode = S_IFSOCK;
+ break;
+ default:
+ break;
+ }
+
+ if(mode != 0)
+ *mode_out = (*mode_out & ~S_IFMT) | mode;
+
+ return(0);
+}
+
+static int meta_type(const char *path, int *dev_out, void *m)
+{
+ struct humfs *mount = m;
+ int err, type, maj, min;
+ char c;
+
+ err = (*mount->meta->ownerships)(path, NULL, NULL, NULL, &c, &maj,
+ &min, mount);
+ if(err)
+ return(err);
+
+ if(c == 0)
+ return(0);
+
+ if(dev_out)
+ *dev_out = MKDEV(maj, min);
+
+ switch(c){
+ case 'c':
+ type = OS_TYPE_CHARDEV;
+ break;
+ case 'b':
+ type = OS_TYPE_BLOCKDEV;
+ break;
+ case 'p':
+ type = OS_TYPE_FIFO;
+ break;
+ case 's':
+ type = OS_TYPE_SOCK;
+ break;
+ default:
+ type = -EINVAL;
+ break;
+ }
+
+ return(type);
+}
+
+static int humfs_file_type(const char *path, int *dev_out,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int type;
+
+ type = meta_type(path, dev_out, mount);
+ if(type != 0)
+ return(type);
+
+ return(host_file_type(data_path, dev_out));
+}
+
+static char *humfs_data_name(struct inode *inode)
+{
+ struct externfs_data *ed = inode_externfs_info(inode);
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+
+ return(inode_name_prefix(inode, mount->data));
+}
+
+static struct externfs_inode *humfs_init_file(struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct humfs_file *hf;
+
+ hf = (*mount->meta->init_file)();
+ if(hf == NULL)
+ return(NULL);
+
+ hf->data.fd = -1;
+ return(&hf->ext);
+}
+
+static int humfs_open_file(struct externfs_inode *ext, char *path, int uid,
+ int gid, struct inode *inode,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ struct openflags flags;
+ char tmp[HOSTFS_BUFSIZE], *file;
+ int err = -ENOMEM;
+
+ file = get_path(data_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ flags = of_rdwr(OPENFLAGS());
+ if(mount->direct)
+ flags = of_direct(flags);
+
+ if(path == NULL)
+ path = "";
+ err = (*mount->meta->open_file)(hf, path, inode, mount);
+ if(err)
+ goto out_free;
+
+ err = open_filehandle(file, flags, 0, &hf->data);
+ if(err == -EISDIR)
+ goto out_close;
+ else if(err == -EPERM){
+ flags = of_set_rw(flags, 1, 0);
+ err = open_filehandle(file, flags, 0, &hf->data);
+ }
+
+ if(err)
+ goto out_close;
+
+ hf->mount = mount;
+ is_reclaimable(&hf->data, humfs_data_name, inode);
+
+ out_free:
+ free_path(file, tmp);
+ out:
+ return(err);
+
+ out_close:
+ (*mount->meta->close_file)(hf);
+ goto out_free;
+}
+
+static void *humfs_open_dir(char *path, int uid, int gid,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+
+ return(host_open_dir(data_path));
+}
+
+static void humfs_close_dir(void *stream, struct externfs_data *ed)
+{
+ os_close_dir(stream);
+}
+
+static char *humfs_read_dir(void *stream, unsigned long long *pos,
+ unsigned long long *ino_out, int *len_out,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+
+ return(generic_host_read_dir(stream, pos, ino_out, len_out, mount));
+}
+
+LIST_HEAD(humfs_replies);
+
+struct humfs_aio {
+ struct aio_context aio;
+ struct list_head list;
+ void (*completion)(char *, int, void *);
+ char *buf;
+ int real_len;
+ int err;
+ void *data;
+};
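+
+/* Reply path for humfs AIO: submit_aio() hands the request to the AIO
+ * subsystem along with humfs_reply_fd. When the I/O finishes, a struct
+ * aio_thread_reply is written to that pipe; humfs_interrupt() drains
+ * the pipe, queues the finished requests on humfs_replies, and
+ * schedules humfs_work so the completions run in process context.
+ */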
+
+static int humfs_reply_fd = -1;
+
+struct humfs_aio last_task_aio, last_intr_aio;
+struct humfs_aio *last_task_aio_ptr, *last_intr_aio_ptr;
+
+void humfs_work_proc(void *unused)
+{
+ struct humfs_aio *aio;
+ unsigned long flags;
+
+ while(!list_empty(&humfs_replies)){
+ local_irq_save(flags);
+ aio = list_entry(humfs_replies.next, struct humfs_aio, list);
+
+ last_task_aio = *aio;
+ last_task_aio_ptr = aio;
+
+ list_del(&aio->list);
+ local_irq_restore(flags);
+
+ if(aio->err >= 0)
+ aio->err = aio->real_len;
+ (*aio->completion)(aio->buf, aio->err, aio->data);
+ kfree(aio);
+ }
+}
+
+DECLARE_WORK(humfs_work, humfs_work_proc, NULL);
+
+static irqreturn_t humfs_interrupt(int irq, void *dev_id,
+ struct pt_regs *unused)
+{
+ struct aio_thread_reply reply;
+ struct humfs_aio *aio;
+ int err, fd = (int) dev_id;
+
+ while(1){
+ err = os_read_file(fd, &reply, sizeof(reply));
+ if(err < 0){
+ if(err == -EAGAIN)
+ break;
+ printk("humfs_interrupt - read returned err %d\n",
+ -err);
+ return(IRQ_HANDLED);
+ }
+ aio = reply.data;
+ aio->err = reply.err;
+ list_add(&aio->list, &humfs_replies);
+ last_intr_aio = *aio;
+ last_intr_aio_ptr = aio;
+ }
+
+ if(!list_empty(&humfs_replies))
+ schedule_work(&humfs_work);
+ reactivate_fd(fd, HUMFS_IRQ);
+ return(IRQ_HANDLED);
+}
+
+static int init_humfs_aio(void)
+{
+ int fds[2], err;
+
+ err = os_pipe(fds, 1, 1);
+ if(err){
+ printk("init_humfs_aio - pipe failed, err = %d\n", -err);
+ goto out;
+ }
+
+ err = um_request_irq(HUMFS_IRQ, fds[0], IRQ_READ, humfs_interrupt,
+ SA_INTERRUPT | SA_SAMPLE_RANDOM, "humfs",
+ (void *) fds[0]);
+ if(err){
+ printk("init_humfs_aio - : um_request_irq failed, err = %d\n",
+ err);
+ goto out_close;
+ }
+
+ humfs_reply_fd = fds[1];
+ goto out;
+
+ out_close:
+ os_close_file(fds[0]);
+ os_close_file(fds[1]);
+ out:
+ return(0);
+}
+
+__initcall(init_humfs_aio);
+
+static int humfs_aio(enum aio_type type, int fd, unsigned long long offset,
+ char *buf, int len, int real_len,
+ void (*completion)(char *, int, void *), void *arg)
+{
+ struct humfs_aio *aio;
+ int err = -ENOMEM;
+
+ aio = kmalloc(sizeof(*aio), GFP_KERNEL);
+ if(aio == NULL)
+ goto out;
+ *aio = ((struct humfs_aio) { .aio = INIT_AIO_CONTEXT,
+ .list = LIST_HEAD_INIT(aio->list),
+ .completion= completion,
+ .buf = buf,
+ .err = 0,
+ .real_len = real_len,
+ .data = arg });
+
+ err = submit_aio(type, fd, buf, len, offset, humfs_reply_fd, aio);
+ if(err)
+ (*completion)(buf, err, arg);
+
+ out:
+ return(err);
+}
+
+static int humfs_read_file(struct externfs_inode *ext,
+ unsigned long long offset, char *buf, int len,
+ int ignore_start, int ignore_end,
+ void (*completion)(char *, int, void *), void *arg,
+ struct externfs_data *ed)
+{
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ int fd = filehandle_fd(&hf->data);
+
+ if(fd < 0){
+ (*completion)(buf, fd, arg);
+ return(fd);
+ }
+
+ return(humfs_aio(AIO_READ, fd, offset, buf, len, len, completion,
+ arg));
+}
+
+static int humfs_write_file(struct externfs_inode *ext,
+ unsigned long long offset,
+ const char *buf, int start, int len,
+ void (*completion)(char *, int, void *), void *arg,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ int err, orig_len = len, fd = filehandle_fd(&hf->data);
+
+ if(fd < 0){
+ (*completion)((char *) buf, fd, arg);
+ return(fd);
+ }
+
+ if(mount->direct)
+ len = PAGE_SIZE;
+ else {
+ offset += start;
+ buf += start;
+ }
+
+ err = humfs_aio(AIO_WRITE, fd, offset, (char *) buf, len, orig_len,
+ completion, arg);
+
+ if(err < 0)
+ return(err);
+
+ if(mount->direct)
+ err = orig_len;
+
+ return(err);
+}
+
+static int humfs_map_file_page(struct externfs_inode *ext,
+ unsigned long long offset, char *buf, int w,
+ struct externfs_data *ed)
+{
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ unsigned long long size, need;
+ int err, fd = filehandle_fd(&hf->data);
+
+ if(fd < 0)
+ return(fd);
+
+ err = os_fd_size(fd, &size);
+ if(err)
+ return(err);
+
+ need = offset + PAGE_SIZE;
+ if(size < need){
+ err = os_truncate_fd(fd, need);
+ if(err)
+ return(err);
+ }
+
+ return(physmem_subst_mapping(buf, fd, offset, w));
+}
+
+static void humfs_close_file(struct externfs_inode *ext,
+ unsigned long long size)
+{
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ int fd;
+
+ if(hf->data.fd == -1)
+ return;
+
+ fd = filehandle_fd(&hf->data);
+ physmem_forget_descriptor(fd);
+ truncate_file(&hf->data, size);
+ close_file(&hf->data);
+
+ (*hf->mount->meta->close_file)(hf);
+}
+
+/* XXX Assumes that you can't make a normal file */
+
+static int humfs_make_node(const char *path, int mode, int uid, int gid,
+ int type, int major, int minor,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct file_handle fh;
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err;
+ char t;
+
+ err = host_create_file(data_path, S_IRWXUGO, &fh);
+ if(err)
+ goto out;
+
+ close_file(&fh);
+
+ switch(type){
+ case S_IFCHR:
+ t = 'c';
+ break;
+ case S_IFBLK:
+ t = 'b';
+ break;
+ case S_IFIFO:
+ t = 'p';
+ break;
+ case S_IFSOCK:
+ t = 's';
+ break;
+ default:
+ err = -EINVAL;
+ printk("make_node - bad node type : %d\n", type);
+ goto out_rm;
+ }
+
+ err = (*mount->meta->make_node)(path, mode, uid, gid, t, major, minor,
+ mount);
+ if(err)
+ goto out_rm;
+
+ out:
+ return(err);
+
+ out_rm:
+ host_unlink_file(data_path);
+ goto out;
+}
+
+static int humfs_create_file(struct externfs_inode *ext, char *path, int mode,
+ int uid, int gid, struct inode *inode,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err;
+
+ err = (*mount->meta->create_file)(hf, path, mode, uid, gid, inode,
+ mount);
+ if(err)
+ goto out;
+
+ err = host_create_file(data_path, S_IRWXUGO, &hf->data);
+ if(err)
+ goto out_rm;
+
+
+ is_reclaimable(&hf->data, humfs_data_name, inode);
+
+ return(0);
+
+ out_rm:
+ (*mount->meta->remove_file)(path, mount);
+ (*mount->meta->close_file)(hf);
+ out:
+ return(err);
+}
+
+static int humfs_set_attr(const char *path, struct externfs_iattr *attrs,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int (*chown)(const char *, int, int, int, struct humfs *);
+ int err;
+
+ chown = mount->meta->change_ownerships;
+ if(attrs->ia_valid & EXTERNFS_ATTR_MODE){
+ err = (*chown)(path, attrs->ia_mode, -1, -1, mount);
+ if(err)
+ return(err);
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_UID){
+ err = (*chown)(path, -1, attrs->ia_uid, -1, mount);
+ if(err)
+ return(err);
+ }
+ if(attrs->ia_valid & EXTERNFS_ATTR_GID){
+ err = (*chown)(path, -1, -1, attrs->ia_gid, mount);
+ if(err)
+ return(err);
+ }
+
+ attrs->ia_valid &= ~(EXTERNFS_ATTR_MODE | EXTERNFS_ATTR_UID |
+ EXTERNFS_ATTR_GID);
+
+ return(host_set_attr(data_path, attrs));
+}
+
+static int humfs_make_symlink(const char *from, const char *to, int uid,
+ int gid, struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ struct humfs_file *hf;
+ const char *data_path[3] = { mount->data, from, NULL };
+ int err = -ENOMEM;
+
+ hf = (*mount->meta->init_file)();
+ if(hf == NULL)
+ goto out;
+
+ err = (*mount->meta->create_file)(hf, from, S_IRWXUGO, uid, gid, NULL,
+ mount);
+ if(err)
+ goto out_close;
+
+ err = host_make_symlink(data_path, to);
+ if(err)
+ (*mount->meta->remove_file)(from, mount);
+
+ out_close:
+ (*mount->meta->close_file)(hf);
+ out:
+ return(err);
+}
+
+static int humfs_link_file(const char *to, const char *from, int uid, int gid,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path_from[3] = { mount->data, from, NULL };
+ const char *data_path_to[3] = { mount->data, to, NULL };
+ int err;
+
+ err = (*mount->meta->create_link)(to, from, mount);
+ if(err)
+ return(err);
+
+ err = host_link_file(data_path_to, data_path_from);
+ if(err)
+ (*mount->meta->remove_file)(from, mount);
+
+ return(err);
+}
+
+static int humfs_unlink_file(const char *path, struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err;
+
+ err = (*mount->meta->remove_file)(path, mount);
+ if(err)
+ return(err);
+
+ return(host_unlink_file(data_path));
+}
+
+static void humfs_invisible(struct externfs_inode *ext)
+{
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+ struct humfs *mount = hf->mount;
+
+ (*mount->meta->invisible)(hf);
+ not_reclaimable(&hf->data);
+}
+
+static int humfs_make_dir(const char *path, int mode, int uid, int gid,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err;
+
+ err = (*mount->meta->create_dir)(path, mode, uid, gid, mount);
+ if(err)
+ return(err);
+
+ err = host_make_dir(data_path, S_IRWXUGO);
+ if(err)
+ (*mount->meta->remove_dir)(path, mount);
+
+ return(err);
+}
+
+static int humfs_remove_dir(const char *path, int uid, int gid,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, path, NULL };
+ int err;
+
+ err = host_remove_dir(data_path);
+ if(err)
+ return(err);
+
+ (*mount->meta->remove_dir)(path, mount);
+
+ return(err);
+}
+
+static int humfs_read_link(char *file, int uid, int gid, char *buf, int size,
+ struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, file, NULL };
+
+ return(host_read_link(data_path, buf, size));
+}
+
+struct humfs *inode_humfs_info(struct inode *inode)
+{
+ return(container_of(inode_externfs_info(inode), struct humfs, ext));
+}
+
+static int humfs_rename_file(char *from, char *to, struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path_from[3] = { mount->data, from, NULL };
+ const char *data_path_to[3] = { mount->data, to, NULL };
+ int err;
+
+ err = (*mount->meta->rename_file)(from, to, mount);
+ if(err)
+ return(err);
+
+ err = host_rename_file(data_path_from, data_path_to);
+ if(err)
+ (*mount->meta->rename_file)(to, from, mount);
+
+ return(err);
+}
+
+static int humfs_stat_fs(long *bsize_out, long long *blocks_out,
+ long long *bfree_out, long long *bavail_out,
+ long long *files_out, long long *ffree_out,
+ void *fsid_out, int fsid_size, long *namelen_out,
+ long *spare_out, struct externfs_data *ed)
+{
+ struct humfs *mount = container_of(ed, struct humfs, ext);
+ const char *data_path[3] = { mount->data, NULL };
+ int err;
+
+ /* XXX Needs to maintain this info as metadata */
+ err = host_stat_fs(data_path, bsize_out, blocks_out, bfree_out,
+ bavail_out, files_out, ffree_out, fsid_out,
+ fsid_size, namelen_out, spare_out);
+ if(err)
+ return(err);
+
+ *blocks_out = mount->total / *bsize_out;
+ *bfree_out = (mount->total - mount->used) / *bsize_out;
+ *bavail_out = (mount->total - mount->used) / *bsize_out;
+ return(0);
+}
+
+int humfs_truncate_file(struct externfs_inode *ext, __u64 size,
+ struct externfs_data *ed)
+{
+ struct humfs_file *hf = container_of(ext, struct humfs_file, ext);
+
+ return(truncate_file(&hf->data, size));
+}
+
+char *humfs_path(char *dir, char *file)
+{
+ int need_slash, len = strlen(dir) + strlen(file);
+ char *new;
+
+ need_slash = (dir[strlen(dir) - 1] != '/');
+ if(need_slash)
+ len++;
+
+ new = kmalloc(len + 1, GFP_KERNEL);
+ if(new == NULL)
+ return(NULL);
+
+ strcpy(new, dir);
+ if(need_slash)
+ strcat(new, "/");
+ strcat(new, file);
+
+ return(new);
+}
+
+DECLARE_MUTEX(meta_sem);
+struct list_head metas = LIST_HEAD_INIT(metas);
+
+static struct humfs_meta_ops *find_meta(const char *name)
+{
+ struct list_head *ele;
+ struct humfs_meta_ops *m;
+
+ down(&meta_sem);
+ list_for_each(ele, &metas){
+ m = list_entry(ele, struct humfs_meta_ops, list);
+ if(!strcmp(m->name, name))
+ goto out;
+ }
+ m = NULL;
+ out:
+ up(&meta_sem);
+ return(m);
+}
+
+void register_meta(struct humfs_meta_ops *ops)
+{
+ down(&meta_sem);
+ list_add(&ops->list, &metas);
+ up(&meta_sem);
+}
+
+void unregister_meta(struct humfs_meta_ops *ops)
+{
+ down(&meta_sem);
+ list_del(&ops->list);
+ up(&meta_sem);
+}
+
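+/* The superblock is a plain-text file named "superblock" in the humfs
+ * root directory. A minimal example (the numbers are illustrative):
+ *
+ * version 2
+ * used 1048576
+ * total 1073741824
+ * metadata shadow_fs
+ *
+ * The "metadata" line is optional and defaults to shadow_fs; "used" and
+ * "total" are byte counts and must be non-zero.
+ */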
+static struct humfs *read_superblock(char *root)
+{
+ struct humfs *mount;
+ struct humfs_meta_ops *meta = NULL;
+ struct file_handle *fh;
+ const char *path[] = { root, "superblock", NULL };
+ u64 used, total;
+ char meta_buf[33], line[HOSTFS_BUFSIZE], *newline;
+ unsigned long long pos;
+ int version, i, n, err;
+
+ fh = kmalloc(sizeof(*fh), GFP_KERNEL);
+ if(fh == NULL)
+ return(ERR_PTR(-ENOMEM));
+
+ err = host_open_file(path, 1, 0, fh);
+ if(err){
+ printk("Failed to open %s/%s, errno = %d\n", path[0],
+ path[1], err);
+ kfree(fh);
+ return(ERR_PTR(err));
+ }
+
+ used = 0;
+ total = 0;
+ pos = 0;
+ i = 0;
+ while(1){
+ n = read_file(fh, pos, &line[i], sizeof(line) - i - 1);
+ if((n == 0) && (i == 0))
+ break;
+ if(n < 0)
+ return(ERR_PTR(n));
+
+ pos += n;
+ if(n > 0)
+ line[n + i] = '\0';
+
+ newline = strchr(line, '\n');
+ if(newline == NULL){
+ printk("read_superblock - line too long : '%s'\n",
+ line);
+ return(ERR_PTR(-EINVAL));
+ }
+ newline++;
+
+ if(sscanf(line, "version %d\n", &version) == 1){
+ if(version != HUMFS_VERSION){
+ printk("humfs version mismatch - want version "
+ "%d, got version %d.\n", HUMFS_VERSION,
+ version);
+ return(ERR_PTR(-EINVAL));
+ }
+ }
+ else if(sscanf(line, "used %Lu\n", &used) == 1) ;
+ else if(sscanf(line, "total %Lu\n", &total) == 1) ;
+ else if(sscanf(line, "metadata %32s\n", meta_buf) == 1){
+ meta = find_meta(meta_buf);
+ if(meta == NULL){
+ printk("read_superblock - meta api \"%s\" not "
+ "registered\n", meta_buf);
+ return(ERR_PTR(-EINVAL));
+ }
+ }
+
+ else {
+ printk("read_superblock - bogus line : '%s'\n", line);
+ return(ERR_PTR(-EINVAL));
+ }
+
+ i = newline - line;
+ memmove(line, newline, sizeof(line) - i);
+ i = strlen(line);
+ }
+
+ if(used == 0){
+ printk("read_superblock - used not specified or set to "
+ "zero\n");
+ return(ERR_PTR(-EINVAL));
+ }
+ if(total == 0){
+ printk("read_superblock - total not specified or set to "
+ "zero\n");
+ return(ERR_PTR(-EINVAL));
+ }
+ if(used > total){
+ printk("read_superblock - used is greater than total\n");
+ return(ERR_PTR(-EINVAL));
+ }
+
+ if(meta == NULL){
+ meta = find_meta("shadow_fs");
+ }
+
+ if(meta == NULL){
+ printk("read_superblock - valid meta api was not specified\n");
+ return(ERR_PTR(-EINVAL));
+ }
+
+ mount = (*meta->init_mount)(root);
+ if(IS_ERR(mount))
+ return(mount);
+
+ *mount = ((struct humfs) { .total = total,
+ .used = used,
+ .meta = meta });
+ return(mount);
+}
+
+struct externfs_file_ops humfs_no_mmap_file_ops = {
+ .stat_file = humfs_stat_file,
+ .file_type = humfs_file_type,
+ .access_file = NULL,
+ .open_file = humfs_open_file,
+ .open_dir = humfs_open_dir,
+ .read_dir = humfs_read_dir,
+ .read_file = humfs_read_file,
+ .write_file = humfs_write_file,
+ .map_file_page = NULL,
+ .close_file = humfs_close_file,
+ .close_dir = humfs_close_dir,
+ .invisible = humfs_invisible,
+ .create_file = humfs_create_file,
+ .set_attr = humfs_set_attr,
+ .make_symlink = humfs_make_symlink,
+ .unlink_file = humfs_unlink_file,
+ .make_dir = humfs_make_dir,
+ .remove_dir = humfs_remove_dir,
+ .make_node = humfs_make_node,
+ .link_file = humfs_link_file,
+ .read_link = humfs_read_link,
+ .rename_file = humfs_rename_file,
+ .statfs = humfs_stat_fs,
+ .truncate_file = humfs_truncate_file
+};
+
+struct externfs_file_ops humfs_mmap_file_ops = {
+ .stat_file = humfs_stat_file,
+ .file_type = humfs_file_type,
+ .access_file = NULL,
+ .open_file = humfs_open_file,
+ .open_dir = humfs_open_dir,
+ .invisible = humfs_invisible,
+ .read_dir = humfs_read_dir,
+ .read_file = humfs_read_file,
+ .write_file = humfs_write_file,
+ .map_file_page = humfs_map_file_page,
+ .close_file = humfs_close_file,
+ .close_dir = humfs_close_dir,
+ .create_file = humfs_create_file,
+ .set_attr = humfs_set_attr,
+ .make_symlink = humfs_make_symlink,
+ .unlink_file = humfs_unlink_file,
+ .make_dir = humfs_make_dir,
+ .remove_dir = humfs_remove_dir,
+ .make_node = humfs_make_node,
+ .link_file = humfs_link_file,
+ .read_link = humfs_read_link,
+ .rename_file = humfs_rename_file,
+ .statfs = humfs_stat_fs,
+ .truncate_file = humfs_truncate_file
+};
+
+static struct externfs_data *mount_fs(char *mount_arg)
+{
+ char *root, *data, *flags, *ptr;
+ struct humfs *mount;
+ struct externfs_file_ops *file_ops;
+ int err, do_mmap = 0;
+
+ if(mount_arg == NULL){
+ printk("humfs - no host directory specified\n");
+ return(ERR_PTR(-EINVAL));
+ }
+
+ flags = strchr((char *) mount_arg, ',');
+ if(flags != NULL)
+ *flags++ = '\0';
+
+ while(flags != NULL){
+ ptr = strchr(flags, ',');
+ if(ptr != NULL)
+ *ptr++ = '\0';
+
+ if(!strcmp(flags, "mmap"))
+ do_mmap = 1;
+
+ flags = ptr;
+ }
+
+ err = -ENOMEM;
+ root = host_root_filename(mount_arg);
+ if(root == NULL)
+ goto err;
+
+ mount = read_superblock(root);
+ if(IS_ERR(mount)){
+ err = PTR_ERR(mount);
+ goto err_free_root;
+ }
+
+ data = humfs_path(root, "data/");
+ if(data == NULL)
+ goto err_free_mount;
+
+ if(CHOOSE_MODE(do_mmap, 0)){
+ printk("humfs doesn't support mmap in tt mode\n");
+ do_mmap = 0;
+ }
+
+ mount->data = data;
+ mount->mmap = do_mmap;
+
+ file_ops = do_mmap ? &humfs_mmap_file_ops : &humfs_no_mmap_file_ops;
+ init_externfs(&mount->ext, file_ops);
+
+ return(&mount->ext);
+
+ err_free_mount:
+ kfree(mount);
+ err_free_root:
+ kfree(root);
+ err:
+ return(ERR_PTR(err));
+}
+
+struct externfs_mount_ops humfs_mount_ops = {
+ .init_file = humfs_init_file,
+ .mount = mount_fs,
+};
+
+static int __init init_humfs(void)
+{
+ return(register_externfs("humfs", &humfs_mount_ops));
+}
+
+static void __exit exit_humfs(void)
+{
+ unregister_externfs("humfs");
+}
+
+__initcall(init_humfs);
+__exitcall(exit_humfs);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include "hostfs.h"
+#include "metadata.h"
+#include "kern_util.h"
+
+#define METADATA_FILE_PATH(meta) (meta)->root, "file_metadata"
+#define METADATA_DIR_PATH(meta) (meta)->root, "dir_metadata"
+
+struct meta_fs {
+ struct humfs humfs;
+ char *root;
+};
+
+struct meta_file {
+ struct humfs_file humfs;
+ struct file_handle fh;
+};
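+
+/* Metadata for a regular file lives in <root>/file_metadata/<path>; a
+ * directory's lives in <root>/dir_metadata/<path>/metadata. Each is a
+ * single "mode uid gid" line (all decimal), with "type major minor"
+ * appended for device nodes, fifos, and sockets - see
+ * metafs_create_file() and metafs_make_node() below.
+ */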
+
+static int meta_file_path(const char *path, struct meta_fs *meta,
+ const char *path_out[])
+{
+ const char *data_path[] = { meta->root, "data", path, NULL };
+ char data_tmp[HOSTFS_BUFSIZE];
+ char *data_file = get_path(data_path, data_tmp, sizeof(data_tmp));
+ int type;
+
+ if(data_file == NULL)
+ return(-ENOMEM);
+
+ type = os_file_type(data_file);
+ free_path(data_file, data_tmp);
+
+ path_out[0] = meta->root;
+ path_out[2] = path;
+ if(type == OS_TYPE_DIR){
+ path_out[1] = "dir_metadata";
+ path_out[3] = "metadata";
+ path_out[4] = NULL;
+ }
+ else {
+ path_out[1] = "file_metadata";
+ path_out[3] = NULL;
+ }
+
+ return(0);
+}
+
+static int open_meta_file(const char *path, struct humfs *humfs,
+ struct file_handle *fh)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ const char *meta_path[5];
+ char meta_tmp[HOSTFS_BUFSIZE];
+ char *meta_file;
+ int err;
+
+ err = meta_file_path(path, meta, meta_path);
+ if(err)
+ goto out;
+
+ err = -ENOMEM;
+ meta_file = get_path(meta_path, meta_tmp, sizeof(meta_tmp));
+ if(meta_file == NULL)
+ goto out;
+
+ err = open_filehandle(meta_file, of_rdwr(OPENFLAGS()), 0, fh);
+ free_path(meta_file, meta_tmp);
+
+ out:
+ return(err);
+}
+
+static char *meta_fs_name(struct inode *inode)
+{
+ struct humfs *mount = inode_humfs_info(inode);
+ struct meta_fs *meta = container_of(mount, struct meta_fs, humfs);
+ const char *metadata_path[5];
+ char tmp[HOSTFS_BUFSIZE], *name, *file;
+
+ if(meta_file_path("", meta, metadata_path))
+ return(NULL);
+
+ file = get_path(metadata_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ return(NULL);
+
+ name = inode_name_prefix(inode, file);
+
+ free_path(file, tmp);
+ return(name);
+}
+
+static void metafs_invisible(struct humfs_file *hf)
+{
+ struct meta_file *mf = container_of(hf, struct meta_file, humfs);
+
+ not_reclaimable(&mf->fh);
+}
+
+static struct humfs_file *metafs_init_file(void)
+{
+ struct meta_file *mf;
+
+ /* Callers check for NULL, not IS_ERR(), so return NULL on failure */
+ mf = kmalloc(sizeof(*mf), GFP_KERNEL);
+ if(mf == NULL)
+ return(NULL);
+
+ return(&mf->humfs);
+}
+
+static int metafs_open_file(struct humfs_file *hf, const char *path,
+ struct inode *inode, struct humfs *humfs)
+{
+ struct meta_file *mf = container_of(hf, struct meta_file, humfs);
+ int err;
+
+ err = open_meta_file(path, humfs, &mf->fh);
+ if(err)
+ return(err);
+
+ is_reclaimable(&mf->fh, meta_fs_name, inode);
+
+ return(0);
+}
+
+static void metafs_close_file(struct humfs_file *hf)
+{
+ struct meta_file *meta = container_of(hf, struct meta_file, humfs);
+
+ close_file(&meta->fh);
+ kfree(meta);
+}
+
+static int metafs_create_file(struct humfs_file *hf, const char *path,
+ int mode, int uid, int gid, struct inode *inode,
+ struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ struct meta_file *mf = container_of(hf, struct meta_file, humfs);
+ char tmp[HOSTFS_BUFSIZE];
+ const char *metadata_path[] = { METADATA_FILE_PATH(meta), path, NULL };
+ char *file = get_path(metadata_path, tmp, sizeof(tmp));
+ char buf[sizeof("mmmm uuuuuuuuuu gggggggggg")];
+ int err = -ENOMEM;
+
+ if(file == NULL)
+ goto out;
+
+ err = open_filehandle(file, of_write(of_create(OPENFLAGS())), 0644,
+ &mf->fh);
+ if(err)
+ goto out_free_path;
+
+ if(inode != NULL)
+ is_reclaimable(&mf->fh, meta_fs_name, inode);
+
+ sprintf(buf, "%d %d %d\n", mode & S_IRWXUGO, uid, gid);
+ err = write_file(&mf->fh, 0, buf, strlen(buf));
+ if(err < 0)
+ goto out_rm;
+
+ free_path(file, tmp);
+ return(0);
+
+ out_rm:
+ close_file(&mf->fh);
+ os_remove_file(file);
+ out_free_path:
+ free_path(file, tmp);
+ out:
+ return(err);
+}
+
+static int metafs_create_link(const char *to, const char *from,
+ struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ const char *path_to[] = { METADATA_FILE_PATH(meta), to, NULL };
+ const char *path_from[] = { METADATA_FILE_PATH(meta), from, NULL };
+
+ return(host_link_file(path_to, path_from));
+}
+
+static int metafs_remove_file(const char *path, struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ char tmp[HOSTFS_BUFSIZE];
+ const char *metadata_path[] = { METADATA_FILE_PATH(meta), path, NULL };
+ char *file = get_path(metadata_path, tmp, sizeof(tmp));
+ int err = -ENOMEM;
+
+ if(file == NULL)
+ goto out;
+
+ err = os_remove_file(file);
+
+ out:
+ free_path(file, tmp);
+ return(err);
+}
+
+static int metafs_create_directory(const char *path, int mode, int uid,
+ int gid, struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ char tmp[HOSTFS_BUFSIZE];
+ const char *dir_path[] = { METADATA_DIR_PATH(meta), path, NULL, NULL };
+ const char *file_path[] = { METADATA_FILE_PATH(meta), path, NULL,
+ NULL };
+ char *file, dir_meta[sizeof("mmmm uuuuuuuuuu gggggggggg\n")];
+ int err, fd;
+
+ err = host_make_dir(dir_path, 0755);
+ if(err)
+ goto out;
+
+ err = host_make_dir(file_path, 0755);
+ if(err)
+ goto out_rm;
+
+ /* This is to make the index independent of the number of elements in
+ * METADATA_DIR_PATH().
+ */
+ dir_path[sizeof(dir_path) / sizeof(dir_path[0]) - 2] = "metadata";
+
+ err = -ENOMEM;
+ file = get_path(dir_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out_rm;
+
+ fd = os_open_file(file, of_create(of_rdwr(OPENFLAGS())), 0644);
+ if(fd < 0){
+ err = fd;
+ goto out_free;
+ }
+
+ sprintf(dir_meta, "%d %d %d\n", mode & S_IRWXUGO, uid, gid);
+ err = os_write_file(fd, dir_meta, strlen(dir_meta));
+ if(err > 0)
+ err = 0;
+
+ os_close_file(fd);
+ free_path(file, tmp);
+ return(err);
+
+ out_free:
+ free_path(file, tmp);
+ out_rm:
+ /* Undo the directory creation on failure */
+ dir_path[sizeof(dir_path) / sizeof(dir_path[0]) - 2] = NULL;
+ host_remove_dir(dir_path);
+ out:
+ return(err);
+}
+
+static int metafs_remove_directory(const char *path, struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ char tmp[HOSTFS_BUFSIZE], *file;
+ const char *dir_path[] = { METADATA_DIR_PATH(meta), path, "metadata",
+ NULL };
+ const char *file_path[] = { METADATA_FILE_PATH(meta), path, NULL };
+ char *slash;
+ int err;
+
+ err = -ENOMEM;
+ file = get_path(dir_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_remove_file(file);
+ if(err)
+ goto out_free;
+
+ slash = strrchr(file, '/');
+ if(slash == NULL){
+ printk("metafs_remove_directory failed to find last slash\n");
+ err = -EINVAL;
+ goto out_free;
+ }
+ *slash = '\0';
+ err = os_remove_dir(file);
+ free_path(file, tmp);
+ if(err)
+ goto out;
+
+ err = -ENOMEM;
+ file = get_path(file_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = os_remove_dir(file);
+
+ out_free:
+ free_path(file, tmp);
+ out:
+ return(err);
+}
+
+static int metafs_make_node(const char *path, int mode, int uid, int gid,
+ int type, int maj, int min, struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ struct file_handle fh;
+ char tmp[HOSTFS_BUFSIZE];
+ const char *metadata_path[] = { METADATA_FILE_PATH(meta), path, NULL };
+ int err;
+ char buf[sizeof("mmmm uuuuuuuuuu gggggggggg x nnn mmm\n")], *file;
+
+ sprintf(buf, "%d %d %d %c %d %d\n", mode & S_IRWXUGO, uid, gid, type,
+ maj, min);
+
+ err = -ENOMEM;
+ file = get_path(metadata_path, tmp, sizeof(tmp));
+ if(file == NULL)
+ goto out;
+
+ err = open_filehandle(file,
+ of_create(of_rdwr(OPENFLAGS())), 0644, &fh);
+ if(err)
+ goto out_free;
+
+ err = write_file(&fh, 0, buf, strlen(buf));
+ if(err > 0)
+ err = 0;
+
+ close_file(&fh);
+
+ out_free:
+ free_path(file, tmp);
+ out:
+ return(err);
+}
+
+static int metafs_ownerships(const char *path, int *mode_out, int *uid_out,
+ int *gid_out, char *type_out, int *maj_out,
+ int *min_out, struct humfs *humfs)
+{
+ struct file_handle fh;
+ char buf[sizeof("mmmm uuuuuuuuuu gggggggggg x nnn mmm\n")];
+ int err, n, mode, uid, gid, maj, min;
+ char type;
+
+ err = open_meta_file(path, humfs, &fh);
+ if(err)
+ goto out;
+
+ err = os_read_file(fh.fd, buf, sizeof(buf) - 1);
+ if(err < 0)
+ goto out_close;
+
+ buf[err] = '\0';
+ err = 0;
+
+ n = sscanf(buf, "%d %d %d %c %d %d", &mode, &uid, &gid, &type, &maj,
+ &min);
+ if(n == 3){
+ maj = -1;
+ min = -1;
+ type = 0;
+ err = 0;
+ }
+ else if(n != 6)
+ err = -EINVAL;
+
+ if(mode_out != NULL)
+ *mode_out = mode;
+ if(uid_out != NULL)
+ *uid_out = uid;
+ if(gid_out != NULL)
+ *gid_out = gid;
+ if(type_out != NULL)
+ *type_out = type;
+ if(maj_out != NULL)
+ *maj_out = maj;
+ if(min_out != NULL)
+ *min_out = min;
+
+ out_close:
+ close_file(&fh);
+ out:
+ return(err);
+}
+
+static int metafs_change_ownerships(const char *path, int mode, int uid,
+ int gid, struct humfs *humfs)
+{
+ struct file_handle fh;
+ char type;
+ char buf[sizeof("mmmm uuuuuuuuuu gggggggggg x nnn mmm\n")];
+ int err = -ENOMEM, old_mode, old_uid, old_gid, n, maj, min;
+
+ err = open_meta_file(path, humfs, &fh);
+ if(err)
+ goto out;
+
+ err = read_file(&fh, 0, buf, sizeof(buf) - 1);
+ if(err < 0)
+ goto out_close;
+
+ buf[err] = '\0';
+
+ n = sscanf(buf, "%d %d %d %c %d %d\n", &old_mode, &old_uid, &old_gid,
+ &type, &maj, &min);
+ if((n != 3) && (n != 6)){
+ err = -EINVAL;
+ goto out_close;
+ }
+
+ if(mode == -1)
+ mode = old_mode;
+ if(uid == -1)
+ uid = old_uid;
+ if(gid == -1)
+ gid = old_gid;
+
+ if(n == 3)
+ sprintf(buf, "%d %d %d\n", mode & S_IRWXUGO, uid, gid);
+ else
+ sprintf(buf, "%d %d %d %c %d %d\n", mode & S_IRWXUGO, uid, gid,
+ type, maj, min);
+
+ err = write_file(&fh, 0, buf, strlen(buf));
+ if(err < 0)
+ goto out_close;
+
+ err = truncate_file(&fh, strlen(buf));
+
+ out_close:
+ close_file(&fh);
+ out:
+ return(err);
+}
+
+static int metafs_rename_file(const char *from, const char *to,
+ struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+ const char *metadata_path_from[5], *metadata_path_to[5];
+ int err;
+
+ err = meta_file_path(from, meta, metadata_path_from);
+ if(err)
+ return(err);
+
+ err = meta_file_path(to, meta, metadata_path_to);
+ if(err)
+ return(err);
+
+ return(host_rename_file(metadata_path_from, metadata_path_to));
+}
+
+static struct humfs *metafs_init_mount(char *root)
+{
+ struct meta_fs *meta;
+ int err = -ENOMEM;
+
+ meta = kmalloc(sizeof(*meta), GFP_KERNEL);
+ if(meta == NULL)
+ goto out;
+
+ meta->root = uml_strdup(root);
+ if(meta->root == NULL)
+ goto out_free_meta;
+
+ return(&meta->humfs);
+
+ out_free_meta:
+ kfree(meta);
+ out:
+ return(ERR_PTR(err));
+}
+
+static void metafs_free_mount(struct humfs *humfs)
+{
+ struct meta_fs *meta = container_of(humfs, struct meta_fs, humfs);
+
+ kfree(meta);
+}
+
+struct humfs_meta_ops hum_fs_meta_fs_ops = {
+ .list = LIST_HEAD_INIT(hum_fs_meta_fs_ops.list),
+ .name = "shadow_fs",
+ .init_file = metafs_init_file,
+ .open_file = metafs_open_file,
+ .close_file = metafs_close_file,
+ .ownerships = metafs_ownerships,
+ .make_node = metafs_make_node,
+ .create_file = metafs_create_file,
+ .create_link = metafs_create_link,
+ .remove_file = metafs_remove_file,
+ .create_dir = metafs_create_directory,
+ .remove_dir = metafs_remove_directory,
+ .change_ownerships = metafs_change_ownerships,
+ .rename_file = metafs_rename_file,
+ .invisible = metafs_invisible,
+ .init_mount = metafs_init_mount,
+ .free_mount = metafs_free_mount,
+};
+
+static int __init init_meta_fs(void)
+{
+ register_meta(&hum_fs_meta_fs_ops);
+ return(0);
+}
+
+static void __exit exit_meta_fs(void)
+{
+ unregister_meta(&hum_fs_meta_fs_ops);
+}
+
+__initcall(init_meta_fs);
+__exitcall(exit_meta_fs);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2004 Piotr Neuman (sikkh@wp.pl) and
+ * Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_FS_METADATA
+#define __UM_FS_METADATA
+
+#include "linux/fs.h"
+#include "linux/list.h"
+#include "os.h"
+#include "hostfs.h"
+
+struct humfs {
+ struct externfs_data ext;
+ __u64 used;
+ __u64 total;
+ char *data;
+ int mmap;
+ int direct;
+ struct humfs_meta_ops *meta;
+};
+
+struct humfs_file {
+ struct humfs *mount;
+ struct file_handle data;
+ struct externfs_inode ext;
+};
+
+struct humfs_meta_ops {
+ struct list_head list;
+ char *name;
+ struct humfs_file *(*init_file)(void);
+ int (*open_file)(struct humfs_file *hf, const char *path,
+ struct inode *inode, struct humfs *humfs);
+ int (*create_file)(struct humfs_file *hf, const char *path, int mode,
+ int uid, int gid, struct inode *inode,
+ struct humfs *humfs);
+ void (*close_file)(struct humfs_file *humfs);
+ int (*ownerships)(const char *path, int *mode_out, int *uid_out,
+ int *gid_out, char *type_out, int *maj_out,
+ int *min_out, struct humfs *humfs);
+ int (*make_node)(const char *path, int mode, int uid, int gid,
+ int type, int major, int minor, struct humfs *humfs);
+ int (*create_link)(const char *to, const char *from,
+ struct humfs *humfs);
+ int (*remove_file)(const char *path, struct humfs *humfs);
+ int (*create_dir)(const char *path, int mode, int uid, int gid,
+ struct humfs *humfs);
+ int (*remove_dir)(const char *path, struct humfs *humfs);
+ int (*change_ownerships)(const char *path, int mode, int uid, int gid,
+ struct humfs *humfs);
+ int (*rename_file)(const char *from, const char *to,
+ struct humfs *humfs);
+ void (*invisible)(struct humfs_file *hf);
+ struct humfs *(*init_mount)(char *root);
+ void (*free_mount)(struct humfs *humfs);
+};
+
+extern void register_meta(struct humfs_meta_ops *ops);
+extern void unregister_meta(struct humfs_meta_ops *ops);
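+
+/*
+ * Registration sketch (illustrative only; "my_meta_ops" and its helpers
+ * are hypothetical, not part of this patch): a metadata backend fills in
+ * a struct humfs_meta_ops and registers it from an initcall, as the
+ * metafs backend does:
+ *
+ *	static struct humfs_meta_ops my_meta_ops = {
+ *		.list = LIST_HEAD_INIT(my_meta_ops.list),
+ *		.name = "my_meta",
+ *		.init_mount = my_init_mount,
+ *		.free_mount = my_free_mount,
+ *	};
+ *
+ *	static int __init init_my_meta(void)
+ *	{
+ *		register_meta(&my_meta_ops);
+ *		return 0;
+ *	}
+ */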
+
+extern char *humfs_path(char *dir, char *file);
+extern char *humfs_name(struct inode *inode, char *prefix);
+extern struct humfs *inode_humfs_info(struct inode *inode);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * fs/isofs/export.c
+ *
+ * (C) 2004 Paul Serice - The new inode scheme requires switching
+ * from iget() to iget5_locked() which means
+ * the NFS export operations have to be hand
+ * coded because the default routines rely on
+ * iget().
+ *
+ * The following files are helpful:
+ *
+ * Documentation/filesystems/Exporting
+ * fs/exportfs/expfs.c.
+ */
+
+#include <linux/buffer_head.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/iso_fs.h>
+#include <linux/kernel.h>
+
+static struct dentry *
+isofs_export_iget(struct super_block *sb,
+ unsigned long block,
+ unsigned long offset,
+ __u32 generation)
+{
+ struct inode *inode;
+ struct dentry *result;
+
+ if (block == 0)
+ return ERR_PTR(-ESTALE);
+ inode = isofs_iget(sb, block, offset);
+ if (inode == NULL)
+ return ERR_PTR(-ENOMEM);
+ if (is_bad_inode(inode)
+ || (generation && inode->i_generation != generation))
+ {
+ iput(inode);
+ return ERR_PTR(-ESTALE);
+ }
+ result = d_alloc_anon(inode);
+ if (!result) {
+ iput(inode);
+ return ERR_PTR(-ENOMEM);
+ }
+ return result;
+}
+
+static struct dentry *
+isofs_export_get_dentry(struct super_block *sb, void *vobjp)
+{
+ __u32 *objp = vobjp;
+ unsigned long block = objp[0];
+ unsigned long offset = objp[1];
+ __u32 generation = objp[2];
+ return isofs_export_iget(sb, block, offset, generation);
+}
+
+/* This function is surprisingly simple. The trick is understanding
+ * that "child" is always a directory. So, to find its parent, you
+ * simply need to find its ".." entry, normalize its block and offset,
+ * and return the underlying inode. See the comments for
+ * isofs_normalize_block_and_offset(). */
+static struct dentry *isofs_export_get_parent(struct dentry *child)
+{
+ unsigned long parent_block = 0;
+ unsigned long parent_offset = 0;
+ struct inode *child_inode = child->d_inode;
+ struct iso_inode_info *e_child_inode = ISOFS_I(child_inode);
+ struct inode *parent_inode = NULL;
+ struct iso_directory_record *de = NULL;
+ struct buffer_head * bh = NULL;
+ struct dentry *rv = NULL;
+
+ /* "child" must always be a directory. */
+ if (!S_ISDIR(child_inode->i_mode)) {
+ printk(KERN_ERR "isofs: isofs_export_get_parent(): "
+ "child is not a directory!\n");
+ rv = ERR_PTR(-EACCES);
+ goto out;
+ }
+
+ /* It is an invariant that the directory offset is zero. If
+ * it is not zero, it means the directory failed to be
+ * normalized for some reason. */
+ if (e_child_inode->i_iget5_offset != 0) {
+ printk(KERN_ERR "isofs: isofs_export_get_parent(): "
+ "child directory not normalized!\n");
+ rv = ERR_PTR(-EACCES);
+ goto out;
+ }
+
+ /* The child inode has been normalized such that its
+ * i_iget5_block value points to the "." entry. Fortunately,
+ * the ".." entry is located in the same block. */
+ parent_block = e_child_inode->i_iget5_block;
+
+ /* Get the block in question. */
+ bh = sb_bread(child_inode->i_sb, parent_block);
+ if (bh == NULL) {
+ rv = ERR_PTR(-EACCES);
+ goto out;
+ }
+
+ /* This is the "." entry. */
+ de = (struct iso_directory_record*)bh->b_data;
+
+ /* The ".." entry is always the second entry. */
+ parent_offset = (unsigned long)isonum_711(de->length);
+ de = (struct iso_directory_record*)(bh->b_data + parent_offset);
+
+ /* Verify it is in fact the ".." entry. */
+ if ((isonum_711(de->name_len) != 1) || (de->name[0] != 1)) {
+ printk(KERN_ERR "isofs: Unable to find the \"..\" "
+ "directory for NFS.\n");
+ rv = ERR_PTR(-EACCES);
+ goto out;
+ }
+
+ /* Normalize */
+ isofs_normalize_block_and_offset(de, &parent_block, &parent_offset);
+
+ /* Get the inode. */
+ parent_inode = isofs_iget(child_inode->i_sb,
+ parent_block,
+ parent_offset);
+ if (parent_inode == NULL) {
+ rv = ERR_PTR(-EACCES);
+ goto out;
+ }
+
+ /* Allocate the dentry. */
+ rv = d_alloc_anon(parent_inode);
+ if (rv == NULL) {
+ rv = ERR_PTR(-ENOMEM);
+ goto out;
+ }
+
+ out:
+ if (bh) {
+ brelse(bh);
+ }
+ return rv;
+}
+
+static int
+isofs_export_encode_fh(struct dentry *dentry,
+ __u32 *fh32,
+ int *max_len,
+ int connectable)
+{
+ struct inode * inode = dentry->d_inode;
+ struct iso_inode_info * ei = ISOFS_I(inode);
+ int len = *max_len;
+ int type = 1;
+ __u16 *fh16 = (__u16*)fh32;
+
+ /*
+ * WARNING: max_len is 5 for NFSv2. Because of this
+ * limitation, we use the lower 16 bits of fh32[1] to hold the
+ * offset of the inode and the upper 16 bits of fh32[1] to
+ * hold the offset of the parent.
+ */
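+ /*
+ * Resulting handle layout, for illustration (fh16[2] and fh16[3]
+ * are the low and high 16-bit halves of fh32[1]):
+ *
+ * fh32[0] child block
+ * fh16[2] child offset
+ * fh16[3] parent offset (connectable only)
+ * fh32[2] child generation
+ * fh32[3] parent block (connectable only)
+ * fh32[4] parent generation (connectable only)
+ */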
+
+ if (len < 3 || (connectable && len < 5))
+ return 255;
+
+ len = 3;
+ fh32[0] = ei->i_iget5_block;
+ fh16[2] = (__u16)ei->i_iget5_offset; /* fh16 [sic] */
+ fh32[2] = inode->i_generation;
+ if (connectable && !S_ISDIR(inode->i_mode)) {
+ struct inode *parent;
+ struct iso_inode_info *eparent;
+ spin_lock(&dentry->d_lock);
+ parent = dentry->d_parent->d_inode;
+ eparent = ISOFS_I(parent);
+ fh32[3] = eparent->i_iget5_block;
+ fh16[3] = (__u16)eparent->i_iget5_offset; /* fh16 [sic] */
+ fh32[4] = parent->i_generation;
+ spin_unlock(&dentry->d_lock);
+ len = 5;
+ type = 2;
+ }
+ *max_len = len;
+ return type;
+}
+
+
+static struct dentry *
+isofs_export_decode_fh(struct super_block *sb,
+ __u32 *fh32,
+ int fh_len,
+ int fileid_type,
+ int (*acceptable)(void *context, struct dentry *de),
+ void *context)
+{
+ __u16 *fh16 = (__u16*)fh32;
+ __u32 child[3]; /* The child is what triggered all this. */
+ __u32 parent[3]; /* The parent is just along for the ride. */
+
+ if (fh_len < 3 || fileid_type > 2)
+ return NULL;
+
+ child[0] = fh32[0];
+ child[1] = fh16[2]; /* fh16 [sic] */
+ child[2] = fh32[2];
+
+ parent[0] = 0;
+ parent[1] = 0;
+ parent[2] = 0;
+ if (fileid_type == 2) {
+ if (fh_len > 2) parent[0] = fh32[3];
+ parent[1] = fh16[3]; /* fh16 [sic] */
+ if (fh_len > 4) parent[2] = fh32[4];
+ }
+
+ return sb->s_export_op->find_exported_dentry(sb, child, parent,
+ acceptable, context);
+}
+
+
+struct export_operations isofs_export_ops = {
+ .decode_fh = isofs_export_decode_fh,
+ .encode_fh = isofs_export_encode_fh,
+ .get_dentry = isofs_export_get_dentry,
+ .get_parent = isofs_export_get_parent,
+};
--- /dev/null
+/*
+ * JFFS2 -- Journalling Flash File System, Version 2.
+ *
+ * Copyright (C) 2004 Ferenc Havasi <havasi@inf.u-szeged.hu>,
+ * University of Szeged, Hungary
+ *
+ * For licensing information, see the file 'LICENCE' in the
+ * jffs2 directory.
+ *
+ * $Id: compr.h,v 1.5 2004/06/23 16:34:39 havasi Exp $
+ *
+ */
+
+#ifndef __JFFS2_COMPR_H__
+#define __JFFS2_COMPR_H__
+
+#include <linux/kernel.h>
+#include <linux/vmalloc.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/jffs2.h>
+#include <linux/jffs2_fs_i.h>
+#include <linux/jffs2_fs_sb.h>
+#include "nodelist.h"
+
+#define JFFS2_RUBINMIPS_PRIORITY 10
+#define JFFS2_DYNRUBIN_PRIORITY 20
+#define JFFS2_LZARI_PRIORITY 30
+#define JFFS2_LZO_PRIORITY 40
+#define JFFS2_RTIME_PRIORITY 50
+#define JFFS2_ZLIB_PRIORITY 60
+
+#define JFFS2_RUBINMIPS_DISABLED /* RUBINs will be used only */
+#define JFFS2_DYNRUBIN_DISABLED /* for decompression */
+
+#define JFFS2_COMPR_MODE_NONE 0
+#define JFFS2_COMPR_MODE_PRIORITY 1
+#define JFFS2_COMPR_MODE_SIZE 2
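+
+/*
+ * Sketch of the mode semantics as reflected by this interface (compr.c
+ * holds the authoritative logic): in PRIORITY mode the registered
+ * compressors are tried in priority order and the first usable result
+ * wins; in SIZE mode every enabled compressor is run and the smallest
+ * output is kept, with compr_buf/compr_buf_size used as scratch space.
+ */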
+
+void jffs2_set_compression_mode(int mode);
+int jffs2_get_compression_mode(void);
+
+struct jffs2_compressor {
+ struct list_head list;
+ int priority; /* used by priority compr. mode */
+ char *name;
+ char compr; /* JFFS2_COMPR_XXX */
+ int (*compress)(unsigned char *data_in, unsigned char *cpage_out,
+ uint32_t *srclen, uint32_t *destlen, void *model);
+ int (*decompress)(unsigned char *cdata_in, unsigned char *data_out,
+ uint32_t cdatalen, uint32_t datalen, void *model);
+ int usecount;
+ int disabled; /* if set, the compressor won't compress */
+ unsigned char *compr_buf; /* used by size compr. mode */
+ uint32_t compr_buf_size; /* used by size compr. mode */
+ uint32_t stat_compr_orig_size;
+ uint32_t stat_compr_new_size;
+ uint32_t stat_compr_blocks;
+ uint32_t stat_decompr_blocks;
+};
+
+int jffs2_register_compressor(struct jffs2_compressor *comp);
+int jffs2_unregister_compressor(struct jffs2_compressor *comp);
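+
+/*
+ * Registration sketch (illustrative only; "my_comp", my_compress() and
+ * my_decompress() are hypothetical): a compressor module fills in a
+ * struct jffs2_compressor and registers it at init time:
+ *
+ *	static struct jffs2_compressor my_comp = {
+ *		.priority = JFFS2_ZLIB_PRIORITY + 1,
+ *		.name = "mycomp",
+ *		.compress = my_compress,
+ *		.decompress = my_decompress,
+ *	};
+ *
+ *	jffs2_register_compressor(&my_comp);
+ */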
+
+int jffs2_compressors_init(void);
+int jffs2_compressors_exit(void);
+
+uint16_t jffs2_compress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
+ unsigned char *data_in, unsigned char **cpage_out,
+ uint32_t *datalen, uint32_t *cdatalen);
+
+int jffs2_decompress(struct jffs2_sb_info *c, struct jffs2_inode_info *f,
+ uint16_t comprtype, unsigned char *cdata_in,
+ unsigned char *data_out, uint32_t cdatalen, uint32_t datalen);
+
+void jffs2_free_comprbuf(unsigned char *comprbuf, unsigned char *orig);
+
+#ifdef CONFIG_JFFS2_PROC
+int jffs2_enable_compressor_name(const char *name);
+int jffs2_disable_compressor_name(const char *name);
+int jffs2_set_compression_mode_name(const char *mode_name);
+char *jffs2_get_compression_mode_name(void);
+int jffs2_set_compressor_priority(const char *mode_name, int priority);
+char *jffs2_list_compressors(void);
+char *jffs2_stats(void);
+#endif
+
+/* Compressor modules */
+/* These functions will be called by jffs2_compressors_init/exit */
+
+#ifdef CONFIG_JFFS2_RUBIN
+int jffs2_rubinmips_init(void);
+void jffs2_rubinmips_exit(void);
+int jffs2_dynrubin_init(void);
+void jffs2_dynrubin_exit(void);
+#endif
+#ifdef CONFIG_JFFS2_RTIME
+int jffs2_rtime_init(void);
+void jffs2_rtime_exit(void);
+#endif
+#ifdef CONFIG_JFFS2_ZLIB
+int jffs2_zlib_init(void);
+void jffs2_zlib_exit(void);
+#endif
+#ifdef CONFIG_JFFS2_LZARI
+int jffs2_lzari_init(void);
+void jffs2_lzari_exit(void);
+#endif
+#ifdef CONFIG_JFFS2_LZO
+int jffs2_lzo_init(void);
+void jffs2_lzo_exit(void);
+#endif
+
+/* Prototypes from proc.c */
+int jffs2_proc_init(void);
+int jffs2_proc_exit(void);
+
+#endif /* __JFFS2_COMPR_H__ */
--- /dev/null
+/*
+ * linux/fs/nls_ascii.c
+ *
+ * Charset ascii translation tables.
+ * Generated automatically from the Unicode and charset
+ * tables from the Unicode Organization (www.unicode.org).
+ * The Unicode to charset table has only exact mappings.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/nls.h>
+#include <linux/errno.h>
+
+static wchar_t charset2uni[128] = {
+ /* 0x00*/
+ 0x0000, 0x0001, 0x0002, 0x0003,
+ 0x0004, 0x0005, 0x0006, 0x0007,
+ 0x0008, 0x0009, 0x000a, 0x000b,
+ 0x000c, 0x000d, 0x000e, 0x000f,
+ /* 0x10*/
+ 0x0010, 0x0011, 0x0012, 0x0013,
+ 0x0014, 0x0015, 0x0016, 0x0017,
+ 0x0018, 0x0019, 0x001a, 0x001b,
+ 0x001c, 0x001d, 0x001e, 0x001f,
+ /* 0x20*/
+ 0x0020, 0x0021, 0x0022, 0x0023,
+ 0x0024, 0x0025, 0x0026, 0x0027,
+ 0x0028, 0x0029, 0x002a, 0x002b,
+ 0x002c, 0x002d, 0x002e, 0x002f,
+ /* 0x30*/
+ 0x0030, 0x0031, 0x0032, 0x0033,
+ 0x0034, 0x0035, 0x0036, 0x0037,
+ 0x0038, 0x0039, 0x003a, 0x003b,
+ 0x003c, 0x003d, 0x003e, 0x003f,
+ /* 0x40*/
+ 0x0040, 0x0041, 0x0042, 0x0043,
+ 0x0044, 0x0045, 0x0046, 0x0047,
+ 0x0048, 0x0049, 0x004a, 0x004b,
+ 0x004c, 0x004d, 0x004e, 0x004f,
+ /* 0x50*/
+ 0x0050, 0x0051, 0x0052, 0x0053,
+ 0x0054, 0x0055, 0x0056, 0x0057,
+ 0x0058, 0x0059, 0x005a, 0x005b,
+ 0x005c, 0x005d, 0x005e, 0x005f,
+ /* 0x60*/
+ 0x0060, 0x0061, 0x0062, 0x0063,
+ 0x0064, 0x0065, 0x0066, 0x0067,
+ 0x0068, 0x0069, 0x006a, 0x006b,
+ 0x006c, 0x006d, 0x006e, 0x006f,
+ /* 0x70*/
+ 0x0070, 0x0071, 0x0072, 0x0073,
+ 0x0074, 0x0075, 0x0076, 0x0077,
+ 0x0078, 0x0079, 0x007a, 0x007b,
+ 0x007c, 0x007d, 0x007e, 0x007f,
+};
+
+static unsigned char page00[128] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x40-0x47 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x48-0x4f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x50-0x57 */
+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x60-0x67 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x68-0x6f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x70-0x77 */
+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static unsigned char *page_uni2charset[128] = {
+ page00, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
+};
+
+static unsigned char charset2lower[128] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x40-0x47 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x48-0x4f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x50-0x57 */
+ 0x78, 0x79, 0x7a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x60-0x67 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x68-0x6f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x70-0x77 */
+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static unsigned char charset2upper[128] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x40-0x47 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x48-0x4f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x50-0x57 */
+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x60-0x67 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x68-0x6f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x70-0x77 */
+ 0x58, 0x59, 0x5a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static int uni2char(wchar_t uni, unsigned char *out, int boundlen)
+{
+ unsigned char *uni2charset;
+ unsigned char cl = uni & 0x00ff;
+ unsigned char ch = (uni & 0xff00) >> 8;
+
+ if (boundlen <= 0)
+ return -ENAMETOOLONG;
+
+ uni2charset = page_uni2charset[ch];
+ if (uni2charset && uni2charset[cl])
+ out[0] = uni2charset[cl];
+ else
+ return -EINVAL;
+ return 1;
+}
+
+static int char2uni(const unsigned char *rawstring, int boundlen, wchar_t *uni)
+{
+ *uni = charset2uni[*rawstring];
+ if (*uni == 0x0000)
+ return -EINVAL;
+ return 1;
+}
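+
+/*
+ * Round-trip example (illustrative): for ASCII the mapping is the
+ * identity, so U+0041 ('A') converts to the byte 0x41 and back:
+ *
+ *	unsigned char c;
+ *	wchar_t u;
+ *	uni2char(0x0041, &c, 1);	returns 1 with c == 0x41
+ *	char2uni(&c, 1, &u);		returns 1 with u == 0x0041
+ */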
+
+static struct nls_table table = {
+ .charset = "ascii",
+ .uni2char = uni2char,
+ .char2uni = char2uni,
+ .charset2lower = charset2lower,
+ .charset2upper = charset2upper,
+ .owner = THIS_MODULE,
+};
+
+static int __init init_nls_ascii(void)
+{
+ return register_nls(&table);
+}
+
+static void __exit exit_nls_ascii(void)
+{
+ unregister_nls(&table);
+}
+
+module_init(init_nls_ascii)
+module_exit(exit_nls_ascii)
+
+MODULE_LICENSE("Dual BSD/GPL");
--- /dev/null
+/*
+ * collate.c - NTFS kernel collation handling. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include "ntfs.h"
+#include "collate.h"
+
+static int ntfs_collate_binary(ntfs_volume *vol,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len)
+{
+ int rc;
+
+ ntfs_debug("Entering.");
+ rc = memcmp(data1, data2, min(data1_len, data2_len));
+ if (!rc && (data1_len != data2_len)) {
+ if (data1_len < data2_len)
+ rc = -1;
+ else
+ rc = 1;
+ }
+ ntfs_debug("Done, returning %i", rc);
+ return rc;
+}
+
+static int ntfs_collate_ntofs_ulong(ntfs_volume *vol,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len)
+{
+ int rc;
+ u32 d1, d2;
+
+ ntfs_debug("Entering.");
+ /* FIXME: We don't really want to BUG() here. */
+ BUG_ON(data1_len != data2_len);
+ BUG_ON(data1_len != 4);
+ d1 = le32_to_cpup(data1);
+ d2 = le32_to_cpup(data2);
+ if (d1 < d2)
+ rc = -1;
+ else {
+ if (d1 == d2)
+ rc = 0;
+ else
+ rc = 1;
+ }
+ ntfs_debug("Done, returning %i", rc);
+ return rc;
+}
+
+typedef int (*ntfs_collate_func_t)(ntfs_volume *, const void *, const int,
+ const void *, const int);
+
+static ntfs_collate_func_t ntfs_do_collate0x0[3] = {
+ ntfs_collate_binary,
+ NULL/*ntfs_collate_file_name*/,
+ NULL/*ntfs_collate_unicode_string*/,
+};
+
+static ntfs_collate_func_t ntfs_do_collate0x1[4] = {
+ ntfs_collate_ntofs_ulong,
+ NULL/*ntfs_collate_ntofs_sid*/,
+ NULL/*ntfs_collate_ntofs_security_hash*/,
+ NULL/*ntfs_collate_ntofs_ulongs*/,
+};
+
+/**
+ * ntfs_collate - collate two data items using a specified collation rule
+ * @vol: ntfs volume to which the data items belong
+ * @cr: collation rule to use when comparing the items
+ * @data1: first data item to collate
+ * @data1_len: length in bytes of @data1
+ * @data2: second data item to collate
+ * @data2_len: length in bytes of @data2
+ *
+ * Collate the two data items @data1 and @data2 using the collation rule @cr
+ * and return -1, 0, or 1 if @data1 is found, respectively, to collate before,
+ * to match, or to collate after @data2.
+ *
+ * For speed we use the collation rule @cr as an index into two tables of
+ * function pointers to call the appropriate collation function.
+ */
+int ntfs_collate(ntfs_volume *vol, COLLATION_RULES cr,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len)
+{
+ ntfs_debug("Entering.");
+ /*
+ * FIXME: At the moment we only support COLLATION_BINARY and
+ * COLLATION_NTOFS_ULONG, so we BUG() for everything else for now.
+ */
+ BUG_ON(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG);
+ cr = le32_to_cpu(cr);
+ BUG_ON(cr < 0);
+ if (cr <= 0x02)
+ return ntfs_do_collate0x0[cr](vol, data1, data1_len,
+ data2, data2_len);
+ BUG_ON(cr < 0x10);
+ cr -= 0x10;
+ if (likely(cr <= 3))
+ return ntfs_do_collate0x1[cr](vol, data1, data1_len,
+ data2, data2_len);
+ BUG();
+ return 0;
+}
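+
+/*
+ * Usage sketch (illustrative only): comparing two 4-byte little-endian
+ * keys with the NTOFS_ULONG rule:
+ *
+ *	le32 key1 = cpu_to_le32(1);
+ *	le32 key2 = cpu_to_le32(2);
+ *	rc = ntfs_collate(vol, COLLATION_NTOFS_ULONG, &key1, 4, &key2, 4);
+ *
+ * rc is -1 here, since 1 collates before 2.
+ */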
--- /dev/null
+/*
+ * collate.h - Defines for NTFS kernel collation handling. Part of the
+ * Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_NTFS_COLLATE_H
+#define _LINUX_NTFS_COLLATE_H
+
+#include "types.h"
+#include "volume.h"
+
+static inline BOOL ntfs_is_collation_rule_supported(COLLATION_RULES cr)
+{
+ /*
+ * FIXME: At the moment we only support COLLATION_BINARY and
+ * COLLATION_NTOFS_ULONG, so we return false for everything else for
+ * now.
+ */
+ if (unlikely(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG))
+ return FALSE;
+ cr = le32_to_cpu(cr);
+ if (likely(((cr >= 0) && (cr <= 0x02)) ||
+ ((cr >= 0x10) && (cr <= 0x13))))
+ return TRUE;
+ return FALSE;
+}
+
+extern int ntfs_collate(ntfs_volume *vol, COLLATION_RULES cr,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len);
+
+#endif /* _LINUX_NTFS_COLLATE_H */
--- /dev/null
+/*
+ * index.c - NTFS kernel index handling. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include "ntfs.h"
+#include "collate.h"
+#include "index.h"
+
+/**
+ * ntfs_index_ctx_get - allocate and initialize a new index context
+ * @idx_ni: ntfs index inode with which to initialize the context
+ *
+ * Allocate a new index context, initialize it with @idx_ni and return it.
+ * Return NULL if allocation failed.
+ *
+ * Locking: Caller must hold i_sem on the index inode.
+ */
+ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni)
+{
+ ntfs_index_context *ictx;
+
+ ictx = kmem_cache_alloc(ntfs_index_ctx_cache, SLAB_NOFS);
+ if (ictx) {
+ ictx->idx_ni = idx_ni;
+ ictx->entry = NULL;
+ ictx->data = NULL;
+ ictx->data_len = 0;
+ ictx->is_in_root = 0;
+ ictx->ir = NULL;
+ ictx->actx = NULL;
+ ictx->base_ni = NULL;
+ ictx->ia = NULL;
+ ictx->page = NULL;
+ }
+ return ictx;
+}
+
+/**
+ * ntfs_index_ctx_put - release an index context
+ * @ictx: index context to free
+ *
+ * Release the index context @ictx, releasing all associated resources.
+ *
+ * Locking: Caller must hold i_sem on the index inode.
+ */
+void ntfs_index_ctx_put(ntfs_index_context *ictx)
+{
+ if (ictx->entry) {
+ if (ictx->is_in_root) {
+ if (ictx->actx)
+ put_attr_search_ctx(ictx->actx);
+ if (ictx->base_ni)
+ unmap_mft_record(ictx->base_ni);
+ } else {
+ struct page *page = ictx->page;
+ if (page) {
+ BUG_ON(!PageLocked(page));
+ unlock_page(page);
+ ntfs_unmap_page(page);
+ }
+ }
+ }
+ kmem_cache_free(ntfs_index_ctx_cache, ictx);
+ return;
+}
+
+/**
+ * ntfs_index_lookup - find a key in an index and return its index entry
+ * @key: [IN] key for which to search in the index
+ * @key_len: [IN] length of @key in bytes
+ * @ictx: [IN/OUT] context describing the index and the returned entry
+ *
+ * Before calling ntfs_index_lookup(), @ictx must have been obtained from a
+ * call to ntfs_index_ctx_get().
+ *
+ * Look for the @key in the index specified by the index lookup context @ictx.
+ * ntfs_index_lookup() walks the contents of the index looking for the @key.
+ *
+ * If the @key is found in the index, 0 is returned and @ictx is setup to
+ * describe the index entry containing the matching @key. @ictx->entry is the
+ * index entry and @ictx->data and @ictx->data_len are the index entry data and
+ * its length in bytes, respectively.
+ *
+ * If the @key is not found in the index, -ENOENT is returned and @ictx is
+ * setup to describe the index entry whose key collates immediately after the
+ * search @key, i.e. this is the position in the index at which an index entry
+ * with a key of @key would need to be inserted.
+ *
+ * If an error occurs return the negative error code and @ictx is left
+ * untouched.
+ *
+ * When finished with the entry and its data, call ntfs_index_ctx_put() to free
+ * the context and other associated resources.
+ *
+ * If the index entry was modified, call flush_dcache_index_entry_page()
+ * immediately after the modification and either ntfs_index_entry_mark_dirty()
+ * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
+ * ensure that the changes are written to disk.
+ *
+ * Locking: - Caller must hold i_sem on the index inode.
+ * - Each page cache page in the index allocation mapping must be
+ * locked whilst being accessed otherwise we may find a corrupt
+ * page due to it being under ->writepage at the moment which
+ * applies the mst protection fixups before writing out and then
+ * removes them again after the write is complete after which it
+ * unlocks the page.
+ */
+int ntfs_index_lookup(const void *key, const int key_len,
+ ntfs_index_context *ictx)
+{
+ ntfs_inode *idx_ni = ictx->idx_ni;
+ ntfs_volume *vol = idx_ni->vol;
+ struct super_block *sb = vol->sb;
+ ntfs_inode *base_ni = idx_ni->ext.base_ntfs_ino;
+ MFT_RECORD *m;
+ INDEX_ROOT *ir;
+ INDEX_ENTRY *ie;
+ INDEX_ALLOCATION *ia;
+ u8 *index_end;
+ attr_search_context *actx;
+ int rc, err = 0;
+ VCN vcn, old_vcn;
+ struct address_space *ia_mapping;
+ struct page *page;
+ u8 *kaddr;
+
+ ntfs_debug("Entering.");
+ BUG_ON(!NInoAttr(idx_ni));
+ BUG_ON(idx_ni->type != AT_INDEX_ALLOCATION);
+ BUG_ON(idx_ni->nr_extents != -1);
+ BUG_ON(!base_ni);
+ BUG_ON(!key);
+ BUG_ON(key_len <= 0);
+ if (!ntfs_is_collation_rule_supported(
+ idx_ni->itype.index.collation_rule)) {
+ ntfs_error(sb, "Index uses unsupported collation rule 0x%x. "
+ "Aborting lookup.", le32_to_cpu(
+ idx_ni->itype.index.collation_rule));
+ return -EOPNOTSUPP;
+ }
+ /* Get hold of the mft record for the index inode. */
+ m = map_mft_record(base_ni);
+ if (unlikely(IS_ERR(m))) {
+ ntfs_error(sb, "map_mft_record() failed with error code %ld.",
+ -PTR_ERR(m));
+ return PTR_ERR(m);
+ }
+ actx = get_attr_search_ctx(base_ni, m);
+ if (unlikely(!actx)) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+ /* Find the index root attribute in the mft record. */
+ if (!lookup_attr(AT_INDEX_ROOT, idx_ni->name, idx_ni->name_len,
+ CASE_SENSITIVE, 0, NULL, 0, actx)) {
+ ntfs_error(sb, "Index root attribute missing in inode 0x%lx.",
+ idx_ni->mft_no);
+ err = -EIO;
+ goto err_out;
+ }
+ /* Get to the index root value (it has been verified in read_inode). */
+ ir = (INDEX_ROOT*)((u8*)actx->attr +
+ le16_to_cpu(actx->attr->data.resident.value_offset));
+ index_end = (u8*)&ir->index + le32_to_cpu(ir->index.index_length);
+ /* The first index entry. */
+ ie = (INDEX_ENTRY*)((u8*)&ir->index +
+ le32_to_cpu(ir->index.entries_offset));
+ /*
+ * Loop until we exceed valid memory (corruption case) or until we
+ * reach the last entry.
+ */
+ for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
+ /* Bounds checks. */
+ if ((u8*)ie < (u8*)actx->mrec || (u8*)ie +
+ sizeof(INDEX_ENTRY_HEADER) > index_end ||
+ (u8*)ie + le16_to_cpu(ie->length) > index_end)
+ goto idx_err_out;
+ /*
+ * The last entry cannot contain a key. It can however contain
+ * a pointer to a child node in the B+tree so we just break out.
+ */
+ if (ie->flags & INDEX_ENTRY_END)
+ break;
+ /* Further bounds checks. */
+ if ((u32)sizeof(INDEX_ENTRY_HEADER) +
+ le16_to_cpu(ie->key_length) >
+ le16_to_cpu(ie->data.vi.data_offset) ||
+ (u32)le16_to_cpu(ie->data.vi.data_offset) +
+ le16_to_cpu(ie->data.vi.data_length) >
+ le16_to_cpu(ie->length))
+ goto idx_err_out;
+ /* If the keys match perfectly, we setup @ictx and return 0. */
+ if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
+ &ie->key, key_len)) {
+ir_done:
+ ictx->is_in_root = TRUE;
+ ictx->ir = ir;
+ ictx->actx = actx;
+ ictx->base_ni = base_ni;
+ ictx->ia = NULL;
+ ictx->page = NULL;
+done:
+ ictx->entry = ie;
+ ictx->data = (u8*)ie +
+ le16_to_cpu(ie->data.vi.data_offset);
+ ictx->data_len = le16_to_cpu(ie->data.vi.data_length);
+ ntfs_debug("Done.");
+ return err;
+ }
+ /*
+ * Not a perfect match, need to do full blown collation so we
+ * know which way in the B+tree we have to go.
+ */
+ rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
+ key_len, &ie->key, le16_to_cpu(ie->key_length));
+ /*
+ * If @key collates before the key of the current entry, there
+ * is definitely no such key in this index but we might need to
+ * descend into the B+tree so we just break out of the loop.
+ */
+ if (rc == -1)
+ break;
+ /*
+ * A match should never happen as the memcmp() call should have
+ * caught it, but we still handle it correctly.
+ */
+ if (!rc)
+ goto ir_done;
+ /* The keys are not equal, continue the search. */
+ }
+ /*
+ * We have finished with this index without success. Check for the
+ * presence of a child node and if not present setup @ictx and return
+ * -ENOENT.
+ */
+ if (!(ie->flags & INDEX_ENTRY_NODE)) {
+ ntfs_debug("Entry not found.");
+ err = -ENOENT;
+ goto ir_done;
+ } /* Child node present, descend into it. */
+ /* Consistency check: Verify that an index allocation exists. */
+ if (!NInoIndexAllocPresent(idx_ni)) {
+ ntfs_error(sb, "No index allocation attribute but index entry "
+ "requires one. Inode 0x%lx is corrupt or "
+ "driver bug.", idx_ni->mft_no);
+ err = -EIO;
+ goto err_out;
+ }
+ /* Get the starting vcn of the index_block holding the child node. */
+ vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
+ ia_mapping = VFS_I(idx_ni)->i_mapping;
+ /*
+ * We are done with the index root and the mft record. Release them,
+ * otherwise we deadlock with ntfs_map_page().
+ */
+ put_attr_search_ctx(actx);
+ unmap_mft_record(base_ni);
+ m = NULL;
+ actx = NULL;
+descend_into_child_node:
+ /*
+ * Convert vcn to index into the index allocation attribute in units
+ * of PAGE_CACHE_SIZE and map the page cache page, reading it from
+ * disk if necessary.
+ */
+ page = ntfs_map_page(ia_mapping, vcn <<
+ idx_ni->itype.index.vcn_size_bits >> PAGE_CACHE_SHIFT);
+ if (IS_ERR(page)) {
+ ntfs_error(sb, "Failed to map index page, error %ld.",
+ -PTR_ERR(page));
+ err = PTR_ERR(page);
+ goto err_out;
+ }
+ lock_page(page);
+ kaddr = (u8*)page_address(page);
+fast_descend_into_child_node:
+ /* Get to the index allocation block. */
+ ia = (INDEX_ALLOCATION*)(kaddr + ((vcn <<
+ idx_ni->itype.index.vcn_size_bits) & ~PAGE_CACHE_MASK));
+ /* Bounds checks. */
+ if ((u8*)ia < kaddr || (u8*)ia > kaddr + PAGE_CACHE_SIZE) {
+ ntfs_error(sb, "Out of bounds check failed. Corrupt inode "
+ "0x%lx or driver bug.", idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ if (sle64_to_cpu(ia->index_block_vcn) != vcn) {
+ ntfs_error(sb, "Actual VCN (0x%llx) of index buffer is "
+ "different from expected VCN (0x%llx). Inode "
+ "0x%lx is corrupt or driver bug.",
+ (unsigned long long)
+ sle64_to_cpu(ia->index_block_vcn),
+ (unsigned long long)vcn, idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ if (le32_to_cpu(ia->index.allocated_size) + 0x18 !=
+ idx_ni->itype.index.block_size) {
+ ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx has "
+ "a size (%u) differing from the index "
+ "specified size (%u). Inode is corrupt or "
+ "driver bug.", (unsigned long long)vcn,
+ idx_ni->mft_no,
+ le32_to_cpu(ia->index.allocated_size) + 0x18,
+ idx_ni->itype.index.block_size);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ index_end = (u8*)ia + idx_ni->itype.index.block_size;
+ if (index_end > kaddr + PAGE_CACHE_SIZE) {
+ ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx "
+ "crosses page boundary. Impossible! Cannot "
+ "access! This is probably a bug in the "
+ "driver.", (unsigned long long)vcn,
+ idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ index_end = (u8*)&ia->index + le32_to_cpu(ia->index.index_length);
+ if (index_end > (u8*)ia + idx_ni->itype.index.block_size) {
+ ntfs_error(sb, "Size of index buffer (VCN 0x%llx) of inode "
+ "0x%lx exceeds maximum size.",
+ (unsigned long long)vcn, idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ /* The first index entry. */
+ ie = (INDEX_ENTRY*)((u8*)&ia->index +
+ le32_to_cpu(ia->index.entries_offset));
+ /*
+ * Iterate similar to above big loop but applied to index buffer, thus
+ * loop until we exceed valid memory (corruption case) or until we
+ * reach the last entry.
+ */
+ for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
+ /* Bounds checks. */
+ if ((u8*)ie < (u8*)ia || (u8*)ie +
+ sizeof(INDEX_ENTRY_HEADER) > index_end ||
+ (u8*)ie + le16_to_cpu(ie->length) > index_end) {
+ ntfs_error(sb, "Index entry out of bounds in inode "
+ "0x%lx.", idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ /*
+ * The last entry cannot contain a key. It can however contain
+ * a pointer to a child node in the B+tree so we just break out.
+ */
+ if (ie->flags & INDEX_ENTRY_END)
+ break;
+ /* Further bounds checks. */
+ if ((u32)sizeof(INDEX_ENTRY_HEADER) +
+ le16_to_cpu(ie->key_length) >
+ le16_to_cpu(ie->data.vi.data_offset) ||
+ (u32)le16_to_cpu(ie->data.vi.data_offset) +
+ le16_to_cpu(ie->data.vi.data_length) >
+ le16_to_cpu(ie->length)) {
+ ntfs_error(sb, "Index entry out of bounds in inode "
+ "0x%lx.", idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ /* If the keys match perfectly, we setup @ictx and return 0. */
+ if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
+ &ie->key, key_len)) {
+ia_done:
+ ictx->is_in_root = FALSE;
+ ictx->actx = NULL;
+ ictx->base_ni = NULL;
+ ictx->ia = ia;
+ ictx->page = page;
+ goto done;
+ }
+ /*
+ * Not a perfect match, need to do full blown collation so we
+ * know which way in the B+tree we have to go.
+ */
+ rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
+ key_len, &ie->key, le16_to_cpu(ie->key_length));
+ /*
+ * If @key collates before the key of the current entry, there
+ * is definitely no such key in this index but we might need to
+ * descend into the B+tree so we just break out of the loop.
+ */
+ if (rc == -1)
+ break;
+ /*
+ * A match should never happen as the memcmp() call should have
+ * caught it, but we still handle it correctly.
+ */
+ if (!rc)
+ goto ia_done;
+ /* The keys are not equal, continue the search. */
+ }
+ /*
+ * We have finished with this index buffer without success. Check for
+ * the presence of a child node and if not present return -ENOENT.
+ */
+ if (!(ie->flags & INDEX_ENTRY_NODE)) {
+ ntfs_debug("Entry not found.");
+ err = -ENOENT;
+ goto ia_done;
+ }
+ if ((ia->index.flags & NODE_MASK) == LEAF_NODE) {
+ ntfs_error(sb, "Index entry with child node found in a leaf "
+ "node in inode 0x%lx.", idx_ni->mft_no);
+ err = -EIO;
+ goto unm_err_out;
+ }
+ /* Child node present, descend into it. */
+ old_vcn = vcn;
+ vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
+ if (vcn >= 0) {
+ /*
+ * If vcn is in the same page cache page as old_vcn we recycle
+ * the mapped page.
+ */
+ if (old_vcn << vol->cluster_size_bits >>
+ PAGE_CACHE_SHIFT == vcn <<
+ vol->cluster_size_bits >>
+ PAGE_CACHE_SHIFT)
+ goto fast_descend_into_child_node;
+ unlock_page(page);
+ ntfs_unmap_page(page);
+ goto descend_into_child_node;
+ }
+ ntfs_error(sb, "Negative child node vcn in inode 0x%lx.",
+ idx_ni->mft_no);
+ err = -EIO;
+unm_err_out:
+ unlock_page(page);
+ ntfs_unmap_page(page);
+err_out:
+ if (actx)
+ put_attr_search_ctx(actx);
+ if (m)
+ unmap_mft_record(base_ni);
+ return err;
+idx_err_out:
+ ntfs_error(sb, "Corrupt index. Aborting lookup.");
+ err = -EIO;
+ goto err_out;
+}
+
+#ifdef NTFS_RW
+
+/**
+ * __ntfs_index_entry_mark_dirty - mark an index allocation entry dirty
+ * @ictx: ntfs index context describing the index entry
+ *
+ * NOTE: You want to use fs/ntfs/index.h::ntfs_index_entry_mark_dirty() instead!
+ *
+ * Mark the index allocation entry described by the index entry context @ictx
+ * dirty.
+ *
+ * The index entry must be in an index block belonging to the index allocation
+ * attribute. Mark the buffers belonging to the index record as well as the
+ * page cache page the index block is in dirty. This automatically marks the
+ * VFS inode of the ntfs index inode to which the index entry belongs dirty,
+ * too (I_DIRTY_PAGES) and this in turn ensures the page buffers, and hence the
+ * dirty index block, will be written out to disk later.
+ */
+void __ntfs_index_entry_mark_dirty(ntfs_index_context *ictx)
+{
+ ntfs_inode *ni;
+ struct page *page;
+ struct buffer_head *bh, *head;
+ unsigned int rec_start, rec_end, bh_size, bh_start, bh_end;
+
+ BUG_ON(ictx->is_in_root);
+ ni = ictx->idx_ni;
+ page = ictx->page;
+ BUG_ON(!page_has_buffers(page));
+ /*
+ * If the index block is the same size as the page cache page, set all
+ * the buffers in the page, as well as the page itself, dirty.
+ */
+ if (ni->itype.index.block_size == PAGE_CACHE_SIZE) {
+ __set_page_dirty_buffers(page);
+ return;
+ }
+ /* Set only the buffers in which the index block is located dirty. */
+ rec_start = (unsigned int)((u8*)ictx->ia - (u8*)page_address(page));
+ rec_end = rec_start + ni->itype.index.block_size;
+ bh_size = ni->vol->sb->s_blocksize;
+ bh_start = 0;
+ bh = head = page_buffers(page);
+ do {
+ bh_end = bh_start + bh_size;
+ if ((bh_start >= rec_start) && (bh_end <= rec_end))
+ set_buffer_dirty(bh);
+ bh_start = bh_end;
+ } while ((bh = bh->b_this_page) != head);
+ /* Finally, set the page itself dirty, too. */
+ __set_page_dirty_nobuffers(page);
+}
+
+#endif /* NTFS_RW */
--- /dev/null
+/*
+ * index.h - Defines for NTFS kernel index handling. Part of the Linux-NTFS
+ * project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_NTFS_INDEX_H
+#define _LINUX_NTFS_INDEX_H
+
+#include <linux/fs.h>
+
+#include "types.h"
+#include "layout.h"
+#include "inode.h"
+#include "attrib.h"
+#include "mft.h"
+
+/**
+ * ntfs_index_context - ntfs index context describing an index entry
+ * @idx_ni: index inode containing the @entry described by this context
+ * @entry: index entry (points into @ir or @ia)
+ * @data: index entry data (points into @entry)
+ * @data_len: length in bytes of @data
+ * @is_in_root: TRUE if @entry is in @ir and FALSE if it is in @ia
+ * @ir: index root if @is_in_root and NULL otherwise
+ * @actx: attribute search context if @is_in_root and NULL otherwise
+ * @base_ni: base inode if @is_in_root and NULL otherwise
+ * @ia: index block if @is_in_root is FALSE and NULL otherwise
+ * @page: page if @is_in_root is FALSE and NULL otherwise
+ *
+ * @idx_ni is the index inode this context belongs to.
+ *
+ * @entry is the index entry described by this context. @data and @data_len
+ * are the index entry data and its length in bytes, respectively. @data
+ * simply points into @entry. This is probably what the user is interested in.
+ *
+ * If @is_in_root is TRUE, @entry is in the index root attribute @ir described
+ * by the attribute search context @actx and the base inode @base_ni. @ia and
+ * @page are NULL in this case.
+ *
+ * If @is_in_root is FALSE, @entry is in the index allocation attribute and @ia
+ * and @page point to the index allocation block and the mapped, locked page it
+ * is in, respectively. @ir, @actx and @base_ni are NULL in this case.
+ *
+ * To obtain a context call ntfs_index_ctx_get().
+ *
+ * We use this context to allow ntfs_index_lookup() to return the found index
+ * @entry and its @data without having to allocate a buffer and copy the @entry
+ * and/or its @data into it.
+ *
+ * When finished with the @entry and its @data, call ntfs_index_ctx_put() to
+ * free the context and other associated resources.
+ *
+ * If the index entry was modified, call flush_dcache_index_entry_page()
+ * immediately after the modification and either ntfs_index_entry_mark_dirty()
+ * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
+ * ensure that the changes are written to disk.
+ */
+typedef struct {
+ ntfs_inode *idx_ni;
+ INDEX_ENTRY *entry;
+ void *data;
+ u16 data_len;
+ BOOL is_in_root;
+ INDEX_ROOT *ir;
+ attr_search_context *actx;
+ ntfs_inode *base_ni;
+ INDEX_ALLOCATION *ia;
+ struct page *page;
+} ntfs_index_context;
+
+extern ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni);
+extern void ntfs_index_ctx_put(ntfs_index_context *ictx);
+
+extern int ntfs_index_lookup(const void *key, const int key_len,
+ ntfs_index_context *ictx);
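+
+/*
+ * Typical call sequence (illustrative sketch, error handling trimmed),
+ * following the locking and cleanup rules documented in index.c:
+ *
+ *	ictx = ntfs_index_ctx_get(idx_ni);
+ *	if (!ictx)
+ *		return -ENOMEM;
+ *	err = ntfs_index_lookup(&key, sizeof(key), ictx);
+ *	if (!err) {
+ *		the match is described by ictx->entry, ictx->data and
+ *		ictx->data_len
+ *	}
+ *	ntfs_index_ctx_put(ictx);
+ */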
+
+#ifdef NTFS_RW
+
+/**
+ * ntfs_index_entry_flush_dcache_page - flush_dcache_page() for index entries
+ * @ictx: ntfs index context describing the index entry
+ *
+ * Call flush_dcache_page() for the page in which an index entry resides.
+ *
+ * This must be called every time an index entry is modified, just after the
+ * modification.
+ *
+ * If the index entry is in the index root attribute, simply flush the page
+ * containing the mft record containing the index root attribute.
+ *
+ * If the index entry is in an index block belonging to the index allocation
+ * attribute, simply flush the page cache page containing the index block.
+ */
+static inline void ntfs_index_entry_flush_dcache_page(ntfs_index_context *ictx)
+{
+ if (ictx->is_in_root)
+ flush_dcache_mft_record_page(ictx->actx->ntfs_ino);
+ else
+ flush_dcache_page(ictx->page);
+}
+
+extern void __ntfs_index_entry_mark_dirty(ntfs_index_context *ictx);
+
+/**
+ * ntfs_index_entry_mark_dirty - mark an index entry dirty
+ * @ictx: ntfs index context describing the index entry
+ *
+ * Mark the index entry described by the index entry context @ictx dirty.
+ *
+ * If the index entry is in the index root attribute, simply mark the mft
+ * record containing the index root attribute dirty. This ensures the mft
+ * record, and hence the index root attribute, will be written out to disk
+ * later.
+ *
+ * If the index entry is in an index block belonging to the index allocation
+ * attribute, mark the buffers belonging to the index record as well as the
+ * page cache page the index block is in dirty. This automatically marks the
+ * VFS inode of the ntfs index inode to which the index entry belongs dirty,
+ * too (I_DIRTY_PAGES) and this in turn ensures the page buffers, and hence the
+ * dirty index block, will be written out to disk later.
+ */
+static inline void ntfs_index_entry_mark_dirty(ntfs_index_context *ictx)
+{
+ if (ictx->is_in_root)
+ mark_mft_record_dirty(ictx->actx->ntfs_ino);
+ else
+ __ntfs_index_entry_mark_dirty(ictx);
+}
+
+#endif /* NTFS_RW */
+
+#endif /* _LINUX_NTFS_INDEX_H */
--- /dev/null
+/*
+ * quota.c - NTFS kernel quota ($Quota) handling. Part of the Linux-NTFS
+ * project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifdef NTFS_RW
+
+#include "ntfs.h"
+#include "index.h"
+#include "quota.h"
+
+/**
+ * ntfs_mark_quotas_out_of_date - mark the quotas out of date on an ntfs volume
+ * @vol: ntfs volume on which to mark the quotas out of date
+ *
+ * Mark the quotas out of date on the ntfs volume @vol and return TRUE on
+ * success and FALSE on error.
+ */
+BOOL ntfs_mark_quotas_out_of_date(ntfs_volume *vol)
+{
+ ntfs_index_context *ictx;
+ QUOTA_CONTROL_ENTRY *qce;
+ const u32 qid = QUOTA_DEFAULTS_ID;
+ int err;
+
+ ntfs_debug("Entering.");
+ if (NVolQuotaOutOfDate(vol))
+ goto done;
+ if (!vol->quota_ino || !vol->quota_q_ino) {
+ ntfs_error(vol->sb, "Quota inodes are not open.");
+ return FALSE;
+ }
+ down(&vol->quota_q_ino->i_sem);
+ ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino));
+ if (!ictx) {
+ ntfs_error(vol->sb, "Failed to get index context.");
+ up(&vol->quota_q_ino->i_sem);
+ return FALSE;
+ }
+ err = ntfs_index_lookup(&qid, sizeof(qid), ictx);
+ if (err) {
+ if (err == -ENOENT)
+ ntfs_error(vol->sb, "Quota defaults entry is not "
+ "present.");
+ else
+ ntfs_error(vol->sb, "Lookup of quota defaults entry "
+ "failed.");
+ goto err_out;
+ }
+ if (ictx->data_len < offsetof(QUOTA_CONTROL_ENTRY, sid)) {
+ ntfs_error(vol->sb, "Quota defaults entry size is invalid. "
+ "Run chkdsk.");
+ goto err_out;
+ }
+ qce = (QUOTA_CONTROL_ENTRY*)ictx->data;
+ if (le32_to_cpu(qce->version) != QUOTA_VERSION) {
+ ntfs_error(vol->sb, "Quota defaults entry version 0x%x is not "
+ "supported.", le32_to_cpu(qce->version));
+ goto err_out;
+ }
+ ntfs_debug("Quota defaults flags = 0x%x.", le32_to_cpu(qce->flags));
+ /* If quotas are already marked out of date, no need to do anything. */
+ if (qce->flags & QUOTA_FLAG_OUT_OF_DATE)
+ goto set_done;
+ /*
+ * If quota tracking is neither requested, nor enabled and there are no
+ * pending deletes, no need to mark the quotas out of date.
+ */
+ if (!(qce->flags & (QUOTA_FLAG_TRACKING_ENABLED |
+ QUOTA_FLAG_TRACKING_REQUESTED |
+ QUOTA_FLAG_PENDING_DELETES)))
+ goto set_done;
+ /*
+ * Set the QUOTA_FLAG_OUT_OF_DATE bit thus marking quotas out of date.
+ * This is verified on WinXP to be sufficient to cause Windows to
+ * rescan the volume on boot and update all quota entries.
+ */
+ qce->flags |= QUOTA_FLAG_OUT_OF_DATE;
+ /* Ensure the modified flags are written to disk. */
+ ntfs_index_entry_flush_dcache_page(ictx);
+ ntfs_index_entry_mark_dirty(ictx);
+set_done:
+ ntfs_index_ctx_put(ictx);
+ up(&vol->quota_q_ino->i_sem);
+ /*
+ * We set the flag so we do not try to mark the quotas out of date
+ * again on remount.
+ */
+ NVolSetQuotaOutOfDate(vol);
+done:
+ ntfs_debug("Done.");
+ return TRUE;
+err_out:
+ ntfs_index_ctx_put(ictx);
+ up(&vol->quota_q_ino->i_sem);
+ return FALSE;
+}
+
+#endif /* NTFS_RW */
--- /dev/null
+/*
+ * quota.h - Defines for NTFS kernel quota ($Quota) handling. Part of the
+ * Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_NTFS_QUOTA_H
+#define _LINUX_NTFS_QUOTA_H
+
+#ifdef NTFS_RW
+
+#include "types.h"
+#include "volume.h"
+
+extern BOOL ntfs_mark_quotas_out_of_date(ntfs_volume *vol);
+
+#endif /* NTFS_RW */
+
+#endif /* _LINUX_NTFS_QUOTA_H */
--- /dev/null
+/*
+ * linux/fs/reiserfs/xattr.c
+ *
+ * Copyright (c) 2002 by Jeff Mahoney, <jeffm@suse.com>
+ *
+ */
+
+/*
+ * In order to implement EA/ACLs in a clean, backwards compatible manner,
+ * they are implemented as files in a "private" directory.
+ * Each EA is in its own file, with the directory layout like so (/ is assumed
+ * to be relative to fs root). Inside the /.reiserfs_priv/xattrs directory,
+ * directories named using the capital-hex form of the objectid and
+ * generation number are used. Inside each directory are individual files
+ * named with the name of the extended attribute.
+ *
+ * So, for objectid 12648430, we could have:
+ * /.reiserfs_priv/xattrs/C0FFEE.0/system.posix_acl_access
+ * /.reiserfs_priv/xattrs/C0FFEE.0/system.posix_acl_default
+ * /.reiserfs_priv/xattrs/C0FFEE.0/user.Content-Type
+ * .. or similar.
+ *
+ * The file contents are the text of the EA. The size is known based on the
+ * stat data describing the file.
+ *
+ * In the case of system.posix_acl_access and system.posix_acl_default, since
+ * these are special cases for filesystem ACLs, they are interpreted by the
+ * kernel, in addition, they are negatively and positively cached and attached
+ * to the inode so that unnecessary lookups are avoided.
+ */
+
+#include <linux/reiserfs_fs.h>
+#include <linux/dcache.h>
+#include <linux/namei.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/reiserfs_xattr.h>
+#include <linux/reiserfs_acl.h>
+#include <linux/mbcache.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+#include <linux/smp_lock.h>
+#include <linux/stat.h>
+#include <asm/semaphore.h>
+
+#define FL_READONLY 128
+#define FL_DIR_SEM_HELD 256
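+
+/* FL_READONLY and FL_DIR_SEM_HELD are internal lookup flags passed in the
+ * same flags word as the public XATTR_CREATE/XATTR_REPLACE setxattr flags;
+ * their values are chosen high enough not to collide with those bits. */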
+#define PRIVROOT_NAME ".reiserfs_priv"
+#define XAROOT_NAME "xattrs"
+
+static struct reiserfs_xattr_handler *find_xattr_handler_prefix (const char *prefix);
+
+static struct dentry *
+create_xa_root (struct super_block *sb)
+{
+ struct dentry *privroot = dget (REISERFS_SB(sb)->priv_root);
+ struct dentry *xaroot;
+
+ /* This needs to be created at mount-time */
+ if (!privroot)
+ return ERR_PTR(-EOPNOTSUPP);
+
+ xaroot = lookup_one_len (XAROOT_NAME, privroot, strlen (XAROOT_NAME));
+ if (IS_ERR (xaroot)) {
+ goto out;
+ } else if (!xaroot->d_inode) {
+ int err;
+ down (&privroot->d_inode->i_sem);
+ err = privroot->d_inode->i_op->mkdir (privroot->d_inode, xaroot, 0700);
+ up (&privroot->d_inode->i_sem);
+
+ if (err) {
+ dput (xaroot);
+ dput (privroot);
+ return ERR_PTR (err);
+ }
+ REISERFS_SB(sb)->xattr_root = dget (xaroot);
+ }
+
+out:
+ dput (privroot);
+ return xaroot;
+}
+
+/* This will return a dentry, or error, referring to the xa root directory.
+ * If the xa root doesn't exist yet, the dentry will be returned without
+ * an associated inode. This dentry can be used with ->mkdir to create
+ * the xa directory. */
+static struct dentry *
+__get_xa_root (struct super_block *s)
+{
+ struct dentry *privroot = dget (REISERFS_SB(s)->priv_root);
+ struct dentry *xaroot = NULL;
+
+ if (IS_ERR (privroot) || !privroot)
+ return privroot;
+
+ xaroot = lookup_one_len (XAROOT_NAME, privroot, strlen (XAROOT_NAME));
+ if (IS_ERR (xaroot)) {
+ goto out;
+ } else if (!xaroot->d_inode) {
+ dput (xaroot);
+ xaroot = NULL;
+ goto out;
+ }
+
+ REISERFS_SB(s)->xattr_root = dget (xaroot);
+
+out:
+ dput (privroot);
+ return xaroot;
+}
+
+/* Returns the dentry (or NULL) referring to the root of the extended
+ * attribute directory tree. If it has already been retrieved, it is used.
+ * Otherwise, we attempt to retrieve it from disk. It may also return
+ * a pointer-encoded error.
+ */
+static inline struct dentry *
+get_xa_root (struct super_block *s)
+{
+ struct dentry *dentry = dget (REISERFS_SB(s)->xattr_root);
+
+ if (!dentry)
+ dentry = __get_xa_root (s);
+
+ return dentry;
+}
+
+/* Opens the directory corresponding to the inode's extended attribute store.
+ * If flags allow, the tree to the directory may be created. If creation is
+ * prohibited, -ENODATA is returned. */
+static struct dentry *
+open_xa_dir (const struct inode *inode, int flags)
+{
+ struct dentry *xaroot, *xadir;
+ char namebuf[17];
+
+ xaroot = get_xa_root (inode->i_sb);
+ if (IS_ERR (xaroot)) {
+ return xaroot;
+ } else if (!xaroot) {
+ if (flags == 0 || flags & XATTR_CREATE) {
+ xaroot = create_xa_root (inode->i_sb);
+ if (IS_ERR (xaroot))
+ return xaroot;
+ }
+ if (!xaroot)
+ return ERR_PTR (-ENODATA);
+ }
+
+ /* ok, we have xaroot open */
+
+ snprintf (namebuf, sizeof (namebuf), "%X.%X",
+ le32_to_cpu (INODE_PKEY (inode)->k_objectid),
+ inode->i_generation);
+ xadir = lookup_one_len (namebuf, xaroot, strlen (namebuf));
+ if (IS_ERR (xadir)) {
+ dput (xaroot);
+ return xadir;
+ }
+
+ if (!xadir->d_inode) {
+ int err;
+ if (flags == 0 || flags & XATTR_CREATE) {
+ /* Although there is nothing else trying to create this directory,
+ * another directory with the same hash may be created, so we need
+ * to protect against that */
+ err = xaroot->d_inode->i_op->mkdir (xaroot->d_inode, xadir, 0700);
+ if (err) {
+ dput (xaroot);
+ dput (xadir);
+ return ERR_PTR (err);
+ }
+ }
+ if (!xadir->d_inode) {
+ dput (xaroot);
+ dput (xadir);
+ return ERR_PTR (-ENODATA);
+ }
+ /* Newly created object.. Need to mark it private */
+ REISERFS_I(xadir->d_inode)->i_flags |= i_priv_object;
+ }
+
+ dput (xaroot);
+ return xadir;
+}
+
+/* Returns a dentry corresponding to a specific extended attribute file
+ * for the inode. If flags allow, the file is created. Otherwise, a
+ * valid or negative dentry, or an error is returned. */
+static struct dentry *
+get_xa_file_dentry (const struct inode *inode, const char *name, int flags)
+{
+ struct dentry *xadir, *xafile;
+ int err = 0;
+
+ xadir = open_xa_dir (inode, flags);
+ if (IS_ERR (xadir)) {
+ return ERR_PTR (PTR_ERR (xadir));
+ } else if (xadir && !xadir->d_inode) {
+ dput (xadir);
+ return ERR_PTR (-ENODATA);
+ }
+
+ xafile = lookup_one_len (name, xadir, strlen (name));
+ if (IS_ERR (xafile)) {
+ dput (xadir);
+ return ERR_PTR (PTR_ERR (xafile));
+ }
+
+ if (xafile->d_inode) { /* file exists */
+ if (flags & XATTR_CREATE) {
+ err = -EEXIST;
+ dput (xafile);
+ goto out;
+ }
+ } else if (flags & XATTR_REPLACE || flags & FL_READONLY) {
+ goto out;
+ } else {
+ /* inode->i_sem is down, so nothing else can try to create
+ * the same xattr */
+ err = xadir->d_inode->i_op->create (xadir->d_inode, xafile,
+ 0700|S_IFREG, NULL);
+
+ if (err) {
+ dput (xafile);
+ goto out;
+ }
+ /* Newly created object.. Need to mark it private */
+ REISERFS_I(xafile->d_inode)->i_flags |= i_priv_object;
+ }
+
+out:
+ dput (xadir);
+ if (err)
+ xafile = ERR_PTR (err);
+ return xafile;
+}
+
+
+/* Opens a file pointer to the attribute associated with inode */
+static struct file *
+open_xa_file (const struct inode *inode, const char *name, int flags)
+{
+ struct dentry *xafile;
+ struct file *fp;
+
+ xafile = get_xa_file_dentry (inode, name, flags);
+ if (IS_ERR (xafile))
+ return ERR_PTR (PTR_ERR (xafile));
+ else if (!xafile->d_inode) {
+ dput (xafile);
+ return ERR_PTR (-ENODATA);
+ }
+
+ fp = dentry_open (xafile, NULL, O_RDWR);
+ /* dentry_open dputs the dentry if it fails */
+
+ return fp;
+}
+
+
+/*
+ * this is very similar to fs/reiserfs/dir.c:reiserfs_readdir, but
+ * we need to drop the path before calling the filldir callback. That
+ * would be a big performance hit to the non-xattr case, so I've copied
+ * the whole thing for now. --clm
+ *
+ * the big difference is that I go backwards through the directory,
+ * and don't mess with f->f_pos, but the idea is the same. Do some
+ * action on each and every entry in the directory.
+ *
+ * we're called with i_sem held, so there are no worries about the directory
+ * changing underneath us.
+ */
+static int __xattr_readdir(struct file * filp, void * dirent, filldir_t filldir)
+{
+ struct inode *inode = filp->f_dentry->d_inode;
+ struct cpu_key pos_key; /* key of current position in the directory (key of directory entry) */
+ INITIALIZE_PATH (path_to_entry);
+ struct buffer_head * bh;
+ int entry_num;
+ struct item_head * ih, tmp_ih;
+ int search_res;
+ char * local_buf;
+ loff_t next_pos;
+ char small_buf[32] ; /* avoid kmalloc if we can */
+ struct reiserfs_de_head *deh;
+ int d_reclen;
+ char * d_name;
+ off_t d_off;
+ ino_t d_ino;
+ struct reiserfs_dir_entry de;
+
+
+ /* form a key to search for directory entries, starting from the
+ largest possible offset and walking backwards */
+ next_pos = max_reiserfs_offset(inode);
+
+ while (1) {
+research:
+ if (next_pos <= DOT_DOT_OFFSET)
+ break;
+ make_cpu_key (&pos_key, inode, next_pos, TYPE_DIRENTRY, 3);
+
+ search_res = search_by_entry_key(inode->i_sb, &pos_key, &path_to_entry, &de);
+ if (search_res == IO_ERROR) {
+ // FIXME: we could just skip part of directory which could
+ // not be read
+ pathrelse(&path_to_entry);
+ return -EIO;
+ }
+
+ if (search_res == NAME_NOT_FOUND)
+ de.de_entry_num--;
+
+ set_de_name_and_namelen(&de);
+ entry_num = de.de_entry_num;
+ deh = &(de.de_deh[entry_num]);
+
+ bh = de.de_bh;
+ ih = de.de_ih;
+
+ if (!is_direntry_le_ih(ih)) {
+ reiserfs_warning(inode->i_sb, "not direntry %h", ih);
+ break;
+ }
+ copy_item_head(&tmp_ih, ih);
+
+ /* we must have found an item, that is, an item of this directory */
+ RFALSE( COMP_SHORT_KEYS (&(ih->ih_key), &pos_key),
+ "vs-9000: found item %h does not match to dir we readdir %K",
+ ih, &pos_key);
+
+ if (deh_offset(deh) <= DOT_DOT_OFFSET) {
+ break;
+ }
+
+ /* look for the previous entry in the directory */
+ next_pos = deh_offset (deh) - 1;
+
+ if (!de_visible (deh))
+ /* it is a hidden entry */
+ continue;
+
+ d_reclen = entry_length(bh, ih, entry_num);
+ d_name = B_I_DEH_ENTRY_FILE_NAME (bh, ih, deh);
+ d_off = deh_offset (deh);
+ d_ino = deh_objectid (deh);
+
+ if (!d_name[d_reclen - 1])
+ d_reclen = strlen (d_name);
+
+ if (d_reclen > REISERFS_MAX_NAME(inode->i_sb->s_blocksize)){
+ /* too big to send back to VFS */
+ continue ;
+ }
+
+ /* Ignore the .reiserfs_priv entry */
+ if (reiserfs_xattrs (inode->i_sb) &&
+ !old_format_only(inode->i_sb) &&
+ deh_objectid (deh) == le32_to_cpu (INODE_PKEY(REISERFS_SB(inode->i_sb)->priv_root->d_inode)->k_objectid))
+ continue;
+
+ if (d_reclen <= 32) {
+ local_buf = small_buf ;
+ } else {
+ local_buf = reiserfs_kmalloc(d_reclen, GFP_NOFS, inode->i_sb) ;
+ if (!local_buf) {
+ pathrelse (&path_to_entry);
+ return -ENOMEM ;
+ }
+ if (item_moved (&tmp_ih, &path_to_entry)) {
+ reiserfs_kfree(local_buf, d_reclen, inode->i_sb) ;
+
+ /* sigh, must retry. Do this same offset again */
+ next_pos = d_off;
+ goto research;
+ }
+ }
+
+ // Note that we copy the name to user space via a temporary
+ // buffer (local_buf) because filldir will block if the
+ // user space buffer is swapped out.  While we block, the
+ // entry can move somewhere else.
+ memcpy (local_buf, d_name, d_reclen);
+
+ /* the filldir function might need to start transactions,
+ * or do who knows what. Release the path now that we've
+ * copied all the important stuff out of the deh
+ */
+ pathrelse (&path_to_entry);
+
+ if (filldir (dirent, local_buf, d_reclen, d_off, d_ino,
+ DT_UNKNOWN) < 0) {
+ if (local_buf != small_buf) {
+ reiserfs_kfree(local_buf, d_reclen, inode->i_sb) ;
+ }
+ goto end;
+ }
+ if (local_buf != small_buf) {
+ reiserfs_kfree(local_buf, d_reclen, inode->i_sb) ;
+ }
+ } /* while */
+
+end:
+ pathrelse (&path_to_entry);
+ return 0;
+}
+
+/*
+ * this could be done with dedicated readdir ops for the xattr files,
+ * but I want to get something working asap.
+ * This is stolen from vfs_readdir.
+ */
+static
+int xattr_readdir(struct file *file, filldir_t filler, void *buf)
+{
+ struct inode *inode = file->f_dentry->d_inode;
+ int res = -ENOTDIR;
+ if (!file->f_op || !file->f_op->readdir)
+ goto out;
+ down(&inode->i_sem);
+// down(&inode->i_zombie);
+ res = -ENOENT;
+ if (!IS_DEADDIR(inode)) {
+ lock_kernel();
+ res = __xattr_readdir(file, buf, filler);
+ unlock_kernel();
+ }
+// up(&inode->i_zombie);
+ up(&inode->i_sem);
+out:
+ return res;
+}
+
+
+/* Internal operations on file data */
+static inline void
+reiserfs_put_page(struct page *page)
+{
+ kunmap(page);
+ page_cache_release(page);
+}
+
+static struct page *
+reiserfs_get_page(struct inode *dir, unsigned long n)
+{
+ struct address_space *mapping = dir->i_mapping;
+ struct page *page;
+ /* We can deadlock if we try to free dentries,
+ and an unlink/rmdir has just occurred - GFP_NOFS avoids this */
+ mapping->flags = (mapping->flags & ~__GFP_BITS_MASK) | GFP_NOFS;
+ page = read_cache_page (mapping, n,
+ (filler_t*)mapping->a_ops->readpage, NULL);
+ if (!IS_ERR(page)) {
+ wait_on_page_locked(page);
+ kmap(page);
+ if (!PageUptodate(page))
+ goto fail;
+
+ if (PageError(page))
+ goto fail;
+ }
+ return page;
+
+fail:
+ reiserfs_put_page(page);
+ return ERR_PTR(-EIO);
+}
+
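+/* Hash of the xattr value: stored in the header on write and verified on
+ * read to detect corruption.  csum_partial() is the IP checksum helper --
+ * cheap and adequate for this purpose. */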
+static inline __u32
+xattr_hash (const char *msg, int len)
+{
+ return csum_partial (msg, len, 0);
+}
+
+/* Generic extended attribute operations that can be used by xa plugins */
+
+/*
+ * inode->i_sem: down
+ */
+int
+reiserfs_xattr_set (struct inode *inode, const char *name, const void *buffer,
+ size_t buffer_size, int flags)
+{
+ int err = 0;
+ struct file *fp;
+ struct page *page;
+ char *data;
+ struct address_space *mapping;
+ size_t file_pos = 0;
+ size_t buffer_pos = 0;
+ struct inode *xinode;
+ struct iattr newattrs;
+ __u32 xahash = 0;
+
+ if (IS_RDONLY (inode))
+ return -EROFS;
+
+ if (IS_IMMUTABLE (inode) || IS_APPEND (inode))
+ return -EPERM;
+
+ if (get_inode_sd_version (inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ /* Empty xattrs are ok, they're just empty files, no hash */
+ if (buffer && buffer_size)
+ xahash = xattr_hash (buffer, buffer_size);
+
+open_file:
+ fp = open_xa_file (inode, name, flags);
+ if (IS_ERR (fp)) {
+ err = PTR_ERR (fp);
+ goto out;
+ }
+
+ xinode = fp->f_dentry->d_inode;
+ REISERFS_I(inode)->i_flags |= i_has_xattr_dir;
+
+ /* The file is shared (nlink > 1) -- unlink our copy and recreate
+ * it, so the write doesn't touch the other links */
+ if (xinode->i_nlink > 1) {
+ fput(fp);
+ err = reiserfs_xattr_del (inode, name);
+ if (err < 0)
+ goto out;
+ /* We just killed the old one, we're not replacing anymore */
+ if (flags & XATTR_REPLACE)
+ flags &= ~XATTR_REPLACE;
+ goto open_file;
+ }
+
+ /* Resize it so we're ok to write there */
+ newattrs.ia_size = buffer_size;
+ newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME;
+ down (&xinode->i_sem);
+ err = notify_change(fp->f_dentry, &newattrs);
+ if (err)
+ goto out_filp;
+
+ mapping = xinode->i_mapping;
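+ /* Write the value out a page at a time.  The "|| buffer_pos == 0"
+ * test guarantees one pass even for a zero-length value, so the
+ * header still gets written. */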
+ while (buffer_pos < buffer_size || buffer_pos == 0) {
+ size_t chunk;
+ size_t skip = 0;
+ size_t page_offset = (file_pos & (PAGE_CACHE_SIZE - 1));
+ if (buffer_size - buffer_pos > PAGE_CACHE_SIZE)
+ chunk = PAGE_CACHE_SIZE;
+ else
+ chunk = buffer_size - buffer_pos;
+
+ page = reiserfs_get_page (xinode, file_pos >> PAGE_CACHE_SHIFT);
+ if (IS_ERR (page)) {
+ err = PTR_ERR (page);
+ goto out_filp;
+ }
+
+ lock_page (page);
+ data = page_address (page);
+
+ if (file_pos == 0) {
+ struct reiserfs_xattr_header *rxh;
+ skip = file_pos = sizeof (struct reiserfs_xattr_header);
+ if (chunk + skip > PAGE_CACHE_SIZE)
+ chunk = PAGE_CACHE_SIZE - skip;
+ rxh = (struct reiserfs_xattr_header *)data;
+ rxh->h_magic = cpu_to_le32 (REISERFS_XATTR_MAGIC);
+ rxh->h_hash = cpu_to_le32 (xahash);
+ }
+
+ err = mapping->a_ops->prepare_write (fp, page, page_offset,
+ page_offset + chunk + skip);
+ if (!err) {
+ if (buffer)
+ memcpy (data + skip, buffer + buffer_pos, chunk);
+ err = mapping->a_ops->commit_write (fp, page, page_offset,
+ page_offset + chunk + skip);
+ }
+ unlock_page (page);
+ reiserfs_put_page (page);
+ buffer_pos += chunk;
+ file_pos += chunk;
+ skip = 0;
+ if (err || buffer_size == 0 || !buffer)
+ break;
+ }
+
+ inode->i_ctime = CURRENT_TIME;
+ mark_inode_dirty (inode);
+
+out_filp:
+ up (&xinode->i_sem);
+ fput(fp);
+
+out:
+ return err;
+}
+
+/*
+ * inode->i_sem: down
+ */
+int
+reiserfs_xattr_get (const struct inode *inode, const char *name, void *buffer,
+ size_t buffer_size)
+{
+ ssize_t err = 0;
+ struct file *fp;
+ size_t isize;
+ size_t file_pos = 0;
+ size_t buffer_pos = 0;
+ struct page *page;
+ struct inode *xinode;
+ __u32 hash = 0;
+
+ if (name == NULL)
+ return -EINVAL;
+
+ /* We can't have xattrs attached to v1 items since they don't have
+ * generation numbers */
+ if (get_inode_sd_version (inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ fp = open_xa_file (inode, name, FL_READONLY);
+ if (IS_ERR (fp)) {
+ err = PTR_ERR (fp);
+ goto out;
+ }
+
+ xinode = fp->f_dentry->d_inode;
+ isize = xinode->i_size;
+ REISERFS_I(inode)->i_flags |= i_has_xattr_dir;
+
+ /* Just return the size needed */
+ if (buffer == NULL) {
+ err = isize - sizeof (struct reiserfs_xattr_header);
+ goto out_dput;
+ }
+
+ if (buffer_size < isize - sizeof (struct reiserfs_xattr_header)) {
+ err = -ERANGE;
+ goto out_dput;
+ }
+
+ while (file_pos < isize) {
+ size_t chunk;
+ char *data;
+ size_t skip = 0;
+ if (isize - file_pos > PAGE_CACHE_SIZE)
+ chunk = PAGE_CACHE_SIZE;
+ else
+ chunk = isize - file_pos;
+
+ page = reiserfs_get_page (xinode, file_pos >> PAGE_CACHE_SHIFT);
+ if (IS_ERR (page)) {
+ err = PTR_ERR (page);
+ goto out_dput;
+ }
+
+ lock_page (page);
+ data = page_address (page);
+ if (file_pos == 0) {
+ struct reiserfs_xattr_header *rxh =
+ (struct reiserfs_xattr_header *)data;
+ skip = file_pos = sizeof (struct reiserfs_xattr_header);
+ chunk -= skip;
+ /* Magic doesn't match up.. */
+ if (rxh->h_magic != cpu_to_le32 (REISERFS_XATTR_MAGIC)) {
+ unlock_page (page);
+ reiserfs_put_page (page);
+ reiserfs_warning (inode->i_sb, "Invalid magic for xattr (%s) "
+ "associated with %k", name,
+ INODE_PKEY (inode));
+ err = -EIO;
+ goto out_dput;
+ }
+ hash = le32_to_cpu (rxh->h_hash);
+ }
+ memcpy (buffer + buffer_pos, data + skip, chunk);
+ unlock_page (page);
+ reiserfs_put_page (page);
+ file_pos += chunk;
+ buffer_pos += chunk;
+ skip = 0;
+ }
+ err = isize - sizeof (struct reiserfs_xattr_header);
+
+ if (xattr_hash (buffer, isize - sizeof (struct reiserfs_xattr_header)) != hash) {
+ reiserfs_warning (inode->i_sb, "Invalid hash for xattr (%s) associated "
+ "with %k", name, INODE_PKEY (inode));
+ err = -EIO;
+ }
+
+out_dput:
+ fput(fp);
+
+out:
+ return err;
+}
+
+static int
+__reiserfs_xattr_del (struct dentry *xadir, const char *name, int namelen)
+{
+ struct dentry *dentry;
+ struct inode *dir = xadir->d_inode;
+ int err = 0;
+
+ dentry = lookup_one_len (name, xadir, namelen);
+ if (IS_ERR (dentry)) {
+ err = PTR_ERR (dentry);
+ goto out;
+ } else if (!dentry->d_inode) {
+ err = -ENODATA;
+ goto out_file;
+ }
+
+ /* Skip directories.. */
+ if (S_ISDIR (dentry->d_inode->i_mode))
+ goto out_file;
+
+ if (!is_reiserfs_priv_object (dentry->d_inode)) {
+ reiserfs_warning (dir->i_sb, "OID %08x [%.*s/%.*s] doesn't have "
+ "priv flag set [parent is %sset].",
+ le32_to_cpu (INODE_PKEY (dentry->d_inode)->k_objectid),
+ xadir->d_name.len, xadir->d_name.name, namelen, name,
+ is_reiserfs_priv_object (xadir->d_inode) ? "" : "not ");
+ dput (dentry);
+ return -EIO;
+ }
+
+ err = dir->i_op->unlink (dir, dentry);
+ if (!err)
+ d_delete (dentry);
+
+out_file:
+ dput (dentry);
+
+out:
+ return err;
+}
+
+
+int
+reiserfs_xattr_del (struct inode *inode, const char *name)
+{
+ struct dentry *dir;
+ int err;
+
+ if (IS_RDONLY (inode))
+ return -EROFS;
+
+ dir = open_xa_dir (inode, FL_READONLY);
+ if (IS_ERR (dir)) {
+ err = PTR_ERR (dir);
+ goto out;
+ }
+
+ err = __reiserfs_xattr_del (dir, name, strlen (name));
+ dput (dir);
+
+out:
+ return err;
+}
+
+/* The following are side effects of other operations that aren't explicitly
+ * modifying extended attributes. This includes operations such as permissions
+ * or ownership changes, object deletions, etc. */
+
+static int
+reiserfs_delete_xattrs_filler (void *buf, const char *name, int namelen,
+ loff_t offset, ino_t ino, unsigned int d_type)
+{
+ struct dentry *xadir = (struct dentry *)buf;
+
+ return __reiserfs_xattr_del (xadir, name, namelen);
+
+}
+
+/* This is called w/ inode->i_sem downed */
+int
+reiserfs_delete_xattrs (struct inode *inode)
+{
+ struct file *fp;
+ struct dentry *dir, *root;
+ int err = 0;
+
+ /* Skip out, an xattr has no xattrs associated with it */
+ if (is_reiserfs_priv_object (inode) ||
+ get_inode_sd_version (inode) == STAT_DATA_V1 ||
+ !reiserfs_xattrs(inode->i_sb))
+ {
+ return 0;
+ }
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ dir = open_xa_dir (inode, FL_READONLY);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ if (IS_ERR (dir)) {
+ err = PTR_ERR (dir);
+ goto out;
+ } else if (!dir->d_inode) {
+ dput (dir);
+ return 0;
+ }
+
+ fp = dentry_open (dir, NULL, O_RDWR);
+ if (IS_ERR (fp)) {
+ err = PTR_ERR (fp);
+ /* dentry_open dputs the dentry if it fails */
+ goto out;
+ }
+
+ lock_kernel ();
+ err = xattr_readdir (fp, reiserfs_delete_xattrs_filler, dir);
+ if (err) {
+ unlock_kernel ();
+ goto out_dir;
+ }
+
+ /* If only . and .. remain, the directory is empty and can be
+ * removed; anything left over means some entries could not be
+ * deleted. */
+ if (dir->d_inode->i_nlink <= 2) {
+ root = get_xa_root (inode->i_sb);
+ reiserfs_write_lock_xattrs (inode->i_sb);
+ err = vfs_rmdir (root->d_inode, dir);
+ reiserfs_write_unlock_xattrs (inode->i_sb);
+ dput (root);
+ } else {
+ reiserfs_warning (inode->i_sb,
+ "Couldn't remove all entries in directory");
+ }
+ unlock_kernel ();
+
+out_dir:
+ fput(fp);
+
+out:
+ if (!err)
+ REISERFS_I(inode)->i_flags &= ~i_has_xattr_dir;
+ return err;
+}
+
+struct reiserfs_chown_buf {
+ struct inode *inode;
+ struct dentry *xadir;
+ struct iattr *attrs;
+};
+
+/* XXX: If there is a better way to do this, I'd love to hear about it */
+static int
+reiserfs_chown_xattrs_filler (void *buf, const char *name, int namelen,
+ loff_t offset, ino_t ino, unsigned int d_type)
+{
+ struct reiserfs_chown_buf *chown_buf = (struct reiserfs_chown_buf *)buf;
+ struct dentry *xafile, *xadir = chown_buf->xadir;
+ struct iattr *attrs = chown_buf->attrs;
+ int err = 0;
+
+ xafile = lookup_one_len (name, xadir, namelen);
+ if (IS_ERR (xafile))
+ return PTR_ERR (xafile);
+ else if (!xafile->d_inode) {
+ dput (xafile);
+ return -ENODATA;
+ }
+
+ if (!S_ISDIR (xafile->d_inode->i_mode))
+ err = notify_change (xafile, attrs);
+ dput (xafile);
+
+ return err;
+}
+
+int
+reiserfs_chown_xattrs (struct inode *inode, struct iattr *attrs)
+{
+ struct file *fp;
+ struct dentry *dir;
+ int err = 0;
+ struct reiserfs_chown_buf buf;
+ unsigned int ia_valid = attrs->ia_valid;
+
+ /* Skip out, an xattr has no xattrs associated with it */
+ if (is_reiserfs_priv_object (inode) ||
+ get_inode_sd_version (inode) == STAT_DATA_V1 ||
+ !reiserfs_xattrs(inode->i_sb))
+ {
+ return 0;
+ }
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ dir = open_xa_dir (inode, FL_READONLY);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ if (IS_ERR (dir)) {
+ if (PTR_ERR (dir) != -ENODATA)
+ err = PTR_ERR (dir);
+ goto out;
+ } else if (!dir->d_inode) {
+ dput (dir);
+ goto out;
+ }
+
+ fp = dentry_open (dir, NULL, O_RDWR);
+ if (IS_ERR (fp)) {
+ err = PTR_ERR (fp);
+ /* dentry_open dputs the dentry if it fails */
+ goto out;
+ }
+
+ lock_kernel ();
+
+ attrs->ia_valid &= (ATTR_UID | ATTR_GID | ATTR_CTIME);
+ buf.xadir = dir;
+ buf.attrs = attrs;
+ buf.inode = inode;
+
+ err = xattr_readdir (fp, reiserfs_chown_xattrs_filler, &buf);
+ if (err) {
+ unlock_kernel ();
+ goto out_dir;
+ }
+
+ err = notify_change (dir, attrs);
+ unlock_kernel ();
+
+out_dir:
+ fput(fp);
+
+out:
+ attrs->ia_valid = ia_valid;
+ return err;
+}
+
+
+/* Actual operations that are exported to VFS-land */
+
+/*
+ * Inode operation getxattr()
+ * Preliminary locking: we down dentry->d_inode->i_sem
+ */
+ssize_t
+reiserfs_getxattr (struct dentry *dentry, const char *name, void *buffer,
+ size_t size)
+{
+ struct reiserfs_xattr_handler *xah = find_xattr_handler_prefix (name);
+ int err;
+
+ if (!xah || !reiserfs_xattrs(dentry->d_sb) ||
+ get_inode_sd_version (dentry->d_inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ reiserfs_read_lock_xattr_i (dentry->d_inode);
+ reiserfs_read_lock_xattrs (dentry->d_sb);
+ err = xah->get (dentry->d_inode, name, buffer, size);
+ reiserfs_read_unlock_xattrs (dentry->d_sb);
+ reiserfs_read_unlock_xattr_i (dentry->d_inode);
+ return err;
+}
+
+
+/*
+ * Inode operation setxattr()
+ *
+ * dentry->d_inode->i_sem down
+ */
+int
+reiserfs_setxattr (struct dentry *dentry, const char *name, const void *value,
+ size_t size, int flags)
+{
+ struct reiserfs_xattr_handler *xah = find_xattr_handler_prefix (name);
+ int err;
+ int lock;
+
+ if (!xah || !reiserfs_xattrs(dentry->d_sb) ||
+ get_inode_sd_version (dentry->d_inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ if (IS_RDONLY (dentry->d_inode))
+ return -EROFS;
+
+ if (IS_IMMUTABLE (dentry->d_inode) || IS_APPEND (dentry->d_inode))
+ return -EROFS;
+
+ reiserfs_write_lock_xattr_i (dentry->d_inode);
+ lock = !has_xattr_dir (dentry->d_inode);
+ if (lock)
+ reiserfs_write_lock_xattrs (dentry->d_sb);
+ else
+ reiserfs_read_lock_xattrs (dentry->d_sb);
+ err = xah->set (dentry->d_inode, name, value, size, flags);
+ if (lock)
+ reiserfs_write_unlock_xattrs (dentry->d_sb);
+ else
+ reiserfs_read_unlock_xattrs (dentry->d_sb);
+ reiserfs_write_unlock_xattr_i (dentry->d_inode);
+ return err;
+}
+
+/*
+ * Inode operation removexattr()
+ *
+ * dentry->d_inode->i_sem down
+ */
+int
+reiserfs_removexattr (struct dentry *dentry, const char *name)
+{
+ int err;
+ struct reiserfs_xattr_handler *xah = find_xattr_handler_prefix (name);
+
+ if (!xah || !reiserfs_xattrs(dentry->d_sb) ||
+ get_inode_sd_version (dentry->d_inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ if (IS_RDONLY (dentry->d_inode))
+ return -EROFS;
+
+ if (IS_IMMUTABLE (dentry->d_inode) || IS_APPEND (dentry->d_inode))
+ return -EPERM;
+
+ reiserfs_write_lock_xattr_i (dentry->d_inode);
+ reiserfs_read_lock_xattrs (dentry->d_sb);
+
+ /* Deletion pre-operation */
+ if (xah->del) {
+ err = xah->del (dentry->d_inode, name);
+ if (err)
+ goto out;
+ }
+
+ err = reiserfs_xattr_del (dentry->d_inode, name);
+
+ dentry->d_inode->i_ctime = CURRENT_TIME;
+ mark_inode_dirty (dentry->d_inode);
+
+out:
+ reiserfs_read_unlock_xattrs (dentry->d_sb);
+ reiserfs_write_unlock_xattr_i (dentry->d_inode);
+ return err;
+}
+
+
+/* This is what filldir will use:
+ * r_pos will always contain the amount of space required for the entire
+ * list. If r_pos becomes larger than r_size, we need more space and we
+ * return an error indicating this. If r_pos doesn't exceed r_size, then we've
+ * filled the buffer successfully and we return success */
+struct reiserfs_listxattr_buf {
+ int r_pos;
+ int r_size;
+ char *r_buf;
+ struct inode *r_inode;
+};
+
+static int
+reiserfs_listxattr_filler (void *buf, const char *name, int namelen,
+ loff_t offset, ino_t ino, unsigned int d_type)
+{
+ struct reiserfs_listxattr_buf *b = (struct reiserfs_listxattr_buf *)buf;
+ int len = 0;
+ if (name[0] != '.' || (namelen != 1 && (name[1] != '.' || namelen != 2))) {
+ struct reiserfs_xattr_handler *xah = find_xattr_handler_prefix (name);
+ if (!xah)
+ return 0; /* Unsupported xattr name, skip it */
+
+ /* We call ->list() twice because the operation isn't required to just
+ * return the name back - we want to make sure we have enough space */
+ len += xah->list (b->r_inode, name, namelen, NULL);
+
+ if (len) {
+ if (b->r_pos + len + 1 <= b->r_size) {
+ char *p = b->r_buf + b->r_pos;
+ p += xah->list (b->r_inode, name, namelen, p);
+ *p++ = '\0';
+ }
+ b->r_pos += len + 1;
+ }
+ }
+
+ return 0;
+}
+/*
+ * Inode operation listxattr()
+ *
+ * Preliminary locking: we down dentry->d_inode->i_sem
+ */
+ssize_t
+reiserfs_listxattr (struct dentry *dentry, char *buffer, size_t size)
+{
+ struct file *fp;
+ struct dentry *dir;
+ int err = 0;
+ struct reiserfs_listxattr_buf buf;
+
+ if (!dentry->d_inode)
+ return -EINVAL;
+
+ if (!reiserfs_xattrs(dentry->d_sb) ||
+ get_inode_sd_version (dentry->d_inode) == STAT_DATA_V1)
+ return -EOPNOTSUPP;
+
+ reiserfs_read_lock_xattr_i (dentry->d_inode);
+ reiserfs_read_lock_xattrs (dentry->d_sb);
+ dir = open_xa_dir (dentry->d_inode, FL_READONLY);
+ reiserfs_read_unlock_xattrs (dentry->d_sb);
+ if (IS_ERR (dir)) {
+ err = PTR_ERR (dir);
+ if (err == -ENODATA)
+ err = 0; /* Not an error if there aren't any xattrs */
+ goto out;
+ }
+
+ fp = dentry_open (dir, NULL, O_RDWR);
+ if (IS_ERR (fp)) {
+ err = PTR_ERR (fp);
+ /* dentry_open dputs the dentry if it fails */
+ goto out;
+ }
+
+ buf.r_buf = buffer;
+ buf.r_size = buffer ? size : 0;
+ buf.r_pos = 0;
+ buf.r_inode = dentry->d_inode;
+
+ REISERFS_I(dentry->d_inode)->i_flags |= i_has_xattr_dir;
+
+ err = xattr_readdir (fp, reiserfs_listxattr_filler, &buf);
+ if (err)
+ goto out_dir;
+
+ if (buf.r_pos > buf.r_size && buffer != NULL)
+ err = -ERANGE;
+ else
+ err = buf.r_pos;
+
+out_dir:
+ fput(fp);
+
+out:
+ reiserfs_read_unlock_xattr_i (dentry->d_inode);
+ return err;
+}
+
+/* This is the implementation for the xattr plugin infrastructure */
+static struct list_head xattr_handlers = LIST_HEAD_INIT (xattr_handlers);
+static rwlock_t handler_lock = RW_LOCK_UNLOCKED;
+
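+/* Handlers are matched by name prefix, e.g. "user.", "trusted.",
+ * "security." and the POSIX ACL attribute names. */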
+static struct reiserfs_xattr_handler *
+find_xattr_handler_prefix (const char *prefix)
+{
+ struct reiserfs_xattr_handler *xah = NULL;
+ struct list_head *p;
+
+ read_lock (&handler_lock);
+ list_for_each (p, &xattr_handlers) {
+ xah = list_entry (p, struct reiserfs_xattr_handler, handlers);
+ if (strncmp (xah->prefix, prefix, strlen (xah->prefix)) == 0)
+ break;
+ xah = NULL;
+ }
+
+ read_unlock (&handler_lock);
+ return xah;
+}
+
+static void
+__unregister_handlers (void)
+{
+ struct reiserfs_xattr_handler *xah;
+ struct list_head *p, *tmp;
+
+ list_for_each_safe (p, tmp, &xattr_handlers) {
+ xah = list_entry (p, struct reiserfs_xattr_handler, handlers);
+ if (xah->exit)
+ xah->exit();
+
+ list_del_init (p);
+ }
+ INIT_LIST_HEAD (&xattr_handlers);
+}
+
+int __init
+reiserfs_xattr_register_handlers (void)
+{
+ int err = 0;
+ struct reiserfs_xattr_handler *xah;
+ struct list_head *p;
+
+ write_lock (&handler_lock);
+
+ /* If we're already initialized, nothing to do */
+ if (!list_empty (&xattr_handlers)) {
+ write_unlock (&handler_lock);
+ return 0;
+ }
+
+ /* Add the handlers */
+ list_add_tail (&user_handler.handlers, &xattr_handlers);
+ list_add_tail (&trusted_handler.handlers, &xattr_handlers);
+#ifdef CONFIG_REISERFS_FS_SECURITY
+ list_add_tail (&security_handler.handlers, &xattr_handlers);
+#endif
+#ifdef CONFIG_REISERFS_FS_POSIX_ACL
+ list_add_tail (&posix_acl_access_handler.handlers, &xattr_handlers);
+ list_add_tail (&posix_acl_default_handler.handlers, &xattr_handlers);
+#endif
+
+ /* Run initializers, if available */
+ list_for_each (p, &xattr_handlers) {
+ xah = list_entry (p, struct reiserfs_xattr_handler, handlers);
+ if (xah->init) {
+ err = xah->init ();
+ if (err) {
+ list_del_init (p);
+ break;
+ }
+ }
+ }
+
+ /* Clean up other handlers, if any failed */
+ if (err)
+ __unregister_handlers ();
+
+ write_unlock (&handler_lock);
+ return err;
+}
+
+void
+reiserfs_xattr_unregister_handlers (void)
+{
+ write_lock (&handler_lock);
+ __unregister_handlers ();
+ write_unlock (&handler_lock);
+}
+
+/* This will catch lookups from the fs root to .reiserfs_priv */
+static int
+xattr_lookup_poison (struct dentry *dentry, struct qstr *q1, struct qstr *name)
+{
+ struct dentry *priv_root = REISERFS_SB(dentry->d_sb)->priv_root;
+ if (name->len == priv_root->d_name.len &&
+ name->hash == priv_root->d_name.hash &&
+ !memcmp (name->name, priv_root->d_name.name, name->len)) {
+ return -ENOENT;
+ }
+ return 0;
+}
+
+static struct dentry_operations xattr_lookup_poison_ops = {
+ .d_compare = xattr_lookup_poison,
+};
+
+
+/* We need to take a copy of the mount flags since things like
+ * MS_RDONLY don't get set until *after* we're called.
+ * mount_flags != mount_options */
+int
+reiserfs_xattr_init (struct super_block *s, int mount_flags)
+{
+ int err = 0;
+
+ /* We need generation numbers to ensure that the oid mapping is
+ * correct; v3.5 filesystems don't have them. */
+ if (!old_format_only (s)) {
+ set_bit (REISERFS_XATTRS, &(REISERFS_SB(s)->s_mount_opt));
+ } else if (reiserfs_xattrs_optional (s)) {
+ /* Old format filesystem, but optional xattrs have been enabled
+ * at mount time. Error out. */
+ reiserfs_warning (s, "xattrs/ACLs not supported on pre v3.6 "
+ "format filesystem. Failing mount.");
+ err = -EOPNOTSUPP;
+ goto error;
+ } else {
+ /* Old format filesystem, but no optional xattrs have been enabled. This
+ * means we silently disable xattrs on the filesystem. */
+ clear_bit (REISERFS_XATTRS, &(REISERFS_SB(s)->s_mount_opt));
+ }
+
+ /* If we don't have the privroot located yet - go find it */
+ if (reiserfs_xattrs (s) && !REISERFS_SB(s)->priv_root) {
+ struct dentry *dentry;
+ dentry = lookup_one_len (PRIVROOT_NAME, s->s_root,
+ strlen (PRIVROOT_NAME));
+ if (!IS_ERR (dentry)) {
+ if (!(mount_flags & MS_RDONLY) && !dentry->d_inode) {
+ struct inode *inode = dentry->d_parent->d_inode;
+ down (&inode->i_sem);
+ err = inode->i_op->mkdir (inode, dentry, 0700);
+ up (&inode->i_sem);
+ if (err) {
+ dput (dentry);
+ dentry = NULL;
+ }
+
+ if (dentry && dentry->d_inode)
+ reiserfs_warning (s, "Created %s on %s - reserved for "
+ "xattr storage.", PRIVROOT_NAME,
+ reiserfs_bdevname (inode->i_sb));
+ } else if (!dentry->d_inode) {
+ dput (dentry);
+ dentry = NULL;
+ }
+ } else
+ err = PTR_ERR (dentry);
+
+ if (!err && dentry) {
+ s->s_root->d_op = &xattr_lookup_poison_ops;
+ REISERFS_I(dentry->d_inode)->i_flags |= i_priv_object;
+ REISERFS_SB(s)->priv_root = dentry;
+ } else if (!(mount_flags & MS_RDONLY)) { /* xattrs are unavailable */
+ /* If we're read-only it just means that the dir hasn't been
+ * created. Not an error -- just no xattrs on the fs. We'll
+ * check again if we go read-write */
+ reiserfs_warning (s, "xattrs/ACLs enabled and couldn't "
+ "find/create .reiserfs_priv. Failing mount.");
+ err = -EOPNOTSUPP;
+ }
+ }
+
+error:
+ /* This is only nonzero if there was an error initializing the xattr
+ * directory or if there is a condition where we don't support them. */
+ if (err) {
+ clear_bit (REISERFS_XATTRS, &(REISERFS_SB(s)->s_mount_opt));
+ clear_bit (REISERFS_XATTRS_USER, &(REISERFS_SB(s)->s_mount_opt));
+ clear_bit (REISERFS_POSIXACL, &(REISERFS_SB(s)->s_mount_opt));
+ }
+
+ /* The super_block MS_POSIXACL must mirror the (no)acl mount option. */
+ s->s_flags = s->s_flags & ~MS_POSIXACL;
+ if (reiserfs_posixacl (s))
+ s->s_flags |= MS_POSIXACL;
+
+ return err;
+}
+
+static int
+__reiserfs_permission (struct inode *inode, int mask, struct nameidata *nd,
+ int need_lock)
+{
+ umode_t mode = inode->i_mode;
+
+ if (mask & MAY_WRITE) {
+ /*
+ * Nobody gets write access to a read-only fs.
+ */
+ if (IS_RDONLY(inode) &&
+ (S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode)))
+ return -EROFS;
+
+ /*
+ * Nobody gets write access to an immutable file.
+ */
+ if (IS_IMMUTABLE(inode))
+ return -EACCES;
+ }
+
+ /* We don't do permission checks on the internal objects.
+ * Permissions are determined by the "owning" object. */
+ if (is_reiserfs_priv_object (inode))
+ return 0;
+
+ if (current->fsuid == inode->i_uid) {
+ mode >>= 6;
+#ifdef CONFIG_REISERFS_FS_POSIX_ACL
+ } else if (reiserfs_posixacl(inode->i_sb) &&
+ get_inode_sd_version (inode) != STAT_DATA_V1) {
+ struct posix_acl *acl;
+
+ /* ACL can't contain additional permissions if
+ the ACL_MASK entry is 0 */
+ if (!(mode & S_IRWXG))
+ goto check_groups;
+
+ if (need_lock) {
+ reiserfs_read_lock_xattr_i (inode);
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ }
+ acl = reiserfs_get_acl (inode, ACL_TYPE_ACCESS);
+ if (need_lock) {
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ reiserfs_read_unlock_xattr_i (inode);
+ }
+ if (IS_ERR (acl)) {
+ if (PTR_ERR (acl) == -ENODATA)
+ goto check_groups;
+ return PTR_ERR (acl);
+ }
+
+ if (acl) {
+ int err = posix_acl_permission (inode, acl, mask);
+ posix_acl_release (acl);
+ if (err == -EACCES) {
+ goto check_capabilities;
+ }
+ return err;
+ } else {
+ goto check_groups;
+ }
+#endif
+ } else {
+check_groups:
+ if (in_group_p(inode->i_gid))
+ mode >>= 3;
+ }
+
+ /*
+ * If the DACs are ok we don't need any capability check.
+ */
+ if (((mode & mask & (MAY_READ|MAY_WRITE|MAY_EXEC)) == mask))
+ return 0;
+
+check_capabilities:
+ /*
+ * Read/write DACs are always overridable.
+ * Executable DACs are overridable if at least one exec bit is set.
+ */
+ if (!(mask & MAY_EXEC) ||
+ (inode->i_mode & S_IXUGO) || S_ISDIR(inode->i_mode))
+ if (capable(CAP_DAC_OVERRIDE))
+ return 0;
+
+ /*
+ * Searching includes executable on directories, else just read.
+ */
+ if (mask == MAY_READ || (S_ISDIR(inode->i_mode) && !(mask & MAY_WRITE)))
+ if (capable(CAP_DAC_READ_SEARCH))
+ return 0;
+
+ return -EACCES;
+}
+
+int
+reiserfs_permission (struct inode *inode, int mask, struct nameidata *nd)
+{
+ return __reiserfs_permission (inode, mask, nd, 1);
+}
+
+int
+reiserfs_permission_locked (struct inode *inode, int mask, struct nameidata *nd)
+{
+ return __reiserfs_permission (inode, mask, nd, 0);
+}
--- /dev/null
+#include <linux/fs.h>
+#include <linux/posix_acl.h>
+#include <linux/reiserfs_fs.h>
+#include <linux/errno.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/xattr_acl.h>
+#include <linux/reiserfs_xattr.h>
+#include <linux/reiserfs_acl.h>
+#include <asm/uaccess.h>
+
+static int
+xattr_set_acl(struct inode *inode, int type, const void *value, size_t size)
+{
+ struct posix_acl *acl;
+ int error;
+
+ if (!reiserfs_posixacl(inode->i_sb))
+ return -EOPNOTSUPP;
+ if ((current->fsuid != inode->i_uid) && !capable(CAP_FOWNER))
+ return -EPERM;
+
+ if (value) {
+ acl = posix_acl_from_xattr(value, size);
+ if (IS_ERR(acl)) {
+ return PTR_ERR(acl);
+ } else if (acl) {
+ error = posix_acl_valid(acl);
+ if (error)
+ goto release_and_out;
+ }
+ } else
+ acl = NULL;
+
+ error = reiserfs_set_acl (inode, type, acl);
+
+release_and_out:
+ posix_acl_release(acl);
+ return error;
+}
+
+
+static int
+xattr_get_acl(struct inode *inode, int type, void *buffer, size_t size)
+{
+ struct posix_acl *acl;
+ int error;
+
+ if (!reiserfs_posixacl(inode->i_sb))
+ return -EOPNOTSUPP;
+
+ acl = reiserfs_get_acl (inode, type);
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+ if (acl == NULL)
+ return -ENODATA;
+ error = posix_acl_to_xattr(acl, buffer, size);
+ posix_acl_release(acl);
+
+ return error;
+}
+
+
+/*
+ * Convert from filesystem to in-memory representation.
+ */
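+/* On disk, an ACL is a reiserfs_acl_header followed by a packed array of
+ * entries.  ACL_USER/ACL_GROUP entries use the long form with an e_id;
+ * the *_OBJ, MASK and OTHER tags use the short form without one. */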
+static struct posix_acl *
+posix_acl_from_disk(const void *value, size_t size)
+{
+ const char *end = (char *)value + size;
+ int n, count;
+ struct posix_acl *acl;
+
+ if (!value)
+ return NULL;
+ if (size < sizeof(reiserfs_acl_header))
+ return ERR_PTR(-EINVAL);
+ if (((reiserfs_acl_header *)value)->a_version !=
+ cpu_to_le32(REISERFS_ACL_VERSION))
+ return ERR_PTR(-EINVAL);
+ value = (char *)value + sizeof(reiserfs_acl_header);
+ count = reiserfs_acl_count(size);
+ if (count < 0)
+ return ERR_PTR(-EINVAL);
+ if (count == 0)
+ return NULL;
+ acl = posix_acl_alloc(count, GFP_NOFS);
+ if (!acl)
+ return ERR_PTR(-ENOMEM);
+ for (n=0; n < count; n++) {
+ reiserfs_acl_entry *entry =
+ (reiserfs_acl_entry *)value;
+ if ((char *)value + sizeof(reiserfs_acl_entry_short) > end)
+ goto fail;
+ acl->a_entries[n].e_tag = le16_to_cpu(entry->e_tag);
+ acl->a_entries[n].e_perm = le16_to_cpu(entry->e_perm);
+ switch(acl->a_entries[n].e_tag) {
+ case ACL_USER_OBJ:
+ case ACL_GROUP_OBJ:
+ case ACL_MASK:
+ case ACL_OTHER:
+ value = (char *)value +
+ sizeof(reiserfs_acl_entry_short);
+ acl->a_entries[n].e_id = ACL_UNDEFINED_ID;
+ break;
+
+ case ACL_USER:
+ case ACL_GROUP:
+ value = (char *)value + sizeof(reiserfs_acl_entry);
+ if ((char *)value > end)
+ goto fail;
+ acl->a_entries[n].e_id =
+ le32_to_cpu(entry->e_id);
+ break;
+
+ default:
+ goto fail;
+ }
+ }
+ if (value != end)
+ goto fail;
+ return acl;
+
+fail:
+ posix_acl_release(acl);
+ return ERR_PTR(-EINVAL);
+}
+
+/*
+ * Convert from in-memory to filesystem representation.
+ */
+static void *
+posix_acl_to_disk(const struct posix_acl *acl, size_t *size)
+{
+ reiserfs_acl_header *ext_acl;
+ char *e;
+ int n;
+
+ *size = reiserfs_acl_size(acl->a_count);
+ ext_acl = (reiserfs_acl_header *)kmalloc(sizeof(reiserfs_acl_header) +
+ acl->a_count * sizeof(reiserfs_acl_entry), GFP_NOFS);
+ if (!ext_acl)
+ return ERR_PTR(-ENOMEM);
+ ext_acl->a_version = cpu_to_le32(REISERFS_ACL_VERSION);
+ e = (char *)ext_acl + sizeof(reiserfs_acl_header);
+ for (n=0; n < acl->a_count; n++) {
+ reiserfs_acl_entry *entry = (reiserfs_acl_entry *)e;
+ entry->e_tag = cpu_to_le16(acl->a_entries[n].e_tag);
+ entry->e_perm = cpu_to_le16(acl->a_entries[n].e_perm);
+ switch(acl->a_entries[n].e_tag) {
+ case ACL_USER:
+ case ACL_GROUP:
+ entry->e_id =
+ cpu_to_le32(acl->a_entries[n].e_id);
+ e += sizeof(reiserfs_acl_entry);
+ break;
+
+ case ACL_USER_OBJ:
+ case ACL_GROUP_OBJ:
+ case ACL_MASK:
+ case ACL_OTHER:
+ e += sizeof(reiserfs_acl_entry_short);
+ break;
+
+ default:
+ goto fail;
+ }
+ }
+ return (char *)ext_acl;
+
+fail:
+ kfree(ext_acl);
+ return ERR_PTR(-EINVAL);
+}
+
+/*
+ * Inode operation get_posix_acl().
+ *
+ * inode->i_sem: down
+ * BKL held [before 2.5.x]
+ */
+struct posix_acl *
+reiserfs_get_acl(struct inode *inode, int type)
+{
+ char *name, *value;
+ struct posix_acl *acl, **p_acl;
+ size_t size;
+ int retval;
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ name = XATTR_NAME_ACL_ACCESS;
+ p_acl = &reiserfs_i->i_acl_access;
+ break;
+ case ACL_TYPE_DEFAULT:
+ name = XATTR_NAME_ACL_DEFAULT;
+ p_acl = &reiserfs_i->i_acl_default;
+ break;
+ default:
+ return ERR_PTR (-EINVAL);
+ }
+
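+ /* A cached ERR_PTR(-ENODATA) acts as a negative entry: we already
+ * know no such ACL exists on disk, so skip the lookup. */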
+ if (IS_ERR (*p_acl)) {
+ if (PTR_ERR (*p_acl) == -ENODATA)
+ return NULL;
+ } else if (*p_acl != NULL)
+ return posix_acl_dup (*p_acl);
+
+ size = reiserfs_xattr_get (inode, name, NULL, 0);
+ if ((int)size < 0) {
+ if (size == -ENODATA || size == -ENOSYS) {
+ *p_acl = ERR_PTR (-ENODATA);
+ return NULL;
+ }
+ return ERR_PTR (size);
+ }
+
+ value = kmalloc (size, GFP_NOFS);
+ if (!value)
+ return ERR_PTR (-ENOMEM);
+
+ retval = reiserfs_xattr_get(inode, name, value, size);
+ if (retval == -ENODATA || retval == -ENOSYS) {
+ /* This shouldn't actually happen as it should have
+ been caught above.. but just in case */
+ acl = NULL;
+ *p_acl = ERR_PTR (-ENODATA);
+ } else if (retval < 0) {
+ acl = ERR_PTR(retval);
+ } else {
+ acl = posix_acl_from_disk(value, retval);
+ *p_acl = posix_acl_dup (acl);
+ }
+
+ kfree(value);
+ return acl;
+}
+
+/*
+ * Inode operation set_posix_acl().
+ *
+ * inode->i_sem: down
+ * BKL held [before 2.5.x]
+ */
+int
+reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
+{
+ char *name;
+ void *value = NULL;
+ struct posix_acl **p_acl;
+ size_t size;
+ int error;
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ name = XATTR_NAME_ACL_ACCESS;
+ p_acl = &reiserfs_i->i_acl_access;
+ if (acl) {
+ mode_t mode = inode->i_mode;
+ error = posix_acl_equiv_mode (acl, &mode);
+ if (error < 0)
+ return error;
+ else {
+ inode->i_mode = mode;
+ if (error == 0)
+ acl = NULL;
+ }
+ }
+ break;
+ case ACL_TYPE_DEFAULT:
+ name = XATTR_NAME_ACL_DEFAULT;
+ p_acl = &reiserfs_i->i_acl_default;
+ if (!S_ISDIR (inode->i_mode))
+ return acl ? -EACCES : 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (acl) {
+ value = posix_acl_to_disk(acl, &size);
+ if (IS_ERR(value))
+ return (int)PTR_ERR(value);
+ error = reiserfs_xattr_set(inode, name, value, size, 0);
+ } else {
+ error = reiserfs_xattr_del (inode, name);
+ if (error == -ENODATA)
+ error = 0;
+ }
+
+ if (value)
+ kfree(value);
+
+ if (!error) {
+ /* Release the old one */
+ if (!IS_ERR (*p_acl) && *p_acl)
+ posix_acl_release (*p_acl);
+
+ if (acl == NULL)
+ *p_acl = ERR_PTR (-ENODATA);
+ else
+ *p_acl = posix_acl_dup (acl);
+ }
+
+ return error;
+}
+
+/* dir->i_sem: down,
+ * inode is new and not released into the wild yet */
+int
+reiserfs_inherit_default_acl (struct inode *dir, struct dentry *dentry, struct inode *inode)
+{
+ struct posix_acl *acl;
+ int err = 0;
+
+ /* ACLs only get applied to files and directories */
+ if (S_ISLNK (inode->i_mode))
+ return 0;
+
+ /* ACLs can only be used on "new" objects, so if it's an old object
+ * there is nothing to inherit from */
+ if (get_inode_sd_version (dir) == STAT_DATA_V1)
+ goto apply_umask;
+
+ /* Don't apply ACLs to objects in the .reiserfs_priv tree.. This
+ * would be useless since permissions are ignored, and a pain because
+ * it introduces locking cycles */
+ if (is_reiserfs_priv_object (dir)) {
+ REISERFS_I(inode)->i_flags |= i_priv_object;
+ goto apply_umask;
+ }
+
+ acl = reiserfs_get_acl (dir, ACL_TYPE_DEFAULT);
+ if (IS_ERR (acl)) {
+ if (PTR_ERR (acl) == -ENODATA)
+ goto apply_umask;
+ return PTR_ERR (acl);
+ }
+
+ if (acl) {
+ struct posix_acl *acl_copy;
+ mode_t mode = inode->i_mode;
+ int need_acl;
+
+ /* Copy the default ACL to the default ACL of a new directory */
+ if (S_ISDIR (inode->i_mode)) {
+ err = reiserfs_set_acl (inode, ACL_TYPE_DEFAULT, acl);
+ if (err)
+ goto cleanup;
+ }
+
+ /* Now we reconcile the new ACL and the mode,
+ potentially modifying both */
+ acl_copy = posix_acl_clone (acl, GFP_NOFS);
+ if (!acl_copy) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+
+ need_acl = posix_acl_create_masq (acl_copy, &mode);
+ if (need_acl >= 0) {
+ if (mode != inode->i_mode) {
+ inode->i_mode = mode;
+ }
+
+ /* If we need an ACL.. */
+ if (need_acl > 0) {
+ err = reiserfs_set_acl (inode, ACL_TYPE_ACCESS, acl_copy);
+ if (err)
+ goto cleanup_copy;
+ }
+ }
+cleanup_copy:
+ posix_acl_release (acl_copy);
+cleanup:
+ posix_acl_release (acl);
+ } else {
+apply_umask:
+ /* no ACL, apply umask */
+ inode->i_mode &= ~current->fs->umask;
+ }
+
+ return err;
+}
+
+/* Looks up and caches the result of the default ACL.
+ * We do this so that we don't have to carry the xattr_sem into
+ * reiserfs_new_inode unless it's actually needed */
+int
+reiserfs_cache_default_acl (struct inode *inode)
+{
+ int ret = 0;
+ if (reiserfs_posixacl (inode->i_sb) &&
+ !is_reiserfs_priv_object (inode)) {
+ struct posix_acl *acl;
+ reiserfs_read_lock_xattr_i (inode);
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ acl = reiserfs_get_acl (inode, ACL_TYPE_DEFAULT);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ reiserfs_read_unlock_xattr_i (inode);
+ ret = acl ? 1 : 0;
+ posix_acl_release (acl);
+ }
+
+ return ret;
+}
+
+int
+reiserfs_acl_chmod (struct inode *inode)
+{
+ struct posix_acl *acl, *clone;
+ int error;
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
+ if (get_inode_sd_version (inode) == STAT_DATA_V1 ||
+ !reiserfs_posixacl(inode->i_sb))
+ {
+ return 0;
+ }
+
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ acl = reiserfs_get_acl(inode, ACL_TYPE_ACCESS);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ if (!acl)
+ return 0;
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+ clone = posix_acl_clone(acl, GFP_NOFS);
+ posix_acl_release(acl);
+ if (!clone)
+ return -ENOMEM;
+ error = posix_acl_chmod_masq(clone, inode->i_mode);
+ if (!error) {
+ int lock = !has_xattr_dir (inode);
+ reiserfs_write_lock_xattr_i (inode);
+ if (lock)
+ reiserfs_write_lock_xattrs (inode->i_sb);
+ else
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ error = reiserfs_set_acl(inode, ACL_TYPE_ACCESS, clone);
+ if (lock)
+ reiserfs_write_unlock_xattrs (inode->i_sb);
+ else
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ reiserfs_write_unlock_xattr_i (inode);
+ }
+ posix_acl_release(clone);
+ return error;
+}
+
+static int
+posix_acl_access_get(struct inode *inode, const char *name,
+ void *buffer, size_t size)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ return xattr_get_acl(inode, ACL_TYPE_ACCESS, buffer, size);
+}
+
+static int
+posix_acl_access_set(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ return xattr_set_acl(inode, ACL_TYPE_ACCESS, value, size);
+}
+
+static int
+posix_acl_access_del (struct inode *inode, const char *name)
+{
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+ struct posix_acl **acl = &reiserfs_i->i_acl_access;
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ if (!IS_ERR (*acl) && *acl) {
+ posix_acl_release (*acl);
+ *acl = ERR_PTR (-ENODATA);
+ }
+
+ return 0;
+}
+
+static int
+posix_acl_access_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+ if (!reiserfs_posixacl (inode->i_sb))
+ return 0;
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+struct reiserfs_xattr_handler posix_acl_access_handler = {
+ .prefix = XATTR_NAME_ACL_ACCESS,
+ .get = posix_acl_access_get,
+ .set = posix_acl_access_set,
+ .del = posix_acl_access_del,
+ .list = posix_acl_access_list,
+};
+
+static int
+posix_acl_default_get (struct inode *inode, const char *name,
+ void *buffer, size_t size)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ return xattr_get_acl(inode, ACL_TYPE_DEFAULT, buffer, size);
+}
+
+static int
+posix_acl_default_set(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ return xattr_set_acl(inode, ACL_TYPE_DEFAULT, value, size);
+}
+
+static int
+posix_acl_default_del (struct inode *inode, const char *name)
+{
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+ struct posix_acl **acl = &reiserfs_i->i_acl_default;
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ if (!IS_ERR (*acl) && *acl) {
+ posix_acl_release (*acl);
+ *acl = ERR_PTR (-ENODATA);
+ }
+
+ return 0;
+}
+
+static int
+posix_acl_default_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+ if (!reiserfs_posixacl (inode->i_sb))
+ return 0;
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+struct reiserfs_xattr_handler posix_acl_default_handler = {
+ .prefix = XATTR_NAME_ACL_DEFAULT,
+ .get = posix_acl_default_get,
+ .set = posix_acl_default_set,
+ .del = posix_acl_default_del,
+ .list = posix_acl_default_list,
+};
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/swap.h>
+
+#include "time.h"
+#include "kmem.h"
+
+#define MAX_VMALLOCS 6
+#define MAX_SLAB_SIZE 0x20000
+
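+/* Allocations of MAX_SLAB_SIZE and above are served by vmalloc; if vmalloc
+ * keeps failing, fall back to kmalloc after MAX_VMALLOCS retries. */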
+
+void *
+kmem_alloc(size_t size, int flags)
+{
+ int retries = 0, lflags = kmem_flags_convert(flags);
+ void *ptr;
+
+ do {
+ if (size < MAX_SLAB_SIZE || retries > MAX_VMALLOCS)
+ ptr = kmalloc(size, lflags);
+ else
+ ptr = __vmalloc(size, lflags, PAGE_KERNEL);
+ if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
+ return ptr;
+ if (!(++retries % 100))
+ printk(KERN_ERR "possible deadlock in %s (mode:0x%x)\n",
+ __FUNCTION__, lflags);
+ } while (1);
+}
+
+void *
+kmem_zalloc(size_t size, int flags)
+{
+ void *ptr;
+
+ ptr = kmem_alloc(size, flags);
+ if (ptr)
+ memset((char *)ptr, 0, (int)size);
+ return ptr;
+}
+
+void
+kmem_free(void *ptr, size_t size)
+{
+ if (((unsigned long)ptr < VMALLOC_START) ||
+ ((unsigned long)ptr >= VMALLOC_END)) {
+ kfree(ptr);
+ } else {
+ vfree(ptr);
+ }
+}
+
+void *
+kmem_realloc(void *ptr, size_t newsize, size_t oldsize, int flags)
+{
+ void *new;
+
+ new = kmem_alloc(newsize, flags);
+ if (ptr) {
+ if (new)
+ memcpy(new, ptr,
+ ((oldsize < newsize) ? oldsize : newsize));
+ kmem_free(ptr, oldsize);
+ }
+ return new;
+}
+
+void *
+kmem_zone_alloc(kmem_zone_t *zone, int flags)
+{
+ int retries = 0, lflags = kmem_flags_convert(flags);
+ void *ptr;
+
+ do {
+ ptr = kmem_cache_alloc(zone, lflags);
+ if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
+ return ptr;
+ if (!(++retries % 100))
+ printk(KERN_ERR "possible deadlock in %s (mode:0x%x)\n",
+ __FUNCTION__, lflags);
+ } while (1);
+}
+
+void *
+kmem_zone_zalloc(kmem_zone_t *zone, int flags)
+{
+ void *ptr;
+
+ ptr = kmem_zone_alloc(zone, flags);
+ if (ptr)
+ memset((char *)ptr, 0, kmem_cache_size(zone));
+ return ptr;
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPPORT_KMEM_H__
+#define __XFS_SUPPORT_KMEM_H__
+
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+/*
+ * Cutoff point to use vmalloc instead of kmalloc.
+ */
+#define MAX_SLAB_SIZE 0x20000
+
+/*
+ * XFS uses slightly different names for these due to the
+ * IRIX heritage.
+ */
+#define kmem_zone kmem_cache_s
+#define kmem_zone_t kmem_cache_t
+
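+/* Allocation behaviour flags (distinct bits); kmem_flags_convert() below
+ * maps them onto GFP_* masks. */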
+#define KM_SLEEP 0x0001
+#define KM_NOSLEEP 0x0002
+#define KM_NOFS 0x0004
+#define KM_MAYFAIL 0x0008
+
+typedef unsigned long xfs_pflags_t;
+
+#define PFLAGS_TEST_FSTRANS() (current->flags & PF_FSTRANS)
+
+/* these could be nested, so we save state */
+#define PFLAGS_SET_FSTRANS(STATEP) do { \
+ *(STATEP) = current->flags; \
+ current->flags |= PF_FSTRANS; \
+} while (0)
+
+#define PFLAGS_CLEAR_FSTRANS(STATEP) do { \
+ *(STATEP) = current->flags; \
+ current->flags &= ~PF_FSTRANS; \
+} while (0)
+
+/* Restore the PF_FSTRANS state to what was saved in STATEP */
+#define PFLAGS_RESTORE_FSTRANS(STATEP) do { \
+ current->flags = ((current->flags & ~PF_FSTRANS) | \
+ (*(STATEP) & PF_FSTRANS)); \
+} while (0)
+
+#define PFLAGS_DUP(OSTATEP, NSTATEP) do { \
+ *(NSTATEP) = *(OSTATEP); \
+} while (0)
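+
+/* A minimal usage sketch (hypothetical caller): save the PF_FSTRANS state
+ * around a transaction -- kmem_flags_convert() clears __GFP_FS while it is
+ * set -- then restore it:
+ *
+ *	xfs_pflags_t pflags;
+ *	PFLAGS_SET_FSTRANS(&pflags);
+ *	... allocate with kmem_alloc() ...
+ *	PFLAGS_RESTORE_FSTRANS(&pflags);
+ */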
+
+static __inline unsigned int
+kmem_flags_convert(int flags)
+{
+ int lflags;
+
+#ifdef DEBUG
+ if (unlikely(flags & ~(KM_SLEEP|KM_NOSLEEP|KM_NOFS|KM_MAYFAIL))) {
+ printk(KERN_WARNING
+ "XFS: memory allocation with wrong flags (%x)\n", flags);
+ BUG();
+ }
+#endif
+
+ if (flags & KM_NOSLEEP) {
+ lflags = GFP_ATOMIC;
+ } else {
+ lflags = GFP_KERNEL;
+
+ /* avoid recursive callbacks to filesystem during transactions */
+ if (PFLAGS_TEST_FSTRANS() || (flags & KM_NOFS))
+ lflags &= ~__GFP_FS;
+
+ if (!(flags & KM_MAYFAIL))
+ lflags |= __GFP_NOFAIL;
+ }
+
+ return lflags;
+}
+
+static __inline void *
+kmem_alloc(size_t size, int flags)
+{
+ if (unlikely(MAX_SLAB_SIZE < size))
+ /* Avoid doing filesystem sensitive stuff to get this */
+ return __vmalloc(size, kmem_flags_convert(flags), PAGE_KERNEL);
+ return kmalloc(size, kmem_flags_convert(flags));
+}
+
+static __inline void *
+kmem_zalloc(size_t size, int flags)
+{
+ void *ptr = kmem_alloc(size, flags);
+ if (likely(ptr != NULL))
+ memset(ptr, 0, size);
+ return ptr;
+}
+
+static __inline void
+kmem_free(void *ptr, size_t size)
+{
+ if (unlikely((unsigned long)ptr < VMALLOC_START ||
+ (unsigned long)ptr >= VMALLOC_END))
+ kfree(ptr);
+ else
+ vfree(ptr);
+}
+
+static __inline void *
+kmem_realloc(void *ptr, size_t newsize, size_t oldsize, int flags)
+{
+ void *new = kmem_alloc(newsize, flags);
+
+ if (likely(ptr != NULL)) {
+ if (likely(new != NULL))
+ memcpy(new, ptr, min(oldsize, newsize));
+ kmem_free(ptr, oldsize);
+ }
+
+ return new;
+}
+
+static __inline kmem_zone_t *
+kmem_zone_init(int size, char *zone_name)
+{
+ return kmem_cache_create(zone_name, size, 0, 0, NULL, NULL);
+}
+
+static __inline void *
+kmem_zone_alloc(kmem_zone_t *zone, int flags)
+{
+ return kmem_cache_alloc(zone, kmem_flags_convert(flags));
+}
+
+static __inline void *
+kmem_zone_zalloc(kmem_zone_t *zone, int flags)
+{
+ void *ptr = kmem_zone_alloc(zone, flags);
+ if (likely(ptr != NULL))
+ memset(ptr, 0, kmem_cache_size(zone));
+ return ptr;
+}
+
+static __inline void
+kmem_zone_free(kmem_zone_t *zone, void *ptr)
+{
+ kmem_cache_free(zone, ptr);
+}
+
+typedef struct shrinker *kmem_shaker_t;
+typedef int (*kmem_shake_func_t)(int, unsigned int);
+
+static __inline kmem_shaker_t
+kmem_shake_register(kmem_shake_func_t sfunc)
+{
+ return set_shrinker(DEFAULT_SEEKS, sfunc);
+}
+
+static __inline void
+kmem_shake_deregister(kmem_shaker_t shrinker)
+{
+ remove_shrinker(shrinker);
+}
+
+static __inline int
+kmem_shake_allow(unsigned int gfp_mask)
+{
+ return (gfp_mask & __GFP_WAIT);
+}
+
+#endif /* __XFS_SUPPORT_KMEM_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPPORT_MRLOCK_H__
+#define __XFS_SUPPORT_MRLOCK_H__
+
+#include <linux/rwsem.h>
+
+enum { MR_NONE, MR_ACCESS, MR_UPDATE };
+
+typedef struct {
+ struct rw_semaphore mr_lock;
+ int mr_writer;
+} mrlock_t;
+
+#define mrinit(mrp, name) \
+ ( (mrp)->mr_writer = 0, init_rwsem(&(mrp)->mr_lock) )
+#define mrlock_init(mrp, t,n,s) mrinit(mrp, n)
+#define mrfree(mrp) do { } while (0)
+#define mraccess(mrp) mraccessf(mrp, 0)
+#define mrupdate(mrp) mrupdatef(mrp, 0)
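+
+/* Typical usage: readers pair mraccess()/mrunlock(), writers pair
+ * mrupdate()/mrunlock(); a writer can mrdemote() down to a read hold. */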
+
+static inline void mraccessf(mrlock_t *mrp, int flags)
+{
+ down_read(&mrp->mr_lock);
+}
+
+static inline void mrupdatef(mrlock_t *mrp, int flags)
+{
+ down_write(&mrp->mr_lock);
+ mrp->mr_writer = 1;
+}
+
+static inline int mrtryaccess(mrlock_t *mrp)
+{
+ return down_read_trylock(&mrp->mr_lock);
+}
+
+static inline int mrtryupdate(mrlock_t *mrp)
+{
+ if (!down_write_trylock(&mrp->mr_lock))
+ return 0;
+ mrp->mr_writer = 1;
+ return 1;
+}
+
+static inline void mrunlock(mrlock_t *mrp)
+{
+ if (mrp->mr_writer) {
+ mrp->mr_writer = 0;
+ up_write(&mrp->mr_lock);
+ } else {
+ up_read(&mrp->mr_lock);
+ }
+}
+
+static inline void mrdemote(mrlock_t *mrp)
+{
+ mrp->mr_writer = 0;
+ downgrade_write(&mrp->mr_lock);
+}
+
+#ifdef DEBUG
+/*
+ * Debug-only routine.  Without some platform-specific asm code, we can
+ * now only answer requests regarding whether we hold the lock for write
+ * (reader state is outside our visibility; we only track writer state).
+ * Note: this means !ismrlocked would give false positives, so don't do that.
+ */
+static inline int ismrlocked(mrlock_t *mrp, int type)
+{
+ if (mrp && type == MR_UPDATE)
+ return mrp->mr_writer;
+ return 1;
+}
+#endif
+
+#endif /* __XFS_SUPPORT_MRLOCK_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPPORT_SEMA_H__
+#define __XFS_SUPPORT_SEMA_H__
+
+#include <linux/time.h>
+#include <linux/wait.h>
+#include <asm/atomic.h>
+#include <asm/semaphore.h>
+
+/*
+ * The sema_t structure just maps to struct semaphore in the Linux kernel.
+ */
+
+typedef struct semaphore sema_t;
+
+#define init_sema(sp, val, c, d) sema_init(sp, val)
+#define initsema(sp, val) sema_init(sp, val)
+#define initnsema(sp, val, name) sema_init(sp, val)
+#define psema(sp, b) down(sp)
+#define vsema(sp) up(sp)
+#define valusema(sp) (atomic_read(&(sp)->count))
+#define freesema(sema)
+
+/*
+ * Map cpsema (try to get the sema) to down_trylock. We need to switch
+ * the return values since cpsema returns 1 (acquired) 0 (failed) and
+ * down_trylock returns the reverse 0 (acquired) 1 (failed).
+ */
+
+#define cpsema(sp) (down_trylock(sp) ? 0 : 1)
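+
+/*
+ * For example (an illustrative sketch with hypothetical names), the
+ * inverted mapping lets cpsema() read naturally in the caller,
+ * returning 1 on success:
+ *
+ *	sema_t sem;
+ *
+ *	initnsema(&sem, 1, "example");
+ *	if (cpsema(&sem)) {
+ *		... critical section ...
+ *		vsema(&sem);
+ *	}
+ */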
+
+/*
+ * Didn't do cvsema(sp). Not sure how to map this to up/down/...
+ * It does a vsema if the value is < 0, otherwise nothing.
+ */
+
+#endif /* __XFS_SUPPORT_SEMA_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPPORT_SV_H__
+#define __XFS_SUPPORT_SV_H__
+
+#include <linux/wait.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+
+/*
+ * Synchronisation variables.
+ *
+ * (Parameters "pri", "svf" and "rts" are not implemented)
+ */
+
+typedef struct sv_s {
+ wait_queue_head_t waiters;
+} sv_t;
+
+#define SV_FIFO 0x0 /* sv_t is FIFO type */
+#define SV_LIFO 0x2 /* sv_t is LIFO type */
+#define SV_PRIO 0x4 /* sv_t is PRIO type */
+#define SV_KEYED 0x6 /* sv_t is KEYED type */
+#define SV_DEFAULT SV_FIFO
+
+
+static inline void _sv_wait(sv_t *sv, spinlock_t *lock, int state,
+ unsigned long timeout)
+{
+ DECLARE_WAITQUEUE(wait, current);
+
+ add_wait_queue_exclusive(&sv->waiters, &wait);
+ __set_current_state(state);
+ spin_unlock(lock);
+
+ schedule_timeout(timeout);
+
+ remove_wait_queue(&sv->waiters, &wait);
+}
+
+#define init_sv(sv,type,name,flag) \
+ init_waitqueue_head(&(sv)->waiters)
+#define sv_init(sv,flag,name) \
+ init_waitqueue_head(&(sv)->waiters)
+#define sv_destroy(sv) \
+ /*NOTHING*/
+#define sv_wait(sv, pri, lock, s) \
+ _sv_wait(sv, lock, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT)
+#define sv_wait_sig(sv, pri, lock, s) \
+ _sv_wait(sv, lock, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT)
+#define sv_timedwait(sv, pri, lock, s, svf, ts, rts) \
+ _sv_wait(sv, lock, TASK_UNINTERRUPTIBLE, timespec_to_jiffies(ts))
+#define sv_timedwait_sig(sv, pri, lock, s, svf, ts, rts) \
+ _sv_wait(sv, lock, TASK_INTERRUPTIBLE, timespec_to_jiffies(ts))
+#define sv_signal(sv) \
+ wake_up(&(sv)->waiters)
+#define sv_broadcast(sv) \
+ wake_up_all(&(sv)->waiters)
+
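+/*
+ * Typical wait/wake pattern (an illustrative sketch; names are
+ * hypothetical). The caller holds the spinlock, which _sv_wait
+ * drops before sleeping; the waiter must re-take it on wakeup:
+ *
+ *	spin_lock(&lock);
+ *	while (!condition) {
+ *		sv_wait(&sv, 0, &lock, 0);
+ *		spin_lock(&lock);
+ *	}
+ *	spin_unlock(&lock);
+ *
+ * and on the waking side:
+ *
+ *	spin_lock(&lock);
+ *	condition = 1;
+ *	sv_signal(&sv);
+ *	spin_unlock(&lock);
+ */
+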
+#endif /* __XFS_SUPPORT_SV_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_trans.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_alloc.h"
+#include "xfs_btree.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_error.h"
+#include "xfs_rw.h"
+#include "xfs_iomap.h"
+#include <linux/mpage.h>
+#include <linux/writeback.h>
+
+STATIC void xfs_count_page_state(struct page *, int *, int *, int *);
+STATIC void xfs_convert_page(struct inode *, struct page *, xfs_iomap_t *,
+ struct writeback_control *wbc, void *, int, int);
+
+#if defined(XFS_RW_TRACE)
+void
+xfs_page_trace(
+ int tag,
+ struct inode *inode,
+ struct page *page,
+ int mask)
+{
+ xfs_inode_t *ip;
+ bhv_desc_t *bdp;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ loff_t isize = i_size_read(inode);
+ loff_t offset = page->index << PAGE_CACHE_SHIFT;
+ int delalloc = -1, unmapped = -1, unwritten = -1;
+
+ if (page_has_buffers(page))
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+
+ bdp = vn_bhv_lookup(VN_BHV_HEAD(vp), &xfs_vnodeops);
+ ip = XFS_BHVTOI(bdp);
+ if (!ip->i_rwtrace)
+ return;
+
+ ktrace_enter(ip->i_rwtrace,
+ (void *)((unsigned long)tag),
+ (void *)ip,
+ (void *)inode,
+ (void *)page,
+ (void *)((unsigned long)mask),
+ (void *)((unsigned long)((ip->i_d.di_size >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(ip->i_d.di_size & 0xffffffff)),
+ (void *)((unsigned long)((isize >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(isize & 0xffffffff)),
+ (void *)((unsigned long)((offset >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(offset & 0xffffffff)),
+ (void *)((unsigned long)delalloc),
+ (void *)((unsigned long)unmapped),
+ (void *)((unsigned long)unwritten),
+ (void *)NULL,
+ (void *)NULL);
+}
+#else
+#define xfs_page_trace(tag, inode, page, mask)
+#endif
+
+void
+linvfs_unwritten_done(
+ struct buffer_head *bh,
+ int uptodate)
+{
+ xfs_buf_t *pb = (xfs_buf_t *)bh->b_private;
+
+ ASSERT(buffer_unwritten(bh));
+ bh->b_end_io = NULL;
+ clear_buffer_unwritten(bh);
+ if (!uptodate)
+ pagebuf_ioerror(pb, EIO);
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pagebuf_iodone(pb, 1, 1);
+ }
+ end_buffer_async_write(bh, uptodate);
+}
+
+/*
+ * Issue transactions to convert a buffer range from unwritten
+ * to written extents (buffered IO).
+ */
+STATIC void
+linvfs_unwritten_convert(
+ xfs_buf_t *bp)
+{
+ vnode_t *vp = XFS_BUF_FSPRIVATE(bp, vnode_t *);
+ int error;
+
+ BUG_ON(atomic_read(&bp->pb_hold) < 1);
+ VOP_BMAP(vp, XFS_BUF_OFFSET(bp), XFS_BUF_SIZE(bp),
+ BMAPI_UNWRITTEN, NULL, NULL, error);
+ XFS_BUF_SET_FSPRIVATE(bp, NULL);
+ XFS_BUF_CLR_IODONE_FUNC(bp);
+ XFS_BUF_UNDATAIO(bp);
+ iput(LINVFS_GET_IP(vp));
+ pagebuf_iodone(bp, 0, 0);
+}
+
+/*
+ * Issue transactions to convert a buffer range from unwritten
+ * to written extents (direct IO).
+ */
+STATIC void
+linvfs_unwritten_convert_direct(
+ struct inode *inode,
+ loff_t offset,
+ ssize_t size,
+ void *private)
+{
+ ASSERT(!private || inode == (struct inode *)private);
+
+ /* private indicates an unwritten extent lay beneath this IO,
+ * see linvfs_get_block_core.
+ */
+ if (private && size > 0) {
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ VOP_BMAP(vp, offset, size, BMAPI_UNWRITTEN, NULL, NULL, error);
+ }
+}
+
+STATIC int
+xfs_map_blocks(
+ struct inode *inode,
+ loff_t offset,
+ ssize_t count,
+ xfs_iomap_t *iomapp,
+ int flags)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error, niomaps = 1;
+
+ if (((flags & (BMAPI_DIRECT|BMAPI_SYNC)) == BMAPI_DIRECT) &&
+ (offset >= i_size_read(inode)))
+ count = max_t(ssize_t, count, XFS_WRITE_IO_LOG);
+retry:
+ VOP_BMAP(vp, offset, count, flags, iomapp, &niomaps, error);
+ if ((error == EAGAIN) || (error == EIO))
+ return -error;
+ if (unlikely((flags & (BMAPI_WRITE|BMAPI_DIRECT)) ==
+ (BMAPI_WRITE|BMAPI_DIRECT) && niomaps &&
+ (iomapp->iomap_flags & IOMAP_DELAY))) {
+ flags = BMAPI_ALLOCATE;
+ goto retry;
+ }
+ if (flags & (BMAPI_WRITE|BMAPI_ALLOCATE)) {
+ VMODIFY(vp);
+ }
+ return -error;
+}
+
+/*
+ * Finds the mapping in the @iomapp array that corresponds to the
+ * given @offset within a @page.
+ */
+STATIC xfs_iomap_t *
+xfs_offset_to_map(
+ struct page *page,
+ xfs_iomap_t *iomapp,
+ unsigned long offset)
+{
+ loff_t full_offset; /* offset from start of file */
+
+ ASSERT(offset < PAGE_CACHE_SIZE);
+
+ full_offset = page->index; /* NB: using 64bit number */
+ full_offset <<= PAGE_CACHE_SHIFT; /* offset from file start */
+ full_offset += offset; /* offset from page start */
+
+ if (full_offset < iomapp->iomap_offset)
+ return NULL;
+ if (iomapp->iomap_offset + (iomapp->iomap_bsize -1) >= full_offset)
+ return iomapp;
+ return NULL;
+}
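+
+/*
+ * For example (illustrative numbers): with 4K pages, page->index = 2
+ * and offset = 0x200 give full_offset = 0x2200, so the iomap matches
+ * exactly when iomap_offset <= 0x2200 < iomap_offset + iomap_bsize.
+ */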
+
+STATIC void
+xfs_map_at_offset(
+ struct page *page,
+ struct buffer_head *bh,
+ unsigned long offset,
+ int block_bits,
+ xfs_iomap_t *iomapp)
+{
+ xfs_daddr_t bn;
+ loff_t delta;
+ int sector_shift;
+
+ ASSERT(!(iomapp->iomap_flags & IOMAP_HOLE));
+ ASSERT(!(iomapp->iomap_flags & IOMAP_DELAY));
+ ASSERT(iomapp->iomap_bn != IOMAP_DADDR_NULL);
+
+ delta = page->index;
+ delta <<= PAGE_CACHE_SHIFT;
+ delta += offset;
+ delta -= iomapp->iomap_offset;
+ delta >>= block_bits;
+
+ sector_shift = block_bits - BBSHIFT;
+ bn = iomapp->iomap_bn >> sector_shift;
+ bn += delta;
+ ASSERT((bn << sector_shift) >= iomapp->iomap_bn);
+
+ lock_buffer(bh);
+ bh->b_blocknr = bn;
+ bh->b_bdev = iomapp->iomap_target->pbr_bdev;
+ set_buffer_mapped(bh);
+ clear_buffer_delay(bh);
+}
+
+/*
+ * Look for a page at index which is unlocked and contains our
+ * unwritten extent flagged buffers at its head. Returns page
+ * locked and with an extra reference count, and length of the
+ * unwritten extent component on this page that we can write,
+ * in units of filesystem blocks.
+ */
+STATIC struct page *
+xfs_probe_unwritten_page(
+ struct address_space *mapping,
+ pgoff_t index,
+ xfs_iomap_t *iomapp,
+ xfs_buf_t *pb,
+ unsigned long max_offset,
+ unsigned long *fsbs,
+ unsigned int bbits)
+{
+ struct page *page;
+
+ page = find_trylock_page(mapping, index);
+ if (!page)
+ return NULL;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+ unsigned long p_offset = 0;
+
+ *fsbs = 0;
+ bh = head = page_buffers(page);
+ do {
+ if (!buffer_unwritten(bh))
+ break;
+ if (!xfs_offset_to_map(page, iomapp, p_offset))
+ break;
+ if (p_offset >= max_offset)
+ break;
+ xfs_map_at_offset(page, bh, p_offset, bbits, iomapp);
+ set_buffer_unwritten_io(bh);
+ bh->b_private = pb;
+ p_offset += bh->b_size;
+ (*fsbs)++;
+ } while ((bh = bh->b_this_page) != head);
+
+ if (p_offset)
+ return page;
+ }
+
+out:
+ unlock_page(page);
+ return NULL;
+}
+
+/*
+ * Look for a page at index which is unlocked and not mapped
+ * yet - clustering for mmap write case.
+ */
+STATIC unsigned int
+xfs_probe_unmapped_page(
+ struct address_space *mapping,
+ pgoff_t index,
+ unsigned int pg_offset)
+{
+ struct page *page;
+ int ret = 0;
+
+ page = find_trylock_page(mapping, index);
+ if (!page)
+ return 0;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && PageDirty(page)) {
+ if (page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_mapped(bh) || !buffer_uptodate(bh))
+ break;
+ ret += bh->b_size;
+ if (ret >= pg_offset)
+ break;
+ } while ((bh = bh->b_this_page) != head);
+ } else
+ ret = PAGE_CACHE_SIZE;
+ }
+
+out:
+ unlock_page(page);
+ return ret;
+}
+
+STATIC unsigned int
+xfs_probe_unmapped_cluster(
+ struct inode *inode,
+ struct page *startpage,
+ struct buffer_head *bh,
+ struct buffer_head *head)
+{
+ pgoff_t tindex, tlast, tloff;
+ unsigned int pg_offset, len, total = 0;
+ struct address_space *mapping = inode->i_mapping;
+
+ /* First sum forwards in this page */
+ do {
+ if (buffer_mapped(bh))
+ break;
+ total += bh->b_size;
+ } while ((bh = bh->b_this_page) != head);
+
+ /* If we reached the end of the page, sum forwards in
+ * following pages.
+ */
+ if (bh == head) {
+ tlast = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ /* Prune this back to avoid pathological behavior */
+ tloff = min(tlast, startpage->index + 64);
+ for (tindex = startpage->index + 1; tindex < tloff; tindex++) {
+ len = xfs_probe_unmapped_page(mapping, tindex,
+ PAGE_CACHE_SIZE);
+ if (!len)
+ return total;
+ total += len;
+ }
+ if (tindex == tlast &&
+ (pg_offset = i_size_read(inode) & (PAGE_CACHE_SIZE - 1))) {
+ total += xfs_probe_unmapped_page(mapping,
+ tindex, pg_offset);
+ }
+ }
+ return total;
+}
+
+/*
+ * Probe for a given page (index) in the inode and test if it is delayed
+ * and without unwritten buffers. Returns page locked and with an extra
+ * reference count.
+ */
+STATIC struct page *
+xfs_probe_delalloc_page(
+ struct inode *inode,
+ pgoff_t index)
+{
+ struct page *page;
+
+ page = find_trylock_page(inode->i_mapping, index);
+ if (!page)
+ return NULL;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+ int acceptable = 0;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_unwritten(bh)) {
+ acceptable = 0;
+ break;
+ } else if (buffer_delay(bh)) {
+ acceptable = 1;
+ }
+ } while ((bh = bh->b_this_page) != head);
+
+ if (acceptable)
+ return page;
+ }
+
+out:
+ unlock_page(page);
+ return NULL;
+}
+
+STATIC int
+xfs_map_unwritten(
+ struct inode *inode,
+ struct page *start_page,
+ struct buffer_head *head,
+ struct buffer_head *curr,
+ unsigned long p_offset,
+ int block_bits,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ int startio,
+ int all_bh)
+{
+ struct buffer_head *bh = curr;
+ xfs_iomap_t *tmp;
+ xfs_buf_t *pb;
+ loff_t offset, size;
+ unsigned long nblocks = 0;
+
+ offset = start_page->index;
+ offset <<= PAGE_CACHE_SHIFT;
+ offset += p_offset;
+
+ /* get an "empty" pagebuf to manage IO completion
+ * Proper values will be set before returning */
+ pb = pagebuf_lookup(iomapp->iomap_target, 0, 0, 0);
+ if (!pb)
+ return -EAGAIN;
+
+ /* Take a reference to the inode to prevent it from
+ * being reclaimed while we have outstanding unwritten
+ * extent IO on it.
+ */
+ if ((igrab(inode)) != inode) {
+ pagebuf_free(pb);
+ return -EAGAIN;
+ }
+
+ /* Set the count to 1 initially; this will stop an I/O
+ * completion callout that happens before we have started
+ * all the I/O from calling pagebuf_iodone too early.
+ */
+ atomic_set(&pb->pb_io_remaining, 1);
+
+ /* First map forwards in the page the consecutive buffers
+ * covering this unwritten extent
+ */
+ do {
+ if (!buffer_unwritten(bh))
+ break;
+ tmp = xfs_offset_to_map(start_page, iomapp, p_offset);
+ if (!tmp)
+ break;
+ xfs_map_at_offset(start_page, bh, p_offset, block_bits, iomapp);
+ set_buffer_unwritten_io(bh);
+ bh->b_private = pb;
+ p_offset += bh->b_size;
+ nblocks++;
+ } while ((bh = bh->b_this_page) != head);
+
+ atomic_add(nblocks, &pb->pb_io_remaining);
+
+ /* If we reached the end of the page, map forwards in any
+ * following pages which are also covered by this extent.
+ */
+ if (bh == head) {
+ struct address_space *mapping = inode->i_mapping;
+ pgoff_t tindex, tloff, tlast;
+ unsigned long bs;
+ unsigned int pg_offset, bbits = inode->i_blkbits;
+ struct page *page;
+
+ tlast = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ tloff = (iomapp->iomap_offset + iomapp->iomap_bsize) >> PAGE_CACHE_SHIFT;
+ tloff = min(tlast, tloff);
+ for (tindex = start_page->index + 1; tindex < tloff; tindex++) {
+ page = xfs_probe_unwritten_page(mapping,
+ tindex, iomapp, pb,
+ PAGE_CACHE_SIZE, &bs, bbits);
+ if (!page)
+ break;
+ nblocks += bs;
+ atomic_add(bs, &pb->pb_io_remaining);
+ xfs_convert_page(inode, page, iomapp, wbc, pb,
+ startio, all_bh);
+ /* stop if converting the next page might add
+ * enough blocks that the corresponding byte
+ * count won't fit in our ulong page buf length */
+ if (nblocks >= ((ULONG_MAX - PAGE_SIZE) >> block_bits))
+ goto enough;
+ }
+
+ if (tindex == tlast &&
+ (pg_offset = (i_size_read(inode) & (PAGE_CACHE_SIZE - 1)))) {
+ page = xfs_probe_unwritten_page(mapping,
+ tindex, iomapp, pb,
+ pg_offset, &bs, bbits);
+ if (page) {
+ nblocks += bs;
+ atomic_add(bs, &pb->pb_io_remaining);
+ xfs_convert_page(inode, page, iomapp, wbc, pb,
+ startio, all_bh);
+ if (nblocks >= ((ULONG_MAX - PAGE_SIZE) >> block_bits))
+ goto enough;
+ }
+ }
+ }
+
+enough:
+ size = nblocks; /* NB: using 64bit number here */
+ size <<= block_bits; /* convert fsb's to byte range */
+
+ XFS_BUF_DATAIO(pb);
+ XFS_BUF_ASYNC(pb);
+ XFS_BUF_SET_SIZE(pb, size);
+ XFS_BUF_SET_COUNT(pb, size);
+ XFS_BUF_SET_OFFSET(pb, offset);
+ XFS_BUF_SET_FSPRIVATE(pb, LINVFS_GET_VP(inode));
+ XFS_BUF_SET_IODONE_FUNC(pb, linvfs_unwritten_convert);
+
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pagebuf_iodone(pb, 1, 1);
+ }
+
+ return 0;
+}
+
+STATIC void
+xfs_submit_page(
+ struct page *page,
+ struct buffer_head *bh_arr[],
+ int cnt)
+{
+ struct buffer_head *bh;
+ int i;
+
+ BUG_ON(PageWriteback(page));
+ set_page_writeback(page);
+ clear_page_dirty(page);
+ unlock_page(page);
+
+ if (cnt) {
+ for (i = 0; i < cnt; i++) {
+ bh = bh_arr[i];
+ mark_buffer_async_write(bh);
+ if (buffer_unwritten(bh))
+ set_buffer_unwritten_io(bh);
+ set_buffer_uptodate(bh);
+ clear_buffer_dirty(bh);
+ }
+
+ for (i = 0; i < cnt; i++)
+ submit_bh(WRITE, bh_arr[i]);
+ } else
+ end_page_writeback(page);
+}
+
+/*
+ * Allocate & map buffers for page given the extent map. Write it out.
+ * Except for the original page of a writepage, this is called on
+ * delalloc/unwritten pages only; for the original page it is possible
+ * that the page has no mapping at all.
+ */
+STATIC void
+xfs_convert_page(
+ struct inode *inode,
+ struct page *page,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ void *private,
+ int startio,
+ int all_bh)
+{
+ struct buffer_head *bh_arr[MAX_BUF_PER_PAGE], *bh, *head;
+ xfs_iomap_t *mp = iomapp, *tmp;
+ unsigned long end, offset;
+ pgoff_t end_index;
+ int i = 0, index = 0;
+ int bbits = inode->i_blkbits;
+
+ end_index = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ if (page->index < end_index) {
+ end = PAGE_CACHE_SIZE;
+ } else {
+ end = i_size_read(inode) & (PAGE_CACHE_SIZE-1);
+ }
+ bh = head = page_buffers(page);
+ do {
+ offset = i << bbits;
+ if (!(PageUptodate(page) || buffer_uptodate(bh)))
+ continue;
+ if (buffer_mapped(bh) && all_bh &&
+ !buffer_unwritten(bh) && !buffer_delay(bh)) {
+ if (startio && (offset < end)) {
+ lock_buffer(bh);
+ bh_arr[index++] = bh;
+ }
+ continue;
+ }
+ tmp = xfs_offset_to_map(page, mp, offset);
+ if (!tmp)
+ continue;
+ ASSERT(!(tmp->iomap_flags & IOMAP_HOLE));
+ ASSERT(!(tmp->iomap_flags & IOMAP_DELAY));
+
+ /* If this is a new unwritten extent buffer (i.e. one
+ * that we haven't passed in private data for), we must
+ * now map this buffer too.
+ */
+ if (buffer_unwritten(bh) && !bh->b_end_io) {
+ ASSERT(tmp->iomap_flags & IOMAP_UNWRITTEN);
+ xfs_map_unwritten(inode, page, head, bh, offset,
+ bbits, tmp, wbc, startio, all_bh);
+ } else if (! (buffer_unwritten(bh) && buffer_locked(bh))) {
+ xfs_map_at_offset(page, bh, offset, bbits, tmp);
+ if (buffer_unwritten(bh)) {
+ set_buffer_unwritten_io(bh);
+ bh->b_private = private;
+ ASSERT(private);
+ }
+ }
+ if (startio && (offset < end)) {
+ bh_arr[index++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ } while (i++, (bh = bh->b_this_page) != head);
+
+ if (startio) {
+ wbc->nr_to_write--;
+ xfs_submit_page(page, bh_arr, index);
+ } else {
+ unlock_page(page);
+ }
+}
+
+/*
+ * Convert & write out a cluster of pages in the same extent as defined
+ * by iomapp and following the start page.
+ */
+STATIC void
+xfs_cluster_write(
+ struct inode *inode,
+ pgoff_t tindex,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ int startio,
+ int all_bh)
+{
+ pgoff_t tlast;
+ struct page *page;
+
+ tlast = (iomapp->iomap_offset + iomapp->iomap_bsize) >> PAGE_CACHE_SHIFT;
+ for (; tindex < tlast; tindex++) {
+ page = xfs_probe_delalloc_page(inode, tindex);
+ if (!page)
+ break;
+ xfs_convert_page(inode, page, iomapp, wbc, NULL,
+ startio, all_bh);
+ }
+}
+
+/*
+ * Calling this without startio set means we are being asked to make a dirty
+ * page ready for freeing its buffers. When called with startio set then
+ * we are coming from writepage.
+ *
+ * When called with startio set it is important that we write the WHOLE
+ * page if possible.
+ * The bh->b_state's cannot know if any of the blocks (or which block,
+ * for that matter) are dirty due to mmap writes, and therefore bh
+ * uptodate is only valid if the page itself isn't completely uptodate.
+ * Some layers may clear the page dirty flag prior to calling writepage,
+ * under the assumption the entire page will be written out; by not
+ * writing out the whole page the page can be reused before all valid
+ * dirty data is written out. Note: in the case of a page that has been
+ * dirtied by mmap write but only partially set up by block_prepare_write,
+ * the bh->b_states will not agree and only the ones set up by BPW/BCW
+ * will have valid state; thus the whole page must be written out.
+ */
+
+STATIC int
+xfs_page_state_convert(
+ struct inode *inode,
+ struct page *page,
+ struct writeback_control *wbc,
+ int startio,
+ int unmapped) /* also implies page uptodate */
+{
+ struct buffer_head *bh_arr[MAX_BUF_PER_PAGE], *bh, *head;
+ xfs_iomap_t *iomp, iomap;
+ unsigned long p_offset = 0;
+ pgoff_t end_index;
+ loff_t offset;
+ unsigned long long end_offset;
+ int len, err, i, cnt = 0, uptodate = 1;
+ int flags = startio ? 0 : BMAPI_TRYLOCK;
+ int page_dirty = 1;
+
+
+ /* Are we off the end of the file ? */
+ end_index = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ if (page->index >= end_index) {
+ if ((page->index >= end_index + 1) ||
+ !(i_size_read(inode) & (PAGE_CACHE_SIZE - 1))) {
+ err = -EIO;
+ goto error;
+ }
+ }
+
+ offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
+ end_offset = min_t(unsigned long long,
+ offset + PAGE_CACHE_SIZE, i_size_read(inode));
+
+ bh = head = page_buffers(page);
+ iomp = NULL;
+
+ len = bh->b_size;
+ do {
+ if (offset >= end_offset)
+ break;
+ if (!buffer_uptodate(bh))
+ uptodate = 0;
+ if (!(PageUptodate(page) || buffer_uptodate(bh)) && !startio)
+ continue;
+
+ if (iomp) {
+ iomp = xfs_offset_to_map(page, &iomap, p_offset);
+ }
+
+ /*
+ * First case, map an unwritten extent and prepare for
+ * extent state conversion transaction on completion.
+ */
+ if (buffer_unwritten(bh)) {
+ if (!iomp) {
+ err = xfs_map_blocks(inode, offset, len, &iomap,
+ BMAPI_READ|BMAPI_IGNSTATE);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp && startio) {
+ if (!bh->b_end_io) {
+ err = xfs_map_unwritten(inode, page,
+ head, bh, p_offset,
+ inode->i_blkbits, iomp,
+ wbc, startio, unmapped);
+ if (err) {
+ goto error;
+ }
+ }
+ bh_arr[cnt++] = bh;
+ page_dirty = 0;
+ }
+ /*
+ * Second case, allocate space for a delalloc buffer.
+ * We can return EAGAIN here in the release page case.
+ */
+ } else if (buffer_delay(bh)) {
+ if (!iomp) {
+ err = xfs_map_blocks(inode, offset, len, &iomap,
+ BMAPI_ALLOCATE | flags);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp) {
+ xfs_map_at_offset(page, bh, p_offset,
+ inode->i_blkbits, iomp);
+ if (startio) {
+ bh_arr[cnt++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ page_dirty = 0;
+ }
+ } else if ((buffer_uptodate(bh) || PageUptodate(page)) &&
+ (unmapped || startio)) {
+
+ if (!buffer_mapped(bh)) {
+ int size;
+
+ /*
+ * Getting here implies an unmapped buffer
+ * was found, and we are in a path where we
+ * need to write the whole page out.
+ */
+ if (!iomp) {
+ size = xfs_probe_unmapped_cluster(
+ inode, page, bh, head);
+ err = xfs_map_blocks(inode, offset,
+ size, &iomap,
+ BMAPI_WRITE|BMAPI_MMAP);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp) {
+ xfs_map_at_offset(page,
+ bh, p_offset,
+ inode->i_blkbits, iomp);
+ if (startio) {
+ bh_arr[cnt++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ page_dirty = 0;
+ }
+ } else if (startio) {
+ if (buffer_uptodate(bh) &&
+ !test_and_set_bit(BH_Lock, &bh->b_state)) {
+ bh_arr[cnt++] = bh;
+ page_dirty = 0;
+ }
+ }
+ }
+ } while (offset += len, p_offset += len,
+ ((bh = bh->b_this_page) != head));
+
+ if (uptodate && bh == head)
+ SetPageUptodate(page);
+
+ if (startio)
+ xfs_submit_page(page, bh_arr, cnt);
+
+ if (iomp) {
+ xfs_cluster_write(inode, page->index + 1, iomp, wbc,
+ startio, unmapped);
+ }
+
+ return page_dirty;
+
+error:
+ for (i = 0; i < cnt; i++) {
+ unlock_buffer(bh_arr[i]);
+ }
+
+ /*
+ * If it's delalloc and we have nowhere to put it,
+ * throw it away, unless the lower layers told
+ * us to try again.
+ */
+ if (err != -EAGAIN) {
+ if (!unmapped) {
+ block_invalidatepage(page, 0);
+ }
+ ClearPageUptodate(page);
+ }
+ return err;
+}
+
+STATIC int
+linvfs_get_block_core(
+ struct inode *inode,
+ sector_t iblock,
+ unsigned long blocks,
+ struct buffer_head *bh_result,
+ int create,
+ int direct,
+ bmapi_flags_t flags)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ xfs_iomap_t iomap;
+ int retpbbm = 1;
+ int error;
+ ssize_t size;
+ loff_t offset = (loff_t)iblock << inode->i_blkbits;
+
+ if (blocks)
+ size = blocks << inode->i_blkbits;
+ else
+ size = 1 << inode->i_blkbits;
+
+ VOP_BMAP(vp, offset, size,
+ create ? flags : BMAPI_READ, &iomap, &retpbbm, error);
+ if (error)
+ return -error;
+
+ if (retpbbm == 0)
+ return 0;
+
+ if (iomap.iomap_bn != IOMAP_DADDR_NULL) {
+ xfs_daddr_t bn;
+ loff_t delta;
+
+ /* For unwritten extents do not report a disk address in
+ * the read case (treat as if we're reading into a hole).
+ */
+ if (create || !(iomap.iomap_flags & IOMAP_UNWRITTEN)) {
+ delta = offset - iomap.iomap_offset;
+ delta >>= inode->i_blkbits;
+
+ bn = iomap.iomap_bn >> (inode->i_blkbits - BBSHIFT);
+ bn += delta;
+
+ bh_result->b_blocknr = bn;
+ bh_result->b_bdev = iomap.iomap_target->pbr_bdev;
+ set_buffer_mapped(bh_result);
+ }
+ if (create && (iomap.iomap_flags & IOMAP_UNWRITTEN)) {
+ if (direct)
+ bh_result->b_private = inode;
+ set_buffer_unwritten(bh_result);
+ set_buffer_delay(bh_result);
+ }
+ }
+
+ /* If this is a realtime file, data might be on a different device */
+ bh_result->b_bdev = iomap.iomap_target->pbr_bdev;
+
+ /* If we previously allocated a block out beyond eof and
+ * we are now coming back to use it then we will need to
+ * flag it as new even if it has a disk address.
+ */
+ if (create &&
+ ((!buffer_mapped(bh_result) && !buffer_uptodate(bh_result)) ||
+ (offset >= i_size_read(inode)) || (iomap.iomap_flags & IOMAP_NEW))) {
+ set_buffer_new(bh_result);
+ }
+
+ if (iomap.iomap_flags & IOMAP_DELAY) {
+ if (unlikely(direct))
+ BUG();
+ if (create) {
+ set_buffer_mapped(bh_result);
+ set_buffer_uptodate(bh_result);
+ }
+ bh_result->b_bdev = iomap.iomap_target->pbr_bdev;
+ set_buffer_delay(bh_result);
+ }
+
+ if (blocks) {
+ loff_t iosize;
+ iosize = (iomap.iomap_bsize - iomap.iomap_delta);
+ bh_result->b_size =
+ (ssize_t)min(iosize, (loff_t)(blocks << inode->i_blkbits));
+ }
+
+ return 0;
+}
+
+int
+linvfs_get_block(
+ struct inode *inode,
+ sector_t iblock,
+ struct buffer_head *bh_result,
+ int create)
+{
+ return linvfs_get_block_core(inode, iblock, 0, bh_result,
+ create, 0, BMAPI_WRITE);
+}
+
+STATIC int
+linvfs_get_block_sync(
+ struct inode *inode,
+ sector_t iblock,
+ struct buffer_head *bh_result,
+ int create)
+{
+ return linvfs_get_block_core(inode, iblock, 0, bh_result,
+ create, 0, BMAPI_SYNC|BMAPI_WRITE);
+}
+
+STATIC int
+linvfs_get_blocks_direct(
+ struct inode *inode,
+ sector_t iblock,
+ unsigned long max_blocks,
+ struct buffer_head *bh_result,
+ int create)
+{
+ return linvfs_get_block_core(inode, iblock, max_blocks, bh_result,
+ create, 1, BMAPI_WRITE|BMAPI_DIRECT);
+}
+
+STATIC ssize_t
+linvfs_direct_IO(
+ int rw,
+ struct kiocb *iocb,
+ const struct iovec *iov,
+ loff_t offset,
+ unsigned long nr_segs)
+{
+ struct file *file = iocb->ki_filp;
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ xfs_iomap_t iomap;
+ int maps = 1;
+ int error;
+
+ VOP_BMAP(vp, offset, 0, BMAPI_DEVICE, &iomap, &maps, error);
+ if (error)
+ return -error;
+
+ return blockdev_direct_IO_no_locking(rw, iocb, inode,
+ iomap.iomap_target->pbr_bdev,
+ iov, offset, nr_segs,
+ linvfs_get_blocks_direct,
+ linvfs_unwritten_convert_direct);
+}
+
+
+STATIC sector_t
+linvfs_bmap(
+ struct address_space *mapping,
+ sector_t block)
+{
+ struct inode *inode = (struct inode *)mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ vn_trace_entry(vp, "linvfs_bmap", (inst_t *)__return_address);
+
+ VOP_RWLOCK(vp, VRWLOCK_READ);
+ VOP_FLUSH_PAGES(vp, (xfs_off_t)0, -1, 0, FI_REMAPF, error);
+ VOP_RWUNLOCK(vp, VRWLOCK_READ);
+ return generic_block_bmap(mapping, block, linvfs_get_block);
+}
+
+STATIC int
+linvfs_readpage(
+ struct file *unused,
+ struct page *page)
+{
+ return mpage_readpage(page, linvfs_get_block);
+}
+
+STATIC int
+linvfs_readpages(
+ struct file *unused,
+ struct address_space *mapping,
+ struct list_head *pages,
+ unsigned nr_pages)
+{
+ return mpage_readpages(mapping, pages, nr_pages, linvfs_get_block);
+}
+
+STATIC void
+xfs_count_page_state(
+ struct page *page,
+ int *delalloc,
+ int *unmapped,
+ int *unwritten)
+{
+ struct buffer_head *bh, *head;
+
+ *delalloc = *unmapped = *unwritten = 0;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_uptodate(bh) && !buffer_mapped(bh))
+ (*unmapped) = 1;
+ else if (buffer_unwritten(bh) && !buffer_delay(bh))
+ clear_buffer_unwritten(bh);
+ else if (buffer_unwritten(bh))
+ (*unwritten) = 1;
+ else if (buffer_delay(bh))
+ (*delalloc) = 1;
+ } while ((bh = bh->b_this_page) != head);
+}
+
+
+/*
+ * writepage: Called from one of two places:
+ *
+ * 1. we are flushing a delalloc buffer head.
+ *
+ * 2. we are writing out a dirty page. Typically the page dirty
+ * state is cleared before we get here. In this case it is
+ * conceivable we have no buffer heads.
+ *
+ * For delalloc space on the page we need to allocate space and
+ * flush it. For unmapped buffer heads on the page we should
+ * allocate space if the page is uptodate. For any other dirty
+ * buffer heads on the page we should flush them.
+ *
+ * If we detect that a transaction would be required to flush
+ * the page, we have to check the process flags first, if we
+ * are already in a transaction or disk I/O during allocations
+ * is off, we need to fail the writepage and redirty the page.
+ */
+
+STATIC int
+linvfs_writepage(
+ struct page *page,
+ struct writeback_control *wbc)
+{
+ int error;
+ int need_trans;
+ int delalloc, unmapped, unwritten;
+ struct inode *inode = page->mapping->host;
+
+ xfs_page_trace(XFS_WRITEPAGE_ENTER, inode, page, 0);
+
+ /*
+ * We need a transaction if:
+ * 1. There are delalloc buffers on the page
+ * 2. The page is uptodate and we have unmapped buffers
+ * 3. The page is uptodate and we have no buffers
+ * 4. There are unwritten buffers on the page
+ */
+
+ if (!page_has_buffers(page)) {
+ unmapped = 1;
+ need_trans = 1;
+ } else {
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+ if (!PageUptodate(page))
+ unmapped = 0;
+ need_trans = delalloc + unmapped + unwritten;
+ }
+
+ /*
+ * If we need a transaction and the process flags say
+ * we are already in a transaction, or no IO is allowed,
+ * then mark the page dirty again and leave the page
+ * as is.
+ */
+ if (PFLAGS_TEST_FSTRANS() && need_trans)
+ goto out_fail;
+
+ /*
+ * Delay hooking up buffer heads until we have
+ * made our go/no-go decision.
+ */
+ if (!page_has_buffers(page))
+ create_empty_buffers(page, 1 << inode->i_blkbits, 0);
+
+ /*
+ * Convert delayed allocate, unwritten or unmapped space
+ * to real space and flush out to disk.
+ */
+ error = xfs_page_state_convert(inode, page, wbc, 1, unmapped);
+ if (error == -EAGAIN)
+ goto out_fail;
+ if (unlikely(error < 0))
+ goto out_unlock;
+
+ return 0;
+
+out_fail:
+ set_page_dirty(page);
+ unlock_page(page);
+ return 0;
+out_unlock:
+ unlock_page(page);
+ return error;
+}
+
+/*
+ * Called to move a page into cleanable state - and from there
+ * to be released. Possibly the page is already clean. We always
+ * have buffer heads in this call.
+ *
+ * Returns 1 if the page is ok to release, 0 otherwise.
+ *
+ * Possible scenarios are:
+ *
+ * 1. We are being called to release a page which has been written
+ * to via regular I/O. buffer heads will be dirty and possibly
+ * delalloc. If no delalloc buffer heads in this case then we
+ * can just return zero.
+ *
+ * 2. We are called to release a page which has been written via
+ * mmap; all we need to do is ensure there is no delalloc
+ * state in the buffer heads. If there is none, we can let the
+ * caller free them and we should come back later via writepage.
+ */
+STATIC int
+linvfs_release_page(
+ struct page *page,
+ int gfp_mask)
+{
+ struct inode *inode = page->mapping->host;
+ int dirty, delalloc, unmapped, unwritten;
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_ALL,
+ .nr_to_write = 1,
+ };
+
+ xfs_page_trace(XFS_RELEASEPAGE_ENTER, inode, page, gfp_mask);
+
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+ if (!delalloc && !unwritten)
+ goto free_buffers;
+
+ if (!(gfp_mask & __GFP_FS))
+ return 0;
+
+ /* If we are already inside a transaction or the thread cannot
+ * do I/O, we cannot release this page.
+ */
+ if (PFLAGS_TEST_FSTRANS())
+ return 0;
+
+ /*
+ * Convert delalloc space to real space; do not flush the
+ * data out to disk, that will be done by the caller.
+ * We never need to allocate space here - we will always
+ * come back to writepage in that case.
+ */
+ dirty = xfs_page_state_convert(inode, page, &wbc, 0, 0);
+ if (dirty == 0 && !unwritten)
+ goto free_buffers;
+ return 0;
+
+free_buffers:
+ return try_to_free_buffers(page);
+}
+
+STATIC int
+linvfs_prepare_write(
+ struct file *file,
+ struct page *page,
+ unsigned int from,
+ unsigned int to)
+{
+ if (file && (file->f_flags & O_SYNC)) {
+ return block_prepare_write(page, from, to,
+ linvfs_get_block_sync);
+ } else {
+ return block_prepare_write(page, from, to,
+ linvfs_get_block);
+ }
+}
+
+struct address_space_operations linvfs_aops = {
+ .readpage = linvfs_readpage,
+ .readpages = linvfs_readpages,
+ .writepage = linvfs_writepage,
+ .sync_page = block_sync_page,
+ .releasepage = linvfs_release_page,
+ .prepare_write = linvfs_prepare_write,
+ .commit_write = generic_commit_write,
+ .bmap = linvfs_bmap,
+ .direct_IO = linvfs_direct_IO,
+};
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * The xfs_buf.c code provides an abstract buffer cache model on top
+ * of the Linux page cache. Cached metadata blocks for a file system
+ * are hashed to the inode for the block device. xfs_buf.c assembles
+ * buffers (xfs_buf_t) on demand to aggregate such cached pages for I/O.
+ *
+ * Written by Steve Lord, Jim Mostek, Russell Cattelan
+ * and Rajagopal Ananthanarayanan ("ananth") at SGI.
+ *
+ */
+
+#include <linux/stddef.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include <linux/init.h>
+#include <linux/vmalloc.h>
+#include <linux/bio.h>
+#include <linux/sysctl.h>
+#include <linux/proc_fs.h>
+#include <linux/workqueue.h>
+#include <linux/suspend.h>
+#include <linux/percpu.h>
+
+#include "xfs_linux.h"
+
+#ifndef GFP_READAHEAD
+#define GFP_READAHEAD (__GFP_NOWARN|__GFP_NORETRY)
+#endif
+
+/*
+ * File wide globals
+ */
+
+STATIC kmem_cache_t *pagebuf_cache;
+STATIC void pagebuf_daemon_wakeup(void);
+STATIC void pagebuf_delwri_queue(xfs_buf_t *, int);
+STATIC struct workqueue_struct *pagebuf_logio_workqueue;
+STATIC struct workqueue_struct *pagebuf_dataio_workqueue;
+
+/*
+ * Pagebuf debugging
+ */
+
+#ifdef PAGEBUF_TRACE
+void
+pagebuf_trace(
+ xfs_buf_t *pb,
+ char *id,
+ void *data,
+ void *ra)
+{
+ ktrace_enter(pagebuf_trace_buf,
+ pb, id,
+ (void *)(unsigned long)pb->pb_flags,
+ (void *)(unsigned long)pb->pb_hold.counter,
+ (void *)(unsigned long)pb->pb_sema.count.counter,
+ (void *)current,
+ data, ra,
+ (void *)(unsigned long)((pb->pb_file_offset>>32) & 0xffffffff),
+ (void *)(unsigned long)(pb->pb_file_offset & 0xffffffff),
+ (void *)(unsigned long)pb->pb_buffer_length,
+ NULL, NULL, NULL, NULL, NULL);
+}
+ktrace_t *pagebuf_trace_buf;
+#define PAGEBUF_TRACE_SIZE 4096
+#define PB_TRACE(pb, id, data) \
+ pagebuf_trace(pb, id, (void *)data, (void *)__builtin_return_address(0))
+#else
+#define PB_TRACE(pb, id, data) do { } while (0)
+#endif
+
+#ifdef PAGEBUF_LOCK_TRACKING
+# define PB_SET_OWNER(pb) ((pb)->pb_last_holder = current->pid)
+# define PB_CLEAR_OWNER(pb) ((pb)->pb_last_holder = -1)
+# define PB_GET_OWNER(pb) ((pb)->pb_last_holder)
+#else
+# define PB_SET_OWNER(pb) do { } while (0)
+# define PB_CLEAR_OWNER(pb) do { } while (0)
+# define PB_GET_OWNER(pb) do { } while (0)
+#endif
+
+/*
+ * Pagebuf allocation / freeing.
+ */
+
+#define pb_to_gfp(flags) \
+ (((flags) & PBF_READ_AHEAD) ? GFP_READAHEAD : \
+ ((flags) & PBF_DONT_BLOCK) ? GFP_NOFS : GFP_KERNEL)
+
+#define pb_to_km(flags) \
+ (((flags) & PBF_DONT_BLOCK) ? KM_NOFS : KM_SLEEP)
+
+
+#define pagebuf_allocate(flags) \
+ kmem_zone_alloc(pagebuf_cache, pb_to_km(flags))
+#define pagebuf_deallocate(pb) \
+ kmem_zone_free(pagebuf_cache, (pb));
+
+/*
+ * Pagebuf hashing
+ */
+
+#define NBITS 8
+#define NHASH (1<<NBITS)
+
+typedef struct {
+ struct list_head pb_hash;
+ spinlock_t pb_hash_lock;
+} pb_hash_t;
+
+STATIC pb_hash_t pbhash[NHASH];
+#define pb_hash(pb) &pbhash[pb->pb_hash_index]
+
+STATIC int
+_bhash(
+ struct block_device *bdev,
+ loff_t base)
+{
+ int bit, hval;
+
+ base >>= 9;
+ base ^= (unsigned long)bdev / L1_CACHE_BYTES;
+ for (bit = hval = 0; base && bit < sizeof(base) * 8; bit += NBITS) {
+ hval ^= (int)base & (NHASH-1);
+ base >>= NBITS;
+ }
+ return hval;
+}
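+
+/*
+ * Worked example (illustrative numbers, ignoring the bdev term): a
+ * base of 0x12345600 bytes becomes 0x91A2B after the sector shift;
+ * the folding loop then xors the 8-bit chunks 0x2B, 0x1A and 0x09,
+ * giving hval = 0x38, so all parts of the offset affect the hash.
+ */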
+
+/*
+ * Mapping of multi-page buffers into contiguous virtual space
+ */
+
+typedef struct a_list {
+ void *vm_addr;
+ struct a_list *next;
+} a_list_t;
+
+STATIC a_list_t *as_free_head;
+STATIC int as_list_len;
+STATIC spinlock_t as_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Try to batch vunmaps because they are costly.
+ */
+STATIC void
+free_address(
+ void *addr)
+{
+ a_list_t *aentry;
+
+ aentry = kmalloc(sizeof(a_list_t), GFP_ATOMIC);
+ if (aentry) {
+ spin_lock(&as_lock);
+ aentry->next = as_free_head;
+ aentry->vm_addr = addr;
+ as_free_head = aentry;
+ as_list_len++;
+ spin_unlock(&as_lock);
+ } else {
+ vunmap(addr);
+ }
+}
+
+STATIC void
+purge_addresses(void)
+{
+ a_list_t *aentry, *old;
+
+ if (as_free_head == NULL)
+ return;
+
+ spin_lock(&as_lock);
+ aentry = as_free_head;
+ as_free_head = NULL;
+ as_list_len = 0;
+ spin_unlock(&as_lock);
+
+ while ((old = aentry) != NULL) {
+ vunmap(aentry->vm_addr);
+ aentry = aentry->next;
+ kfree(old);
+ }
+}
+
+/*
+ * Internal pagebuf object manipulation
+ */
+
+STATIC void
+_pagebuf_initialize(
+ xfs_buf_t *pb,
+ xfs_buftarg_t *target,
+ loff_t range_base,
+ size_t range_length,
+ page_buf_flags_t flags)
+{
+ /*
+ * We don't want certain flags to appear in pb->pb_flags.
+ */
+ flags &= ~(PBF_LOCK|PBF_MAPPED|PBF_DONT_BLOCK|PBF_READ_AHEAD);
+
+ memset(pb, 0, sizeof(xfs_buf_t));
+ atomic_set(&pb->pb_hold, 1);
+ init_MUTEX_LOCKED(&pb->pb_iodonesema);
+ INIT_LIST_HEAD(&pb->pb_list);
+ INIT_LIST_HEAD(&pb->pb_hash_list);
+ init_MUTEX_LOCKED(&pb->pb_sema); /* held, no waiters */
+ PB_SET_OWNER(pb);
+ pb->pb_target = target;
+ pb->pb_file_offset = range_base;
+ /*
+ * Set buffer_length and count_desired to the same value initially.
+ * I/O routines should use count_desired, which will be the same in
+ * most cases but may be reset (e.g. XFS recovery).
+ */
+ pb->pb_buffer_length = pb->pb_count_desired = range_length;
+ pb->pb_flags = flags | PBF_NONE;
+ pb->pb_bn = XFS_BUF_DADDR_NULL;
+ atomic_set(&pb->pb_pin_count, 0);
+ init_waitqueue_head(&pb->pb_waiters);
+
+ XFS_STATS_INC(pb_create);
+ PB_TRACE(pb, "initialize", target);
+}
+
+/*
+ * Allocate a page array capable of holding a specified number
+ * of pages, and point the page buf at it.
+ */
+STATIC int
+_pagebuf_get_pages(
+ xfs_buf_t *pb,
+ int page_count,
+ page_buf_flags_t flags)
+{
+ /* Make sure that we have a page list */
+ if (pb->pb_pages == NULL) {
+ pb->pb_offset = page_buf_poff(pb->pb_file_offset);
+ pb->pb_page_count = page_count;
+ if (page_count <= PB_PAGES) {
+ pb->pb_pages = pb->pb_page_array;
+ } else {
+ pb->pb_pages = kmem_alloc(sizeof(struct page *) *
+ page_count, pb_to_km(flags));
+ if (pb->pb_pages == NULL)
+ return -ENOMEM;
+ }
+ memset(pb->pb_pages, 0, sizeof(struct page *) * page_count);
+ }
+ return 0;
+}
+
+/*
+ * Frees pb_pages if it was malloced.
+ */
+STATIC void
+_pagebuf_free_pages(
+ xfs_buf_t *bp)
+{
+ if (bp->pb_pages != bp->pb_page_array) {
+ kmem_free(bp->pb_pages,
+ bp->pb_page_count * sizeof(struct page *));
+ }
+}
+
+/*
+ * Releases the specified buffer.
+ *
+ * The modification state of any associated pages is left unchanged.
+ * The buffer must not be on any hash - use pagebuf_rele instead for
+ * hashed and refcounted buffers.
+ */
+void
+pagebuf_free(
+ xfs_buf_t *bp)
+{
+ PB_TRACE(bp, "free", 0);
+
+ ASSERT(list_empty(&bp->pb_hash_list));
+
+ if (bp->pb_flags & _PBF_PAGE_CACHE) {
+ uint i;
+
+ if ((bp->pb_flags & PBF_MAPPED) && (bp->pb_page_count > 1))
+ free_address(bp->pb_addr - bp->pb_offset);
+
+ for (i = 0; i < bp->pb_page_count; i++)
+ page_cache_release(bp->pb_pages[i]);
+ _pagebuf_free_pages(bp);
+ } else if (bp->pb_flags & _PBF_KMEM_ALLOC) {
+ /*
+ * XXX(hch): bp->pb_count_desired might be incorrect (see
+ * pagebuf_associate_memory for details), but fortunately
+ * the Linux version of kmem_free ignores the len argument..
+ */
+ kmem_free(bp->pb_addr, bp->pb_count_desired);
+ _pagebuf_free_pages(bp);
+ }
+
+ pagebuf_deallocate(bp);
+}
+
+/*
+ * Finds all pages for the buffer in question and builds its page list.
+ */
+STATIC int
+_pagebuf_lookup_pages(
+ xfs_buf_t *bp,
+ uint flags)
+{
+ struct address_space *mapping = bp->pb_target->pbr_mapping;
+ unsigned int sectorshift = bp->pb_target->pbr_sshift;
+ size_t blocksize = bp->pb_target->pbr_bsize;
+ size_t size = bp->pb_count_desired;
+ size_t nbytes, offset;
+ int gfp_mask = pb_to_gfp(flags);
+ unsigned short page_count, i;
+ pgoff_t first;
+ loff_t end;
+ int error;
+
+ end = bp->pb_file_offset + bp->pb_buffer_length;
+ page_count = page_buf_btoc(end) - page_buf_btoct(bp->pb_file_offset);
+
+ error = _pagebuf_get_pages(bp, page_count, flags);
+ if (unlikely(error))
+ return error;
+ bp->pb_flags |= _PBF_PAGE_CACHE;
+
+ offset = bp->pb_offset;
+ first = bp->pb_file_offset >> PAGE_CACHE_SHIFT;
+
+ for (i = 0; i < bp->pb_page_count; i++) {
+ struct page *page;
+ uint retries = 0;
+
+ retry:
+ page = find_or_create_page(mapping, first + i, gfp_mask);
+ if (unlikely(page == NULL)) {
+ if (flags & PBF_READ_AHEAD) {
+ bp->pb_page_count = i;
+ for (i = 0; i < bp->pb_page_count; i++)
+ unlock_page(bp->pb_pages[i]);
+ return -ENOMEM;
+ }
+
+ /*
+ * This could deadlock.
+ *
+ * But until all the XFS lowlevel code is revamped to
+ * handle buffer allocation failures we can't do much.
+ */
+ if (!(++retries % 100)) {
+ printk(KERN_ERR "possibly deadlocking in %s\n",
+ __FUNCTION__);
+ }
+
+ XFS_STATS_INC(pb_page_retries);
+ pagebuf_daemon_wakeup();
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(10);
+ goto retry;
+ }
+
+ XFS_STATS_INC(pb_page_found);
+
+ nbytes = min_t(size_t, size, PAGE_CACHE_SIZE - offset);
+ size -= nbytes;
+
+ if (!PageUptodate(page)) {
+ page_count--;
+ if (blocksize == PAGE_CACHE_SIZE) {
+ if (flags & PBF_READ)
+ bp->pb_locked = 1;
+ } else if (!PagePrivate(page)) {
+ unsigned long j, range;
+
+ /*
+ * In this case page->private holds a bitmap
+ * of uptodate sectors within the page
+ */
+ ASSERT(blocksize < PAGE_CACHE_SIZE);
+ range = (offset + nbytes) >> sectorshift;
+ for (j = offset >> sectorshift; j < range; j++)
+ if (!test_bit(j, &page->private))
+ break;
+ if (j == range)
+ page_count++;
+ }
+ }
+
+ bp->pb_pages[i] = page;
+ offset = 0;
+ }
+
+ if (!bp->pb_locked) {
+ for (i = 0; i < bp->pb_page_count; i++)
+ unlock_page(bp->pb_pages[i]);
+ }
+
+ if (page_count) {
+ /* if we have any uptodate pages, mark that in the buffer */
+ bp->pb_flags &= ~PBF_NONE;
+
+ /* if some pages aren't uptodate, mark that in the buffer */
+ if (page_count != bp->pb_page_count)
+ bp->pb_flags |= PBF_PARTIAL;
+ }
+
+ PB_TRACE(bp, "lookup_pages", (long)page_count);
+ return error;
+}
+
+/*
+ * Map buffer into kernel address-space if necessary.
+ */
+STATIC int
+_pagebuf_map_pages(
+ xfs_buf_t *bp,
+ uint flags)
+{
+ /* A single page buffer is always mappable */
+ if (bp->pb_page_count == 1) {
+ bp->pb_addr = page_address(bp->pb_pages[0]) + bp->pb_offset;
+ bp->pb_flags |= PBF_MAPPED;
+ } else if (flags & PBF_MAPPED) {
+ if (as_list_len > 64)
+ purge_addresses();
+ bp->pb_addr = vmap(bp->pb_pages, bp->pb_page_count,
+ VM_MAP, PAGE_KERNEL);
+ if (unlikely(bp->pb_addr == NULL))
+ return -ENOMEM;
+ bp->pb_addr += bp->pb_offset;
+ bp->pb_flags |= PBF_MAPPED;
+ }
+
+ return 0;
+}
+
+/*
+ * Finding and Reading Buffers
+ */
+
+/*
+ * _pagebuf_find
+ *
+ * Looks up, and creates if absent, a lockable buffer for
+ * a given range of an inode. The buffer is returned
+ * locked. If other overlapping buffers exist, they are
+ * released before the new buffer is created and locked,
+ * which may imply that this call will block until those buffers
+ * are unlocked. No I/O is implied by this call.
+ */
+STATIC xfs_buf_t *
+_pagebuf_find( /* find buffer for block */
+ xfs_buftarg_t *target,/* target for block */
+ loff_t ioff, /* starting offset of range */
+ size_t isize, /* length of range */
+ page_buf_flags_t flags, /* PBF_TRYLOCK */
+ xfs_buf_t *new_pb)/* newly allocated buffer */
+{
+ loff_t range_base;
+ size_t range_length;
+ int hval;
+ pb_hash_t *h;
+ xfs_buf_t *pb, *n;
+ int not_locked;
+
+ range_base = (ioff << BBSHIFT);
+ range_length = (isize << BBSHIFT);
+
+ /* Ensure we never do IOs smaller than the sector size */
+ BUG_ON(range_length < (1 << target->pbr_sshift));
+
+ /* Ensure we never do IOs that are not sector aligned */
+ BUG_ON(range_base & (loff_t)target->pbr_smask);
+
+ hval = _bhash(target->pbr_bdev, range_base);
+ h = &pbhash[hval];
+
+ spin_lock(&h->pb_hash_lock);
+ list_for_each_entry_safe(pb, n, &h->pb_hash, pb_hash_list) {
+ if (pb->pb_target == target &&
+ pb->pb_file_offset == range_base &&
+ pb->pb_buffer_length == range_length) {
+ /* If we look at something, bring it to the
+ * front of the list for next time
+ */
+ atomic_inc(&pb->pb_hold);
+ list_move(&pb->pb_hash_list, &h->pb_hash);
+ goto found;
+ }
+ }
+
+ /* No match found */
+ if (new_pb) {
+ _pagebuf_initialize(new_pb, target, range_base,
+ range_length, flags);
+ new_pb->pb_hash_index = hval;
+ list_add(&new_pb->pb_hash_list, &h->pb_hash);
+ } else {
+ XFS_STATS_INC(pb_miss_locked);
+ }
+
+ spin_unlock(&h->pb_hash_lock);
+ return (new_pb);
+
+found:
+ spin_unlock(&h->pb_hash_lock);
+
+ /* Attempt to get the semaphore without sleeping,
+ * if this does not work then we need to drop the
+ * spinlock and do a hard attempt on the semaphore.
+ */
+ not_locked = down_trylock(&pb->pb_sema);
+ if (not_locked) {
+ if (!(flags & PBF_TRYLOCK)) {
+ /* wait for buffer ownership */
+ PB_TRACE(pb, "get_lock", 0);
+ pagebuf_lock(pb);
+ XFS_STATS_INC(pb_get_locked_waited);
+ } else {
+ /* We asked for a trylock and failed; no need
+ * to look at file offset and length here - we
+ * know that this pagebuf at least overlaps our
+ * pagebuf and is locked, therefore our buffer
+ * either does not exist or is this buffer.
+ */
+
+ pagebuf_rele(pb);
+ XFS_STATS_INC(pb_busy_locked);
+ return (NULL);
+ }
+ } else {
+ /* trylock worked */
+ PB_SET_OWNER(pb);
+ }
+
+ if (pb->pb_flags & PBF_STALE)
+ pb->pb_flags &= PBF_MAPPED;
+ PB_TRACE(pb, "got_lock", 0);
+ XFS_STATS_INC(pb_get_locked);
+ return (pb);
+}
+
+
+/*
+ * pagebuf_find
+ *
+ * pagebuf_find returns a buffer matching the specified range of
+ * data for the specified target, if any of the relevant blocks
+ * are in memory. The buffer may have unallocated holes, if
+ * some, but not all, of the blocks are in memory. Even where
+ * pages are present in the buffer, not all of every page may be
+ * valid.
+ */
+xfs_buf_t *
+pagebuf_find( /* find buffer for block */
+ /* if the block is in memory */
+ xfs_buftarg_t *target,/* target for block */
+ loff_t ioff, /* starting offset of range */
+ size_t isize, /* length of range */
+ page_buf_flags_t flags) /* PBF_TRYLOCK */
+{
+ return _pagebuf_find(target, ioff, isize, flags, NULL);
+}
+
+/*
+ * pagebuf_get
+ *
+ * pagebuf_get assembles a buffer covering the specified range.
+ * Some or all of the blocks in the range may be valid. Storage
+ * in memory for all portions of the buffer will be allocated,
+ * although backing storage may not be. If PBF_READ is set in
+ * flags, pagebuf_iostart is called also.
+ */
+xfs_buf_t *
+pagebuf_get( /* allocate a buffer */
+ xfs_buftarg_t *target,/* target for buffer */
+ loff_t ioff, /* starting offset of range */
+ size_t isize, /* length of range */
+ page_buf_flags_t flags) /* PBF_TRYLOCK */
+{
+ xfs_buf_t *pb, *new_pb;
+ int error = 0, i;
+
+ new_pb = pagebuf_allocate(flags);
+ if (unlikely(!new_pb))
+ return NULL;
+
+ pb = _pagebuf_find(target, ioff, isize, flags, new_pb);
+ if (pb == new_pb) {
+ error = _pagebuf_lookup_pages(pb, flags);
+ if (unlikely(error)) {
+ printk(KERN_WARNING
+ "pagebuf_get: failed to lookup pages\n");
+ goto no_buffer;
+ }
+ } else {
+ pagebuf_deallocate(new_pb);
+ if (unlikely(pb == NULL))
+ return NULL;
+ }
+
+ for (i = 0; i < pb->pb_page_count; i++)
+ mark_page_accessed(pb->pb_pages[i]);
+
+ if (!(pb->pb_flags & PBF_MAPPED)) {
+ error = _pagebuf_map_pages(pb, flags);
+ if (unlikely(error)) {
+ printk(KERN_WARNING
+ "pagebuf_get: failed to map pages\n");
+ goto no_buffer;
+ }
+ }
+
+ XFS_STATS_INC(pb_get);
+
+ /*
+ * Always fill in the block number now, the mapped cases can do
+ * their own overlay of this later.
+ */
+ pb->pb_bn = ioff;
+ pb->pb_count_desired = pb->pb_buffer_length;
+
+ if (flags & PBF_READ) {
+ if (PBF_NOT_DONE(pb)) {
+ PB_TRACE(pb, "get_read", (unsigned long)flags);
+ XFS_STATS_INC(pb_get_read);
+ pagebuf_iostart(pb, flags);
+ } else if (flags & PBF_ASYNC) {
+ PB_TRACE(pb, "get_read_async", (unsigned long)flags);
+ /*
+ * Read ahead call which is already satisfied,
+ * drop the buffer
+ */
+ goto no_buffer;
+ } else {
+ PB_TRACE(pb, "get_read_done", (unsigned long)flags);
+ /* We do not want read in the flags */
+ pb->pb_flags &= ~PBF_READ;
+ }
+ } else {
+ PB_TRACE(pb, "get_write", (unsigned long)flags);
+ }
+
+ return pb;
+
+no_buffer:
+ if (flags & (PBF_LOCK | PBF_TRYLOCK))
+ pagebuf_unlock(pb);
+ pagebuf_rele(pb);
+ return NULL;
+}
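+
+/*
+ * Typical call (an illustrative sketch; the target and block numbers
+ * are hypothetical). The offset and length are both expressed in
+ * 512 byte basic blocks, and PBF_READ makes pagebuf_get issue the
+ * read itself via pagebuf_iostart:
+ *
+ *	xfs_buf_t *bp;
+ *
+ *	bp = pagebuf_get(target, blkno, numblks, PBF_LOCK | PBF_READ);
+ *	if (bp) {
+ *		... data at bp->pb_addr when PBF_MAPPED is set ...
+ *		pagebuf_unlock(bp);
+ *		pagebuf_rele(bp);
+ *	}
+ */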
+
+/*
+ * Create a skeletal pagebuf (no pages associated with it).
+ */
+xfs_buf_t *
+pagebuf_lookup(
+ xfs_buftarg_t *target,
+ loff_t ioff,
+ size_t isize,
+ page_buf_flags_t flags)
+{
+ xfs_buf_t *pb;
+
+ pb = pagebuf_allocate(flags);
+ if (pb) {
+ _pagebuf_initialize(pb, target, ioff, isize, flags);
+ }
+ return pb;
+}
+
+/*
+ * If we are not low on memory then do the readahead in a deadlock
+ * safe manner.
+ */
+void
+pagebuf_readahead(
+ xfs_buftarg_t *target,
+ loff_t ioff,
+ size_t isize,
+ page_buf_flags_t flags)
+{
+ struct backing_dev_info *bdi;
+
+ bdi = target->pbr_mapping->backing_dev_info;
+ if (bdi_read_congested(bdi))
+ return;
+ if (bdi_write_congested(bdi))
+ return;
+
+ flags |= (PBF_TRYLOCK|PBF_READ|PBF_ASYNC|PBF_READ_AHEAD);
+ pagebuf_get(target, ioff, isize, flags);
+}
+
+xfs_buf_t *
+pagebuf_get_empty(
+ size_t len,
+ xfs_buftarg_t *target)
+{
+ xfs_buf_t *pb;
+
+ pb = pagebuf_allocate(0);
+ if (pb)
+ _pagebuf_initialize(pb, target, 0, len, 0);
+ return pb;
+}
+
+static inline struct page *
+mem_to_page(
+ void *addr)
+{
+ if (((unsigned long)addr < VMALLOC_START) ||
+ ((unsigned long)addr >= VMALLOC_END)) {
+ return virt_to_page(addr);
+ } else {
+ return vmalloc_to_page(addr);
+ }
+}
+
+int
+pagebuf_associate_memory(
+ xfs_buf_t *pb,
+ void *mem,
+ size_t len)
+{
+ int rval;
+ int i = 0;
+ size_t ptr;
+ size_t end, end_cur;
+ off_t offset;
+ int page_count;
+
+ page_count = PAGE_CACHE_ALIGN(len) >> PAGE_CACHE_SHIFT;
+ offset = (off_t) mem - ((off_t)mem & PAGE_CACHE_MASK);
+ if (offset && (len > PAGE_CACHE_SIZE))
+ page_count++;
+
+ /* Free any previous set of page pointers */
+ if (pb->pb_pages)
+ _pagebuf_free_pages(pb);
+
+ pb->pb_pages = NULL;
+ pb->pb_addr = mem;
+
+ rval = _pagebuf_get_pages(pb, page_count, 0);
+ if (rval)
+ return rval;
+
+ pb->pb_offset = offset;
+ ptr = (size_t) mem & PAGE_CACHE_MASK;
+ end = PAGE_CACHE_ALIGN((size_t) mem + len);
+ end_cur = end;
+ /* set up first page */
+ pb->pb_pages[0] = mem_to_page(mem);
+
+ ptr += PAGE_CACHE_SIZE;
+ pb->pb_page_count = ++i;
+ while (ptr < end) {
+ pb->pb_pages[i] = mem_to_page((void *)ptr);
+ pb->pb_page_count = ++i;
+ ptr += PAGE_CACHE_SIZE;
+ }
+ pb->pb_locked = 0;
+
+ pb->pb_count_desired = pb->pb_buffer_length = len;
+ pb->pb_flags |= PBF_MAPPED;
+
+ return 0;
+}
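+
+/*
+ * Illustrative sketch (assumed usage, not part of this patch): callers
+ * that supply their own memory, such as users of the XFS_BUF_SET_PTR()
+ * macro in xfs_buf.h, attach it to an empty buffer along these lines:
+ *
+ *	bp = pagebuf_get_empty(len, target);
+ *	if (bp)
+ *		error = pagebuf_associate_memory(bp, mem, len);
+ */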
+
+xfs_buf_t *
+pagebuf_get_no_daddr(
+ size_t len,
+ xfs_buftarg_t *target)
+{
+ size_t malloc_len = len;
+ xfs_buf_t *bp;
+ void *data;
+ int error;
+
+ bp = pagebuf_allocate(0);
+ if (unlikely(bp == NULL))
+ goto fail;
+ _pagebuf_initialize(bp, target, 0, len, PBF_FORCEIO);
+
+ try_again:
+ data = kmem_alloc(malloc_len, KM_SLEEP | KM_MAYFAIL);
+ if (unlikely(data == NULL))
+ goto fail_free_buf;
+
+ /* check whether alignment matches.. */
+ if ((__psunsigned_t)data !=
+ ((__psunsigned_t)data & ~target->pbr_smask)) {
+ /* .. else double the size and try again */
+ kmem_free(data, malloc_len);
+ malloc_len <<= 1;
+ goto try_again;
+ }
+
+ error = pagebuf_associate_memory(bp, data, len);
+ if (error)
+ goto fail_free_mem;
+ bp->pb_flags |= _PBF_KMEM_ALLOC;
+
+ pagebuf_unlock(bp);
+
+ PB_TRACE(bp, "no_daddr", data);
+ return bp;
+ fail_free_mem:
+ kmem_free(data, malloc_len);
+ fail_free_buf:
+ pagebuf_free(bp);
+ fail:
+ return NULL;
+}
+
+/*
+ * pagebuf_hold
+ *
+ * Increment reference count on buffer, to hold the buffer concurrently
+ * with another thread which may release (free) the buffer asynchronously.
+ *
+ * The caller must already hold a reference to the buffer.
+ */
+void
+pagebuf_hold(
+ xfs_buf_t *pb)
+{
+ atomic_inc(&pb->pb_hold);
+ PB_TRACE(pb, "hold", 0);
+}
+
+/*
+ * pagebuf_rele
+ *
+ * pagebuf_rele releases a hold on the specified buffer.  If this is
+ * the last hold, pagebuf_rele calls pagebuf_free.
+ */
+void
+pagebuf_rele(
+ xfs_buf_t *pb)
+{
+ pb_hash_t *hash = pb_hash(pb);
+
+ PB_TRACE(pb, "rele", pb->pb_relse);
+
+ if (atomic_dec_and_lock(&pb->pb_hold, &hash->pb_hash_lock)) {
+ int do_free = 1;
+
+ if (pb->pb_relse) {
+ atomic_inc(&pb->pb_hold);
+ spin_unlock(&hash->pb_hash_lock);
+ (*(pb->pb_relse)) (pb);
+ spin_lock(&hash->pb_hash_lock);
+ do_free = 0;
+ }
+
+ if (pb->pb_flags & PBF_DELWRI) {
+ pb->pb_flags |= PBF_ASYNC;
+ atomic_inc(&pb->pb_hold);
+ pagebuf_delwri_queue(pb, 0);
+ do_free = 0;
+ } else if (pb->pb_flags & PBF_FS_MANAGED) {
+ do_free = 0;
+ }
+
+ if (do_free) {
+ list_del_init(&pb->pb_hash_list);
+ spin_unlock(&hash->pb_hash_lock);
+ pagebuf_free(pb);
+ } else {
+ spin_unlock(&hash->pb_hash_lock);
+ }
+ }
+}
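+
+/*
+ * Illustrative sketch: each pagebuf_hold() must be balanced by a
+ * pagebuf_rele().  A thread keeping a buffer alive across an
+ * asynchronous operation would do, schematically:
+ *
+ *	pagebuf_hold(pb);
+ *	... hand pb to the asynchronous path ...
+ *	pagebuf_rele(pb);
+ */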
+
+
+/*
+ * Mutual exclusion on buffers. Locking model:
+ *
+ * Buffers associated with inodes for which buffer locking
+ * is not enabled are not protected by semaphores, and are
+ * assumed to be exclusively owned by the caller. There is a
+ * spinlock in the buffer, used by the caller when concurrent
+ * access is possible.
+ */
+
+/*
+ * pagebuf_cond_lock
+ *
+ * pagebuf_cond_lock locks a buffer object, if it is not already
+ * locked.  Note that this in no way locks the underlying pages, so
+ * it is only useful for synchronizing concurrent use of page buffer
+ * objects, not for synchronizing independent access to the underlying
+ * pages.
+ */
+int
+pagebuf_cond_lock( /* lock buffer, if not locked */
+					/* (returns -EBUSY if locked) */
+ xfs_buf_t *pb)
+{
+ int locked;
+
+ locked = down_trylock(&pb->pb_sema) == 0;
+ if (locked) {
+ PB_SET_OWNER(pb);
+ }
+ PB_TRACE(pb, "cond_lock", (long)locked);
+ return(locked ? 0 : -EBUSY);
+}
+
+/*
+ * pagebuf_lock_value
+ *
+ * Return lock value for a pagebuf
+ */
+int
+pagebuf_lock_value(
+ xfs_buf_t *pb)
+{
+ return(atomic_read(&pb->pb_sema.count));
+}
+
+/*
+ * pagebuf_lock
+ *
+ * pagebuf_lock locks a buffer object. Note that this in no way
+ * locks the underlying pages, so it is only useful for synchronizing
+ * concurrent use of page buffer objects, not for synchronizing independent
+ * access to the underlying pages.
+ */
+int
+pagebuf_lock(
+ xfs_buf_t *pb)
+{
+ PB_TRACE(pb, "lock", 0);
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ down(&pb->pb_sema);
+ PB_SET_OWNER(pb);
+ PB_TRACE(pb, "locked", 0);
+ return 0;
+}
+
+/*
+ * pagebuf_unlock
+ *
+ * pagebuf_unlock releases the lock on the buffer object created by
+ * pagebuf_lock or pagebuf_cond_lock (not any
+ * pinning of underlying pages created by pagebuf_pin).
+ */
+void
+pagebuf_unlock( /* unlock buffer */
+ xfs_buf_t *pb) /* buffer to unlock */
+{
+ PB_CLEAR_OWNER(pb);
+ up(&pb->pb_sema);
+ PB_TRACE(pb, "unlock", 0);
+}
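+
+/*
+ * Illustrative sketch: the non-blocking variant is what scan loops
+ * use (see pagebuf_daemon below); a zero return means we now own the
+ * lock:
+ *
+ *	if (pagebuf_cond_lock(pb) == 0) {
+ *		... work on the locked buffer ...
+ *		pagebuf_unlock(pb);
+ *	}
+ */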
+
+
+/*
+ * Pinning Buffer Storage in Memory
+ */
+
+/*
+ * pagebuf_pin
+ *
+ * pagebuf_pin locks all of the memory represented by a buffer in
+ * memory. Multiple calls to pagebuf_pin and pagebuf_unpin, for
+ * the same or different buffers affecting a given page, will
+ * properly count the number of outstanding "pin" requests. The
+ * buffer may be released after the pagebuf_pin and a different
+ * buffer used when calling pagebuf_unpin, if desired.
+ * pagebuf_pin should be used by the file system when it wants to be
+ * assured that no attempt will be made to force the affected
+ * memory to disk. It does not assure that a given logical page
+ * will not be moved to a different physical page.
+ */
+void
+pagebuf_pin(
+ xfs_buf_t *pb)
+{
+ atomic_inc(&pb->pb_pin_count);
+ PB_TRACE(pb, "pin", (long)pb->pb_pin_count.counter);
+}
+
+/*
+ * pagebuf_unpin
+ *
+ * pagebuf_unpin reverses the locking of memory performed by
+ * pagebuf_pin. Note that both functions affected the logical
+ * pages associated with the buffer, not the buffer itself.
+ */
+void
+pagebuf_unpin(
+ xfs_buf_t *pb)
+{
+ if (atomic_dec_and_test(&pb->pb_pin_count)) {
+ wake_up_all(&pb->pb_waiters);
+ }
+ PB_TRACE(pb, "unpin", (long)pb->pb_pin_count.counter);
+}
+
+int
+pagebuf_ispin(
+ xfs_buf_t *pb)
+{
+ return atomic_read(&pb->pb_pin_count);
+}
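+
+/*
+ * Illustrative sketch: a caller that must prevent the buffer's memory
+ * from being forced to disk brackets the critical section with:
+ *
+ *	pagebuf_pin(pb);
+ *	... changes are in flight ...
+ *	pagebuf_unpin(pb);
+ *
+ * pagebuf_ispin() reports whether any such pins are outstanding.
+ */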
+
+/*
+ * _pagebuf_wait_unpin
+ *
+ * _pagebuf_wait_unpin waits until all of the memory associated
+ * with the buffer is no longer locked in memory.  It returns
+ * immediately if none of the affected pages are locked.
+ */
+static inline void
+_pagebuf_wait_unpin(
+ xfs_buf_t *pb)
+{
+ DECLARE_WAITQUEUE (wait, current);
+
+ if (atomic_read(&pb->pb_pin_count) == 0)
+ return;
+
+ add_wait_queue(&pb->pb_waiters, &wait);
+ for (;;) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ if (atomic_read(&pb->pb_pin_count) == 0)
+ break;
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ schedule();
+ }
+ remove_wait_queue(&pb->pb_waiters, &wait);
+ set_current_state(TASK_RUNNING);
+}
+
+/*
+ * Buffer Utility Routines
+ */
+
+/*
+ * pagebuf_iodone
+ *
+ * pagebuf_iodone marks a buffer for which I/O is in progress
+ * done with respect to that I/O. The pb_iodone routine, if
+ * present, will be called as a side-effect.
+ */
+void
+pagebuf_iodone_work(
+ void *v)
+{
+ xfs_buf_t *bp = (xfs_buf_t *)v;
+
+ if (bp->pb_iodone)
+ (*(bp->pb_iodone))(bp);
+ else if (bp->pb_flags & PBF_ASYNC)
+ xfs_buf_relse(bp);
+}
+
+void
+pagebuf_iodone(
+ xfs_buf_t *pb,
+ int dataio,
+ int schedule)
+{
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE);
+ if (pb->pb_error == 0) {
+ pb->pb_flags &= ~(PBF_PARTIAL | PBF_NONE);
+ }
+
+ PB_TRACE(pb, "iodone", pb->pb_iodone);
+
+ if ((pb->pb_iodone) || (pb->pb_flags & PBF_ASYNC)) {
+ if (schedule) {
+ INIT_WORK(&pb->pb_iodone_work, pagebuf_iodone_work, pb);
+ queue_work(dataio ? pagebuf_dataio_workqueue :
+ pagebuf_logio_workqueue, &pb->pb_iodone_work);
+ } else {
+ pagebuf_iodone_work(pb);
+ }
+ } else {
+ up(&pb->pb_iodonesema);
+ }
+}
+
+/*
+ * pagebuf_ioerror
+ *
+ * pagebuf_ioerror sets the error code for a buffer.
+ */
+void
+pagebuf_ioerror( /* mark/clear buffer error flag */
+ xfs_buf_t *pb, /* buffer to mark */
+ int error) /* error to store (0 if none) */
+{
+ ASSERT(error >= 0 && error <= 0xffff);
+ pb->pb_error = (unsigned short)error;
+ PB_TRACE(pb, "ioerror", (unsigned long)error);
+}
+
+/*
+ * pagebuf_iostart
+ *
+ * pagebuf_iostart initiates I/O on a buffer, based on the flags supplied.
+ * If necessary, it will arrange for any disk space allocation required,
+ * and it will break up the request if the block mappings require it.
+ * The pb_iodone routine in the buffer supplied will only be called
+ * when all of the subsidiary I/O requests, if any, have been completed.
+ * pagebuf_iostart calls the pagebuf_ioinitiate routine or
+ * pagebuf_iorequest, if the former routine is not defined, to start
+ * the I/O on a given low-level request.
+ */
+int
+pagebuf_iostart( /* start I/O on a buffer */
+ xfs_buf_t *pb, /* buffer to start */
+ page_buf_flags_t flags) /* PBF_LOCK, PBF_ASYNC, PBF_READ, */
+ /* PBF_WRITE, PBF_DELWRI, */
+ /* PBF_DONT_BLOCK */
+{
+ int status = 0;
+
+ PB_TRACE(pb, "iostart", (unsigned long)flags);
+
+ if (flags & PBF_DELWRI) {
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE | PBF_ASYNC);
+ pb->pb_flags |= flags & (PBF_DELWRI | PBF_ASYNC);
+ pagebuf_delwri_queue(pb, 1);
+ return status;
+ }
+
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE | PBF_ASYNC | PBF_DELWRI | \
+ PBF_READ_AHEAD | _PBF_RUN_QUEUES);
+ pb->pb_flags |= flags & (PBF_READ | PBF_WRITE | PBF_ASYNC | \
+ PBF_READ_AHEAD | _PBF_RUN_QUEUES);
+
+ BUG_ON(pb->pb_bn == XFS_BUF_DADDR_NULL);
+
+ /* For writes allow an alternate strategy routine to precede
+ * the actual I/O request (which may not be issued at all in
+ * a shutdown situation, for example).
+ */
+ status = (flags & PBF_WRITE) ?
+ pagebuf_iostrategy(pb) : pagebuf_iorequest(pb);
+
+ /* Wait for I/O if we are not an async request.
+ * Note: async I/O request completion will release the buffer,
+ * and that can already be done by this point. So using the
+ * buffer pointer from here on, after async I/O, is invalid.
+ */
+ if (!status && !(flags & PBF_ASYNC))
+ status = pagebuf_iowait(pb);
+
+ return status;
+}
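+
+/*
+ * Illustrative sketch: a synchronous write through this interface
+ * blocks in pagebuf_iowait() before returning, while an asynchronous
+ * one returns as soon as the I/O has been issued:
+ *
+ *	error = pagebuf_iostart(pb, PBF_WRITE);
+ *	error = pagebuf_iostart(pb, PBF_WRITE | PBF_ASYNC);
+ *
+ * As noted above, after the asynchronous call the buffer pointer may
+ * already have been released and must not be used.
+ */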
+
+/*
+ * Helper routine for pagebuf_iorequest
+ */
+
+STATIC __inline__ int
+_pagebuf_iolocked(
+ xfs_buf_t *pb)
+{
+ ASSERT(pb->pb_flags & (PBF_READ|PBF_WRITE));
+ if (pb->pb_flags & PBF_READ)
+ return pb->pb_locked;
+ return 0;
+}
+
+STATIC __inline__ void
+_pagebuf_iodone(
+ xfs_buf_t *pb,
+ int schedule)
+{
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pb->pb_locked = 0;
+ pagebuf_iodone(pb, (pb->pb_flags & PBF_FS_DATAIOD), schedule);
+ }
+}
+
+STATIC int
+bio_end_io_pagebuf(
+ struct bio *bio,
+ unsigned int bytes_done,
+ int error)
+{
+ xfs_buf_t *pb = (xfs_buf_t *)bio->bi_private;
+ unsigned int i, blocksize = pb->pb_target->pbr_bsize;
+ unsigned int sectorshift = pb->pb_target->pbr_sshift;
+ struct bio_vec *bvec = bio->bi_io_vec;
+
+ if (bio->bi_size)
+ return 1;
+
+ if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ pb->pb_error = EIO;
+
+ for (i = 0; i < bio->bi_vcnt; i++, bvec++) {
+ struct page *page = bvec->bv_page;
+
+ if (pb->pb_error) {
+ SetPageError(page);
+ } else if (blocksize == PAGE_CACHE_SIZE) {
+ SetPageUptodate(page);
+ } else if (!PagePrivate(page) &&
+ (pb->pb_flags & _PBF_PAGE_CACHE)) {
+ unsigned long j, range;
+
+ ASSERT(blocksize < PAGE_CACHE_SIZE);
+ range = (bvec->bv_offset + bvec->bv_len) >> sectorshift;
+ for (j = bvec->bv_offset >> sectorshift; j < range; j++)
+ set_bit(j, &page->private);
+ if (page->private == (unsigned long)(PAGE_CACHE_SIZE-1))
+ SetPageUptodate(page);
+ }
+
+ if (_pagebuf_iolocked(pb)) {
+ unlock_page(page);
+ }
+ }
+
+ _pagebuf_iodone(pb, 1);
+ bio_put(bio);
+ return 0;
+}
+
+void
+_pagebuf_ioapply(
+ xfs_buf_t *pb)
+{
+ int i, map_i, total_nr_pages, nr_pages;
+ struct bio *bio;
+ int offset = pb->pb_offset;
+ int size = pb->pb_count_desired;
+ sector_t sector = pb->pb_bn;
+ unsigned int blocksize = pb->pb_target->pbr_bsize;
+ int locking = _pagebuf_iolocked(pb);
+
+ total_nr_pages = pb->pb_page_count;
+ map_i = 0;
+
+	/* Special code path for reading a sub-page-size pagebuf:
+	 * we populate the whole page, and hence the other metadata
+ * in the same page. This optimization is only valid when the
+ * filesystem block size and the page size are equal.
+ */
+ if ((pb->pb_buffer_length < PAGE_CACHE_SIZE) &&
+ (pb->pb_flags & PBF_READ) && locking &&
+ (blocksize == PAGE_CACHE_SIZE)) {
+ bio = bio_alloc(GFP_NOIO, 1);
+
+ bio->bi_bdev = pb->pb_target->pbr_bdev;
+ bio->bi_sector = sector - (offset >> BBSHIFT);
+ bio->bi_end_io = bio_end_io_pagebuf;
+ bio->bi_private = pb;
+
+ bio_add_page(bio, pb->pb_pages[0], PAGE_CACHE_SIZE, 0);
+ size = 0;
+
+ atomic_inc(&pb->pb_io_remaining);
+
+ goto submit_io;
+ }
+
+ /* Lock down the pages which we need to for the request */
+ if (locking && (pb->pb_flags & PBF_WRITE) && (pb->pb_locked == 0)) {
+ for (i = 0; size; i++) {
+ int nbytes = PAGE_CACHE_SIZE - offset;
+ struct page *page = pb->pb_pages[i];
+
+ if (nbytes > size)
+ nbytes = size;
+
+ lock_page(page);
+
+ size -= nbytes;
+ offset = 0;
+ }
+ offset = pb->pb_offset;
+ size = pb->pb_count_desired;
+ }
+
+next_chunk:
+ atomic_inc(&pb->pb_io_remaining);
+ nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
+ if (nr_pages > total_nr_pages)
+ nr_pages = total_nr_pages;
+
+ bio = bio_alloc(GFP_NOIO, nr_pages);
+ bio->bi_bdev = pb->pb_target->pbr_bdev;
+ bio->bi_sector = sector;
+ bio->bi_end_io = bio_end_io_pagebuf;
+ bio->bi_private = pb;
+
+ for (; size && nr_pages; nr_pages--, map_i++) {
+ int nbytes = PAGE_CACHE_SIZE - offset;
+
+ if (nbytes > size)
+ nbytes = size;
+
+ if (bio_add_page(bio, pb->pb_pages[map_i],
+ nbytes, offset) < nbytes)
+ break;
+
+ offset = 0;
+ sector += nbytes >> BBSHIFT;
+ size -= nbytes;
+ total_nr_pages--;
+ }
+
+submit_io:
+ if (likely(bio->bi_size)) {
+ submit_bio((pb->pb_flags & PBF_READ) ? READ : WRITE, bio);
+ if (size)
+ goto next_chunk;
+ } else {
+ bio_put(bio);
+ pagebuf_ioerror(pb, EIO);
+ }
+
+ if (pb->pb_flags & _PBF_RUN_QUEUES) {
+ pb->pb_flags &= ~_PBF_RUN_QUEUES;
+ if (atomic_read(&pb->pb_io_remaining) > 1)
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ }
+}
+
+/*
+ * pagebuf_iorequest -- the core I/O request routine.
+ */
+int
+pagebuf_iorequest( /* start real I/O */
+ xfs_buf_t *pb) /* buffer to convey to device */
+{
+ PB_TRACE(pb, "iorequest", 0);
+
+ if (pb->pb_flags & PBF_DELWRI) {
+ pagebuf_delwri_queue(pb, 1);
+ return 0;
+ }
+
+ if (pb->pb_flags & PBF_WRITE) {
+ _pagebuf_wait_unpin(pb);
+ }
+
+ pagebuf_hold(pb);
+
+	/* Set the count to 1 initially, so that an I/O completion
+	 * callout which happens before we have started all the I/O
+	 * cannot call pagebuf_iodone too early.
+	 */
+ atomic_set(&pb->pb_io_remaining, 1);
+ _pagebuf_ioapply(pb);
+ _pagebuf_iodone(pb, 0);
+
+ pagebuf_rele(pb);
+ return 0;
+}
+
+/*
+ * pagebuf_iowait
+ *
+ * pagebuf_iowait waits for I/O to complete on the buffer supplied.
+ * It returns immediately if no I/O is pending. In any case, it returns
+ * the error code, if any, or 0 if there is no error.
+ */
+int
+pagebuf_iowait(
+ xfs_buf_t *pb)
+{
+ PB_TRACE(pb, "iowait", 0);
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ down(&pb->pb_iodonesema);
+ PB_TRACE(pb, "iowaited", (long)pb->pb_error);
+ return pb->pb_error;
+}
+
+caddr_t
+pagebuf_offset(
+ xfs_buf_t *pb,
+ size_t offset)
+{
+ struct page *page;
+
+ offset += pb->pb_offset;
+
+ page = pb->pb_pages[offset >> PAGE_CACHE_SHIFT];
+ return (caddr_t) page_address(page) + (offset & (PAGE_CACHE_SIZE - 1));
+}
+
+/*
+ * pagebuf_iomove
+ *
+ * Move data into or out of a buffer.
+ */
+void
+pagebuf_iomove(
+ xfs_buf_t *pb, /* buffer to process */
+ size_t boff, /* starting buffer offset */
+ size_t bsize, /* length to copy */
+ caddr_t data, /* data address */
+ page_buf_rw_t mode) /* read/write flag */
+{
+ size_t bend, cpoff, csize;
+ struct page *page;
+
+ bend = boff + bsize;
+ while (boff < bend) {
+ page = pb->pb_pages[page_buf_btoct(boff + pb->pb_offset)];
+ cpoff = page_buf_poff(boff + pb->pb_offset);
+ csize = min_t(size_t,
+ PAGE_CACHE_SIZE-cpoff, pb->pb_count_desired-boff);
+
+ ASSERT(((csize + cpoff) <= PAGE_CACHE_SIZE));
+
+ switch (mode) {
+ case PBRW_ZERO:
+ memset(page_address(page) + cpoff, 0, csize);
+ break;
+ case PBRW_READ:
+ memcpy(data, page_address(page) + cpoff, csize);
+ break;
+ case PBRW_WRITE:
+ memcpy(page_address(page) + cpoff, data, csize);
+ }
+
+ boff += csize;
+ data += csize;
+ }
+}
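+
+/*
+ * Illustrative sketch: zeroing a sub-range of a buffer uses the
+ * PBRW_ZERO mode, in which case the data pointer is ignored:
+ *
+ *	pagebuf_iomove(pb, boff, bsize, NULL, PBRW_ZERO);
+ *
+ * This is what the xfs_biozero() wrapper in xfs_buf.h expands to.
+ */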
+
+/*
+ * Handling of buftargs.
+ */
+
+void
+xfs_free_buftarg(
+ xfs_buftarg_t *btp,
+ int external)
+{
+ xfs_flush_buftarg(btp, 1);
+ if (external)
+ xfs_blkdev_put(btp->pbr_bdev);
+ kmem_free(btp, sizeof(*btp));
+}
+
+void
+xfs_incore_relse(
+ xfs_buftarg_t *btp,
+ int delwri_only,
+ int wait)
+{
+ invalidate_bdev(btp->pbr_bdev, 1);
+ truncate_inode_pages(btp->pbr_mapping, 0LL);
+}
+
+void
+xfs_setsize_buftarg(
+ xfs_buftarg_t *btp,
+ unsigned int blocksize,
+ unsigned int sectorsize)
+{
+ btp->pbr_bsize = blocksize;
+ btp->pbr_sshift = ffs(sectorsize) - 1;
+ btp->pbr_smask = sectorsize - 1;
+
+ if (set_blocksize(btp->pbr_bdev, sectorsize)) {
+ printk(KERN_WARNING
+ "XFS: Cannot set_blocksize to %u on device %s\n",
+ sectorsize, XFS_BUFTARG_NAME(btp));
+ }
+}
+
+xfs_buftarg_t *
+xfs_alloc_buftarg(
+ struct block_device *bdev)
+{
+ xfs_buftarg_t *btp;
+
+ btp = kmem_zalloc(sizeof(*btp), KM_SLEEP);
+
+ btp->pbr_dev = bdev->bd_dev;
+ btp->pbr_bdev = bdev;
+ btp->pbr_mapping = bdev->bd_inode->i_mapping;
+ xfs_setsize_buftarg(btp, PAGE_CACHE_SIZE, bdev_hardsect_size(bdev));
+
+ return btp;
+}
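+
+/*
+ * Illustrative sketch: a buftarg's lifetime brackets the mount, e.g.
+ *
+ *	btp = xfs_alloc_buftarg(bdev);
+ *	... use btp with pagebuf_get() and friends ...
+ *	xfs_free_buftarg(btp, external);
+ *
+ * xfs_free_buftarg() flushes all delayed-write buffers first.
+ */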
+
+
+/*
+ * Pagebuf delayed write buffer handling
+ */
+
+STATIC LIST_HEAD(pbd_delwrite_queue);
+STATIC spinlock_t pbd_delwrite_lock = SPIN_LOCK_UNLOCKED;
+
+STATIC void
+pagebuf_delwri_queue(
+ xfs_buf_t *pb,
+ int unlock)
+{
+ PB_TRACE(pb, "delwri_q", (long)unlock);
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+
+ spin_lock(&pbd_delwrite_lock);
+ /* If already in the queue, dequeue and place at tail */
+ if (!list_empty(&pb->pb_list)) {
+ if (unlock) {
+ atomic_dec(&pb->pb_hold);
+ }
+ list_del(&pb->pb_list);
+ }
+
+ list_add_tail(&pb->pb_list, &pbd_delwrite_queue);
+ pb->pb_queuetime = jiffies;
+ spin_unlock(&pbd_delwrite_lock);
+
+ if (unlock)
+ pagebuf_unlock(pb);
+}
+
+void
+pagebuf_delwri_dequeue(
+ xfs_buf_t *pb)
+{
+ PB_TRACE(pb, "delwri_uq", 0);
+ spin_lock(&pbd_delwrite_lock);
+ list_del_init(&pb->pb_list);
+ pb->pb_flags &= ~PBF_DELWRI;
+ spin_unlock(&pbd_delwrite_lock);
+}
+
+STATIC void
+pagebuf_runall_queues(
+ struct workqueue_struct *queue)
+{
+ flush_workqueue(queue);
+}
+
+/* Defines for pagebuf daemon */
+STATIC DECLARE_COMPLETION(pagebuf_daemon_done);
+STATIC struct task_struct *pagebuf_daemon_task;
+STATIC int pagebuf_daemon_active;
+STATIC int force_flush;
+
+STATIC void
+pagebuf_daemon_wakeup(void)
+{
+ force_flush = 1;
+ barrier();
+ wake_up_process(pagebuf_daemon_task);
+}
+
+STATIC int
+pagebuf_daemon(
+ void *data)
+{
+ struct list_head tmp;
+ unsigned long age;
+ xfs_buf_t *pb, *n;
+
+ /* Set up the thread */
+ daemonize("xfsbufd");
+ current->flags |= PF_MEMALLOC;
+
+ pagebuf_daemon_task = current;
+ pagebuf_daemon_active = 1;
+ barrier();
+
+ INIT_LIST_HEAD(&tmp);
+ do {
+ /* swsusp */
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((xfs_buf_timer_centisecs * HZ) / 100);
+
+ age = (xfs_buf_age_centisecs * HZ) / 100;
+ spin_lock(&pbd_delwrite_lock);
+ list_for_each_entry_safe(pb, n, &pbd_delwrite_queue, pb_list) {
+ PB_TRACE(pb, "walkq1", (long)pagebuf_ispin(pb));
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+
+ if (!pagebuf_ispin(pb) && !pagebuf_cond_lock(pb)) {
+ if (!force_flush &&
+ time_before(jiffies,
+ pb->pb_queuetime + age)) {
+ pagebuf_unlock(pb);
+ break;
+ }
+
+ pb->pb_flags &= ~PBF_DELWRI;
+ pb->pb_flags |= PBF_WRITE;
+ list_move(&pb->pb_list, &tmp);
+ }
+ }
+ spin_unlock(&pbd_delwrite_lock);
+
+ while (!list_empty(&tmp)) {
+ pb = list_entry(tmp.next, xfs_buf_t, pb_list);
+ list_del_init(&pb->pb_list);
+ pagebuf_iostrategy(pb);
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ }
+
+ if (as_list_len > 0)
+ purge_addresses();
+
+ force_flush = 0;
+ } while (pagebuf_daemon_active);
+
+ complete_and_exit(&pagebuf_daemon_done, 0);
+}
+
+/*
+ * Go through all incore buffers, and release buffers if they belong to
+ * the given device. This is used in filesystem error handling to
+ * preserve the consistency of its metadata.
+ */
+int
+xfs_flush_buftarg(
+ xfs_buftarg_t *target,
+ int wait)
+{
+ struct list_head tmp;
+ xfs_buf_t *pb, *n;
+ int pincount = 0;
+
+ pagebuf_runall_queues(pagebuf_dataio_workqueue);
+ pagebuf_runall_queues(pagebuf_logio_workqueue);
+
+ INIT_LIST_HEAD(&tmp);
+ spin_lock(&pbd_delwrite_lock);
+ list_for_each_entry_safe(pb, n, &pbd_delwrite_queue, pb_list) {
+
+ if (pb->pb_target != target)
+ continue;
+
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+ PB_TRACE(pb, "walkq2", (long)pagebuf_ispin(pb));
+ if (pagebuf_ispin(pb)) {
+ pincount++;
+ continue;
+ }
+
+ pb->pb_flags &= ~PBF_DELWRI;
+ pb->pb_flags |= PBF_WRITE;
+ list_move(&pb->pb_list, &tmp);
+ }
+ spin_unlock(&pbd_delwrite_lock);
+
+ /*
+ * Dropped the delayed write list lock, now walk the temporary list
+ */
+ list_for_each_entry_safe(pb, n, &tmp, pb_list) {
+ if (wait)
+ pb->pb_flags &= ~PBF_ASYNC;
+ else
+ list_del_init(&pb->pb_list);
+
+ pagebuf_lock(pb);
+ pagebuf_iostrategy(pb);
+ }
+
+ /*
+ * Remaining list items must be flushed before returning
+ */
+ while (!list_empty(&tmp)) {
+ pb = list_entry(tmp.next, xfs_buf_t, pb_list);
+
+ list_del_init(&pb->pb_list);
+ xfs_iowait(pb);
+ xfs_buf_relse(pb);
+ }
+
+ if (wait)
+ blk_run_address_space(target->pbr_mapping);
+
+ return pincount;
+}
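+
+/*
+ * Illustrative sketch: unmount and invalidation paths reach this
+ * routine through the xfs_binval()/XFS_bflush() wrappers in xfs_buf.h,
+ * both of which expand to a waiting flush:
+ *
+ *	xfs_flush_buftarg(target, 1);
+ */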
+
+STATIC int
+pagebuf_daemon_start(void)
+{
+ int rval;
+
+ pagebuf_logio_workqueue = create_workqueue("xfslogd");
+ if (!pagebuf_logio_workqueue)
+ return -ENOMEM;
+
+ pagebuf_dataio_workqueue = create_workqueue("xfsdatad");
+ if (!pagebuf_dataio_workqueue) {
+ destroy_workqueue(pagebuf_logio_workqueue);
+ return -ENOMEM;
+ }
+
+ rval = kernel_thread(pagebuf_daemon, NULL, CLONE_FS|CLONE_FILES);
+ if (rval < 0) {
+ destroy_workqueue(pagebuf_logio_workqueue);
+ destroy_workqueue(pagebuf_dataio_workqueue);
+ }
+
+ return rval;
+}
+
+/*
+ * pagebuf_daemon_stop
+ *
+ * Note: do not mark as __exit, it is called from pagebuf_terminate.
+ */
+STATIC void
+pagebuf_daemon_stop(void)
+{
+ pagebuf_daemon_active = 0;
+ barrier();
+ wait_for_completion(&pagebuf_daemon_done);
+
+ destroy_workqueue(pagebuf_logio_workqueue);
+ destroy_workqueue(pagebuf_dataio_workqueue);
+}
+
+/*
+ * Initialization and Termination
+ */
+
+int __init
+pagebuf_init(void)
+{
+ int i;
+
+ pagebuf_cache = kmem_cache_create("xfs_buf_t", sizeof(xfs_buf_t), 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (pagebuf_cache == NULL) {
+ printk("pagebuf: couldn't init pagebuf cache\n");
+ pagebuf_terminate();
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < NHASH; i++) {
+ spin_lock_init(&pbhash[i].pb_hash_lock);
+ INIT_LIST_HEAD(&pbhash[i].pb_hash);
+ }
+
+#ifdef PAGEBUF_TRACE
+ pagebuf_trace_buf = ktrace_alloc(PAGEBUF_TRACE_SIZE, KM_SLEEP);
+#endif
+
+ pagebuf_daemon_start();
+ return 0;
+}
+
+
+/*
+ * pagebuf_terminate.
+ *
+ * Note: do not mark as __exit, this is also called from the __init code.
+ */
+void
+pagebuf_terminate(void)
+{
+ pagebuf_daemon_stop();
+
+#ifdef PAGEBUF_TRACE
+ ktrace_free(pagebuf_trace_buf);
+#endif
+
+ kmem_cache_destroy(pagebuf_cache);
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * Written by Steve Lord, Jim Mostek, Russell Cattelan at SGI
+ */
+
+#ifndef __XFS_BUF_H__
+#define __XFS_BUF_H__
+
+#include <linux/config.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <asm/system.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/buffer_head.h>
+#include <linux/uio.h>
+
+/*
+ * Base types
+ */
+
+#define XFS_BUF_DADDR_NULL ((xfs_daddr_t) (-1LL))
+
+#define page_buf_ctob(pp) ((pp) * PAGE_CACHE_SIZE)
+#define page_buf_btoc(dd) (((dd) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT)
+#define page_buf_btoct(dd) ((dd) >> PAGE_CACHE_SHIFT)
+#define page_buf_poff(aa) ((aa) & ~PAGE_CACHE_MASK)
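+
+/*
+ * Illustrative sketch (assuming a 4096-byte PAGE_CACHE_SIZE):
+ *
+ *	page_buf_btoc(5000)  == 2	(bytes to pages, rounded up)
+ *	page_buf_btoct(5000) == 1	(bytes to pages, truncated)
+ *	page_buf_poff(5000)  == 904	(byte offset within the page)
+ */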
+
+typedef enum page_buf_rw_e {
+ PBRW_READ = 1, /* transfer into target memory */
+ PBRW_WRITE = 2, /* transfer from target memory */
+ PBRW_ZERO = 3 /* Zero target memory */
+} page_buf_rw_t;
+
+
+typedef enum page_buf_flags_e { /* pb_flags values */
+ PBF_READ = (1 << 0), /* buffer intended for reading from device */
+ PBF_WRITE = (1 << 1), /* buffer intended for writing to device */
+ PBF_MAPPED = (1 << 2), /* buffer mapped (pb_addr valid) */
+ PBF_PARTIAL = (1 << 3), /* buffer partially read */
+ PBF_ASYNC = (1 << 4), /* initiator will not wait for completion */
+ PBF_NONE = (1 << 5), /* buffer not read at all */
+ PBF_DELWRI = (1 << 6), /* buffer has dirty pages */
+ PBF_STALE = (1 << 7), /* buffer has been staled, do not find it */
+ PBF_FS_MANAGED = (1 << 8), /* filesystem controls freeing memory */
+ PBF_FS_DATAIOD = (1 << 9), /* schedule IO completion on fs datad */
+ PBF_FORCEIO = (1 << 10), /* ignore any cache state */
+ PBF_FLUSH = (1 << 11), /* flush disk write cache */
+ PBF_READ_AHEAD = (1 << 12), /* asynchronous read-ahead */
+
+ /* flags used only as arguments to access routines */
+ PBF_LOCK = (1 << 14), /* lock requested */
+ PBF_TRYLOCK = (1 << 15), /* lock requested, but do not wait */
+ PBF_DONT_BLOCK = (1 << 16), /* do not block in current thread */
+
+ /* flags used only internally */
+ _PBF_PAGE_CACHE = (1 << 17),/* backed by pagecache */
+ _PBF_KMEM_ALLOC = (1 << 18),/* backed by kmem_alloc() */
+ _PBF_RUN_QUEUES = (1 << 19),/* run block device task queue */
+} page_buf_flags_t;
+
+#define PBF_UPDATE (PBF_READ | PBF_WRITE)
+#define PBF_NOT_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) != 0)
+#define PBF_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) == 0)
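+
+/*
+ * Illustrative sketch: the flags combine bitwise.  A locked, blocking,
+ * mapped read (the common metadata case) is requested as
+ *
+ *	pagebuf_get(target, ioff, isize, PBF_LOCK | PBF_READ | PBF_MAPPED);
+ *
+ * which is exactly what the xfs_buf_read() wrapper later in this
+ * header passes.
+ */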
+
+typedef struct xfs_buftarg {
+ dev_t pbr_dev;
+ struct block_device *pbr_bdev;
+ struct address_space *pbr_mapping;
+ unsigned int pbr_bsize;
+ unsigned int pbr_sshift;
+ size_t pbr_smask;
+} xfs_buftarg_t;
+
+/*
+ * xfs_buf_t: Buffer structure for page cache-based buffers
+ *
+ * This buffer structure is used by the page cache buffer management routines
+ * to refer to an assembly of pages forming a logical buffer. The actual
+ * I/O is performed with buffer_head or bio structures, as required by drivers,
+ * for drivers which do not understand this structure. The buffer structure is
+ * used on temporary basis only, and discarded when released.
+ *
+ * The real data storage is recorded in the page cache. Metadata is
+ * hashed to the inode for the block device on which the file system resides.
+ * File data is hashed to the inode for the file. Pages which are only
+ * partially filled with data have bits set in their block_map entry
+ * to indicate which disk blocks in the page are not valid.
+ */
+
+struct xfs_buf;
+typedef void (*page_buf_iodone_t)(struct xfs_buf *);
+ /* call-back function on I/O completion */
+typedef void (*page_buf_relse_t)(struct xfs_buf *);
+					/* call-back function on buffer release */
+typedef int (*page_buf_bdstrat_t)(struct xfs_buf *);
+
+#define PB_PAGES 4
+
+typedef struct xfs_buf {
+ struct semaphore pb_sema; /* semaphore for lockables */
+ unsigned long pb_queuetime; /* time buffer was queued */
+ atomic_t pb_pin_count; /* pin count */
+ wait_queue_head_t pb_waiters; /* unpin waiters */
+ struct list_head pb_list;
+ page_buf_flags_t pb_flags; /* status flags */
+ struct list_head pb_hash_list;
+ xfs_buftarg_t *pb_target; /* logical object */
+ atomic_t pb_hold; /* reference count */
+ xfs_daddr_t pb_bn; /* block number for I/O */
+ loff_t pb_file_offset; /* offset in file */
+ size_t pb_buffer_length; /* size of buffer in bytes */
+ size_t pb_count_desired; /* desired transfer size */
+ void *pb_addr; /* virtual address of buffer */
+ struct work_struct pb_iodone_work;
+ atomic_t pb_io_remaining;/* #outstanding I/O requests */
+ page_buf_iodone_t pb_iodone; /* I/O completion function */
+ page_buf_relse_t pb_relse; /* releasing function */
+ page_buf_bdstrat_t pb_strat; /* pre-write function */
+ struct semaphore pb_iodonesema; /* Semaphore for I/O waiters */
+ void *pb_fspriv;
+ void *pb_fspriv2;
+ void *pb_fspriv3;
+ unsigned short pb_error; /* error code on I/O */
+ unsigned short pb_page_count; /* size of page array */
+ unsigned short pb_offset; /* page offset in first page */
+ unsigned char pb_locked; /* page array is locked */
+ unsigned char pb_hash_index; /* hash table index */
+ struct page **pb_pages; /* array of page pointers */
+ struct page *pb_page_array[PB_PAGES]; /* inline pages */
+#ifdef PAGEBUF_LOCK_TRACKING
+ int pb_last_holder;
+#endif
+} xfs_buf_t;
+
+
+/* Finding and Reading Buffers */
+
+extern xfs_buf_t *pagebuf_find( /* find buffer for block if */
+ /* the block is in memory */
+ xfs_buftarg_t *, /* inode for block */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_LOCK */
+
+extern xfs_buf_t *pagebuf_get( /* allocate a buffer */
+ xfs_buftarg_t *, /* inode for buffer */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_LOCK, PBF_READ, */
+ /* PBF_ASYNC */
+
+extern xfs_buf_t *pagebuf_lookup(
+ xfs_buftarg_t *,
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_READ, PBF_WRITE, */
+ /* PBF_FORCEIO, */
+
+extern xfs_buf_t *pagebuf_get_empty( /* allocate pagebuf struct with */
+ /* no memory or disk address */
+ size_t len,
+ xfs_buftarg_t *); /* mount point "fake" inode */
+
+extern xfs_buf_t *pagebuf_get_no_daddr(/* allocate pagebuf struct */
+ /* without disk address */
+ size_t len,
+ xfs_buftarg_t *); /* mount point "fake" inode */
+
+extern int pagebuf_associate_memory(
+ xfs_buf_t *,
+ void *,
+ size_t);
+
+extern void pagebuf_hold( /* increment reference count */
+ xfs_buf_t *); /* buffer to hold */
+
+extern void pagebuf_readahead( /* read ahead into cache */
+ xfs_buftarg_t *, /* target for buffer (or NULL) */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* additional read flags */
+
+/* Releasing Buffers */
+
+extern void pagebuf_free( /* deallocate a buffer */
+ xfs_buf_t *); /* buffer to deallocate */
+
+extern void pagebuf_rele( /* release hold on a buffer */
+ xfs_buf_t *); /* buffer to release */
+
+/* Locking and Unlocking Buffers */
+
+extern int pagebuf_cond_lock( /* lock buffer, if not locked */
+ /* (returns -EBUSY if locked) */
+ xfs_buf_t *); /* buffer to lock */
+
+extern int pagebuf_lock_value( /* return count on lock */
+ xfs_buf_t *); /* buffer to check */
+
+extern int pagebuf_lock( /* lock buffer */
+ xfs_buf_t *); /* buffer to lock */
+
+extern void pagebuf_unlock( /* unlock buffer */
+ xfs_buf_t *); /* buffer to unlock */
+
+/* Buffer Read and Write Routines */
+
+extern void pagebuf_iodone( /* mark buffer I/O complete */
+ xfs_buf_t *, /* buffer to mark */
+ int, /* use data/log helper thread. */
+ int); /* run completion locally, or in
+ * a helper thread. */
+
+extern void pagebuf_ioerror( /* mark buffer in error (or not) */
+ xfs_buf_t *, /* buffer to mark */
+ int); /* error to store (0 if none) */
+
+extern int pagebuf_iostart( /* start I/O on a buffer */
+ xfs_buf_t *, /* buffer to start */
+ page_buf_flags_t); /* PBF_LOCK, PBF_ASYNC, */
+ /* PBF_READ, PBF_WRITE, */
+ /* PBF_DELWRI */
+
+extern int pagebuf_iorequest( /* start real I/O */
+ xfs_buf_t *); /* buffer to convey to device */
+
+extern int pagebuf_iowait( /* wait for buffer I/O done */
+ xfs_buf_t *); /* buffer to wait on */
+
+extern void pagebuf_iomove( /* move data in/out of pagebuf */
+ xfs_buf_t *, /* buffer to manipulate */
+ size_t, /* starting buffer offset */
+ size_t, /* length in buffer */
+ caddr_t, /* data pointer */
+ page_buf_rw_t); /* direction */
+
+static inline int pagebuf_iostrategy(xfs_buf_t *pb)
+{
+ return pb->pb_strat ? pb->pb_strat(pb) : pagebuf_iorequest(pb);
+}
+
+static inline int pagebuf_geterror(xfs_buf_t *pb)
+{
+ return pb ? pb->pb_error : ENOMEM;
+}
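+
+/*
+ * Illustrative sketch: pagebuf_geterror() is NULL-safe (a NULL buffer
+ * reads as ENOMEM), so allocation and I/O failures can be handled
+ * uniformly:
+ *
+ *	bp = xfs_buf_get_flags(target, blkno, len, flags);
+ *	if ((error = pagebuf_geterror(bp)))
+ *		... handle the error, a NULL bp included ...
+ */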
+
+/* Buffer Utility Routines */
+
+extern caddr_t pagebuf_offset( /* pointer at offset in buffer */
+ xfs_buf_t *, /* buffer to offset into */
+ size_t); /* offset */
+
+/* Pinning Buffer Storage in Memory */
+
+extern void pagebuf_pin( /* pin buffer in memory */
+ xfs_buf_t *); /* buffer to pin */
+
+extern void pagebuf_unpin( /* unpin buffered data */
+ xfs_buf_t *); /* buffer to unpin */
+
+extern int pagebuf_ispin( /* check if buffer is pinned */
+ xfs_buf_t *); /* buffer to check */
+
+/* Delayed Write Buffer Routines */
+
+extern void pagebuf_delwri_dequeue(xfs_buf_t *);
+
+/* Buffer Daemon Setup Routines */
+
+extern int pagebuf_init(void);
+extern void pagebuf_terminate(void);
+
+
+#ifdef PAGEBUF_TRACE
+extern ktrace_t *pagebuf_trace_buf;
+extern void pagebuf_trace(
+ xfs_buf_t *, /* buffer being traced */
+ char *, /* description of operation */
+ void *, /* arbitrary diagnostic value */
+ void *); /* return address */
+#else
+# define pagebuf_trace(pb, id, ptr, ra) do { } while (0)
+#endif
+
+#define pagebuf_target_name(target) \
+ ({ char __b[BDEVNAME_SIZE]; bdevname((target)->pbr_bdev, __b); __b; })
+
+
+/* These are just for xfs_syncsub... it sets an internal variable
+ * then passes it to VOP_FLUSH_PAGES or adds the flags to a newly acquired buf_t
+ */
+#define XFS_B_ASYNC PBF_ASYNC
+#define XFS_B_DELWRI PBF_DELWRI
+#define XFS_B_READ PBF_READ
+#define XFS_B_WRITE PBF_WRITE
+#define XFS_B_STALE PBF_STALE
+
+#define XFS_BUF_TRYLOCK PBF_TRYLOCK
+#define XFS_INCORE_TRYLOCK PBF_TRYLOCK
+#define XFS_BUF_LOCK PBF_LOCK
+#define XFS_BUF_MAPPED PBF_MAPPED
+
+#define BUF_BUSY PBF_DONT_BLOCK
+
+#define XFS_BUF_BFLAGS(x) ((x)->pb_flags)
+#define XFS_BUF_ZEROFLAGS(x) \
+ ((x)->pb_flags &= ~(PBF_READ|PBF_WRITE|PBF_ASYNC|PBF_DELWRI))
+
+#define XFS_BUF_STALE(x) ((x)->pb_flags |= XFS_B_STALE)
+#define XFS_BUF_UNSTALE(x) ((x)->pb_flags &= ~XFS_B_STALE)
+#define XFS_BUF_ISSTALE(x) ((x)->pb_flags & XFS_B_STALE)
+#define XFS_BUF_SUPER_STALE(x) do { \
+ XFS_BUF_STALE(x); \
+ xfs_buf_undelay(x); \
+ XFS_BUF_DONE(x); \
+ } while (0)
+
+#define XFS_BUF_MANAGE PBF_FS_MANAGED
+#define XFS_BUF_UNMANAGE(x) ((x)->pb_flags &= ~PBF_FS_MANAGED)
+
+static inline void xfs_buf_undelay(xfs_buf_t *pb)
+{
+ if (pb->pb_flags & PBF_DELWRI) {
+ if (pb->pb_list.next != &pb->pb_list) {
+ pagebuf_delwri_dequeue(pb);
+ pagebuf_rele(pb);
+ } else {
+ pb->pb_flags &= ~PBF_DELWRI;
+ }
+ }
+}
+
+#define XFS_BUF_DELAYWRITE(x) ((x)->pb_flags |= PBF_DELWRI)
+#define XFS_BUF_UNDELAYWRITE(x) xfs_buf_undelay(x)
+#define XFS_BUF_ISDELAYWRITE(x) ((x)->pb_flags & PBF_DELWRI)
+
+#define XFS_BUF_ERROR(x,no) pagebuf_ioerror(x,no)
+#define XFS_BUF_GETERROR(x) pagebuf_geterror(x)
+#define XFS_BUF_ISERROR(x) (pagebuf_geterror(x)?1:0)
+
+#define XFS_BUF_DONE(x) ((x)->pb_flags &= ~(PBF_PARTIAL|PBF_NONE))
+#define XFS_BUF_UNDONE(x) ((x)->pb_flags |= PBF_PARTIAL|PBF_NONE)
+#define XFS_BUF_ISDONE(x) (!(PBF_NOT_DONE(x)))
+
+#define XFS_BUF_BUSY(x) ((x)->pb_flags |= PBF_FORCEIO)
+#define XFS_BUF_UNBUSY(x) ((x)->pb_flags &= ~PBF_FORCEIO)
+#define XFS_BUF_ISBUSY(x) (1)
+
+#define XFS_BUF_ASYNC(x) ((x)->pb_flags |= PBF_ASYNC)
+#define XFS_BUF_UNASYNC(x) ((x)->pb_flags &= ~PBF_ASYNC)
+#define XFS_BUF_ISASYNC(x) ((x)->pb_flags & PBF_ASYNC)
+
+#define XFS_BUF_FLUSH(x) ((x)->pb_flags |= PBF_FLUSH)
+#define XFS_BUF_UNFLUSH(x) ((x)->pb_flags &= ~PBF_FLUSH)
+#define XFS_BUF_ISFLUSH(x) ((x)->pb_flags & PBF_FLUSH)
+
+#define XFS_BUF_SHUT(x) printk("XFS_BUF_SHUT not implemented yet\n")
+#define XFS_BUF_UNSHUT(x) printk("XFS_BUF_UNSHUT not implemented yet\n")
+#define XFS_BUF_ISSHUT(x) (0)
+
+#define XFS_BUF_HOLD(x) pagebuf_hold(x)
+#define XFS_BUF_READ(x) ((x)->pb_flags |= PBF_READ)
+#define XFS_BUF_UNREAD(x) ((x)->pb_flags &= ~PBF_READ)
+#define XFS_BUF_ISREAD(x) ((x)->pb_flags & PBF_READ)
+
+#define XFS_BUF_WRITE(x) ((x)->pb_flags |= PBF_WRITE)
+#define XFS_BUF_UNWRITE(x) ((x)->pb_flags &= ~PBF_WRITE)
+#define XFS_BUF_ISWRITE(x) ((x)->pb_flags & PBF_WRITE)
+
+#define XFS_BUF_ISUNINITIAL(x) (0)
+#define XFS_BUF_UNUNINITIAL(x) (0)
+
+#define XFS_BUF_BP_ISMAPPED(bp) 1
+
+#define XFS_BUF_DATAIO(x) ((x)->pb_flags |= PBF_FS_DATAIOD)
+#define XFS_BUF_UNDATAIO(x) ((x)->pb_flags &= ~PBF_FS_DATAIOD)
+
+#define XFS_BUF_IODONE_FUNC(buf) (buf)->pb_iodone
+#define XFS_BUF_SET_IODONE_FUNC(buf, func) \
+ (buf)->pb_iodone = (func)
+#define XFS_BUF_CLR_IODONE_FUNC(buf) \
+ (buf)->pb_iodone = NULL
+#define XFS_BUF_SET_BDSTRAT_FUNC(buf, func) \
+ (buf)->pb_strat = (func)
+#define XFS_BUF_CLR_BDSTRAT_FUNC(buf) \
+ (buf)->pb_strat = NULL
+
+#define XFS_BUF_FSPRIVATE(buf, type) \
+ ((type)(buf)->pb_fspriv)
+#define XFS_BUF_SET_FSPRIVATE(buf, value) \
+ (buf)->pb_fspriv = (void *)(value)
+#define XFS_BUF_FSPRIVATE2(buf, type) \
+ ((type)(buf)->pb_fspriv2)
+#define XFS_BUF_SET_FSPRIVATE2(buf, value) \
+ (buf)->pb_fspriv2 = (void *)(value)
+#define XFS_BUF_FSPRIVATE3(buf, type) \
+ ((type)(buf)->pb_fspriv3)
+#define XFS_BUF_SET_FSPRIVATE3(buf, value) \
+ (buf)->pb_fspriv3 = (void *)(value)
+#define XFS_BUF_SET_START(buf)
+
+#define XFS_BUF_SET_BRELSE_FUNC(buf, value) \
+ (buf)->pb_relse = (value)
+
+#define XFS_BUF_PTR(bp) (xfs_caddr_t)((bp)->pb_addr)
+
+extern inline xfs_caddr_t xfs_buf_offset(xfs_buf_t *bp, size_t offset)
+{
+ if (bp->pb_flags & PBF_MAPPED)
+ return XFS_BUF_PTR(bp) + offset;
+ return (xfs_caddr_t) pagebuf_offset(bp, offset);
+}
+
+#define XFS_BUF_SET_PTR(bp, val, count) \
+ pagebuf_associate_memory(bp, val, count)
+#define XFS_BUF_ADDR(bp) ((bp)->pb_bn)
+#define XFS_BUF_SET_ADDR(bp, blk) \
+ ((bp)->pb_bn = (blk))
+#define XFS_BUF_OFFSET(bp) ((bp)->pb_file_offset)
+#define XFS_BUF_SET_OFFSET(bp, off) \
+ ((bp)->pb_file_offset = (off))
+#define XFS_BUF_COUNT(bp) ((bp)->pb_count_desired)
+#define XFS_BUF_SET_COUNT(bp, cnt) \
+ ((bp)->pb_count_desired = (cnt))
+#define XFS_BUF_SIZE(bp) ((bp)->pb_buffer_length)
+#define XFS_BUF_SET_SIZE(bp, cnt) \
+ ((bp)->pb_buffer_length = (cnt))
+#define XFS_BUF_SET_VTYPE_REF(bp, type, ref)
+#define XFS_BUF_SET_VTYPE(bp, type)
+#define XFS_BUF_SET_REF(bp, ref)
+
+#define XFS_BUF_ISPINNED(bp) pagebuf_ispin(bp)
+
+#define XFS_BUF_VALUSEMA(bp) pagebuf_lock_value(bp)
+#define XFS_BUF_CPSEMA(bp) (pagebuf_cond_lock(bp) == 0)
+#define XFS_BUF_VSEMA(bp) pagebuf_unlock(bp)
+#define XFS_BUF_PSEMA(bp,x) pagebuf_lock(bp)
+#define XFS_BUF_V_IODONESEMA(bp) up(&bp->pb_iodonesema);
+
+/* setup the buffer target from a buftarg structure */
+#define XFS_BUF_SET_TARGET(bp, target) \
+ (bp)->pb_target = (target)
+#define XFS_BUF_TARGET(bp) ((bp)->pb_target)
+#define XFS_BUFTARG_NAME(target) \
+ pagebuf_target_name(target)
+
+#define xfs_buf_read(target, blkno, len, flags) \
+ pagebuf_get((target), (blkno), (len), \
+ PBF_LOCK | PBF_READ | PBF_MAPPED)
+#define xfs_buf_get(target, blkno, len, flags) \
+ pagebuf_get((target), (blkno), (len), \
+ PBF_LOCK | PBF_MAPPED)
+
+#define xfs_buf_read_flags(target, blkno, len, flags) \
+ pagebuf_get((target), (blkno), (len), PBF_READ | (flags))
+#define xfs_buf_get_flags(target, blkno, len, flags) \
+ pagebuf_get((target), (blkno), (len), (flags))
+
+static inline int xfs_bawrite(void *mp, xfs_buf_t *bp)
+{
+ bp->pb_fspriv3 = mp;
+ bp->pb_strat = xfs_bdstrat_cb;
+ xfs_buf_undelay(bp);
+ return pagebuf_iostart(bp, PBF_WRITE | PBF_ASYNC | _PBF_RUN_QUEUES);
+}
+
+static inline void xfs_buf_relse(xfs_buf_t *bp)
+{
+ if (!bp->pb_relse)
+ pagebuf_unlock(bp);
+ pagebuf_rele(bp);
+}
+
+#define xfs_bpin(bp) pagebuf_pin(bp)
+#define xfs_bunpin(bp) pagebuf_unpin(bp)
+
+#define xfs_buftrace(id, bp) \
+ pagebuf_trace(bp, id, NULL, (void *)__builtin_return_address(0))
+
+#define xfs_biodone(pb) \
+ pagebuf_iodone(pb, (pb->pb_flags & PBF_FS_DATAIOD), 0)
+
+#define xfs_incore(buftarg,blkno,len,lockit) \
+ pagebuf_find(buftarg, blkno ,len, lockit)
+
+
+#define xfs_biomove(pb, off, len, data, rw) \
+ pagebuf_iomove((pb), (off), (len), (data), \
+ ((rw) == XFS_B_WRITE) ? PBRW_WRITE : PBRW_READ)
+
+#define xfs_biozero(pb, off, len) \
+ pagebuf_iomove((pb), (off), (len), NULL, PBRW_ZERO)
+
+
+static inline int XFS_bwrite(xfs_buf_t *pb)
+{
+ int iowait = (pb->pb_flags & PBF_ASYNC) == 0;
+ int error = 0;
+
+ if (!iowait)
+ pb->pb_flags |= _PBF_RUN_QUEUES;
+
+ xfs_buf_undelay(pb);
+ pagebuf_iostrategy(pb);
+ if (iowait) {
+ error = pagebuf_iowait(pb);
+ xfs_buf_relse(pb);
+ }
+ return error;
+}
+
+#define XFS_bdwrite(pb) \
+ pagebuf_iostart(pb, PBF_DELWRI | PBF_ASYNC)
+
+static inline int xfs_bdwrite(void *mp, xfs_buf_t *bp)
+{
+ bp->pb_strat = xfs_bdstrat_cb;
+ bp->pb_fspriv3 = mp;
+
+ return pagebuf_iostart(bp, PBF_DELWRI | PBF_ASYNC);
+}
+
+#define XFS_bdstrat(bp) pagebuf_iorequest(bp)
+
+#define xfs_iowait(pb) pagebuf_iowait(pb)
+
+#define xfs_baread(target, rablkno, ralen) \
+ pagebuf_readahead((target), (rablkno), (ralen), PBF_DONT_BLOCK)
+
+#define xfs_buf_get_empty(len, target) pagebuf_get_empty((len), (target))
+#define xfs_buf_get_noaddr(len, target) pagebuf_get_no_daddr((len), (target))
+#define xfs_buf_free(bp) pagebuf_free(bp)
+
+
+/*
+ * Handling of buftargs.
+ */
+
+extern xfs_buftarg_t *xfs_alloc_buftarg(struct block_device *);
+extern void xfs_free_buftarg(xfs_buftarg_t *, int);
+extern void xfs_setsize_buftarg(xfs_buftarg_t *, unsigned int, unsigned int);
+extern void xfs_incore_relse(xfs_buftarg_t *, int, int);
+extern int xfs_flush_buftarg(xfs_buftarg_t *, int);
+
+#define xfs_getsize_buftarg(buftarg) \
+ block_size((buftarg)->pbr_bdev)
+#define xfs_readonly_buftarg(buftarg) \
+ bdev_read_only((buftarg)->pbr_bdev)
+#define xfs_binval(buftarg) \
+ xfs_flush_buftarg(buftarg, 1)
+#define XFS_bflush(buftarg) \
+ xfs_flush_buftarg(buftarg, 1)
+
+#endif /* __XFS_BUF_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_trans.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_alloc.h"
+#include "xfs_btree.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_error.h"
+#include "xfs_rw.h"
+
+#include <linux/dcache.h>
+
+static struct vm_operations_struct linvfs_file_vm_ops;
+
+
+STATIC inline ssize_t
+__linvfs_read(
+ struct kiocb *iocb,
+ char __user *buf,
+ int ioflags,
+ size_t count,
+ loff_t pos)
+{
+ struct iovec iov = {buf, count};
+ struct file *file = iocb->ki_filp;
+ vnode_t *vp = LINVFS_GET_VP(file->f_dentry->d_inode);
+ ssize_t rval;
+
+ BUG_ON(iocb->ki_pos != pos);
+
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+ VOP_READ(vp, iocb, &iov, 1, &iocb->ki_pos, ioflags, NULL, rval);
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_read(
+ struct kiocb *iocb,
+ char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_read(iocb, buf, 0, count, pos);
+}
+
+STATIC ssize_t
+linvfs_read_invis(
+ struct kiocb *iocb,
+ char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_read(iocb, buf, IO_INVIS, count, pos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_write(
+ struct kiocb *iocb,
+ const char *buf,
+ int ioflags,
+ size_t count,
+ loff_t pos)
+{
+ struct iovec iov = {(void *)buf, count};
+ struct file *file = iocb->ki_filp;
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ ssize_t rval;
+
+ BUG_ON(iocb->ki_pos != pos);
+ if (unlikely(file->f_flags & O_DIRECT)) {
+ ioflags |= IO_ISDIRECT;
+ VOP_WRITE(vp, iocb, &iov, 1, &iocb->ki_pos,
+ ioflags, NULL, rval);
+ } else {
+ down(&inode->i_sem);
+ VOP_WRITE(vp, iocb, &iov, 1, &iocb->ki_pos,
+ ioflags, NULL, rval);
+ up(&inode->i_sem);
+ }
+
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_write(
+ struct kiocb *iocb,
+ const char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_write(iocb, buf, 0, count, pos);
+}
+
+STATIC ssize_t
+linvfs_write_invis(
+ struct kiocb *iocb,
+ const char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_write(iocb, buf, IO_INVIS, count, pos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_readv(
+ struct file *file,
+ const struct iovec *iov,
+ int ioflags,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ struct kiocb kiocb;
+ ssize_t rval;
+
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = *ppos;
+
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+ VOP_READ(vp, &kiocb, iov, nr_segs, &kiocb.ki_pos, ioflags, NULL, rval);
+ if (rval == -EIOCBQUEUED)
+ rval = wait_on_sync_kiocb(&kiocb);
+
+ *ppos = kiocb.ki_pos;
+ return rval;
+}
+
+STATIC ssize_t
+linvfs_readv(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_readv(file, iov, 0, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_readv_invis(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_readv(file, iov, IO_INVIS, nr_segs, ppos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_writev(
+ struct file *file,
+ const struct iovec *iov,
+ int ioflags,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ struct kiocb kiocb;
+ ssize_t rval;
+
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = *ppos;
+ if (unlikely(file->f_flags & O_DIRECT)) {
+ ioflags |= IO_ISDIRECT;
+ VOP_WRITE(vp, &kiocb, iov, nr_segs, &kiocb.ki_pos,
+ ioflags, NULL, rval);
+ } else {
+ down(&inode->i_sem);
+ VOP_WRITE(vp, &kiocb, iov, nr_segs, &kiocb.ki_pos,
+ ioflags, NULL, rval);
+ up(&inode->i_sem);
+ }
+
+ if (rval == -EIOCBQUEUED)
+ rval = wait_on_sync_kiocb(&kiocb);
+
+ *ppos = kiocb.ki_pos;
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_writev(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_writev(file, iov, 0, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_writev_invis(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_writev(file, iov, IO_INVIS, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_sendfile(
+ struct file *filp,
+ loff_t *ppos,
+ size_t count,
+ read_actor_t actor,
+ void *target)
+{
+ vnode_t *vp = LINVFS_GET_VP(filp->f_dentry->d_inode);
+ ssize_t rval;
+
+ VOP_SENDFILE(vp, filp, ppos, 0, count, actor, target, NULL, rval);
+ return rval;
+}
+
+
+STATIC int
+linvfs_open(
+ struct inode *inode,
+ struct file *filp)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ if (!(filp->f_flags & O_LARGEFILE) && i_size_read(inode) > MAX_NON_LFS)
+ return -EFBIG;
+
+ ASSERT(vp);
+ VOP_OPEN(vp, NULL, error);
+ return -error;
+}
+
+
+STATIC int
+linvfs_release(
+ struct inode *inode,
+ struct file *filp)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error = 0;
+
+ if (vp)
+ VOP_RELEASE(vp, error);
+ return -error;
+}
+
+
+STATIC int
+linvfs_fsync(
+ struct file *filp,
+ struct dentry *dentry,
+ int datasync)
+{
+ struct inode *inode = dentry->d_inode;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+ int flags = FSYNC_WAIT;
+
+ if (datasync)
+ flags |= FSYNC_DATA;
+
+ ASSERT(vp);
+ VOP_FSYNC(vp, flags, NULL, (xfs_off_t)0, (xfs_off_t)-1, error);
+ return -error;
+}
+
+/*
+ * linvfs_readdir maps to VOP_READDIR().
+ * We need to build a uio, cred, ...
+ */
+
+#define nextdp(dp) ((struct xfs_dirent *)((char *)(dp) + (dp)->d_reclen))
+
+STATIC int
+linvfs_readdir(
+ struct file *filp,
+ void *dirent,
+ filldir_t filldir)
+{
+ int error = 0;
+ vnode_t *vp;
+ uio_t uio;
+ iovec_t iov;
+ int eof = 0;
+ caddr_t read_buf;
+ int namelen, size = 0;
+ size_t rlen = PAGE_CACHE_SIZE;
+ xfs_off_t start_offset, curr_offset;
+ xfs_dirent_t *dbp = NULL;
+
+ vp = LINVFS_GET_VP(filp->f_dentry->d_inode);
+ ASSERT(vp);
+
+ /* Try fairly hard to get memory */
+ do {
+ if ((read_buf = (caddr_t)kmalloc(rlen, GFP_KERNEL)))
+ break;
+ rlen >>= 1;
+ } while (rlen >= 1024);
+
+ if (read_buf == NULL)
+ return -ENOMEM;
+
+ uio.uio_iov = &iov;
+ uio.uio_segflg = UIO_SYSSPACE;
+ curr_offset = filp->f_pos;
+ if (filp->f_pos != 0x7fffffff)
+ uio.uio_offset = filp->f_pos;
+ else
+ uio.uio_offset = 0xffffffff;
+
+ while (!eof) {
+ uio.uio_resid = iov.iov_len = rlen;
+ iov.iov_base = read_buf;
+ uio.uio_iovcnt = 1;
+
+ start_offset = uio.uio_offset;
+
+ VOP_READDIR(vp, &uio, NULL, &eof, error);
+ if ((uio.uio_offset == start_offset) || error) {
+ size = 0;
+ break;
+ }
+
+ size = rlen - uio.uio_resid;
+ dbp = (xfs_dirent_t *)read_buf;
+ while (size > 0) {
+ namelen = strlen(dbp->d_name);
+
+ if (filldir(dirent, dbp->d_name, namelen,
+ (loff_t) curr_offset & 0x7fffffff,
+ (ino_t) dbp->d_ino,
+ DT_UNKNOWN)) {
+ goto done;
+ }
+ size -= dbp->d_reclen;
+ curr_offset = (loff_t)dbp->d_off /* & 0x7fffffff */;
+ dbp = nextdp(dbp);
+ }
+ }
+done:
+ if (!error) {
+ if (size == 0)
+ filp->f_pos = uio.uio_offset & 0x7fffffff;
+ else if (dbp)
+ filp->f_pos = curr_offset;
+ }
+
+ kfree(read_buf);
+ return -error;
+}
+
+
+STATIC int
+linvfs_file_mmap(
+ struct file *filp,
+ struct vm_area_struct *vma)
+{
+ struct inode *ip = filp->f_dentry->d_inode;
+ vnode_t *vp = LINVFS_GET_VP(ip);
+ vattr_t va = { .va_mask = XFS_AT_UPDATIME };
+ int error;
+
+ if ((vp->v_type == VREG) && (vp->v_vfsp->vfs_flag & VFS_DMI)) {
+ xfs_mount_t *mp = XFS_VFSTOM(vp->v_vfsp);
+
+ error = -XFS_SEND_MMAP(mp, vma, 0);
+ if (error)
+ return error;
+ }
+
+ vma->vm_ops = &linvfs_file_vm_ops;
+
+ VOP_SETATTR(vp, &va, XFS_AT_UPDATIME, NULL, error);
+ return 0;
+}
+
+
+STATIC int
+linvfs_ioctl(
+ struct inode *inode,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int error;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ ASSERT(vp);
+ VOP_IOCTL(vp, inode, filp, 0, cmd, arg, error);
+ VMODIFY(vp);
+
+	/* NOTE: some of the ioctls return positive numbers as a
+ * byte count indicating success, such as
+ * readlink_by_handle. So we don't "sign flip"
+ * like most other routines. This means true
+ * errors need to be returned as a negative value.
+ */
+ return error;
+}
+
+STATIC int
+linvfs_ioctl_invis(
+ struct inode *inode,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int error;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ ASSERT(vp);
+ VOP_IOCTL(vp, inode, filp, IO_INVIS, cmd, arg, error);
+ VMODIFY(vp);
+
+	/* NOTE: some of the ioctls return positive numbers as a
+ * byte count indicating success, such as
+ * readlink_by_handle. So we don't "sign flip"
+ * like most other routines. This means true
+ * errors need to be returned as a negative value.
+ */
+ return error;
+}
+
+#ifdef HAVE_VMOP_MPROTECT
+STATIC int
+linvfs_mprotect(
+ struct vm_area_struct *vma,
+ unsigned int newflags)
+{
+ vnode_t *vp = LINVFS_GET_VP(vma->vm_file->f_dentry->d_inode);
+ int error = 0;
+
+ if ((vp->v_type == VREG) && (vp->v_vfsp->vfs_flag & VFS_DMI)) {
+ if ((vma->vm_flags & VM_MAYSHARE) &&
+ (newflags & VM_WRITE) && !(vma->vm_flags & VM_WRITE)) {
+ xfs_mount_t *mp = XFS_VFSTOM(vp->v_vfsp);
+
+ error = XFS_SEND_MMAP(mp, vma, VM_WRITE);
+ }
+ }
+ return error;
+}
+#endif /* HAVE_VMOP_MPROTECT */
+
+
+struct file_operations linvfs_file_operations = {
+ .llseek = generic_file_llseek,
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .readv = linvfs_readv,
+ .writev = linvfs_writev,
+ .aio_read = linvfs_read,
+ .aio_write = linvfs_write,
+ .sendfile = linvfs_sendfile,
+ .ioctl = linvfs_ioctl,
+ .mmap = linvfs_file_mmap,
+ .open = linvfs_open,
+ .release = linvfs_release,
+ .fsync = linvfs_fsync,
+};
+
+struct file_operations linvfs_invis_file_operations = {
+ .llseek = generic_file_llseek,
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .readv = linvfs_readv_invis,
+ .writev = linvfs_writev_invis,
+ .aio_read = linvfs_read_invis,
+ .aio_write = linvfs_write_invis,
+ .sendfile = linvfs_sendfile,
+ .ioctl = linvfs_ioctl_invis,
+ .mmap = linvfs_file_mmap,
+ .open = linvfs_open,
+ .release = linvfs_release,
+ .fsync = linvfs_fsync,
+};
+
+
+struct file_operations linvfs_dir_operations = {
+ .read = generic_read_dir,
+ .readdir = linvfs_readdir,
+ .ioctl = linvfs_ioctl,
+ .fsync = linvfs_fsync,
+};
+
+static struct vm_operations_struct linvfs_file_vm_ops = {
+ .nopage = filemap_nopage,
+#ifdef HAVE_VMOP_MPROTECT
+ .mprotect = linvfs_mprotect,
+#endif
+};
--- /dev/null
+/*
+ * Copyright (c) 2000-2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+/*
+ * Stub for no-op vnode operations that succeed (return a zero error status).
+ */
+int
+fs_noerr()
+{
+ return 0;
+}
+
+/*
+ * Operation unsupported under this file system.
+ */
+int
+fs_nosys()
+{
+ return ENOSYS;
+}
+
+/*
+ * Stub for inactive, strategy, and read/write lock/unlock. Does nothing.
+ */
+/* ARGSUSED */
+void
+fs_noval()
+{
+}
+
+/*
+ * vnode pcache layer for vnode_tosspages.
+ * 'last' parameter unused but left in for IRIX compatibility
+ */
+void
+fs_tosspages(
+ bhv_desc_t *bdp,
+ xfs_off_t first,
+ xfs_off_t last,
+ int fiopt)
+{
+ vnode_t *vp = BHV_TO_VNODE(bdp);
+ struct inode *ip = LINVFS_GET_IP(vp);
+
+ if (VN_CACHED(vp))
+ truncate_inode_pages(ip->i_mapping, first);
+}
+
+
+/*
+ * vnode pcache layer for vnode_flushinval_pages.
+ * 'last' parameter unused but left in for IRIX compatibility
+ */
+void
+fs_flushinval_pages(
+ bhv_desc_t *bdp,
+ xfs_off_t first,
+ xfs_off_t last,
+ int fiopt)
+{
+ vnode_t *vp = BHV_TO_VNODE(bdp);
+ struct inode *ip = LINVFS_GET_IP(vp);
+
+ if (VN_CACHED(vp)) {
+ filemap_fdatawrite(ip->i_mapping);
+ filemap_fdatawait(ip->i_mapping);
+
+ truncate_inode_pages(ip->i_mapping, first);
+ }
+}
+
+/*
+ * vnode pcache layer for vnode_flush_pages.
+ * 'last' parameter unused but left in for IRIX compatibility
+ */
+int
+fs_flush_pages(
+ bhv_desc_t *bdp,
+ xfs_off_t first,
+ xfs_off_t last,
+ uint64_t flags,
+ int fiopt)
+{
+ vnode_t *vp = BHV_TO_VNODE(bdp);
+ struct inode *ip = LINVFS_GET_IP(vp);
+
+ if (VN_CACHED(vp)) {
+ filemap_fdatawrite(ip->i_mapping);
+ filemap_fdatawait(ip->i_mapping);
+ }
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * This file contains globals needed by XFS that were normally defined
+ * somewhere else in IRIX.
+ */
+
+#include "xfs.h"
+#include "xfs_cred.h"
+#include "xfs_sysctl.h"
+
+/*
+ * System memory size - used to scale certain data structures in XFS.
+ */
+unsigned long xfs_physmem;
+
+/*
+ * Tunable XFS parameters.  xfs_params is required even when CONFIG_SYSCTL=n,
+ * since other XFS code uses these values.  Times are measured in centisecs
+ * (i.e. 100ths of a second).
+ */
+xfs_param_t xfs_params = {
+ /* MIN DFLT MAX */
+ .restrict_chown = { 0, 1, 1 },
+ .sgid_inherit = { 0, 0, 1 },
+ .symlink_mode = { 0, 0, 1 },
+ .panic_mask = { 0, 0, 127 },
+ .error_level = { 0, 3, 11 },
+ .syncd_timer = { 1*100, 30*100, 7200*100},
+ .stats_clear = { 0, 0, 1 },
+ .inherit_sync = { 0, 1, 1 },
+ .inherit_nodump = { 0, 1, 1 },
+ .inherit_noatim = { 0, 1, 1 },
+ .xfs_buf_timer = { 100/2, 1*100, 30*100 },
+ .xfs_buf_age = { 1*100, 15*100, 7200*100},
+};
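+
+/*
+ * For example, reading the table above: the syncd_timer default of
+ * 30*100 centisecs is 30 seconds, clamped to the range [1s, 7200s];
+ * xfs_buf_timer defaults to 1 second (1*100) with a minimum of half
+ * a second (100/2 centisecs).
+ */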
+
+/*
+ * Global system credential structure.
+ */
+cred_t sys_cred_val, *sys_cred = &sys_cred_val;
+
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+#include "xfs_fs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_trans.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_alloc.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_btree.h"
+#include "xfs_ialloc.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_bmap.h"
+#include "xfs_bit.h"
+#include "xfs_rtalloc.h"
+#include "xfs_error.h"
+#include "xfs_itable.h"
+#include "xfs_rw.h"
+#include "xfs_acl.h"
+#include "xfs_cap.h"
+#include "xfs_mac.h"
+#include "xfs_attr.h"
+#include "xfs_buf_item.h"
+#include "xfs_utils.h"
+#include "xfs_dfrag.h"
+#include "xfs_fsops.h"
+
+#include <linux/dcache.h>
+#include <linux/mount.h>
+#include <linux/namei.h>
+#include <linux/pagemap.h>
+
+/*
+ * ioctl commands that are used by Linux filesystems
+ */
+#define XFS_IOC_GETXFLAGS _IOR('f', 1, long)
+#define XFS_IOC_SETXFLAGS _IOW('f', 2, long)
+#define XFS_IOC_GETVERSION _IOR('v', 1, long)
+
+
+/*
+ * xfs_find_handle maps from userspace xfs_fsop_handlereq structure to
+ * a file or fs handle.
+ *
+ * XFS_IOC_PATH_TO_FSHANDLE
+ * returns fs handle for a mount point or path within that mount point
+ * XFS_IOC_FD_TO_HANDLE
+ * returns full handle for a FD opened in user space
+ * XFS_IOC_PATH_TO_HANDLE
+ * returns full handle for a path
+ */
+STATIC int
+xfs_find_handle(
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int hsize;
+ xfs_handle_t handle;
+ xfs_fsop_handlereq_t hreq;
+ struct inode *inode;
+ struct vnode *vp;
+
+ if (copy_from_user(&hreq, (xfs_fsop_handlereq_t *)arg, sizeof(hreq)))
+ return -XFS_ERROR(EFAULT);
+
+ memset((char *)&handle, 0, sizeof(handle));
+
+ switch (cmd) {
+ case XFS_IOC_PATH_TO_FSHANDLE:
+ case XFS_IOC_PATH_TO_HANDLE: {
+ struct nameidata nd;
+ int error;
+
+ error = user_path_walk_link(hreq.path, &nd);
+ if (error)
+ return error;
+
+ ASSERT(nd.dentry);
+ ASSERT(nd.dentry->d_inode);
+ inode = igrab(nd.dentry->d_inode);
+ path_release(&nd);
+ break;
+ }
+
+ case XFS_IOC_FD_TO_HANDLE: {
+ struct file *file;
+
+ file = fget(hreq.fd);
+ if (!file)
+ return -EBADF;
+
+ ASSERT(file->f_dentry);
+ ASSERT(file->f_dentry->d_inode);
+ inode = igrab(file->f_dentry->d_inode);
+ fput(file);
+ break;
+ }
+
+ default:
+ ASSERT(0);
+ return -XFS_ERROR(EINVAL);
+ }
+
+ if (inode->i_sb->s_magic != XFS_SB_MAGIC) {
+ /* we're not in XFS anymore, Toto */
+ iput(inode);
+ return -XFS_ERROR(EINVAL);
+ }
+
+ /* we need the vnode */
+ vp = LINVFS_GET_VP(inode);
+ if (vp->v_type != VREG && vp->v_type != VDIR && vp->v_type != VLNK) {
+ iput(inode);
+ return -XFS_ERROR(EBADF);
+ }
+
+ /* now we can grab the fsid */
+ memcpy(&handle.ha_fsid, vp->v_vfsp->vfs_altfsid, sizeof(xfs_fsid_t));
+ hsize = sizeof(xfs_fsid_t);
+
+ if (cmd != XFS_IOC_PATH_TO_FSHANDLE) {
+ xfs_inode_t *ip;
+ bhv_desc_t *bhv;
+ int lock_mode;
+
+ /* need to get access to the xfs_inode to read the generation */
+ bhv = vn_bhv_lookup_unlocked(VN_BHV_HEAD(vp), &xfs_vnodeops);
+ ASSERT(bhv);
+ ip = XFS_BHVTOI(bhv);
+ ASSERT(ip);
+ lock_mode = xfs_ilock_map_shared(ip);
+
+ /* fill in fid section of handle from inode */
+ handle.ha_fid.xfs_fid_len = sizeof(xfs_fid_t) -
+ sizeof(handle.ha_fid.xfs_fid_len);
+ handle.ha_fid.xfs_fid_pad = 0;
+ handle.ha_fid.xfs_fid_gen = ip->i_d.di_gen;
+ handle.ha_fid.xfs_fid_ino = ip->i_ino;
+
+ xfs_iunlock_map_shared(ip, lock_mode);
+
+ hsize = XFS_HSIZE(handle);
+ }
+
+ /* now copy our handle into the user buffer & write out the size */
+ if (copy_to_user((xfs_handle_t *)hreq.ohandle, &handle, hsize) ||
+ copy_to_user(hreq.ohandlen, &hsize, sizeof(__s32))) {
+ iput(inode);
+ return -XFS_ERROR(EFAULT);
+ }
+
+ iput(inode);
+ return 0;
+}
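+
+/*
+ * Minimal userspace sketch of the above (illustrative only; "fd" is
+ * any open descriptor on the mounted XFS filesystem, and the field
+ * names follow the xfs_fsop_handlereq_t usage in xfs_find_handle):
+ *
+ *	xfs_fsop_handlereq_t hreq = { 0 };
+ *	xfs_handle_t handle;
+ *	__s32 hlen;
+ *
+ *	hreq.path = "/mnt/xfs/some/file";
+ *	hreq.ohandle = &handle;
+ *	hreq.ohandlen = &hlen;
+ *	if (ioctl(fd, XFS_IOC_PATH_TO_HANDLE, &hreq) == 0)
+ *		... handle now holds hlen valid bytes ...
+ */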
+
+
+/*
+ * Convert userspace handle data into vnode (and inode).
+ * We [ab]use the fact that all the fsop_handlereq ioctl calls
+ * have a data structure argument whose first component is always
+ * an xfs_fsop_handlereq_t, so we can cast to and from this type.
+ * This allows us to optimise the copy_from_user calls and gives
+ * a handy, shared routine.
+ *
+ * If no error, caller must always VN_RELE the returned vp.
+ */
+STATIC int
+xfs_vget_fsop_handlereq(
+ xfs_mount_t *mp,
+ struct inode *parinode, /* parent inode pointer */
+ int cap, /* capability level for op */
+ unsigned long arg, /* userspace data pointer */
+ unsigned long size, /* size of expected struct */
+ /* output arguments */
+ xfs_fsop_handlereq_t *hreq,
+ vnode_t **vp,
+ struct inode **inode)
+{
+ void *hanp;
+ size_t hlen;
+ xfs_fid_t *xfid;
+ xfs_handle_t *handlep;
+ xfs_handle_t handle;
+ xfs_inode_t *ip;
+ struct inode *inodep;
+ vnode_t *vpp;
+ xfs_ino_t ino;
+ __u32 igen;
+ int error;
+
+ if (!capable(cap))
+ return XFS_ERROR(EPERM);
+
+ /*
+ * Only allow handle opens under a directory.
+ */
+ if (!S_ISDIR(parinode->i_mode))
+ return XFS_ERROR(ENOTDIR);
+
+ /*
+ * Copy the handle down from the user and validate
+ * that it looks to be in the correct format.
+ */
+ if (copy_from_user(hreq, (struct xfs_fsop_handlereq *)arg, size))
+ return XFS_ERROR(EFAULT);
+
+ hanp = hreq->ihandle;
+ hlen = hreq->ihandlen;
+ handlep = &handle;
+
+ if (hlen < sizeof(handlep->ha_fsid) || hlen > sizeof(*handlep))
+ return XFS_ERROR(EINVAL);
+ if (copy_from_user(handlep, hanp, hlen))
+ return XFS_ERROR(EFAULT);
+ if (hlen < sizeof(*handlep))
+ memset(((char *)handlep) + hlen, 0, sizeof(*handlep) - hlen);
+ if (hlen > sizeof(handlep->ha_fsid)) {
+ if (handlep->ha_fid.xfs_fid_len !=
+ (hlen - sizeof(handlep->ha_fsid)
+ - sizeof(handlep->ha_fid.xfs_fid_len))
+ || handlep->ha_fid.xfs_fid_pad)
+ return XFS_ERROR(EINVAL);
+ }
+
+ /*
+ * Crack the handle, obtain the inode # & generation #
+ */
+ xfid = (struct xfs_fid *)&handlep->ha_fid;
+ if (xfid->xfs_fid_len == sizeof(*xfid) - sizeof(xfid->xfs_fid_len)) {
+ ino = xfid->xfs_fid_ino;
+ igen = xfid->xfs_fid_gen;
+ } else {
+ return XFS_ERROR(EINVAL);
+ }
+
+ /*
+ * Get the XFS inode, building a vnode to go with it.
+ */
+ error = xfs_iget(mp, NULL, ino, XFS_ILOCK_SHARED, &ip, 0);
+ if (error)
+ return error;
+ if (ip == NULL)
+ return XFS_ERROR(EIO);
+ if (ip->i_d.di_mode == 0 || ip->i_d.di_gen != igen) {
+ xfs_iput_new(ip, XFS_ILOCK_SHARED);
+ return XFS_ERROR(ENOENT);
+ }
+
+ vpp = XFS_ITOV(ip);
+ inodep = LINVFS_GET_IP(vpp);
+ xfs_iunlock(ip, XFS_ILOCK_SHARED);
+
+ *vp = vpp;
+ *inode = inodep;
+ return 0;
+}
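+
+/*
+ * Handle layout implied by the validation above: a filesystem handle
+ * is just the ha_fsid portion, while a full file handle appends the
+ * xfs_fid_t (length, pad, generation, inode number).  Hence the
+ * accepted hlen range of [sizeof(ha_fsid), sizeof(xfs_handle_t)].
+ */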
+
+STATIC int
+xfs_open_by_handle(
+ xfs_mount_t *mp,
+ unsigned long arg,
+ struct file *parfilp,
+ struct inode *parinode)
+{
+ int error;
+ int new_fd;
+ int permflag;
+ struct file *filp;
+ struct inode *inode;
+ struct dentry *dentry;
+ vnode_t *vp;
+ xfs_fsop_handlereq_t hreq;
+
+ error = xfs_vget_fsop_handlereq(mp, parinode, CAP_SYS_ADMIN, arg,
+ sizeof(xfs_fsop_handlereq_t),
+ &hreq, &vp, &inode);
+ if (error)
+ return -error;
+
+ /* Restrict xfs_open_by_handle to directories & regular files. */
+ if (!(S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode))) {
+ iput(inode);
+ return -XFS_ERROR(EINVAL);
+ }
+
+#if BITS_PER_LONG != 32
+ hreq.oflags |= O_LARGEFILE;
+#endif
+ /* Put open permission in namei format. */
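+	/* (O_RDONLY/O_WRONLY/O_RDWR are 0/1/2 in the open flags; the
+	 * conditional increment below turns them into the
+	 * FMODE_READ|FMODE_WRITE bitmask 1/2/3 that the checks which
+	 * follow expect.) */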
+ permflag = hreq.oflags;
+ if ((permflag+1) & O_ACCMODE)
+ permflag++;
+ if (permflag & O_TRUNC)
+ permflag |= 2;
+
+ if ((!(permflag & O_APPEND) || (permflag & O_TRUNC)) &&
+ (permflag & FMODE_WRITE) && IS_APPEND(inode)) {
+ iput(inode);
+ return -XFS_ERROR(EPERM);
+ }
+
+ if ((permflag & FMODE_WRITE) && IS_IMMUTABLE(inode)) {
+ iput(inode);
+ return -XFS_ERROR(EACCES);
+ }
+
+ /* Can't write directories. */
+ if ( S_ISDIR(inode->i_mode) && (permflag & FMODE_WRITE)) {
+ iput(inode);
+ return -XFS_ERROR(EISDIR);
+ }
+
+ if ((new_fd = get_unused_fd()) < 0) {
+ iput(inode);
+ return new_fd;
+ }
+
+ dentry = d_alloc_anon(inode);
+ if (dentry == NULL) {
+ iput(inode);
+ put_unused_fd(new_fd);
+ return -XFS_ERROR(ENOMEM);
+ }
+
+ /* Ensure umount returns EBUSY on umounts while this file is open. */
+ mntget(parfilp->f_vfsmnt);
+
+ /* Create file pointer. */
+ filp = dentry_open(dentry, parfilp->f_vfsmnt, hreq.oflags);
+ if (IS_ERR(filp)) {
+ put_unused_fd(new_fd);
+ return -XFS_ERROR(-PTR_ERR(filp));
+ }
+ if (inode->i_mode & S_IFREG)
+ filp->f_op = &linvfs_invis_file_operations;
+
+ fd_install(new_fd, filp);
+ return new_fd;
+}
+
+STATIC int
+xfs_readlink_by_handle(
+ xfs_mount_t *mp,
+ unsigned long arg,
+ struct file *parfilp,
+ struct inode *parinode)
+{
+ int error;
+ struct iovec aiov;
+ struct uio auio;
+ struct inode *inode;
+ xfs_fsop_handlereq_t hreq;
+ vnode_t *vp;
+ __u32 olen;
+
+ error = xfs_vget_fsop_handlereq(mp, parinode, CAP_SYS_ADMIN, arg,
+ sizeof(xfs_fsop_handlereq_t),
+ &hreq, &vp, &inode);
+ if (error)
+ return -error;
+
+ /* Restrict this handle operation to symlinks only. */
+ if (vp->v_type != VLNK) {
+ VN_RELE(vp);
+ return -XFS_ERROR(EINVAL);
+ }
+
+ if (copy_from_user(&olen, hreq.ohandlen, sizeof(__u32))) {
+ VN_RELE(vp);
+ return -XFS_ERROR(EFAULT);
+ }
+ aiov.iov_len = olen;
+ aiov.iov_base = hreq.ohandle;
+
+ auio.uio_iov = &aiov;
+ auio.uio_iovcnt = 1;
+ auio.uio_offset = 0;
+ auio.uio_segflg = UIO_USERSPACE;
+ auio.uio_resid = olen;
+
+ VOP_READLINK(vp, &auio, IO_INVIS, NULL, error);
+
+ VN_RELE(vp);
+ return (olen - auio.uio_resid);
+}
+
+STATIC int
+xfs_fssetdm_by_handle(
+ xfs_mount_t *mp,
+ unsigned long arg,
+ struct file *parfilp,
+ struct inode *parinode)
+{
+ int error;
+ struct fsdmidata fsd;
+ xfs_fsop_setdm_handlereq_t dmhreq;
+ struct inode *inode;
+ bhv_desc_t *bdp;
+ vnode_t *vp;
+
+ error = xfs_vget_fsop_handlereq(mp, parinode, CAP_MKNOD, arg,
+ sizeof(xfs_fsop_setdm_handlereq_t),
+ (xfs_fsop_handlereq_t *)&dmhreq,
+ &vp, &inode);
+ if (error)
+ return -error;
+
+ if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) {
+ VN_RELE(vp);
+ return -XFS_ERROR(EPERM);
+ }
+
+ if (copy_from_user(&fsd, dmhreq.data, sizeof(fsd))) {
+ VN_RELE(vp);
+ return -XFS_ERROR(EFAULT);
+ }
+
+ bdp = bhv_base_unlocked(VN_BHV_HEAD(vp));
+ error = xfs_set_dmattrs(bdp, fsd.fsd_dmevmask, fsd.fsd_dmstate, NULL);
+
+ VN_RELE(vp);
+ if (error)
+ return -error;
+ return 0;
+}
+
+STATIC int
+xfs_attrlist_by_handle(
+ xfs_mount_t *mp,
+ unsigned long arg,
+ struct file *parfilp,
+ struct inode *parinode)
+{
+ int error;
+ attrlist_cursor_kern_t *cursor;
+ xfs_fsop_attrlist_handlereq_t al_hreq;
+ struct inode *inode;
+ vnode_t *vp;
+
+ error = xfs_vget_fsop_handlereq(mp, parinode, CAP_SYS_ADMIN, arg,
+ sizeof(xfs_fsop_attrlist_handlereq_t),
+ (xfs_fsop_handlereq_t *)&al_hreq,
+ &vp, &inode);
+ if (error)
+ return -error;
+
+ cursor = (attrlist_cursor_kern_t *)&al_hreq.pos;
+ VOP_ATTR_LIST(vp, al_hreq.buffer, al_hreq.buflen, al_hreq.flags,
+ cursor, NULL, error);
+ VN_RELE(vp);
+ if (error)
+ return -error;
+ return 0;
+}
+
+STATIC int
+xfs_attrmulti_by_handle(
+ xfs_mount_t *mp,
+ unsigned long arg,
+ struct file *parfilp,
+ struct inode *parinode)
+{
+ int error;
+ xfs_attr_multiop_t *ops;
+ xfs_fsop_attrmulti_handlereq_t am_hreq;
+ struct inode *inode;
+ vnode_t *vp;
+ int i, size;
+
+ error = xfs_vget_fsop_handlereq(mp, parinode, CAP_SYS_ADMIN, arg,
+ sizeof(xfs_fsop_attrmulti_handlereq_t),
+ (xfs_fsop_handlereq_t *)&am_hreq,
+ &vp, &inode);
+ if (error)
+ return -error;
+
+ size = am_hreq.opcount * sizeof(attr_multiop_t);
+ ops = (xfs_attr_multiop_t *)kmalloc(size, GFP_KERNEL);
+ if (!ops) {
+ VN_RELE(vp);
+ return -XFS_ERROR(ENOMEM);
+ }
+
+ if (copy_from_user(ops, am_hreq.ops, size)) {
+ kfree(ops);
+ VN_RELE(vp);
+ return -XFS_ERROR(EFAULT);
+ }
+
+ for (i = 0; i < am_hreq.opcount; i++) {
+ switch(ops[i].am_opcode) {
+ case ATTR_OP_GET:
+ VOP_ATTR_GET(vp,ops[i].am_attrname, ops[i].am_attrvalue,
+ &ops[i].am_length, ops[i].am_flags,
+ NULL, ops[i].am_error);
+ break;
+ case ATTR_OP_SET:
+ if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) {
+ ops[i].am_error = EPERM;
+ break;
+ }
+ VOP_ATTR_SET(vp,ops[i].am_attrname, ops[i].am_attrvalue,
+ ops[i].am_length, ops[i].am_flags,
+ NULL, ops[i].am_error);
+ break;
+ case ATTR_OP_REMOVE:
+ if (IS_IMMUTABLE(inode) || IS_APPEND(inode)) {
+ ops[i].am_error = EPERM;
+ break;
+ }
+ VOP_ATTR_REMOVE(vp, ops[i].am_attrname, ops[i].am_flags,
+ NULL, ops[i].am_error);
+ break;
+ default:
+ ops[i].am_error = EINVAL;
+ }
+ }
+
+ if (copy_to_user(am_hreq.ops, ops, size))
+ error = -XFS_ERROR(EFAULT);
+
+ kfree(ops);
+ VN_RELE(vp);
+ return error;
+}
+
+/* Prototypes for a few of the stack-hungry cases that have
+ * their own functions.  The functions are defined after their use
+ * so gcc doesn't get fancy and inline them at -O3. */
+
+STATIC int
+xfs_ioc_space(
+ bhv_desc_t *bdp,
+ vnode_t *vp,
+ struct file *filp,
+ int flags,
+ unsigned int cmd,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_bulkstat(
+ xfs_mount_t *mp,
+ unsigned int cmd,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_fsgeometry_v1(
+ xfs_mount_t *mp,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_fsgeometry(
+ xfs_mount_t *mp,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_xattr(
+ vnode_t *vp,
+ xfs_inode_t *ip,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_getbmap(
+ bhv_desc_t *bdp,
+ struct file *filp,
+ int flags,
+ unsigned int cmd,
+ unsigned long arg);
+
+STATIC int
+xfs_ioc_getbmapx(
+ bhv_desc_t *bdp,
+ unsigned long arg);
+
+int
+xfs_ioctl(
+ bhv_desc_t *bdp,
+ struct inode *inode,
+ struct file *filp,
+ int ioflags,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int error;
+ vnode_t *vp;
+ xfs_inode_t *ip;
+ xfs_mount_t *mp;
+
+ vp = LINVFS_GET_VP(inode);
+
+ vn_trace_entry(vp, "xfs_ioctl", (inst_t *)__return_address);
+
+ ip = XFS_BHVTOI(bdp);
+ mp = ip->i_mount;
+
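+	/*
+	 * Note on conventions: the core XFS routines called from the
+	 * cases below return positive errnos (via XFS_ERROR()), so the
+	 * results are negated before being handed back to the Linux
+	 * ioctl layer, which expects negative errnos.
+	 */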
+ switch (cmd) {
+
+ case XFS_IOC_ALLOCSP:
+ case XFS_IOC_FREESP:
+ case XFS_IOC_RESVSP:
+ case XFS_IOC_UNRESVSP:
+ case XFS_IOC_ALLOCSP64:
+ case XFS_IOC_FREESP64:
+ case XFS_IOC_RESVSP64:
+ case XFS_IOC_UNRESVSP64:
+ /*
+ * Only allow the sys admin to reserve space unless
+ * unwritten extents are enabled.
+ */
+ if (!XFS_SB_VERSION_HASEXTFLGBIT(&mp->m_sb) &&
+ !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ return xfs_ioc_space(bdp, vp, filp, ioflags, cmd, arg);
+
+ case XFS_IOC_DIOINFO: {
+ struct dioattr da;
+ xfs_buftarg_t *target =
+ (ip->i_d.di_flags & XFS_DIFLAG_REALTIME) ?
+ mp->m_rtdev_targp : mp->m_ddev_targp;
+
+ da.d_mem = da.d_miniosz = 1 << target->pbr_sshift;
+		/* The maximum size direct I/O will do in one go */
+ da.d_maxiosz = 64 * PAGE_CACHE_SIZE;
+
+ if (copy_to_user((struct dioattr *)arg, &da, sizeof(da)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_FSBULKSTAT_SINGLE:
+ case XFS_IOC_FSBULKSTAT:
+ case XFS_IOC_FSINUMBERS:
+ return xfs_ioc_bulkstat(mp, cmd, arg);
+
+ case XFS_IOC_FSGEOMETRY_V1:
+ return xfs_ioc_fsgeometry_v1(mp, arg);
+
+ case XFS_IOC_FSGEOMETRY:
+ return xfs_ioc_fsgeometry(mp, arg);
+
+ case XFS_IOC_GETVERSION:
+ case XFS_IOC_GETXFLAGS:
+ case XFS_IOC_SETXFLAGS:
+ case XFS_IOC_FSGETXATTR:
+ case XFS_IOC_FSSETXATTR:
+ case XFS_IOC_FSGETXATTRA:
+ return xfs_ioc_xattr(vp, ip, filp, cmd, arg);
+
+ case XFS_IOC_FSSETDM: {
+ struct fsdmidata dmi;
+
+ if (copy_from_user(&dmi, (struct fsdmidata *)arg, sizeof(dmi)))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_set_dmattrs(bdp, dmi.fsd_dmevmask, dmi.fsd_dmstate,
+ NULL);
+ return -error;
+ }
+
+ case XFS_IOC_GETBMAP:
+ case XFS_IOC_GETBMAPA:
+ return xfs_ioc_getbmap(bdp, filp, ioflags, cmd, arg);
+
+ case XFS_IOC_GETBMAPX:
+ return xfs_ioc_getbmapx(bdp, arg);
+
+ case XFS_IOC_FD_TO_HANDLE:
+ case XFS_IOC_PATH_TO_HANDLE:
+ case XFS_IOC_PATH_TO_FSHANDLE:
+ return xfs_find_handle(cmd, arg);
+
+ case XFS_IOC_OPEN_BY_HANDLE:
+ return xfs_open_by_handle(mp, arg, filp, inode);
+
+ case XFS_IOC_FSSETDM_BY_HANDLE:
+ return xfs_fssetdm_by_handle(mp, arg, filp, inode);
+
+ case XFS_IOC_READLINK_BY_HANDLE:
+ return xfs_readlink_by_handle(mp, arg, filp, inode);
+
+ case XFS_IOC_ATTRLIST_BY_HANDLE:
+ return xfs_attrlist_by_handle(mp, arg, filp, inode);
+
+ case XFS_IOC_ATTRMULTI_BY_HANDLE:
+ return xfs_attrmulti_by_handle(mp, arg, filp, inode);
+
+ case XFS_IOC_SWAPEXT: {
+ error = xfs_swapext((struct xfs_swapext *)arg);
+ return -error;
+ }
+
+ case XFS_IOC_FSCOUNTS: {
+ xfs_fsop_counts_t out;
+
+ error = xfs_fs_counts(mp, &out);
+ if (error)
+ return -error;
+
+ if (copy_to_user((char *)arg, &out, sizeof(out)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_SET_RESBLKS: {
+ xfs_fsop_resblks_t inout;
+ __uint64_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&inout, (char *)arg, sizeof(inout)))
+ return -XFS_ERROR(EFAULT);
+
+ /* input parameter is passed in resblks field of structure */
+ in = inout.resblks;
+ error = xfs_reserve_blocks(mp, &in, &inout);
+ if (error)
+ return -error;
+
+ if (copy_to_user((char *)arg, &inout, sizeof(inout)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_GET_RESBLKS: {
+ xfs_fsop_resblks_t out;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ error = xfs_reserve_blocks(mp, NULL, &out);
+ if (error)
+ return -error;
+
+ if (copy_to_user((char *)arg, &out, sizeof(out)))
+ return -XFS_ERROR(EFAULT);
+
+ return 0;
+ }
+
+ case XFS_IOC_FSGROWFSDATA: {
+ xfs_growfs_data_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&in, (char *)arg, sizeof(in)))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_growfs_data(mp, &in);
+ return -error;
+ }
+
+ case XFS_IOC_FSGROWFSLOG: {
+ xfs_growfs_log_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&in, (char *)arg, sizeof(in)))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_growfs_log(mp, &in);
+ return -error;
+ }
+
+ case XFS_IOC_FSGROWFSRT: {
+ xfs_growfs_rt_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&in, (char *)arg, sizeof(in)))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_growfs_rt(mp, &in);
+ return -error;
+ }
+
+ case XFS_IOC_FREEZE:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ freeze_bdev(inode->i_sb->s_bdev);
+ return 0;
+
+ case XFS_IOC_THAW:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+ thaw_bdev(inode->i_sb->s_bdev, inode->i_sb);
+ return 0;
+
+ case XFS_IOC_GOINGDOWN: {
+ __uint32_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (get_user(in, (__uint32_t *)arg))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_fs_goingdown(mp, in);
+ return -error;
+ }
+
+ case XFS_IOC_ERROR_INJECTION: {
+ xfs_error_injection_t in;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (copy_from_user(&in, (char *)arg, sizeof(in)))
+ return -XFS_ERROR(EFAULT);
+
+ error = xfs_errortag_add(in.errtag, mp);
+ return -error;
+ }
+
+ case XFS_IOC_ERROR_CLEARALL:
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ error = xfs_errortag_clearall(mp);
+ return -error;
+
+ default:
+ return -ENOTTY;
+ }
+}
+
+STATIC int
+xfs_ioc_space(
+ bhv_desc_t *bdp,
+ vnode_t *vp,
+ struct file *filp,
+ int ioflags,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ xfs_flock64_t bf;
+ int attr_flags = 0;
+ int error;
+
+ if (vp->v_inode.i_flags & (S_IMMUTABLE|S_APPEND))
+ return -XFS_ERROR(EPERM);
+
+	if (!(filp->f_mode & FMODE_WRITE))
+ return -XFS_ERROR(EBADF);
+
+ if (vp->v_type != VREG)
+ return -XFS_ERROR(EINVAL);
+
+ if (copy_from_user(&bf, (xfs_flock64_t *)arg, sizeof(bf)))
+ return -XFS_ERROR(EFAULT);
+
+ if (filp->f_flags & (O_NDELAY|O_NONBLOCK))
+ attr_flags |= ATTR_NONBLOCK;
+ if (ioflags & IO_INVIS)
+ attr_flags |= ATTR_DMI;
+
+ error = xfs_change_file_space(bdp, cmd, &bf, filp->f_pos,
+ NULL, attr_flags);
+ return -error;
+}
+
+STATIC int
+xfs_ioc_bulkstat(
+ xfs_mount_t *mp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ xfs_fsop_bulkreq_t bulkreq;
+ int count; /* # of records returned */
+ xfs_ino_t inlast; /* last inode number */
+ int done;
+ int error;
+
+	/* 'done' is set by the bulkstat routines once the inode scan */
+	/* is complete (unused here, but used in dmapi) */
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (XFS_FORCED_SHUTDOWN(mp))
+ return -XFS_ERROR(EIO);
+
+ if (copy_from_user(&bulkreq, (xfs_fsop_bulkreq_t *)arg,
+ sizeof(xfs_fsop_bulkreq_t)))
+ return -XFS_ERROR(EFAULT);
+
+ if (copy_from_user(&inlast, (__s64 *)bulkreq.lastip,
+ sizeof(__s64)))
+ return -XFS_ERROR(EFAULT);
+
+ if ((count = bulkreq.icount) <= 0)
+ return -XFS_ERROR(EINVAL);
+
+ if (cmd == XFS_IOC_FSINUMBERS)
+ error = xfs_inumbers(mp, &inlast, &count,
+ bulkreq.ubuffer);
+ else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE)
+ error = xfs_bulkstat_single(mp, &inlast,
+ bulkreq.ubuffer, &done);
+ else { /* XFS_IOC_FSBULKSTAT */
+ if (count == 1 && inlast != 0) {
+ inlast++;
+ error = xfs_bulkstat_single(mp, &inlast,
+ bulkreq.ubuffer, &done);
+ } else {
+ error = xfs_bulkstat(mp, &inlast, &count,
+ (bulkstat_one_pf)xfs_bulkstat_one, NULL,
+ sizeof(xfs_bstat_t), bulkreq.ubuffer,
+ BULKSTAT_FG_QUICK, &done);
+ }
+ }
+
+ if (error)
+ return -error;
+
+ if (bulkreq.ocount != NULL) {
+ if (copy_to_user((xfs_ino_t *)bulkreq.lastip, &inlast,
+ sizeof(xfs_ino_t)))
+ return -XFS_ERROR(EFAULT);
+
+ if (copy_to_user((__s32 *)bulkreq.ocount, &count,
+ sizeof(count)))
+ return -XFS_ERROR(EFAULT);
+ }
+
+ return 0;
+}
+
+STATIC int
+xfs_ioc_fsgeometry_v1(
+ xfs_mount_t *mp,
+ unsigned long arg)
+{
+ xfs_fsop_geom_v1_t fsgeo;
+ int error;
+
+ error = xfs_fs_geometry(mp, (xfs_fsop_geom_t *)&fsgeo, 3);
+ if (error)
+ return -error;
+
+ if (copy_to_user((xfs_fsop_geom_t *)arg, &fsgeo, sizeof(fsgeo)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+}
+
+STATIC int
+xfs_ioc_fsgeometry(
+ xfs_mount_t *mp,
+ unsigned long arg)
+{
+ xfs_fsop_geom_t fsgeo;
+ int error;
+
+ error = xfs_fs_geometry(mp, &fsgeo, 4);
+ if (error)
+ return -error;
+
+ if (copy_to_user((xfs_fsop_geom_t *)arg, &fsgeo, sizeof(fsgeo)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+}
+
+/*
+ * Linux extended inode flags interface.
+ */
+#define LINUX_XFLAG_SYNC 0x00000008 /* Synchronous updates */
+#define LINUX_XFLAG_IMMUTABLE 0x00000010 /* Immutable file */
+#define LINUX_XFLAG_APPEND 0x00000020 /* writes to file may only append */
+#define LINUX_XFLAG_NODUMP 0x00000040 /* do not dump file */
+#define LINUX_XFLAG_NOATIME 0x00000080 /* do not update atime */
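+
+/* (These values match the EXT2_*_FL flag values used by the generic
+ * GETFLAGS/SETFLAGS ioctls, so the same userspace tools work here.) */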
+
+STATIC unsigned int
+xfs_merge_ioc_xflags(
+ unsigned int flags,
+ unsigned int start)
+{
+ unsigned int xflags = start;
+
+ if (flags & LINUX_XFLAG_IMMUTABLE)
+ xflags |= XFS_XFLAG_IMMUTABLE;
+ else
+ xflags &= ~XFS_XFLAG_IMMUTABLE;
+ if (flags & LINUX_XFLAG_APPEND)
+ xflags |= XFS_XFLAG_APPEND;
+ else
+ xflags &= ~XFS_XFLAG_APPEND;
+ if (flags & LINUX_XFLAG_SYNC)
+ xflags |= XFS_XFLAG_SYNC;
+ else
+ xflags &= ~XFS_XFLAG_SYNC;
+ if (flags & LINUX_XFLAG_NOATIME)
+ xflags |= XFS_XFLAG_NOATIME;
+ else
+ xflags &= ~XFS_XFLAG_NOATIME;
+ if (flags & LINUX_XFLAG_NODUMP)
+ xflags |= XFS_XFLAG_NODUMP;
+ else
+ xflags &= ~XFS_XFLAG_NODUMP;
+
+ return xflags;
+}
+
+STATIC unsigned int
+xfs_di2lxflags(
+ __uint16_t di_flags)
+{
+ unsigned int flags = 0;
+
+ if (di_flags & XFS_DIFLAG_IMMUTABLE)
+ flags |= LINUX_XFLAG_IMMUTABLE;
+ if (di_flags & XFS_DIFLAG_APPEND)
+ flags |= LINUX_XFLAG_APPEND;
+ if (di_flags & XFS_DIFLAG_SYNC)
+ flags |= LINUX_XFLAG_SYNC;
+ if (di_flags & XFS_DIFLAG_NOATIME)
+ flags |= LINUX_XFLAG_NOATIME;
+ if (di_flags & XFS_DIFLAG_NODUMP)
+ flags |= LINUX_XFLAG_NODUMP;
+ return flags;
+}
+
+STATIC int
+xfs_ioc_xattr(
+ vnode_t *vp,
+ xfs_inode_t *ip,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct fsxattr fa;
+ vattr_t va;
+ int error;
+ int attr_flags;
+ unsigned int flags;
+
+ switch (cmd) {
+ case XFS_IOC_FSGETXATTR: {
+ va.va_mask = XFS_AT_XFLAGS|XFS_AT_EXTSIZE|XFS_AT_NEXTENTS;
+ VOP_GETATTR(vp, &va, 0, NULL, error);
+ if (error)
+ return -error;
+
+ fa.fsx_xflags = va.va_xflags;
+ fa.fsx_extsize = va.va_extsize;
+ fa.fsx_nextents = va.va_nextents;
+
+ if (copy_to_user((struct fsxattr *)arg, &fa, sizeof(fa)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_FSSETXATTR: {
+ if (copy_from_user(&fa, (struct fsxattr *)arg, sizeof(fa)))
+ return -XFS_ERROR(EFAULT);
+
+ attr_flags = 0;
+ if (filp->f_flags & (O_NDELAY|O_NONBLOCK))
+ attr_flags |= ATTR_NONBLOCK;
+
+ va.va_mask = XFS_AT_XFLAGS | XFS_AT_EXTSIZE;
+ va.va_xflags = fa.fsx_xflags;
+ va.va_extsize = fa.fsx_extsize;
+
+ VOP_SETATTR(vp, &va, attr_flags, NULL, error);
+ if (!error)
+ vn_revalidate(vp); /* update Linux inode flags */
+ return -error;
+ }
+
+ case XFS_IOC_FSGETXATTRA: {
+ va.va_mask = XFS_AT_XFLAGS|XFS_AT_EXTSIZE|XFS_AT_ANEXTENTS;
+ VOP_GETATTR(vp, &va, 0, NULL, error);
+ if (error)
+ return -error;
+
+ fa.fsx_xflags = va.va_xflags;
+ fa.fsx_extsize = va.va_extsize;
+ fa.fsx_nextents = va.va_anextents;
+
+ if (copy_to_user((struct fsxattr *)arg, &fa, sizeof(fa)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_GETXFLAGS: {
+ flags = xfs_di2lxflags(ip->i_d.di_flags);
+ if (copy_to_user((unsigned int *)arg, &flags, sizeof(flags)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ case XFS_IOC_SETXFLAGS: {
+ if (copy_from_user(&flags, (unsigned int *)arg, sizeof(flags)))
+ return -XFS_ERROR(EFAULT);
+
+ if (flags & ~(LINUX_XFLAG_IMMUTABLE | LINUX_XFLAG_APPEND | \
+ LINUX_XFLAG_NOATIME | LINUX_XFLAG_NODUMP | \
+ LINUX_XFLAG_SYNC))
+ return -XFS_ERROR(EOPNOTSUPP);
+
+ attr_flags = 0;
+ if (filp->f_flags & (O_NDELAY|O_NONBLOCK))
+ attr_flags |= ATTR_NONBLOCK;
+
+ va.va_mask = XFS_AT_XFLAGS;
+ va.va_xflags = xfs_merge_ioc_xflags(flags,
+ xfs_dic2xflags(&ip->i_d, ARCH_NOCONVERT));
+
+ VOP_SETATTR(vp, &va, attr_flags, NULL, error);
+ if (!error)
+ vn_revalidate(vp); /* update Linux inode flags */
+ return -error;
+ }
+
+ case XFS_IOC_GETVERSION: {
+ flags = LINVFS_GET_IP(vp)->i_generation;
+ if (copy_to_user((unsigned int *)arg, &flags, sizeof(flags)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+ }
+
+ default:
+ return -ENOTTY;
+ }
+}
+
+STATIC int
+xfs_ioc_getbmap(
+ bhv_desc_t *bdp,
+ struct file *filp,
+ int ioflags,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct getbmap bm;
+ int iflags;
+ int error;
+
+ if (copy_from_user(&bm, (struct getbmap *)arg, sizeof(bm)))
+ return -XFS_ERROR(EFAULT);
+
+ if (bm.bmv_count < 2)
+ return -XFS_ERROR(EINVAL);
+
+ iflags = (cmd == XFS_IOC_GETBMAPA ? BMV_IF_ATTRFORK : 0);
+ if (ioflags & IO_INVIS)
+ iflags |= BMV_IF_NO_DMAPI_READ;
+
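+	/* The user buffer is a getbmap header followed immediately by
+	 * the output extent array, hence the "arg + 1" below. */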
+ error = xfs_getbmap(bdp, &bm, (struct getbmap *)arg+1, iflags);
+ if (error)
+ return -error;
+
+ if (copy_to_user((struct getbmap *)arg, &bm, sizeof(bm)))
+ return -XFS_ERROR(EFAULT);
+ return 0;
+}
+
+STATIC int
+xfs_ioc_getbmapx(
+ bhv_desc_t *bdp,
+ unsigned long arg)
+{
+ struct getbmapx bmx;
+ struct getbmap bm;
+ int iflags;
+ int error;
+
+ if (copy_from_user(&bmx, (struct getbmapx *)arg, sizeof(bmx)))
+ return -XFS_ERROR(EFAULT);
+
+ if (bmx.bmv_count < 2)
+ return -XFS_ERROR(EINVAL);
+
+ /*
+ * Map input getbmapx structure to a getbmap
+ * structure for xfs_getbmap.
+ */
+ GETBMAP_CONVERT(bmx, bm);
+
+ iflags = bmx.bmv_iflags;
+
+ if (iflags & (~BMV_IF_VALID))
+ return -XFS_ERROR(EINVAL);
+
+ iflags |= BMV_IF_EXTENDED;
+
+ error = xfs_getbmap(bdp, &bm, (struct getbmapx *)arg+1, iflags);
+ if (error)
+ return -error;
+
+ GETBMAP_CONVERT(bm, bmx);
+
+ if (copy_to_user((struct getbmapx *)arg, &bmx, sizeof(bmx)))
+ return -XFS_ERROR(EFAULT);
+
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_IOPS_H__
+#define __XFS_IOPS_H__
+
+extern struct inode_operations linvfs_file_inode_operations;
+extern struct inode_operations linvfs_dir_inode_operations;
+extern struct inode_operations linvfs_symlink_inode_operations;
+
+extern struct file_operations linvfs_file_operations;
+extern struct file_operations linvfs_invis_file_operations;
+extern struct file_operations linvfs_dir_operations;
+
+extern struct address_space_operations linvfs_aops;
+
+extern int linvfs_get_block(struct inode *, sector_t, struct buffer_head *, int);
+extern void linvfs_unwritten_done(struct buffer_head *, int);
+
+extern int xfs_ioctl(struct bhv_desc *, struct inode *, struct file *,
+ int, unsigned int, unsigned long);
+
+#endif /* __XFS_IOPS_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_LINUX__
+#define __XFS_LINUX__
+
+#include <linux/types.h>
+#include <linux/config.h>
+
+/*
+ * Some types are conditional depending on the target system.
+ * XFS_BIG_BLKNOS needs block layer disk addresses to be 64 bits.
+ * XFS_BIG_INUMS needs the VFS inode number to be 64 bits, as well
+ * as requiring XFS_BIG_BLKNOS to be set.
+ */
+#if defined(CONFIG_LBD) || (BITS_PER_LONG == 64)
+# define XFS_BIG_BLKNOS 1
+# if BITS_PER_LONG == 64
+# define XFS_BIG_INUMS 1
+# else
+# define XFS_BIG_INUMS 0
+# endif
+#else
+# define XFS_BIG_BLKNOS 0
+# define XFS_BIG_INUMS 0
+#endif
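+
+/*
+ * The resulting configuration matrix, spelled out (this follows
+ * directly from the conditionals above):
+ *
+ *	32-bit, !CONFIG_LBD:	XFS_BIG_BLKNOS=0, XFS_BIG_INUMS=0
+ *	32-bit,  CONFIG_LBD:	XFS_BIG_BLKNOS=1, XFS_BIG_INUMS=0
+ *	64-bit:			XFS_BIG_BLKNOS=1, XFS_BIG_INUMS=1
+ */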
+
+#include <xfs_types.h>
+#include <xfs_arch.h>
+
+#include <kmem.h>
+#include <mrlock.h>
+#include <spin.h>
+#include <sv.h>
+#include <mutex.h>
+#include <sema.h>
+#include <time.h>
+
+#include <support/qsort.h>
+#include <support/ktrace.h>
+#include <support/debug.h>
+#include <support/move.h>
+#include <support/uuid.h>
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/file.h>
+#include <linux/swap.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/bitops.h>
+#include <linux/major.h>
+#include <linux/pagemap.h>
+#include <linux/vfs.h>
+#include <linux/seq_file.h>
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+#include <linux/version.h>
+
+#include <asm/page.h>
+#include <asm/div64.h>
+#include <asm/param.h>
+#include <asm/uaccess.h>
+#include <asm/byteorder.h>
+#include <asm/unaligned.h>
+
+#include <xfs_behavior.h>
+#include <xfs_vfs.h>
+#include <xfs_cred.h>
+#include <xfs_vnode.h>
+#include <xfs_stats.h>
+#include <xfs_sysctl.h>
+#include <xfs_iops.h>
+#include <xfs_super.h>
+#include <xfs_globals.h>
+#include <xfs_fs_subr.h>
+#include <xfs_lrw.h>
+#include <xfs_buf.h>
+
+/*
+ * Feature macros (disable/enable)
+ */
+#undef HAVE_REFCACHE /* reference cache not needed for NFS in 2.6 */
+#define HAVE_SENDFILE /* sendfile(2) exists in 2.6, but not in 2.4 */
+
+/*
+ * State flag for unwritten extent buffers.
+ *
+ * We need to be able to distinguish between these and delayed
+ * allocate buffers within XFS. The generic IO path code does
+ * not need to distinguish - we use the BH_Delay flag for both
+ * delalloc and these ondisk-uninitialised buffers.
+ */
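+/* BUFFER_FNS() expands to the buffer_unwritten()/set_buffer_unwritten()/
+ * clear_buffer_unwritten() helpers, keyed off the BH_PrivateStart bit. */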
+BUFFER_FNS(PrivateStart, unwritten);
+static inline void set_buffer_unwritten_io(struct buffer_head *bh)
+{
+ bh->b_end_io = linvfs_unwritten_done;
+}
+
+#define restricted_chown xfs_params.restrict_chown.val
+#define irix_sgid_inherit xfs_params.sgid_inherit.val
+#define irix_symlink_mode xfs_params.symlink_mode.val
+#define xfs_panic_mask xfs_params.panic_mask.val
+#define xfs_error_level xfs_params.error_level.val
+#define xfs_syncd_centisecs xfs_params.syncd_timer.val
+#define xfs_stats_clear xfs_params.stats_clear.val
+#define xfs_inherit_sync xfs_params.inherit_sync.val
+#define xfs_inherit_nodump xfs_params.inherit_nodump.val
+#define xfs_inherit_noatime xfs_params.inherit_noatim.val
+#define xfs_buf_timer_centisecs xfs_params.xfs_buf_timer.val
+#define xfs_buf_age_centisecs xfs_params.xfs_buf_age.val
+
+#define current_cpu() smp_processor_id()
+#define current_pid() (current->pid)
+#define current_fsuid(cred) (current->fsuid)
+#define current_fsgid(cred) (current->fsgid)
+
+#define NBPP PAGE_SIZE
+#define DPPSHFT (PAGE_SHIFT - 9)
+#define NDPP (1 << (PAGE_SHIFT - 9))
+#define dtop(DD) (((DD) + NDPP - 1) >> DPPSHFT)
+#define dtopt(DD) ((DD) >> DPPSHFT)
+#define dpoff(DD) ((DD) & (NDPP-1))
+
+#define NBBY 8 /* number of bits per byte */
+#define NBPC PAGE_SIZE /* Number of bytes per click */
+#define BPCSHIFT PAGE_SHIFT /* LOG2(NBPC) if exact */
+
+/*
+ * Size of block device i/o is parameterized here.
+ * Currently the system supports page-sized i/o.
+ */
+#define BLKDEV_IOSHIFT BPCSHIFT
+#define BLKDEV_IOSIZE (1<<BLKDEV_IOSHIFT)
+/* number of BB's per block device block */
+#define BLKDEV_BB BTOBB(BLKDEV_IOSIZE)
+
+/* bytes to clicks */
+#define btoc(x) (((__psunsigned_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define btoct(x) ((__psunsigned_t)(x)>>BPCSHIFT)
+#define btoc64(x) (((__uint64_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define btoct64(x) ((__uint64_t)(x)>>BPCSHIFT)
+#define io_btoc(x) (((__psunsigned_t)(x)+(IO_NBPC-1))>>IO_BPCSHIFT)
+#define io_btoct(x) ((__psunsigned_t)(x)>>IO_BPCSHIFT)
+
+/* off_t bytes to clicks */
+#define offtoc(x) (((__uint64_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define offtoct(x) ((xfs_off_t)(x)>>BPCSHIFT)
+
+/* clicks to off_t bytes */
+#define ctooff(x) ((xfs_off_t)(x)<<BPCSHIFT)
+
+/* clicks to bytes */
+#define ctob(x) ((__psunsigned_t)(x)<<BPCSHIFT)
+#define btoct(x) ((__psunsigned_t)(x)>>BPCSHIFT)
+#define ctob64(x) ((__uint64_t)(x)<<BPCSHIFT)
+#define io_ctob(x) ((__psunsigned_t)(x)<<IO_BPCSHIFT)
+
+/* bytes to clicks */
+#define btoc(x) (((__psunsigned_t)(x)+(NBPC-1))>>BPCSHIFT)
+
+#ifndef CELL_CAPABLE
+#define FSC_NOTIFY_NAME_CHANGED(vp)
+#endif
+
+#ifndef ENOATTR
+#define ENOATTR ENODATA /* Attribute not found */
+#endif
+
+/* Note: EWRONGFS never visible outside the kernel */
+#define EWRONGFS EINVAL /* Mount with wrong filesystem type */
+
+/*
+ * XXX EFSCORRUPTED needs a real value in errno.h. asm-i386/errno.h won't
+ * return codes out of its known range in errno.
+ * XXX Also note: needs to be < 1000 and fairly unique on Linux (mustn't
+ * conflict with any code we use already or any code a driver may use)
+ * XXX Some options (currently we do #2):
+ * 1/ New error code ["Filesystem is corrupted", _after_ glibc updated]
+ * 2/ 990 ["Unknown error 990"]
+ * 3/ EUCLEAN ["Structure needs cleaning"]
+ * 4/ Convert EFSCORRUPTED to EIO [just prior to return into userspace]
+ */
+#define EFSCORRUPTED 990 /* Filesystem is corrupted */
+
+#define SYNCHRONIZE() barrier()
+#define __return_address __builtin_return_address(0)
+
+/*
+ * IRIX (BSD) quotactl makes use of separate commands for user/group,
+ * whereas on Linux the syscall encodes this information into the cmd
+ * field (see the QCMD macro in quota.h). These macros help keep the
+ * code portable - they are not visible from the syscall interface.
+ */
+#define Q_XSETGQLIM XQM_CMD(0x8) /* set groups disk limits */
+#define Q_XGETGQUOTA XQM_CMD(0x9) /* get groups disk limits */
+
+/* IRIX uses a dynamic sizing algorithm (ndquot = 200 + numprocs*2) */
+/* we may well need to fine-tune this if it ever becomes an issue. */
+#define DQUOT_MAX_HEURISTIC 1024 /* NR_DQUOTS */
+#define ndquot DQUOT_MAX_HEURISTIC
+
+/* IRIX uses the current size of the name cache to guess a good value */
+/* - this isn't the same but is a good enough starting point for now. */
+#define DQUOT_HASH_HEURISTIC files_stat.nr_files
+
+/* IRIX inodes maintain the project ID also, zero this field on Linux */
+#define DEFAULT_PROJID 0
+#define dfltprid DEFAULT_PROJID
+
+#define MAXPATHLEN 1024
+
+#define MIN(a,b) (min(a,b))
+#define MAX(a,b) (max(a,b))
+#define howmany(x, y) (((x)+((y)-1))/(y))
+#define roundup(x, y) ((((x)+((y)-1))/(y))*(y))
+
+#define xfs_stack_trace() dump_stack()
+
+#define xfs_itruncate_data(ip, off) \
+ (-vmtruncate(LINVFS_GET_IP(XFS_ITOV(ip)), (off)))
+
+
+/* Move the kernel do_div definition off to one side */
+
+#if defined __i386__
+/* For ia32 we need to pull some tricks to get past various versions
+ * of the compiler which do not like us using do_div in the middle
+ * of large functions.
+ */
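+/*
+ * Sketch of the 64-bit case below: "divl" divides the 64-bit value in
+ * %edx:%eax by a 32-bit operand, leaving the quotient in %eax and the
+ * remainder in %edx, and faults if the quotient overflows 32 bits.
+ * Reducing the high word modulo b first (__upper = __high % b) keeps
+ * the intermediate quotient in range, giving a full 64/32 long
+ * division in two steps.
+ */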
+static inline __u32 xfs_do_div(void *a, __u32 b, int n)
+{
+ __u32 mod;
+
+ switch (n) {
+ case 4:
+ mod = *(__u32 *)a % b;
+ *(__u32 *)a = *(__u32 *)a / b;
+ return mod;
+ case 8:
+ {
+ unsigned long __upper, __low, __high, __mod;
+ __u64 c = *(__u64 *)a;
+ __upper = __high = c >> 32;
+ __low = c;
+ if (__high) {
+ __upper = __high % (b);
+ __high = __high / (b);
+ }
+ asm("divl %2":"=a" (__low), "=d" (__mod):"rm" (b), "0" (__low), "1" (__upper));
+ asm("":"=A" (c):"a" (__low),"d" (__high));
+ *(__u64 *)a = c;
+ return __mod;
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+
+/* Side effect free 64 bit mod operation */
+static inline __u32 xfs_do_mod(void *a, __u32 b, int n)
+{
+ switch (n) {
+ case 4:
+ return *(__u32 *)a % b;
+ case 8:
+ {
+ unsigned long __upper, __low, __high, __mod;
+ __u64 c = *(__u64 *)a;
+ __upper = __high = c >> 32;
+ __low = c;
+ if (__high) {
+ __upper = __high % (b);
+ __high = __high / (b);
+ }
+ asm("divl %2":"=a" (__low), "=d" (__mod):"rm" (b), "0" (__low), "1" (__upper));
+ asm("":"=A" (c):"a" (__low),"d" (__high));
+ return __mod;
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+#else
+static inline __u32 xfs_do_div(void *a, __u32 b, int n)
+{
+ __u32 mod;
+
+ switch (n) {
+ case 4:
+ mod = *(__u32 *)a % b;
+ *(__u32 *)a = *(__u32 *)a / b;
+ return mod;
+ case 8:
+ mod = do_div(*(__u64 *)a, b);
+ return mod;
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+
+/* Side effect free 64 bit mod operation */
+static inline __u32 xfs_do_mod(void *a, __u32 b, int n)
+{
+ switch (n) {
+ case 4:
+ return *(__u32 *)a % b;
+ case 8:
+ {
+ __u64 c = *(__u64 *)a;
+ return do_div(c, b);
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+#endif
+
+#undef do_div
+#define do_div(a, b) xfs_do_div(&(a), (b), sizeof(a))
+#define do_mod(a, b) xfs_do_mod(&(a), (b), sizeof(a))
+
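+/* Round x up to the next multiple of y.  Note that do_div() (defined
+ * just above) replaces its first argument with the quotient, so the
+ * final multiply reconstructs the rounded value. */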
+static inline __uint64_t roundup_64(__uint64_t x, __uint32_t y)
+{
+ x += y - 1;
+ do_div(x, y);
+ return(x * y);
+}
+
+#endif /* __XFS_LINUX__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+/*
+ * fs/xfs/linux/xfs_lrw.c (Linux Read Write stuff)
+ *
+ */
+
+#include "xfs.h"
+
+#include "xfs_fs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_trans.h"
+#include "xfs_sb.h"
+#include "xfs_ag.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_alloc.h"
+#include "xfs_dmapi.h"
+#include "xfs_quota.h"
+#include "xfs_mount.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_btree.h"
+#include "xfs_ialloc.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_bmap.h"
+#include "xfs_bit.h"
+#include "xfs_rtalloc.h"
+#include "xfs_error.h"
+#include "xfs_itable.h"
+#include "xfs_rw.h"
+#include "xfs_acl.h"
+#include "xfs_cap.h"
+#include "xfs_mac.h"
+#include "xfs_attr.h"
+#include "xfs_inode_item.h"
+#include "xfs_buf_item.h"
+#include "xfs_utils.h"
+#include "xfs_iomap.h"
+
+#include <linux/capability.h>
+
+
+#if defined(XFS_RW_TRACE)
+void
+xfs_rw_enter_trace(
+ int tag,
+ xfs_iocore_t *io,
+ const struct iovec *iovp,
+ size_t segs,
+ loff_t offset,
+ int ioflags)
+{
+ xfs_inode_t *ip = XFS_IO_INODE(io);
+
+ if (ip->i_rwtrace == NULL)
+ return;
+ ktrace_enter(ip->i_rwtrace,
+ (void *)(unsigned long)tag,
+ (void *)ip,
+ (void *)((unsigned long)((ip->i_d.di_size >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(ip->i_d.di_size & 0xffffffff)),
+ (void *)(__psint_t)iovp,
+ (void *)((unsigned long)segs),
+ (void *)((unsigned long)((offset >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(offset & 0xffffffff)),
+ (void *)((unsigned long)ioflags),
+ (void *)((unsigned long)((io->io_new_size >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(io->io_new_size & 0xffffffff)),
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL);
+}
+
+void
+xfs_inval_cached_trace(
+ xfs_iocore_t *io,
+ xfs_off_t offset,
+ xfs_off_t len,
+ xfs_off_t first,
+ xfs_off_t last)
+{
+ xfs_inode_t *ip = XFS_IO_INODE(io);
+
+ if (ip->i_rwtrace == NULL)
+ return;
+ ktrace_enter(ip->i_rwtrace,
+ (void *)(__psint_t)XFS_INVAL_CACHED,
+ (void *)ip,
+ (void *)((unsigned long)((offset >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(offset & 0xffffffff)),
+ (void *)((unsigned long)((len >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(len & 0xffffffff)),
+ (void *)((unsigned long)((first >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(first & 0xffffffff)),
+ (void *)((unsigned long)((last >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(last & 0xffffffff)),
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL,
+ (void *)NULL);
+}
+#endif
+
+/*
+ * xfs_iozero
+ *
+ * xfs_iozero clears the specified range of buffer supplied,
+ * and marks all the affected blocks as valid and modified. If
+ * an affected block is not allocated, it will be allocated. If
+ * an affected block is not completely overwritten, and is not
+ * valid before the operation, it will be read from disk before
+ * being partially zeroed.
+ */
+STATIC int
+xfs_iozero(
+ struct inode *ip, /* inode */
+ loff_t pos, /* offset in file */
+ size_t count, /* size of data to zero */
+ loff_t end_size) /* max file size to set */
+{
+ unsigned bytes;
+ struct page *page;
+ struct address_space *mapping;
+ char *kaddr;
+ int status;
+
+ mapping = ip->i_mapping;
+ do {
+ unsigned long index, offset;
+
+ offset = (pos & (PAGE_CACHE_SIZE -1)); /* Within page */
+ index = pos >> PAGE_CACHE_SHIFT;
+ bytes = PAGE_CACHE_SIZE - offset;
+ if (bytes > count)
+ bytes = count;
+
+ status = -ENOMEM;
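+		/* grab_cache_page() returns the page locked, or NULL on
+		 * allocation failure; unlock_page() below drops the lock. */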
+ page = grab_cache_page(mapping, index);
+ if (!page)
+ break;
+
+ kaddr = kmap(page);
+ status = mapping->a_ops->prepare_write(NULL, page, offset,
+ offset + bytes);
+ if (status) {
+ goto unlock;
+ }
+
+ memset((void *) (kaddr + offset), 0, bytes);
+ flush_dcache_page(page);
+ status = mapping->a_ops->commit_write(NULL, page, offset,
+ offset + bytes);
+ if (!status) {
+ pos += bytes;
+ count -= bytes;
+ if (pos > i_size_read(ip))
+ i_size_write(ip, pos < end_size ? pos : end_size);
+ }
+
+unlock:
+ kunmap(page);
+ unlock_page(page);
+ page_cache_release(page);
+ if (status)
+ break;
+ } while (count);
+
+ return (-status);
+}
+
+/*
+ * xfs_inval_cached_pages
+ *
+ * This routine is responsible for keeping direct I/O and buffered I/O
+ * somewhat coherent. From here we make sure that we're at least
+ * temporarily holding the inode I/O lock exclusively and then call
+ * the page cache to flush and invalidate any cached pages. If there
+ * are no cached pages this routine will be very quick.
+ */
+void
+xfs_inval_cached_pages(
+ vnode_t *vp,
+ xfs_iocore_t *io,
+ xfs_off_t offset,
+ int write,
+ int relock)
+{
+ xfs_mount_t *mp;
+
+ if (!VN_CACHED(vp)) {
+ return;
+ }
+
+ mp = io->io_mount;
+
+ /*
+ * We need to get the I/O lock exclusively in order
+ * to safely invalidate pages and mappings.
+ */
+ if (relock) {
+ XFS_IUNLOCK(mp, io, XFS_IOLOCK_SHARED);
+ XFS_ILOCK(mp, io, XFS_IOLOCK_EXCL);
+ }
+
+ /* Writing beyond EOF creates a hole that must be zeroed */
+ if (write && (offset > XFS_SIZE(mp, io))) {
+ xfs_fsize_t isize;
+
+ XFS_ILOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+ isize = XFS_SIZE(mp, io);
+ if (offset > isize) {
+ xfs_zero_eof(vp, io, offset, isize, offset);
+ }
+ XFS_IUNLOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+ }
+
+ xfs_inval_cached_trace(io, offset, -1, ctooff(offtoct(offset)), -1);
+ VOP_FLUSHINVAL_PAGES(vp, ctooff(offtoct(offset)), -1, FI_REMAPF_LOCKED);
+ if (relock) {
+ XFS_ILOCK_DEMOTE(mp, io, XFS_IOLOCK_EXCL);
+ }
+}
+
+ssize_t /* bytes read, or (-) error */
+xfs_read(
+ bhv_desc_t *bdp,
+ struct kiocb *iocb,
+ const struct iovec *iovp,
+ unsigned int segs,
+ loff_t *offset,
+ int ioflags,
+ cred_t *credp)
+{
+ struct file *file = iocb->ki_filp;
+ size_t size = 0;
+ ssize_t ret;
+ xfs_fsize_t n;
+ xfs_inode_t *ip;
+ xfs_mount_t *mp;
+ vnode_t *vp;
+ unsigned long seg;
+
+ ip = XFS_BHVTOI(bdp);
+ vp = BHV_TO_VNODE(bdp);
+ mp = ip->i_mount;
+
+ XFS_STATS_INC(xs_read_calls);
+
+ /* START copy & waste from filemap.c */
+ for (seg = 0; seg < segs; seg++) {
+ const struct iovec *iv = &iovp[seg];
+
+ /*
+ * If any segment has a negative length, or the cumulative
+ * length ever wraps negative then return -EINVAL.
+ */
+ size += iv->iov_len;
+ if (unlikely((ssize_t)(size|iv->iov_len) < 0))
+ return XFS_ERROR(-EINVAL);
+ }
+ /* END copy & waste from filemap.c */
+
+ if (ioflags & IO_ISDIRECT) {
+ xfs_buftarg_t *target =
+ (ip->i_d.di_flags & XFS_DIFLAG_REALTIME) ?
+ mp->m_rtdev_targp : mp->m_ddev_targp;
+ if ((*offset & target->pbr_smask) ||
+ (size & target->pbr_smask)) {
+ if (*offset == ip->i_d.di_size) {
+ return (0);
+ }
+ return -XFS_ERROR(EINVAL);
+ }
+ }
+
+ n = XFS_MAXIOFFSET(mp) - *offset;
+ if ((n <= 0) || (size == 0))
+ return 0;
+
+ if (n < size)
+ size = n;
+
+ if (XFS_FORCED_SHUTDOWN(mp)) {
+ return -EIO;
+ }
+
+ /* OK so we are holding the I/O lock for the duration
+ * of the submission, then what happens if the I/O
+ * does not really happen here, but is scheduled
+ * later?
+ */
+ xfs_ilock(ip, XFS_IOLOCK_SHARED);
+
+ if (DM_EVENT_ENABLED(vp->v_vfsp, ip, DM_EVENT_READ) &&
+ !(ioflags & IO_INVIS)) {
+ vrwlock_t locktype = VRWLOCK_READ;
+
+ ret = XFS_SEND_DATA(mp, DM_EVENT_READ,
+ BHV_TO_VNODE(bdp), *offset, size,
+ FILP_DELAY_FLAG(file), &locktype);
+ if (ret) {
+ xfs_iunlock(ip, XFS_IOLOCK_SHARED);
+ return -ret;
+ }
+ }
+
+ xfs_rw_enter_trace(XFS_READ_ENTER, &ip->i_iocore,
+ iovp, segs, *offset, ioflags);
+ ret = __generic_file_aio_read(iocb, iovp, segs, offset);
+ xfs_iunlock(ip, XFS_IOLOCK_SHARED);
+
+ if (ret > 0)
+ XFS_STATS_ADD(xs_read_bytes, ret);
+
+ if (likely(!(ioflags & IO_INVIS)))
+ xfs_ichgtime(ip, XFS_ICHGTIME_ACC);
+
+ return ret;
+}
+
+ssize_t
+xfs_sendfile(
+ bhv_desc_t *bdp,
+ struct file *filp,
+ loff_t *offset,
+ int ioflags,
+ size_t count,
+ read_actor_t actor,
+ void *target,
+ cred_t *credp)
+{
+ ssize_t ret;
+ xfs_fsize_t n;
+ xfs_inode_t *ip;
+ xfs_mount_t *mp;
+ vnode_t *vp;
+
+ ip = XFS_BHVTOI(bdp);
+ vp = BHV_TO_VNODE(bdp);
+ mp = ip->i_mount;
+
+ XFS_STATS_INC(xs_read_calls);
+
+ n = XFS_MAXIOFFSET(mp) - *offset;
+ if ((n <= 0) || (count == 0))
+ return 0;
+
+ if (n < count)
+ count = n;
+
+ if (XFS_FORCED_SHUTDOWN(ip->i_mount))
+ return -EIO;
+
+ xfs_ilock(ip, XFS_IOLOCK_SHARED);
+
+ if (DM_EVENT_ENABLED(vp->v_vfsp, ip, DM_EVENT_READ) &&
+ (!(ioflags & IO_INVIS))) {
+ vrwlock_t locktype = VRWLOCK_READ;
+ int error;
+
+ error = XFS_SEND_DATA(mp, DM_EVENT_READ, BHV_TO_VNODE(bdp), *offset, count,
+ FILP_DELAY_FLAG(filp), &locktype);
+ if (error) {
+ xfs_iunlock(ip, XFS_IOLOCK_SHARED);
+ return -error;
+ }
+ }
+ xfs_rw_enter_trace(XFS_SENDFILE_ENTER, &ip->i_iocore,
+ target, count, *offset, ioflags);
+ ret = generic_file_sendfile(filp, offset, count, actor, target);
+ xfs_iunlock(ip, XFS_IOLOCK_SHARED);
+
+ XFS_STATS_ADD(xs_read_bytes, ret);
+ xfs_ichgtime(ip, XFS_ICHGTIME_ACC);
+ return ret;
+}
+
+/*
+ * This routine is called to handle zeroing any space in the last
+ * block of the file that is beyond the EOF. We do this since the
+ * size is being increased without writing anything to that block
+ * and we don't want anyone to read the garbage on the disk.
+ */
+STATIC int /* error (positive) */
+xfs_zero_last_block(
+ struct inode *ip,
+ xfs_iocore_t *io,
+ xfs_off_t offset,
+ xfs_fsize_t isize,
+ xfs_fsize_t end_size)
+{
+ xfs_fileoff_t last_fsb;
+ xfs_mount_t *mp;
+ int nimaps;
+ int zero_offset;
+ int zero_len;
+ int isize_fsb_offset;
+ int error = 0;
+ xfs_bmbt_irec_t imap;
+ loff_t loff;
+ size_t lsize;
+
+ ASSERT(ismrlocked(io->io_lock, MR_UPDATE) != 0);
+ ASSERT(offset > isize);
+
+ mp = io->io_mount;
+
+ isize_fsb_offset = XFS_B_FSB_OFFSET(mp, isize);
+ if (isize_fsb_offset == 0) {
+ /*
+ * There are no extra bytes in the last block on disk to
+ * zero, so return.
+ */
+ return 0;
+ }
+
+ last_fsb = XFS_B_TO_FSBT(mp, isize);
+ nimaps = 1;
+ error = XFS_BMAPI(mp, NULL, io, last_fsb, 1, 0, NULL, 0, &imap,
+ &nimaps, NULL);
+ if (error) {
+ return error;
+ }
+ ASSERT(nimaps > 0);
+ /*
+ * If the block underlying isize is just a hole, then there
+ * is nothing to zero.
+ */
+ if (imap.br_startblock == HOLESTARTBLOCK) {
+ return 0;
+ }
+ /*
+ * Zero the part of the last block beyond the EOF, and write it
+ * out sync. We need to drop the ilock while we do this so we
+ * don't deadlock when the buffer cache calls back to us.
+ */
+ XFS_IUNLOCK(mp, io, XFS_ILOCK_EXCL| XFS_EXTSIZE_RD);
+ loff = XFS_FSB_TO_B(mp, last_fsb);
+ lsize = XFS_FSB_TO_B(mp, 1);
+
+ zero_offset = isize_fsb_offset;
+ zero_len = mp->m_sb.sb_blocksize - isize_fsb_offset;
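+ /*
+ * For example (illustrative, assuming 4 KiB blocks): with
+ * isize == 6000, isize_fsb_offset is 1904, so the 2192 bytes
+ * from file offset 6000 up to the 8192 boundary get zeroed.
+ */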
+
+ error = xfs_iozero(ip, loff + zero_offset, zero_len, end_size);
+
+ XFS_ILOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+ ASSERT(error >= 0);
+ return error;
+}
+
+/*
+ * Zero any on disk space between the current EOF and the new,
+ * larger EOF. This handles the normal case of zeroing the remainder
+ * of the last block in the file and the unusual case of zeroing blocks
+ * out beyond the size of the file. This second case only happens
+ * with fixed size extents and when the system crashes before the inode
+ * size was updated but after blocks were allocated. If fill is set,
+ * then any holes in the range are filled and zeroed. If not, the holes
+ * are left alone as holes.
+ */
+
+int /* error (positive) */
+xfs_zero_eof(
+ vnode_t *vp,
+ xfs_iocore_t *io,
+ xfs_off_t offset, /* starting I/O offset */
+ xfs_fsize_t isize, /* current inode size */
+ xfs_fsize_t end_size) /* terminal inode size */
+{
+ struct inode *ip = LINVFS_GET_IP(vp);
+ xfs_fileoff_t start_zero_fsb;
+ xfs_fileoff_t end_zero_fsb;
+ xfs_fileoff_t prev_zero_fsb;
+ xfs_fileoff_t zero_count_fsb;
+ xfs_fileoff_t last_fsb;
+ xfs_extlen_t buf_len_fsb;
+ xfs_extlen_t prev_zero_count;
+ xfs_mount_t *mp;
+ int nimaps;
+ int error = 0;
+ xfs_bmbt_irec_t imap;
+ loff_t loff;
+ size_t lsize;
+
+ ASSERT(ismrlocked(io->io_lock, MR_UPDATE));
+ ASSERT(ismrlocked(io->io_iolock, MR_UPDATE));
+
+ mp = io->io_mount;
+
+ /*
+ * First handle zeroing the block on which isize resides.
+ * We only zero a part of that block so it is handled specially.
+ */
+ error = xfs_zero_last_block(ip, io, offset, isize, end_size);
+ if (error) {
+ ASSERT(ismrlocked(io->io_lock, MR_UPDATE));
+ ASSERT(ismrlocked(io->io_iolock, MR_UPDATE));
+ return error;
+ }
+
+ /*
+ * Calculate the range between the new size and the old
+ * where blocks needing to be zeroed may exist. To get the
+ * block where the last byte in the file currently resides,
+ * we need to subtract one from the size and truncate back
+ * to a block boundary. We subtract 1 in case the size is
+ * exactly on a block boundary.
+ */
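+ /*
+ * For example (illustrative, assuming 4 KiB blocks): with
+ * isize == 6000 and a write starting at offset 20000,
+ * last_fsb = 1, start_zero_fsb = 2 and end_zero_fsb = 4, so
+ * blocks 2 through 4 are candidates for zeroing (the tail of
+ * block 1 was handled above).
+ */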
+ last_fsb = isize ? XFS_B_TO_FSBT(mp, isize - 1) : (xfs_fileoff_t)-1;
+ start_zero_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)isize);
+ end_zero_fsb = XFS_B_TO_FSBT(mp, offset - 1);
+ ASSERT((xfs_sfiloff_t)last_fsb < (xfs_sfiloff_t)start_zero_fsb);
+ if (last_fsb == end_zero_fsb) {
+ /*
+ * The size was only incremented on its last block.
+ * We took care of that above, so just return.
+ */
+ return 0;
+ }
+
+ ASSERT(start_zero_fsb <= end_zero_fsb);
+ prev_zero_fsb = NULLFILEOFF;
+ prev_zero_count = 0;
+ while (start_zero_fsb <= end_zero_fsb) {
+ nimaps = 1;
+ zero_count_fsb = end_zero_fsb - start_zero_fsb + 1;
+ error = XFS_BMAPI(mp, NULL, io, start_zero_fsb, zero_count_fsb,
+ 0, NULL, 0, &imap, &nimaps, NULL);
+ if (error) {
+ ASSERT(ismrlocked(io->io_lock, MR_UPDATE));
+ ASSERT(ismrlocked(io->io_iolock, MR_UPDATE));
+ return error;
+ }
+ ASSERT(nimaps > 0);
+
+ if (imap.br_state == XFS_EXT_UNWRITTEN ||
+ imap.br_startblock == HOLESTARTBLOCK) {
+ /*
+ * The range sits over a hole or an unwritten extent, so
+ * there is nothing on disk that needs zeroing: skip
+ * ahead to the first block after it.
+ */
+ prev_zero_fsb = NULLFILEOFF;
+ prev_zero_count = 0;
+ start_zero_fsb = imap.br_startoff +
+ imap.br_blockcount;
+ ASSERT(start_zero_fsb <= (end_zero_fsb + 1));
+ continue;
+ }
+
+ /*
+ * There are blocks in the range requested.
+ * Zero them a single write at a time. We actually
+ * don't zero the entire range returned if it is
+ * too big and simply loop around to get the rest.
+ * That is not the most efficient thing to do, but it
+ * is simple and this path should not be exercised often.
+ */
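+ /*
+ * Each pass is capped at 256 * m_writeio_blocks filesystem
+ * blocks, so one xfs_iozero() call stays bounded.
+ */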
+ buf_len_fsb = XFS_FILBLKS_MIN(imap.br_blockcount,
+ mp->m_writeio_blocks << 8);
+ /*
+ * Drop the inode lock while we're doing the I/O.
+ * We'll still have the iolock to protect us.
+ */
+ XFS_IUNLOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+
+ loff = XFS_FSB_TO_B(mp, start_zero_fsb);
+ lsize = XFS_FSB_TO_B(mp, buf_len_fsb);
+
+ error = xfs_iozero(ip, loff, lsize, end_size);
+
+ if (error) {
+ goto out_lock;
+ }
+
+ prev_zero_fsb = start_zero_fsb;
+ prev_zero_count = buf_len_fsb;
+ start_zero_fsb = imap.br_startoff + buf_len_fsb;
+ ASSERT(start_zero_fsb <= (end_zero_fsb + 1));
+
+ XFS_ILOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+ }
+
+ return 0;
+
+out_lock:
+
+ XFS_ILOCK(mp, io, XFS_ILOCK_EXCL|XFS_EXTSIZE_RD);
+ ASSERT(error >= 0);
+ return error;
+}
+
+ssize_t /* bytes written, or (-) error */
+xfs_write(
+ bhv_desc_t *bdp,
+ struct kiocb *iocb,
+ const struct iovec *iovp,
+ unsigned int segs,
+ loff_t *offset,
+ int ioflags,
+ cred_t *credp)
+{
+ struct file *file = iocb->ki_filp;
+ size_t size = 0;
+ xfs_inode_t *xip;
+ xfs_mount_t *mp;
+ ssize_t ret;
+ int error = 0;
+ xfs_fsize_t isize, new_size;
+ xfs_fsize_t n, limit;
+ xfs_iocore_t *io;
+ vnode_t *vp;
+ unsigned long seg;
+ int iolock;
+ int eventsent = 0;
+ vrwlock_t locktype;
+
+ XFS_STATS_INC(xs_write_calls);
+
+ vp = BHV_TO_VNODE(bdp);
+ xip = XFS_BHVTOI(bdp);
+
+ /* START copy & waste from filemap.c */
+ for (seg = 0; seg < segs; seg++) {
+ const struct iovec *iv = &iovp[seg];
+
+ /*
+ * If any segment has a negative length, or the cumulative
+ * length ever wraps negative then return -EINVAL.
+ */
+ size += iv->iov_len;
+ if (unlikely((ssize_t)(size|iv->iov_len) < 0))
+ return XFS_ERROR(-EINVAL);
+ }
+ /* END copy & waste from filemap.c */
+
+ if (size == 0)
+ return 0;
+
+ io = &xip->i_iocore;
+ mp = io->io_mount;
+
+ if (XFS_FORCED_SHUTDOWN(mp)) {
+ return -EIO;
+ }
+
+ if (ioflags & IO_ISDIRECT) {
+ xfs_buftarg_t *target =
+ (xip->i_d.di_flags & XFS_DIFLAG_REALTIME) ?
+ mp->m_rtdev_targp : mp->m_ddev_targp;
+
+ if ((*offset & target->pbr_smask) ||
+ (size & target->pbr_smask)) {
+ return XFS_ERROR(-EINVAL);
+ }
+ iolock = XFS_IOLOCK_SHARED;
+ locktype = VRWLOCK_WRITE_DIRECT;
+ } else {
+ iolock = XFS_IOLOCK_EXCL;
+ locktype = VRWLOCK_WRITE;
+ }
+
+ xfs_ilock(xip, XFS_ILOCK_EXCL|iolock);
+
+ isize = xip->i_d.di_size;
+ limit = XFS_MAXIOFFSET(mp);
+
+ if (file->f_flags & O_APPEND)
+ *offset = isize;
+
+start:
+ n = limit - *offset;
+ if (n <= 0) {
+ xfs_iunlock(xip, XFS_ILOCK_EXCL|iolock);
+ return -EFBIG;
+ }
+
+ if (n < size)
+ size = n;
+
+ new_size = *offset + size;
+ if (new_size > isize) {
+ io->io_new_size = new_size;
+ }
+
+ if (DM_EVENT_ENABLED(vp->v_vfsp, xip, DM_EVENT_WRITE) &&
+ !(ioflags & IO_INVIS) && !eventsent) {
+ loff_t savedsize = *offset;
+ int dmflags = FILP_DELAY_FLAG(file) | DM_SEM_FLAG_RD(ioflags);
+
+ xfs_iunlock(xip, XFS_ILOCK_EXCL);
+ error = XFS_SEND_DATA(xip->i_mount, DM_EVENT_WRITE, vp,
+ *offset, size,
+ dmflags, &locktype);
+ if (error) {
+ xfs_iunlock(xip, iolock);
+ return -error;
+ }
+ xfs_ilock(xip, XFS_ILOCK_EXCL);
+ eventsent = 1;
+
+ /*
+ * The iolock was dropped and reacquired in XFS_SEND_DATA
+ * so we have to recheck the size when appending.
+ * We will only "goto start;" once, since having sent the
+ * event prevents another call to XFS_SEND_DATA, which is
+ * what allows the size to change in the first place.
+ */
+ if ((file->f_flags & O_APPEND) &&
+ savedsize != xip->i_d.di_size) {
+ *offset = isize = xip->i_d.di_size;
+ goto start;
+ }
+ }
+
+ /*
+ * On Linux, generic_file_write updates the times even if
+ * no data is copied in so long as the write had a size.
+ *
+ * We must update xfs' times since revalidate will overwrite
+ * the Linux inode's times with xfs' values.
+ */
+ if (size && !(ioflags & IO_INVIS))
+ xfs_ichgtime(xip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
+
+ /*
+ * If the offset is beyond the current size of the file, we
+ * need to zero any allocated space between the old EOF and
+ * the start of this write, so that stale on-disk contents
+ * are never exposed. xfs_zero_eof() handles both the
+ * partial last block and any whole blocks beyond it.
+ */
+
+ if (!(ioflags & IO_ISDIRECT) && (*offset > isize && isize)) {
+ error = xfs_zero_eof(BHV_TO_VNODE(bdp), io, *offset,
+ isize, *offset + size);
+ if (error) {
+ xfs_iunlock(xip, XFS_ILOCK_EXCL|iolock);
+ return -error;
+ }
+ }
+ xfs_iunlock(xip, XFS_ILOCK_EXCL);
+
+ /*
+ * If we're writing the file then make sure to clear the
+ * setuid and setgid bits if the process is not being run
+ * by root. This keeps people from modifying setuid and
+ * setgid binaries.
+ */
+
+ if (((xip->i_d.di_mode & S_ISUID) ||
+ ((xip->i_d.di_mode & (S_ISGID | S_IXGRP)) ==
+ (S_ISGID | S_IXGRP))) &&
+ !capable(CAP_FSETID)) {
+ error = xfs_write_clear_setuid(xip);
+ if (error) {
+ xfs_iunlock(xip, iolock);
+ return -error;
+ }
+ }
+
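+ /*
+ * If the write below fails with ENOSPC and DMAPI is enabled, we
+ * drop the locks, send DM_EVENT_NOSPACE so space management can
+ * free something, and then come back here to retry.
+ */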
+retry:
+ if (ioflags & IO_ISDIRECT) {
+ xfs_inval_cached_pages(vp, io, *offset, 1, 1);
+ xfs_rw_enter_trace(XFS_DIOWR_ENTER,
+ io, iovp, segs, *offset, ioflags);
+ } else {
+ xfs_rw_enter_trace(XFS_WRITE_ENTER,
+ io, iovp, segs, *offset, ioflags);
+ }
+ ret = generic_file_aio_write_nolock(iocb, iovp, segs, offset);
+
+ if ((ret == -ENOSPC) &&
+ DM_EVENT_ENABLED(vp->v_vfsp, xip, DM_EVENT_NOSPACE) &&
+ !(ioflags & IO_INVIS)) {
+
+ xfs_rwunlock(bdp, locktype);
+ error = XFS_SEND_NAMESP(xip->i_mount, DM_EVENT_NOSPACE, vp,
+ DM_RIGHT_NULL, vp, DM_RIGHT_NULL, NULL, NULL,
+ 0, 0, 0); /* Delay flag intentionally unused */
+ if (error)
+ return -error;
+ xfs_rwlock(bdp, locktype);
+ *offset = xip->i_d.di_size;
+ goto retry;
+ }
+
+ if (*offset > xip->i_d.di_size) {
+ xfs_ilock(xip, XFS_ILOCK_EXCL);
+ if (*offset > xip->i_d.di_size) {
+ struct inode *inode = LINVFS_GET_IP(vp);
+
+ xip->i_d.di_size = *offset;
+ i_size_write(inode, *offset);
+ xip->i_update_core = 1;
+ xip->i_update_size = 1;
+ }
+ xfs_iunlock(xip, XFS_ILOCK_EXCL);
+ }
+
+ if (ret <= 0) {
+ xfs_rwunlock(bdp, locktype);
+ return ret;
+ }
+
+ XFS_STATS_ADD(xs_write_bytes, ret);
+
+ /* Handle various SYNC-type writes */
+ if ((file->f_flags & O_SYNC) || IS_SYNC(file->f_dentry->d_inode)) {
+
+ /*
+ * If we're treating this as O_DSYNC and we have not updated the
+ * size, force the log.
+ */
+
+ if (!(mp->m_flags & XFS_MOUNT_OSYNCISOSYNC)
+ && !(xip->i_update_size)) {
+ /*
+ * If an allocation transaction occurred
+ * without extending the size, then we have to force
+ * the log up the proper point to ensure that the
+ * allocation is permanent. We can't count on
+ * the fact that buffered writes lock out direct I/O
+ * writes - the direct I/O write could have extended
+ * the size nontransactionally, then finished before
+ * we started. xfs_write_file will think that the file
+ * didn't grow but the update isn't safe unless the
+ * size change is logged.
+ *
+ * Force the log if we've committed a transaction
+ * against the inode or if someone else has and
+ * the commit record hasn't gone to disk (e.g.
+ * the inode is pinned). This guarantees that
+ * all changes affecting the inode are permanent
+ * when we return.
+ */
+
+ xfs_inode_log_item_t *iip;
+ xfs_lsn_t lsn;
+
+ iip = xip->i_itemp;
+ if (iip && iip->ili_last_lsn) {
+ lsn = iip->ili_last_lsn;
+ xfs_log_force(mp, lsn,
+ XFS_LOG_FORCE | XFS_LOG_SYNC);
+ } else if (xfs_ipincount(xip) > 0) {
+ xfs_log_force(mp, (xfs_lsn_t)0,
+ XFS_LOG_FORCE | XFS_LOG_SYNC);
+ }
+
+ } else {
+ xfs_trans_t *tp;
+
+ /*
+ * O_SYNC or O_DSYNC _with_ a size update are handled
+ * the same way.
+ *
+ * If the write was synchronous then we need to make
+ * sure that the inode modification time is permanent.
+ * We'll have updated the timestamp above, so here
+ * we use a synchronous transaction to log the inode.
+ * It's not fast, but it's necessary.
+ *
+ * If this is a dsync write and the size got changed
+ * non-transactionally, then we need to ensure that
+ * the size change gets logged in a synchronous
+ * transaction.
+ */
+
+ tp = xfs_trans_alloc(mp, XFS_TRANS_WRITE_SYNC);
+ if ((error = xfs_trans_reserve(tp, 0,
+ XFS_SWRITE_LOG_RES(mp),
+ 0, 0, 0))) {
+ /* Transaction reserve failed */
+ xfs_trans_cancel(tp, 0);
+ } else {
+ /* Transaction reserve successful */
+ xfs_ilock(xip, XFS_ILOCK_EXCL);
+ xfs_trans_ijoin(tp, xip, XFS_ILOCK_EXCL);
+ xfs_trans_ihold(tp, xip);
+ xfs_trans_log_inode(tp, xip, XFS_ILOG_CORE);
+ xfs_trans_set_sync(tp);
+ error = xfs_trans_commit(tp, 0, NULL);
+ xfs_iunlock(xip, XFS_ILOCK_EXCL);
+ }
+ }
+ } /* O_SYNC || IS_SYNC */
+
+ xfs_rwunlock(bdp, locktype);
+ return ret;
+}
+
+/*
+ * All xfs metadata buffers except log state machine buffers
+ * get this attached as their b_bdstrat callback function.
+ * This lets us catch a buffer after it has been prematurely
+ * unpinned as part of a forced filesystem shutdown.
+ */
+int
+xfs_bdstrat_cb(struct xfs_buf *bp)
+{
+ xfs_mount_t *mp;
+
+ mp = XFS_BUF_FSPRIVATE3(bp, xfs_mount_t *);
+ if (!XFS_FORCED_SHUTDOWN(mp)) {
+ pagebuf_iorequest(bp);
+ return 0;
+ } else {
+ xfs_buftrace("XFS__BDSTRAT IOERROR", bp);
+ /*
+ * Metadata write that didn't get logged but
+ * written delayed anyway. These aren't associated
+ * with a transaction, and can be ignored.
+ */
+ if (XFS_BUF_IODONE_FUNC(bp) == NULL &&
+ (XFS_BUF_ISREAD(bp)) == 0)
+ return (xfs_bioerror_relse(bp));
+ else
+ return (xfs_bioerror(bp));
+ }
+}
+
+
+int
+xfs_bmap(bhv_desc_t *bdp,
+ xfs_off_t offset,
+ ssize_t count,
+ int flags,
+ xfs_iomap_t *iomapp,
+ int *niomaps)
+{
+ xfs_inode_t *ip = XFS_BHVTOI(bdp);
+ xfs_iocore_t *io = &ip->i_iocore;
+
+ ASSERT((ip->i_d.di_mode & S_IFMT) == S_IFREG);
+ ASSERT(((ip->i_d.di_flags & XFS_DIFLAG_REALTIME) != 0) ==
+ ((ip->i_iocore.io_flags & XFS_IOCORE_RT) != 0));
+
+ return xfs_iomap(io, offset, count, flags, iomapp, niomaps);
+}
+
+/*
+ * Wrapper around bdstrat so that we can stop data
+ * from going to disk in case we are shutting down the filesystem.
+ * Typically user data goes through this path; one of the exceptions
+ * is the superblock.
+ */
+int
+xfsbdstrat(
+ struct xfs_mount *mp,
+ struct xfs_buf *bp)
+{
+ ASSERT(mp);
+ if (!XFS_FORCED_SHUTDOWN(mp)) {
+ /* Grio redirection would go here
+ * if (XFS_BUF_IS_GRIO(bp)) {
+ */
+
+ pagebuf_iorequest(bp);
+ return 0;
+ }
+
+ xfs_buftrace("XFSBDSTRAT IOERROR", bp);
+ return (xfs_bioerror_relse(bp));
+}
+
+/*
+ * If the underlying (data/log/rt) device is readonly, there are some
+ * operations that cannot proceed.
+ */
+int
+xfs_dev_is_read_only(
+ xfs_mount_t *mp,
+ char *message)
+{
+ if (xfs_readonly_buftarg(mp->m_ddev_targp) ||
+ xfs_readonly_buftarg(mp->m_logdev_targp) ||
+ (mp->m_rtdev_targp && xfs_readonly_buftarg(mp->m_rtdev_targp))) {
+ cmn_err(CE_NOTE,
+ "XFS: %s required on read-only device.", message);
+ cmn_err(CE_NOTE,
+ "XFS: write access unavailable, cannot proceed.");
+ return EROFS;
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_LRW_H__
+#define __XFS_LRW_H__
+
+struct vnode;
+struct bhv_desc;
+struct xfs_mount;
+struct xfs_iocore;
+struct xfs_inode;
+struct xfs_bmbt_irec;
+struct xfs_buf;
+struct xfs_iomap;
+
+#if defined(XFS_RW_TRACE)
+/*
+ * Defines for the trace mechanisms in xfs_lrw.c.
+ */
+#define XFS_RW_KTRACE_SIZE 128
+
+#define XFS_READ_ENTER 1
+#define XFS_WRITE_ENTER 2
+#define XFS_IOMAP_READ_ENTER 3
+#define XFS_IOMAP_WRITE_ENTER 4
+#define XFS_IOMAP_READ_MAP 5
+#define XFS_IOMAP_WRITE_MAP 6
+#define XFS_IOMAP_WRITE_NOSPACE 7
+#define XFS_ITRUNC_START 8
+#define XFS_ITRUNC_FINISH1 9
+#define XFS_ITRUNC_FINISH2 10
+#define XFS_CTRUNC1 11
+#define XFS_CTRUNC2 12
+#define XFS_CTRUNC3 13
+#define XFS_CTRUNC4 14
+#define XFS_CTRUNC5 15
+#define XFS_CTRUNC6 16
+#define XFS_BUNMAPI 17
+#define XFS_INVAL_CACHED 18
+#define XFS_DIORD_ENTER 19
+#define XFS_DIOWR_ENTER 20
+#define XFS_SENDFILE_ENTER 21
+#define XFS_WRITEPAGE_ENTER 22
+#define XFS_RELEASEPAGE_ENTER 23
+#define XFS_IOMAP_ALLOC_ENTER 24
+#define XFS_IOMAP_ALLOC_MAP 25
+#define XFS_IOMAP_UNWRITTEN 26
+extern void xfs_rw_enter_trace(int, struct xfs_iocore *,
+ const struct iovec *, size_t, loff_t, int);
+extern void xfs_inval_cached_trace(struct xfs_iocore *,
+ xfs_off_t, xfs_off_t, xfs_off_t, xfs_off_t);
+#else
+#define xfs_rw_enter_trace(tag, io, iovec, segs, offset, ioflags)
+#define xfs_inval_cached_trace(io, offset, len, first, last)
+#endif
+
+/*
+ * Maximum count of bmaps used by read and write paths.
+ */
+#define XFS_MAX_RW_NBMAPS 4
+
+extern int xfs_bmap(struct bhv_desc *, xfs_off_t, ssize_t, int,
+ struct xfs_iomap *, int *);
+extern int xfsbdstrat(struct xfs_mount *, struct xfs_buf *);
+extern int xfs_bdstrat_cb(struct xfs_buf *);
+
+extern int xfs_zero_eof(struct vnode *, struct xfs_iocore *, xfs_off_t,
+ xfs_fsize_t, xfs_fsize_t);
+extern void xfs_inval_cached_pages(struct vnode *, struct xfs_iocore *,
+ xfs_off_t, int, int);
+extern ssize_t xfs_read(struct bhv_desc *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+extern ssize_t xfs_write(struct bhv_desc *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+extern ssize_t xfs_sendfile(struct bhv_desc *, struct file *,
+ loff_t *, int, size_t, read_actor_t,
+ void *, struct cred *);
+
+extern int xfs_dev_is_read_only(struct xfs_mount *, char *);
+
+#define XFS_FSB_TO_DB_IO(io,fsb) \
+ (((io)->io_flags & XFS_IOCORE_RT) ? \
+ XFS_FSB_TO_BB((io)->io_mount, (fsb)) : \
+ XFS_FSB_TO_DADDR((io)->io_mount, (fsb)))
+
+#endif /* __XFS_LRW_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include <linux/proc_fs.h>
+
+DEFINE_PER_CPU(struct xfsstats, xfsstats);
+
+STATIC int
+xfs_read_xfsstats(
+ char *buffer,
+ char **start,
+ off_t offset,
+ int count,
+ int *eof,
+ void *data)
+{
+ int c, i, j, len, val;
+ __uint64_t xs_xstrat_bytes = 0;
+ __uint64_t xs_write_bytes = 0;
+ __uint64_t xs_read_bytes = 0;
+
+ static struct xstats_entry {
+ char *desc;
+ int endpoint;
+ } xstats[] = {
+ { "extent_alloc", XFSSTAT_END_EXTENT_ALLOC },
+ { "abt", XFSSTAT_END_ALLOC_BTREE },
+ { "blk_map", XFSSTAT_END_BLOCK_MAPPING },
+ { "bmbt", XFSSTAT_END_BLOCK_MAP_BTREE },
+ { "dir", XFSSTAT_END_DIRECTORY_OPS },
+ { "trans", XFSSTAT_END_TRANSACTIONS },
+ { "ig", XFSSTAT_END_INODE_OPS },
+ { "log", XFSSTAT_END_LOG_OPS },
+ { "push_ail", XFSSTAT_END_TAIL_PUSHING },
+ { "xstrat", XFSSTAT_END_WRITE_CONVERT },
+ { "rw", XFSSTAT_END_READ_WRITE_OPS },
+ { "attr", XFSSTAT_END_ATTRIBUTE_OPS },
+ { "icluster", XFSSTAT_END_INODE_CLUSTER },
+ { "vnodes", XFSSTAT_END_VNODE_OPS },
+ { "buf", XFSSTAT_END_BUF },
+ };
+
+ /* Loop over all stats groups */
+ for (i = j = len = 0; i < sizeof(xstats)/sizeof(struct xstats_entry); i++) {
+ len += sprintf(buffer + len, "%s", xstats[i].desc);
+ /* inner loop does each group */
+ while (j < xstats[i].endpoint) {
+ val = 0;
+ /* sum over all cpus */
+ for (c = 0; c < NR_CPUS; c++) {
+ if (!cpu_possible(c)) continue;
+ val += *(((__u32*)&per_cpu(xfsstats, c) + j));
+ }
+ len += sprintf(buffer + len, " %u", val);
+ j++;
+ }
+ buffer[len++] = '\n';
+ }
+ /* extra precision counters */
+ for (i = 0; i < NR_CPUS; i++) {
+ if (!cpu_possible(i)) continue;
+ xs_xstrat_bytes += per_cpu(xfsstats, i).xs_xstrat_bytes;
+ xs_write_bytes += per_cpu(xfsstats, i).xs_write_bytes;
+ xs_read_bytes += per_cpu(xfsstats, i).xs_read_bytes;
+ }
+
+ len += sprintf(buffer + len, "xpc %Lu %Lu %Lu\n",
+ xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
+ len += sprintf(buffer + len, "debug %u\n",
+#if defined(DEBUG)
+ 1);
+#else
+ 0);
+#endif
+
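+ /*
+ * Standard /proc read-side bookkeeping: signal EOF once the
+ * seek offset passes the formatted length, otherwise return at
+ * most count bytes starting at buffer + offset.
+ */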
+ if (offset >= len) {
+ *start = buffer;
+ *eof = 1;
+ return 0;
+ }
+ *start = buffer + offset;
+ if ((len -= offset) > count)
+ return count;
+ *eof = 1;
+
+ return len;
+}
+
+void
+xfs_init_procfs(void)
+{
+ if (!proc_mkdir("fs/xfs", 0))
+ return;
+ create_proc_read_entry("fs/xfs/stat", 0, 0, xfs_read_xfsstats, NULL);
+}
+
+void
+xfs_cleanup_procfs(void)
+{
+ remove_proc_entry("fs/xfs/stat", NULL);
+ remove_proc_entry("fs/xfs", NULL);
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_clnt.h"
+#include "xfs_trans.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_alloc.h"
+#include "xfs_dmapi.h"
+#include "xfs_quota.h"
+#include "xfs_mount.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_btree.h"
+#include "xfs_ialloc.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_bmap.h"
+#include "xfs_bit.h"
+#include "xfs_rtalloc.h"
+#include "xfs_error.h"
+#include "xfs_itable.h"
+#include "xfs_rw.h"
+#include "xfs_acl.h"
+#include "xfs_cap.h"
+#include "xfs_mac.h"
+#include "xfs_attr.h"
+#include "xfs_buf_item.h"
+#include "xfs_utils.h"
+#include "xfs_version.h"
+
+#include <linux/namei.h>
+#include <linux/init.h>
+#include <linux/mount.h>
+#include <linux/suspend.h>
+#include <linux/writeback.h>
+
+STATIC struct quotactl_ops linvfs_qops;
+STATIC struct super_operations linvfs_sops;
+STATIC struct export_operations linvfs_export_ops;
+STATIC kmem_cache_t * linvfs_inode_cachep;
+
+STATIC struct xfs_mount_args *
+xfs_args_allocate(
+ struct super_block *sb)
+{
+ struct xfs_mount_args *args;
+
+ args = kmem_zalloc(sizeof(struct xfs_mount_args), KM_SLEEP);
+ args->logbufs = args->logbufsize = -1;
+ strncpy(args->fsname, sb->s_id, MAXNAMELEN);
+
+ /* Copy the already-parsed mount(2) flags we're interested in */
+ if (sb->s_flags & MS_NOATIME)
+ args->flags |= XFSMNT_NOATIME;
+
+ /* Default to 32 bit inodes on Linux all the time */
+ args->flags |= XFSMNT_32BITINODES;
+
+ return args;
+}
+
+__uint64_t
+xfs_max_file_offset(
+ unsigned int blockshift)
+{
+ unsigned int pagefactor = 1;
+ unsigned int bitshift = BITS_PER_LONG - 1;
+
+ /* Figure out maximum filesize, on Linux this can depend on
+ * the filesystem blocksize (on 32 bit platforms).
+ * __block_prepare_write does this in an [unsigned] long...
+ * page->index << (PAGE_CACHE_SHIFT - bbits)
+ * So, for page sized blocks (4K on 32 bit platforms),
+ * this wraps at around 8Tb (hence MAX_LFS_FILESIZE which is
+ * (((u64)PAGE_CACHE_SIZE << (BITS_PER_LONG-1))-1)
+ * but for smaller blocksizes it is less (bbits = log2 bsize).
+ * Note1: get_block_t takes a long (implicit cast from above)
+ * Note2: The Large Block Device (LBD and HAVE_SECTOR_T) patch
+ * can optionally convert the [unsigned] long from above into
+ * an [unsigned] long long.
+ */
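+ /*
+ * For example (illustrative): with 4 KiB pages and 4 KiB blocks
+ * on a 32 bit kernel without CONFIG_LBD, pagefactor = 4096 and
+ * bitshift = 31, giving a maximum offset of 2^43 - 1 (~8Tb);
+ * with 1 KiB blocks this drops to 2^41 - 1 (~2Tb).
+ */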
+
+#if BITS_PER_LONG == 32
+# if defined(CONFIG_LBD)
+ ASSERT(sizeof(sector_t) == 8);
+ pagefactor = PAGE_CACHE_SIZE;
+ bitshift = BITS_PER_LONG;
+# else
+ pagefactor = PAGE_CACHE_SIZE >> (PAGE_CACHE_SHIFT - blockshift);
+# endif
+#endif
+
+ return (((__uint64_t)pagefactor) << bitshift) - 1;
+}
+
+STATIC __inline__ void
+xfs_set_inodeops(
+ struct inode *inode)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ if (vp->v_type == VNON) {
+ make_bad_inode(inode);
+ } else if (S_ISREG(inode->i_mode)) {
+ inode->i_op = &linvfs_file_inode_operations;
+ inode->i_fop = &linvfs_file_operations;
+ inode->i_mapping->a_ops = &linvfs_aops;
+ } else if (S_ISDIR(inode->i_mode)) {
+ inode->i_op = &linvfs_dir_inode_operations;
+ inode->i_fop = &linvfs_dir_operations;
+ } else if (S_ISLNK(inode->i_mode)) {
+ inode->i_op = &linvfs_symlink_inode_operations;
+ if (inode->i_blocks)
+ inode->i_mapping->a_ops = &linvfs_aops;
+ } else {
+ inode->i_op = &linvfs_file_inode_operations;
+ init_special_inode(inode, inode->i_mode, inode->i_rdev);
+ }
+}
+
+STATIC __inline__ void
+xfs_revalidate_inode(
+ xfs_mount_t *mp,
+ vnode_t *vp,
+ xfs_inode_t *ip)
+{
+ struct inode *inode = LINVFS_GET_IP(vp);
+
+ inode->i_mode = (ip->i_d.di_mode & MODEMASK) | VTTOIF(vp->v_type);
+ inode->i_nlink = ip->i_d.di_nlink;
+ inode->i_uid = ip->i_d.di_uid;
+ inode->i_gid = ip->i_d.di_gid;
+ if (((1 << vp->v_type) & ((1<<VBLK) | (1<<VCHR))) == 0) {
+ inode->i_rdev = 0;
+ } else {
+ xfs_dev_t dev = ip->i_df.if_u2.if_rdev;
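+ /* The on-disk rdev is kept in IRIX (sysv) dev_t format;
+ * unpack it into a Linux device number.
+ */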
+ inode->i_rdev = MKDEV(sysv_major(dev) & 0x1ff, sysv_minor(dev));
+ }
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_generation = ip->i_d.di_gen;
+ i_size_write(inode, ip->i_d.di_size);
+ inode->i_blocks =
+ XFS_FSB_TO_BB(mp, ip->i_d.di_nblocks + ip->i_delayed_blks);
+ inode->i_atime.tv_sec = ip->i_d.di_atime.t_sec;
+ inode->i_atime.tv_nsec = ip->i_d.di_atime.t_nsec;
+ inode->i_mtime.tv_sec = ip->i_d.di_mtime.t_sec;
+ inode->i_mtime.tv_nsec = ip->i_d.di_mtime.t_nsec;
+ inode->i_ctime.tv_sec = ip->i_d.di_ctime.t_sec;
+ inode->i_ctime.tv_nsec = ip->i_d.di_ctime.t_nsec;
+ if (ip->i_d.di_flags & XFS_DIFLAG_IMMUTABLE)
+ inode->i_flags |= S_IMMUTABLE;
+ else
+ inode->i_flags &= ~S_IMMUTABLE;
+ if (ip->i_d.di_flags & XFS_DIFLAG_APPEND)
+ inode->i_flags |= S_APPEND;
+ else
+ inode->i_flags &= ~S_APPEND;
+ if (ip->i_d.di_flags & XFS_DIFLAG_SYNC)
+ inode->i_flags |= S_SYNC;
+ else
+ inode->i_flags &= ~S_SYNC;
+ if (ip->i_d.di_flags & XFS_DIFLAG_NOATIME)
+ inode->i_flags |= S_NOATIME;
+ else
+ inode->i_flags &= ~S_NOATIME;
+ vp->v_flag &= ~VMODIFIED;
+}
+
+void
+xfs_initialize_vnode(
+ bhv_desc_t *bdp,
+ vnode_t *vp,
+ bhv_desc_t *inode_bhv,
+ int unlock)
+{
+ xfs_inode_t *ip = XFS_BHVTOI(inode_bhv);
+ struct inode *inode = LINVFS_GET_IP(vp);
+
+ if (!inode_bhv->bd_vobj) {
+ vp->v_vfsp = bhvtovfs(bdp);
+ bhv_desc_init(inode_bhv, ip, vp, &xfs_vnodeops);
+ bhv_insert(VN_BHV_HEAD(vp), inode_bhv);
+ }
+
+ vp->v_type = IFTOVT(ip->i_d.di_mode);
+
+ /* If we have been called during the new-inode create process,
+ * it is too early to fill in the Linux inode.
+ */
+ if (vp->v_type == VNON)
+ return;
+
+ xfs_revalidate_inode(XFS_BHVTOM(bdp), vp, ip);
+
+ /* For new inodes we need to set the ops vectors,
+ * and unlock the inode.
+ */
+ if (unlock && (inode->i_state & I_NEW)) {
+ xfs_set_inodeops(inode);
+ unlock_new_inode(inode);
+ }
+}
+
+void
+xfs_flush_inode(
+ xfs_inode_t *ip)
+{
+ struct inode *inode = LINVFS_GET_IP(XFS_ITOV(ip));
+
+ filemap_flush(inode->i_mapping);
+}
+
+void
+xfs_flush_device(
+ xfs_inode_t *ip)
+{
+ sync_blockdev(XFS_ITOV(ip)->v_vfsp->vfs_super->s_bdev);
+ xfs_log_force(ip->i_mount, (xfs_lsn_t)0, XFS_LOG_FORCE|XFS_LOG_SYNC);
+}
+
+int
+xfs_blkdev_get(
+ xfs_mount_t *mp,
+ const char *name,
+ struct block_device **bdevp)
+{
+ int error = 0;
+
+ *bdevp = open_bdev_excl(name, 0, mp);
+ if (IS_ERR(*bdevp)) {
+ error = PTR_ERR(*bdevp);
+ printk("XFS: Invalid device [%s], error=%d\n", name, error);
+ }
+
+ return -error;
+}
+
+void
+xfs_blkdev_put(
+ struct block_device *bdev)
+{
+ if (bdev)
+ close_bdev_excl(bdev);
+}
+
+
+STATIC struct inode *
+linvfs_alloc_inode(
+ struct super_block *sb)
+{
+ vnode_t *vp;
+
+ vp = (vnode_t *)kmem_cache_alloc(linvfs_inode_cachep,
+ kmem_flags_convert(KM_SLEEP));
+ if (!vp)
+ return NULL;
+ return LINVFS_GET_IP(vp);
+}
+
+STATIC void
+linvfs_destroy_inode(
+ struct inode *inode)
+{
+ kmem_cache_free(linvfs_inode_cachep, LINVFS_GET_VP(inode));
+}
+
+STATIC void
+init_once(
+ void *data,
+ kmem_cache_t *cachep,
+ unsigned long flags)
+{
+ vnode_t *vp = (vnode_t *)data;
+
+ if ((flags & (SLAB_CTOR_VERIFY|SLAB_CTOR_CONSTRUCTOR)) ==
+ SLAB_CTOR_CONSTRUCTOR)
+ inode_init_once(LINVFS_GET_IP(vp));
+}
+
+STATIC int
+init_inodecache(void)
+{
+ linvfs_inode_cachep = kmem_cache_create("linvfs_icache",
+ sizeof(vnode_t), 0,
+ SLAB_HWCACHE_ALIGN|SLAB_RECLAIM_ACCOUNT,
+ init_once, NULL);
+
+ if (linvfs_inode_cachep == NULL)
+ return -ENOMEM;
+ return 0;
+}
+
+STATIC void
+destroy_inodecache(void)
+{
+ if (kmem_cache_destroy(linvfs_inode_cachep))
+ printk(KERN_WARNING "%s: cache still in use!\n", __FUNCTION__);
+}
+
+/*
+ * Attempt to flush the inode. This will actually fail
+ * if the inode is pinned, but we redirty the inode
+ * when it is unpinned after a log write, since that is
+ * when the inode itself becomes flushable.
+ */
+STATIC void
+linvfs_write_inode(
+ struct inode *inode,
+ int sync)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error, flags = FLUSH_INODE;
+
+ if (vp) {
+ vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address);
+ if (sync)
+ flags |= FLUSH_SYNC;
+ VOP_IFLUSH(vp, flags, error);
+ }
+}
+
+STATIC void
+linvfs_clear_inode(
+ struct inode *inode)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ if (vp) {
+ vn_rele(vp);
+ vn_trace_entry(vp, __FUNCTION__, (inst_t *)__return_address);
+ /*
+ * Do all our cleanup, and remove this vnode.
+ */
+ vn_remove(vp);
+ }
+}
+
+
+#define SYNCD_FLAGS (SYNC_FSDATA|SYNC_BDFLUSH|SYNC_ATTR)
+
+STATIC int
+xfssyncd(
+ void *arg)
+{
+ vfs_t *vfsp = (vfs_t *) arg;
+ int error;
+
+ daemonize("xfssyncd");
+
+ vfsp->vfs_sync_task = current;
+ wmb();
+ wake_up(&vfsp->vfs_wait_sync_task);
+
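+ /*
+ * Main loop: sleep for the configured interval, cooperate with
+ * software suspend, exit on unmount, skip syncing read-only
+ * mounts, otherwise push dirty data and then wake any
+ * laptop-mode waiters blocked in linvfs_sync_super().
+ */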
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((xfs_syncd_centisecs * HZ) / 100);
+ /* swsusp */
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+ if (vfsp->vfs_flag & VFS_UMOUNT)
+ break;
+ if (vfsp->vfs_flag & VFS_RDONLY)
+ continue;
+ VFS_SYNC(vfsp, SYNCD_FLAGS, NULL, error);
+
+ vfsp->vfs_sync_seq++;
+ wmb();
+ wake_up(&vfsp->vfs_wait_single_sync_task);
+ }
+
+ vfsp->vfs_sync_task = NULL;
+ wmb();
+ wake_up(&vfsp->vfs_wait_sync_task);
+
+ return 0;
+}
+
+STATIC int
+linvfs_start_syncd(
+ vfs_t *vfsp)
+{
+ int pid;
+
+ pid = kernel_thread(xfssyncd, (void *) vfsp,
+ CLONE_VM | CLONE_FS | CLONE_FILES);
+ if (pid < 0)
+ return -pid;
+ wait_event(vfsp->vfs_wait_sync_task, vfsp->vfs_sync_task);
+ return 0;
+}
+
+STATIC void
+linvfs_stop_syncd(
+ vfs_t *vfsp)
+{
+ vfsp->vfs_flag |= VFS_UMOUNT;
+ wmb();
+
+ wake_up_process(vfsp->vfs_sync_task);
+ wait_event(vfsp->vfs_wait_sync_task, !vfsp->vfs_sync_task);
+}
+
+STATIC void
+linvfs_put_super(
+ struct super_block *sb)
+{
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ linvfs_stop_syncd(vfsp);
+ VFS_SYNC(vfsp, SYNC_ATTR|SYNC_DELWRI, NULL, error);
+ if (!error)
+ VFS_UNMOUNT(vfsp, 0, NULL, error);
+ if (error) {
+ printk("XFS unmount got error %d\n", error);
+ printk("%s: vfsp/0x%p left dangling!\n", __FUNCTION__, vfsp);
+ return;
+ }
+
+ vfs_deallocate(vfsp);
+}
+
+STATIC void
+linvfs_write_super(
+ struct super_block *sb)
+{
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ if (sb->s_flags & MS_RDONLY) {
+ sb->s_dirt = 0; /* paranoia */
+ return;
+ }
+ /* Push the log and superblock a little */
+ VFS_SYNC(vfsp, SYNC_FSDATA, NULL, error);
+ sb->s_dirt = 0;
+}
+
+STATIC int
+linvfs_sync_super(
+ struct super_block *sb,
+ int wait)
+{
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+ int flags = SYNC_FSDATA;
+
+ if (wait)
+ flags |= SYNC_WAIT;
+
+ VFS_SYNC(vfsp, flags, NULL, error);
+ sb->s_dirt = 0;
+
+ if (unlikely(laptop_mode)) {
+ int prev_sync_seq = vfsp->vfs_sync_seq;
+ /*
+ * The disk must be active because we're syncing.
+ * We schedule syncd now (now that the disk is
+ * active) instead of later (when it might not be).
+ */
+ wake_up_process(vfsp->vfs_sync_task);
+ /*
+ * We have to wait for the sync iteration to complete.
+ * If we don't, the disk activity caused by the sync
+ * will come after the sync is completed, and that
+ * triggers another sync from laptop mode.
+ */
+ wait_event(vfsp->vfs_wait_single_sync_task,
+ vfsp->vfs_sync_seq != prev_sync_seq);
+ }
+
+ return -error;
+}
+
+STATIC int
+linvfs_statfs(
+ struct super_block *sb,
+ struct kstatfs *statp)
+{
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ VFS_STATVFS(vfsp, statp, NULL, error);
+ return -error;
+}
+
+STATIC int
+linvfs_remount(
+ struct super_block *sb,
+ int *flags,
+ char *options)
+{
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ struct xfs_mount_args *args = xfs_args_allocate(sb);
+ int error;
+
+ VFS_PARSEARGS(vfsp, options, args, 1, error);
+ if (!error)
+ VFS_MNTUPDATE(vfsp, flags, args, error);
+ kmem_free(args, sizeof(*args));
+ return -error;
+}
+
+STATIC void
+linvfs_freeze_fs(
+ struct super_block *sb)
+{
+ VFS_FREEZE(LINVFS_GET_VFS(sb));
+}
+
+STATIC struct dentry *
+linvfs_get_parent(
+ struct dentry *child)
+{
+ int error;
+ vnode_t *vp, *cvp;
+ struct dentry *parent;
+ struct inode *ip = NULL;
+ struct dentry dotdot;
+
+ dotdot.d_name.name = "..";
+ dotdot.d_name.len = 2;
+ dotdot.d_inode = NULL;
+
+ cvp = NULL;
+ vp = LINVFS_GET_VP(child->d_inode);
+ VOP_LOOKUP(vp, &dotdot, &cvp, 0, NULL, NULL, error);
+
+ if (!error) {
+ ASSERT(cvp);
+ ip = LINVFS_GET_IP(cvp);
+ if (!ip) {
+ VN_RELE(cvp);
+ return ERR_PTR(-EACCES);
+ }
+ }
+ if (error)
+ return ERR_PTR(-error);
+ parent = d_alloc_anon(ip);
+ if (!parent) {
+ VN_RELE(cvp);
+ parent = ERR_PTR(-ENOMEM);
+ }
+ return parent;
+}
+
+STATIC struct dentry *
+linvfs_get_dentry(
+ struct super_block *sb,
+ void *data)
+{
+ vnode_t *vp;
+ struct inode *inode;
+ struct dentry *result;
+ xfs_fid2_t xfid;
+ vfs_t *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
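+ /*
+ * The opaque NFS handle carries the inode number in word 0 and
+ * the generation in word 1; rebuild an xfs_fid2_t from those.
+ */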
+ xfid.fid_len = sizeof(xfs_fid2_t) - sizeof(xfid.fid_len);
+ xfid.fid_pad = 0;
+ xfid.fid_gen = ((__u32 *)data)[1];
+ xfid.fid_ino = ((__u32 *)data)[0];
+
+ VFS_VGET(vfsp, &vp, (fid_t *)&xfid, error);
+ if (error || vp == NULL)
+ return ERR_PTR(-ESTALE);
+
+ inode = LINVFS_GET_IP(vp);
+ result = d_alloc_anon(inode);
+ if (!result) {
+ iput(inode);
+ return ERR_PTR(-ENOMEM);
+ }
+ return result;
+}
+
+STATIC int
+linvfs_show_options(
+ struct seq_file *m,
+ struct vfsmount *mnt)
+{
+ struct vfs *vfsp = LINVFS_GET_VFS(mnt->mnt_sb);
+ int error;
+
+ VFS_SHOWARGS(vfsp, m, error);
+ return error;
+}
+
+STATIC int
+linvfs_getxstate(
+ struct super_block *sb,
+ struct fs_quota_stat *fqs)
+{
+ struct vfs *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ VFS_QUOTACTL(vfsp, Q_XGETQSTAT, 0, (caddr_t)fqs, error);
+ return -error;
+}
+
+STATIC int
+linvfs_setxstate(
+ struct super_block *sb,
+ unsigned int flags,
+ int op)
+{
+ struct vfs *vfsp = LINVFS_GET_VFS(sb);
+ int error;
+
+ VFS_QUOTACTL(vfsp, op, 0, (caddr_t)&flags, error);
+ return -error;
+}
+
+STATIC int
+linvfs_getxquota(
+ struct super_block *sb,
+ int type,
+ qid_t id,
+ struct fs_disk_quota *fdq)
+{
+ struct vfs *vfsp = LINVFS_GET_VFS(sb);
+ int error, getmode;
+
+ getmode = (type == GRPQUOTA) ? Q_XGETGQUOTA : Q_XGETQUOTA;
+ VFS_QUOTACTL(vfsp, getmode, id, (caddr_t)fdq, error);
+ return -error;
+}
+
+STATIC int
+linvfs_setxquota(
+ struct super_block *sb,
+ int type,
+ qid_t id,
+ struct fs_disk_quota *fdq)
+{
+ struct vfs *vfsp = LINVFS_GET_VFS(sb);
+ int error, setmode;
+
+ setmode = (type == GRPQUOTA) ? Q_XSETGQLIM : Q_XSETQLIM;
+ VFS_QUOTACTL(vfsp, setmode, id, (caddr_t)fdq, error);
+ return -error;
+}
+
+STATIC int
+linvfs_fill_super(
+ struct super_block *sb,
+ void *data,
+ int silent)
+{
+ vnode_t *rootvp;
+ struct vfs *vfsp = vfs_allocate();
+ struct xfs_mount_args *args = xfs_args_allocate(sb);
+ struct kstatfs statvfs;
+ int error, error2;
+
+ vfsp->vfs_super = sb;
+ LINVFS_SET_VFS(sb, vfsp);
+ if (sb->s_flags & MS_RDONLY)
+ vfsp->vfs_flag |= VFS_RDONLY;
+ bhv_insert_all_vfsops(vfsp);
+
+ VFS_PARSEARGS(vfsp, (char *)data, args, 0, error);
+ if (error) {
+ bhv_remove_all_vfsops(vfsp, 1);
+ goto fail_vfsop;
+ }
+
+ sb_min_blocksize(sb, BBSIZE);
+ sb->s_export_op = &linvfs_export_ops;
+ sb->s_qcop = &linvfs_qops;
+ sb->s_op = &linvfs_sops;
+
+ VFS_MOUNT(vfsp, args, NULL, error);
+ if (error) {
+ bhv_remove_all_vfsops(vfsp, 1);
+ goto fail_vfsop;
+ }
+
+ VFS_STATVFS(vfsp, &statvfs, NULL, error);
+ if (error)
+ goto fail_unmount;
+
+ sb->s_dirt = 1;
+ sb->s_magic = statvfs.f_type;
+ sb->s_blocksize = statvfs.f_bsize;
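+ /* f_bsize is a power of two, so ffs() - 1 yields log2(blocksize) */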
+ sb->s_blocksize_bits = ffs(statvfs.f_bsize) - 1;
+ sb->s_maxbytes = xfs_max_file_offset(sb->s_blocksize_bits);
+ set_posix_acl_flag(sb);
+
+ VFS_ROOT(vfsp, &rootvp, error);
+ if (error)
+ goto fail_unmount;
+
+ sb->s_root = d_alloc_root(LINVFS_GET_IP(rootvp));
+ if (!sb->s_root) {
+ error = ENOMEM;
+ goto fail_vnrele;
+ }
+ if (is_bad_inode(sb->s_root->d_inode)) {
+ error = EINVAL;
+ goto fail_vnrele;
+ }
+ if ((error = linvfs_start_syncd(vfsp)))
+ goto fail_vnrele;
+ vn_trace_exit(rootvp, __FUNCTION__, (inst_t *)__return_address);
+
+ kmem_free(args, sizeof(*args));
+ return 0;
+
+fail_vnrele:
+ if (sb->s_root) {
+ dput(sb->s_root);
+ sb->s_root = NULL;
+ } else {
+ VN_RELE(rootvp);
+ }
+
+fail_unmount:
+ VFS_UNMOUNT(vfsp, 0, NULL, error2);
+
+fail_vfsop:
+ vfs_deallocate(vfsp);
+ kmem_free(args, sizeof(*args));
+ return -error;
+}
+
+STATIC struct super_block *
+linvfs_get_sb(
+ struct file_system_type *fs_type,
+ int flags,
+ const char *dev_name,
+ void *data)
+{
+ return get_sb_bdev(fs_type, flags, dev_name, data, linvfs_fill_super);
+}
+
+
+STATIC struct export_operations linvfs_export_ops = {
+ .get_parent = linvfs_get_parent,
+ .get_dentry = linvfs_get_dentry,
+};
+
+STATIC struct super_operations linvfs_sops = {
+ .alloc_inode = linvfs_alloc_inode,
+ .destroy_inode = linvfs_destroy_inode,
+ .write_inode = linvfs_write_inode,
+ .clear_inode = linvfs_clear_inode,
+ .put_super = linvfs_put_super,
+ .write_super = linvfs_write_super,
+ .sync_fs = linvfs_sync_super,
+ .write_super_lockfs = linvfs_freeze_fs,
+ .statfs = linvfs_statfs,
+ .remount_fs = linvfs_remount,
+ .show_options = linvfs_show_options,
+};
+
+STATIC struct quotactl_ops linvfs_qops = {
+ .get_xstate = linvfs_getxstate,
+ .set_xstate = linvfs_setxstate,
+ .get_xquota = linvfs_getxquota,
+ .set_xquota = linvfs_setxquota,
+};
+
+STATIC struct file_system_type xfs_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "xfs",
+ .get_sb = linvfs_get_sb,
+ .kill_sb = kill_block_super,
+ .fs_flags = FS_REQUIRES_DEV,
+};
+
+
+STATIC int __init
+init_xfs_fs(void)
+{
+ int error;
+ struct sysinfo si;
+ static char message[] __initdata = KERN_INFO \
+ XFS_VERSION_STRING " with " XFS_BUILD_OPTIONS " enabled\n";
+
+ printk(message);
+
+ si_meminfo(&si);
+ xfs_physmem = si.totalram;
+
+ ktrace_init(64);
+
+ error = init_inodecache();
+ if (error < 0)
+ goto undo_inodecache;
+
+ error = pagebuf_init();
+ if (error < 0)
+ goto undo_pagebuf;
+
+ vn_init();
+ xfs_init();
+ uuid_init();
+ vfs_initdmapi();
+ vfs_initquota();
+
+ error = register_filesystem(&xfs_fs_type);
+ if (error)
+ goto undo_register;
+ return 0;
+
+undo_register:
+ pagebuf_terminate();
+
+undo_pagebuf:
+ destroy_inodecache();
+
+undo_inodecache:
+ return error;
+}
+
+STATIC void __exit
+exit_xfs_fs(void)
+{
+ vfs_exitquota();
+ vfs_exitdmapi();
+ unregister_filesystem(&xfs_fs_type);
+ xfs_cleanup();
+ pagebuf_terminate();
+ destroy_inodecache();
+ ktrace_uninit();
+}
+
+module_init(init_xfs_fs);
+module_exit(exit_xfs_fs);
+
+MODULE_AUTHOR("Silicon Graphics, Inc.");
+MODULE_DESCRIPTION(XFS_VERSION_STRING " with " XFS_BUILD_OPTIONS " enabled");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPER_H__
+#define __XFS_SUPER_H__
+
+#ifdef CONFIG_XFS_DMAPI
+# define vfs_insertdmapi(vfs) vfs_insertops(vfs, &xfs_dmops)
+# define vfs_initdmapi() dmapi_init()
+# define vfs_exitdmapi() dmapi_uninit()
+#else
+# define vfs_insertdmapi(vfs) do { } while (0)
+# define vfs_initdmapi() do { } while (0)
+# define vfs_exitdmapi() do { } while (0)
+#endif
+
+#ifdef CONFIG_XFS_QUOTA
+# define vfs_insertquota(vfs) vfs_insertops(vfs, &xfs_qmops)
+extern void xfs_qm_init(void);
+extern void xfs_qm_exit(void);
+# define vfs_initquota() xfs_qm_init()
+# define vfs_exitquota() xfs_qm_exit()
+#else
+# define vfs_insertquota(vfs) do { } while (0)
+# define vfs_initquota() do { } while (0)
+# define vfs_exitquota() do { } while (0)
+#endif
+
+#ifdef CONFIG_XFS_POSIX_ACL
+# define XFS_ACL_STRING "ACLs, "
+# define set_posix_acl_flag(sb) ((sb)->s_flags |= MS_POSIXACL)
+#else
+# define XFS_ACL_STRING
+# define set_posix_acl_flag(sb) do { } while (0)
+#endif
+
+#ifdef CONFIG_XFS_SECURITY
+# define XFS_SECURITY_STRING "security attributes, "
+# define ENOSECURITY 0
+#else
+# define XFS_SECURITY_STRING
+# define ENOSECURITY EOPNOTSUPP
+#endif
+
+#ifdef CONFIG_XFS_RT
+# define XFS_REALTIME_STRING "realtime, "
+#else
+# define XFS_REALTIME_STRING
+#endif
+
+#if XFS_BIG_BLKNOS
+# if XFS_BIG_INUMS
+# define XFS_BIGFS_STRING "large block/inode numbers, "
+# else
+# define XFS_BIGFS_STRING "large block numbers, "
+# endif
+#else
+# define XFS_BIGFS_STRING
+#endif
+
+#ifdef CONFIG_XFS_TRACE
+# define XFS_TRACE_STRING "tracing, "
+#else
+# define XFS_TRACE_STRING
+#endif
+
+#ifdef DEBUG
+# define XFS_DBG_STRING "debug"
+#else
+# define XFS_DBG_STRING "no debug"
+#endif
+
+#define XFS_BUILD_OPTIONS XFS_ACL_STRING \
+ XFS_SECURITY_STRING \
+ XFS_REALTIME_STRING \
+ XFS_BIGFS_STRING \
+ XFS_TRACE_STRING \
+ XFS_DBG_STRING /* DBG must be last */
+
+#define LINVFS_GET_VFS(s) \
+ (vfs_t *)((s)->s_fs_info)
+#define LINVFS_SET_VFS(s, vfsp) \
+ ((s)->s_fs_info = vfsp)
+
+struct xfs_inode;
+struct xfs_mount;
+struct xfs_buftarg;
+struct block_device;
+
+extern __uint64_t xfs_max_file_offset(unsigned int);
+
+extern void xfs_initialize_vnode(bhv_desc_t *, vnode_t *, bhv_desc_t *, int);
+
+extern void xfs_flush_inode(struct xfs_inode *);
+extern void xfs_flush_device(struct xfs_inode *);
+
+extern int xfs_blkdev_get(struct xfs_mount *, const char *,
+ struct block_device **);
+extern void xfs_blkdev_put(struct block_device *);
+
+#endif /* __XFS_SUPER_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_rw.h"
+#include <linux/sysctl.h>
+#include <linux/proc_fs.h>
+
+
+static struct ctl_table_header *xfs_table_header;
+
+
+#ifdef CONFIG_PROC_FS
+STATIC int
+xfs_stats_clear_proc_handler(
+ ctl_table *ctl,
+ int write,
+ struct file *filp,
+ void *buffer,
+ size_t *lenp)
+{
+ int c, ret, *valp = ctl->data;
+ __uint32_t vn_active;
+
+ ret = proc_dointvec_minmax(ctl, write, filp, buffer, lenp);
+
+ if (!ret && write && *valp) {
+ printk("XFS Clearing xfsstats\n");
+ for (c = 0; c < NR_CPUS; c++) {
+ if (!cpu_possible(c)) continue;
+ preempt_disable();
+ /* save vn_active, it's a universal truth! */
+ vn_active = per_cpu(xfsstats, c).vn_active;
+ memset(&per_cpu(xfsstats, c), 0,
+ sizeof(struct xfsstats));
+ per_cpu(xfsstats, c).vn_active = vn_active;
+ preempt_enable();
+ }
+ xfs_stats_clear = 0;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_PROC_FS */
+
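+/*
+ * The entries below are positional ctl_table initializers:
+ * {ctl_name, procname, data, maxlen, mode, child, proc_handler,
+ * strategy, de, extra1, extra2}, with extra1/extra2 supplying the
+ * min/max bounds for proc_dointvec_minmax().
+ */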
+STATIC ctl_table xfs_table[] = {
+ {XFS_RESTRICT_CHOWN, "restrict_chown", &xfs_params.restrict_chown.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.restrict_chown.min, &xfs_params.restrict_chown.max},
+
+ {XFS_SGID_INHERIT, "irix_sgid_inherit", &xfs_params.sgid_inherit.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.sgid_inherit.min, &xfs_params.sgid_inherit.max},
+
+ {XFS_SYMLINK_MODE, "irix_symlink_mode", &xfs_params.symlink_mode.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.symlink_mode.min, &xfs_params.symlink_mode.max},
+
+ {XFS_PANIC_MASK, "panic_mask", &xfs_params.panic_mask.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.panic_mask.min, &xfs_params.panic_mask.max},
+
+ {XFS_ERRLEVEL, "error_level", &xfs_params.error_level.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.error_level.min, &xfs_params.error_level.max},
+
+ {XFS_SYNCD_TIMER, "xfssyncd_centisecs", &xfs_params.syncd_timer.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.syncd_timer.min, &xfs_params.syncd_timer.max},
+
+ {XFS_INHERIT_SYNC, "inherit_sync", &xfs_params.inherit_sync.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_sync.min, &xfs_params.inherit_sync.max},
+
+ {XFS_INHERIT_NODUMP, "inherit_nodump", &xfs_params.inherit_nodump.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_nodump.min, &xfs_params.inherit_nodump.max},
+
+ {XFS_INHERIT_NOATIME, "inherit_noatime", &xfs_params.inherit_noatim.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_noatim.min, &xfs_params.inherit_noatim.max},
+
+ {XFS_BUF_TIMER, "xfsbufd_centisecs", &xfs_params.xfs_buf_timer.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.xfs_buf_timer.min, &xfs_params.xfs_buf_timer.max},
+
+ {XFS_BUF_AGE, "age_buffer_centisecs", &xfs_params.xfs_buf_age.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.xfs_buf_age.min, &xfs_params.xfs_buf_age.max},
+
+ /* please keep this the last entry */
+#ifdef CONFIG_PROC_FS
+ {XFS_STATS_CLEAR, "stats_clear", &xfs_params.stats_clear.val,
+ sizeof(int), 0644, NULL, &xfs_stats_clear_proc_handler,
+ &sysctl_intvec, NULL,
+ &xfs_params.stats_clear.min, &xfs_params.stats_clear.max},
+#endif /* CONFIG_PROC_FS */
+
+ {0}
+};
+
+STATIC ctl_table xfs_dir_table[] = {
+ {FS_XFS, "xfs", NULL, 0, 0555, xfs_table},
+ {0}
+};
+
+STATIC ctl_table xfs_root_table[] = {
+ {CTL_FS, "fs", NULL, 0, 0555, xfs_dir_table},
+ {0}
+};
+
+void
+xfs_sysctl_register(void)
+{
+ xfs_table_header = register_sysctl_table(xfs_root_table, 1);
+}
+
+void
+xfs_sysctl_unregister(void)
+{
+ if (xfs_table_header)
+ unregister_sysctl_table(xfs_table_header);
+}
--- /dev/null
+/*
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#ifndef __XFS_SYSCTL_H__
+#define __XFS_SYSCTL_H__
+
+#include <linux/sysctl.h>
+
+/*
+ * Tunable xfs parameters
+ */
+
+typedef struct xfs_sysctl_val {
+ int min;
+ int val;
+ int max;
+} xfs_sysctl_val_t;
+
+typedef struct xfs_param {
+ xfs_sysctl_val_t restrict_chown;/* Root/non-root can give away files.*/
+ xfs_sysctl_val_t sgid_inherit; /* Inherit S_ISGID if process' GID is
+ * not a member of parent dir GID. */
+ xfs_sysctl_val_t symlink_mode; /* Link creat mode affected by umask */
+ xfs_sysctl_val_t panic_mask; /* bitmask to cause panic on errors. */
+ xfs_sysctl_val_t error_level; /* Degree of reporting for problems */
+ xfs_sysctl_val_t syncd_timer; /* Interval between xfssyncd wakeups */
+ xfs_sysctl_val_t stats_clear; /* Reset all XFS statistics to zero. */
+ xfs_sysctl_val_t inherit_sync; /* Inherit the "sync" inode flag. */
+ xfs_sysctl_val_t inherit_nodump;/* Inherit the "nodump" inode flag. */
+ xfs_sysctl_val_t inherit_noatim;/* Inherit the "noatime" inode flag. */
+ xfs_sysctl_val_t xfs_buf_timer; /* Interval between xfsbufd wakeups. */
+ xfs_sysctl_val_t xfs_buf_age; /* Metadata buffer age before flush. */
+} xfs_param_t;
+
+/*
+ * xfs_error_level:
+ *
+ * How much error reporting will be done when internal problems are
+ * encountered. These problems normally return an EFSCORRUPTED to their
+ * caller, with no other information reported.
+ *
+ * 0 No error reports
+ * 1 Report EFSCORRUPTED errors that will cause a filesystem shutdown
+ * 5 Report all EFSCORRUPTED errors (all of the above errors, plus any
+ * additional errors that are known to not cause shutdowns)
+ *
+ * xfs_panic_mask bit 0x8 turns the error reports into panics
+ */
+
+enum {
+ /* XFS_REFCACHE_SIZE = 1 */
+ /* XFS_REFCACHE_PURGE = 2 */
+ XFS_RESTRICT_CHOWN = 3,
+ XFS_SGID_INHERIT = 4,
+ XFS_SYMLINK_MODE = 5,
+ XFS_PANIC_MASK = 6,
+ XFS_ERRLEVEL = 7,
+ XFS_SYNCD_TIMER = 8,
+ /* XFS_PROBE_DMAPI = 9 */
+ /* XFS_PROBE_IOOPS = 10 */
+ /* XFS_PROBE_QUOTA = 11 */
+ XFS_STATS_CLEAR = 12,
+ XFS_INHERIT_SYNC = 13,
+ XFS_INHERIT_NODUMP = 14,
+ XFS_INHERIT_NOATIME = 15,
+ XFS_BUF_TIMER = 16,
+ XFS_BUF_AGE = 17,
+ /* XFS_IO_BYPASS = 18 */
+};
+
+extern xfs_param_t xfs_params;
+
+#ifdef CONFIG_SYSCTL
+extern void xfs_sysctl_register(void);
+extern void xfs_sysctl_unregister(void);
+#else
+# define xfs_sysctl_register() do { } while (0)
+# define xfs_sysctl_unregister() do { } while (0)
+#endif /* CONFIG_SYSCTL */
+
+#endif /* __XFS_SYSCTL_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_fs.h"
+#include "xfs_macros.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_clnt.h"
+#include "xfs_trans.h"
+#include "xfs_sb.h"
+#include "xfs_ag.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_imap.h"
+#include "xfs_alloc.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_quota.h"
+
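+/*
+ * Each wrapper below walks the behavior chain from the descriptor it
+ * is handed until it finds a behavior that implements the operation,
+ * then delegates to that implementation. This is what lets stacked
+ * modules (quota, DMAPI) interpose on only a subset of operations.
+ */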
+int
+vfs_mount(
+ struct bhv_desc *bdp,
+ struct xfs_mount_args *args,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_mount)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_mount)(next, args, cr));
+}
+
+int
+vfs_parseargs(
+ struct bhv_desc *bdp,
+ char *s,
+ struct xfs_mount_args *args,
+ int f)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_parseargs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_parseargs)(next, s, args, f));
+}
+
+int
+vfs_showargs(
+ struct bhv_desc *bdp,
+ struct seq_file *m)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_showargs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_showargs)(next, m));
+}
+
+int
+vfs_unmount(
+ struct bhv_desc *bdp,
+ int fl,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_unmount)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_unmount)(next, fl, cr));
+}
+
+int
+vfs_mntupdate(
+ struct bhv_desc *bdp,
+ int *fl,
+ struct xfs_mount_args *args)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_mntupdate)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_mntupdate)(next, fl, args));
+}
+
+int
+vfs_root(
+ struct bhv_desc *bdp,
+ struct vnode **vpp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_root)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_root)(next, vpp));
+}
+
+int
+vfs_statvfs(
+ struct bhv_desc *bdp,
+ xfs_statfs_t *sp,
+ struct vnode *vp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_statvfs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_statvfs)(next, sp, vp));
+}
+
+int
+vfs_sync(
+ struct bhv_desc *bdp,
+ int fl,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_sync)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_sync)(next, fl, cr));
+}
+
+int
+vfs_vget(
+ struct bhv_desc *bdp,
+ struct vnode **vpp,
+ struct fid *fidp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_vget)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_vget)(next, vpp, fidp));
+}
+
+int
+vfs_dmapiops(
+ struct bhv_desc *bdp,
+ caddr_t addr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_dmapiops)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_dmapiops)(next, addr));
+}
+
+int
+vfs_quotactl(
+ struct bhv_desc *bdp,
+ int cmd,
+ int id,
+ caddr_t addr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_quotactl)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_quotactl)(next, cmd, id, addr));
+}
+
+void
+vfs_init_vnode(
+ struct bhv_desc *bdp,
+ struct vnode *vp,
+ struct bhv_desc *bp,
+ int unlock)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_init_vnode)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_init_vnode)(next, vp, bp, unlock));
+}
+
+void
+vfs_force_shutdown(
+ struct bhv_desc *bdp,
+ int fl,
+ char *file,
+ int line)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_force_shutdown)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_force_shutdown)(next, fl, file, line));
+}
+
+void
+vfs_freeze(
+ struct bhv_desc *bdp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_freeze)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_freeze)(next));
+}
+
+vfs_t *
+vfs_allocate(void)
+{
+ struct vfs *vfsp;
+
+ vfsp = kmem_zalloc(sizeof(vfs_t), KM_SLEEP);
+ bhv_head_init(VFS_BHVHEAD(vfsp), "vfs");
+ init_waitqueue_head(&vfsp->vfs_wait_sync_task);
+ init_waitqueue_head(&vfsp->vfs_wait_single_sync_task);
+ return vfsp;
+}
+
+void
+vfs_deallocate(
+ struct vfs *vfsp)
+{
+ bhv_head_destroy(VFS_BHVHEAD(vfsp));
+ kmem_free(vfsp, sizeof(vfs_t));
+}
+
+void
+vfs_insertops(
+ struct vfs *vfsp,
+ struct bhv_vfsops *vfsops)
+{
+ struct bhv_desc *bdp;
+
+ bdp = kmem_alloc(sizeof(struct bhv_desc), KM_SLEEP);
+ bhv_desc_init(bdp, NULL, vfsp, vfsops);
+ bhv_insert(&vfsp->vfs_bh, bdp);
+}
+
+void
+vfs_insertbhv(
+ struct vfs *vfsp,
+ struct bhv_desc *bdp,
+ struct vfsops *vfsops,
+ void *mount)
+{
+ bhv_desc_init(bdp, mount, vfsp, vfsops);
+ bhv_insert_initial(&vfsp->vfs_bh, bdp);
+}
+
+void
+bhv_remove_vfsops(
+ struct vfs *vfsp,
+ int pos)
+{
+ struct bhv_desc *bhv;
+
+ bhv = bhv_lookup_range(&vfsp->vfs_bh, pos, pos);
+ if (!bhv)
+ return;
+ bhv_remove(&vfsp->vfs_bh, bhv);
+ kmem_free(bhv, sizeof(*bhv));
+}
+
+void
+bhv_remove_all_vfsops(
+ struct vfs *vfsp,
+ int freebase)
+{
+ struct xfs_mount *mp;
+
+ bhv_remove_vfsops(vfsp, VFS_POSITION_QM);
+ bhv_remove_vfsops(vfsp, VFS_POSITION_DM);
+ if (!freebase)
+ return;
+ mp = XFS_BHVTOM(bhv_lookup(VFS_BHVHEAD(vfsp), &xfs_vfsops));
+ VFS_REMOVEBHV(vfsp, &mp->m_bhv);
+ xfs_mount_free(mp, 0);
+}
+
+void
+bhv_insert_all_vfsops(
+ struct vfs *vfsp)
+{
+ struct xfs_mount *mp;
+
+ mp = xfs_mount_init();
+ vfs_insertbhv(vfsp, &mp->m_bhv, &xfs_vfsops, mp);
+ vfs_insertdmapi(vfsp);
+ vfs_insertquota(vfsp);
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_VFS_H__
+#define __XFS_VFS_H__
+
+#include <linux/vfs.h>
+#include "xfs_fs.h"
+
+struct fid;
+struct cred;
+struct vnode;
+struct kstatfs;
+struct seq_file;
+struct super_block;
+struct xfs_mount_args;
+
+typedef struct kstatfs xfs_statfs_t;
+
+typedef struct vfs {
+ u_int vfs_flag; /* flags */
+ xfs_fsid_t vfs_fsid; /* file system ID */
+ xfs_fsid_t *vfs_altfsid; /* An ID fixed for life of FS */
+ bhv_head_t vfs_bh; /* head of vfs behavior chain */
+ struct super_block *vfs_super; /* Linux superblock structure */
+ struct task_struct *vfs_sync_task; /* xfssyncd process */
+ int vfs_sync_seq; /* xfssyncd generation number */
+ wait_queue_head_t vfs_wait_single_sync_task;
+ wait_queue_head_t vfs_wait_sync_task;
+} vfs_t;
+
+#define vfs_fbhv vfs_bh.bh_first /* 1st on vfs behavior chain */
+
+#define bhvtovfs(bdp) ( (struct vfs *)BHV_VOBJ(bdp) )
+#define bhvtovfsops(bdp) ( (struct vfsops *)BHV_OPS(bdp) )
+#define VFS_BHVHEAD(vfs) ( &(vfs)->vfs_bh )
+#define VFS_REMOVEBHV(vfs, bdp) ( bhv_remove(VFS_BHVHEAD(vfs), bdp) )
+
+#define VFS_POSITION_BASE BHV_POSITION_BASE /* chain bottom */
+#define VFS_POSITION_TOP BHV_POSITION_TOP /* chain top */
+#define VFS_POSITION_INVALID BHV_POSITION_INVALID /* invalid pos. num */
+
+typedef enum {
+ VFS_BHV_UNKNOWN, /* not specified */
+ VFS_BHV_XFS, /* xfs */
+ VFS_BHV_DM, /* data migration */
+ VFS_BHV_QM, /* quota manager */
+ VFS_BHV_IO, /* IO path */
+ VFS_BHV_END /* housekeeping end-of-range */
+} vfs_bhv_t;
+
+#define VFS_POSITION_XFS (BHV_POSITION_BASE)
+#define VFS_POSITION_DM (VFS_POSITION_BASE+10)
+#define VFS_POSITION_QM (VFS_POSITION_BASE+20)
+#define VFS_POSITION_IO (VFS_POSITION_BASE+30)
+
+#define VFS_RDONLY 0x0001 /* read-only vfs */
+#define VFS_GRPID 0x0002 /* group-ID assigned from directory */
+#define VFS_DMI 0x0004 /* filesystem has the DMI enabled */
+#define VFS_UMOUNT 0x0008 /* unmount in progress */
+#define VFS_END 0x0008 /* max flag */
+
+#define SYNC_ATTR 0x0001 /* sync attributes */
+#define SYNC_CLOSE 0x0002 /* close file system down */
+#define SYNC_DELWRI 0x0004 /* look at delayed writes */
+#define SYNC_WAIT 0x0008 /* wait for i/o to complete */
+#define SYNC_BDFLUSH 0x0010 /* BDFLUSH is calling -- don't block */
+#define SYNC_FSDATA 0x0020 /* flush fs data (e.g. superblocks) */
+#define SYNC_REFCACHE 0x0040 /* prune some of the nfs ref cache */
+#define SYNC_REMOUNT 0x0080 /* remount readonly, no dummy LRs */
+
+typedef int (*vfs_mount_t)(bhv_desc_t *,
+ struct xfs_mount_args *, struct cred *);
+typedef int (*vfs_parseargs_t)(bhv_desc_t *, char *,
+ struct xfs_mount_args *, int);
+typedef int (*vfs_showargs_t)(bhv_desc_t *, struct seq_file *);
+typedef int (*vfs_unmount_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vfs_mntupdate_t)(bhv_desc_t *, int *,
+ struct xfs_mount_args *);
+typedef int (*vfs_root_t)(bhv_desc_t *, struct vnode **);
+typedef int (*vfs_statvfs_t)(bhv_desc_t *, xfs_statfs_t *, struct vnode *);
+typedef int (*vfs_sync_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vfs_vget_t)(bhv_desc_t *, struct vnode **, struct fid *);
+typedef int (*vfs_dmapiops_t)(bhv_desc_t *, caddr_t);
+typedef int (*vfs_quotactl_t)(bhv_desc_t *, int, int, caddr_t);
+typedef void (*vfs_init_vnode_t)(bhv_desc_t *,
+ struct vnode *, bhv_desc_t *, int);
+typedef void (*vfs_force_shutdown_t)(bhv_desc_t *, int, char *, int);
+typedef void (*vfs_freeze_t)(bhv_desc_t *);
+
+typedef struct vfsops {
+ bhv_position_t vf_position; /* behavior chain position */
+ vfs_mount_t vfs_mount; /* mount file system */
+ vfs_parseargs_t vfs_parseargs; /* parse mount options */
+ vfs_showargs_t vfs_showargs; /* unparse mount options */
+ vfs_unmount_t vfs_unmount; /* unmount file system */
+ vfs_mntupdate_t vfs_mntupdate; /* update file system options */
+ vfs_root_t vfs_root; /* get root vnode */
+ vfs_statvfs_t vfs_statvfs; /* file system statistics */
+ vfs_sync_t vfs_sync; /* flush files */
+ vfs_vget_t vfs_vget; /* get vnode from fid */
+ vfs_dmapiops_t vfs_dmapiops; /* data migration */
+ vfs_quotactl_t vfs_quotactl; /* disk quota */
+ vfs_init_vnode_t vfs_init_vnode; /* initialize a new vnode */
+ vfs_force_shutdown_t vfs_force_shutdown; /* crash and burn */
+ vfs_freeze_t vfs_freeze; /* freeze fs for snapshot */
+} vfsops_t;
+
+/*
+ * VFS's. Operates on vfs structure pointers (starts at bhv head).
+ */
+#define VHEAD(v) ((v)->vfs_fbhv)
+#define VFS_MOUNT(v, ma,cr, rv) ((rv) = vfs_mount(VHEAD(v), ma,cr))
+#define VFS_PARSEARGS(v, o,ma,f, rv) ((rv) = vfs_parseargs(VHEAD(v), o,ma,f))
+#define VFS_SHOWARGS(v, m, rv) ((rv) = vfs_showargs(VHEAD(v), m))
+#define VFS_UNMOUNT(v, f, cr, rv) ((rv) = vfs_unmount(VHEAD(v), f,cr))
+#define VFS_MNTUPDATE(v, fl, args, rv) ((rv) = vfs_mntupdate(VHEAD(v), fl, args))
+#define VFS_ROOT(v, vpp, rv) ((rv) = vfs_root(VHEAD(v), vpp))
+#define VFS_STATVFS(v, sp,vp, rv) ((rv) = vfs_statvfs(VHEAD(v), sp,vp))
+#define VFS_SYNC(v, flag,cr, rv) ((rv) = vfs_sync(VHEAD(v), flag,cr))
+#define VFS_VGET(v, vpp,fidp, rv) ((rv) = vfs_vget(VHEAD(v), vpp,fidp))
+#define VFS_DMAPIOPS(v, p, rv) ((rv) = vfs_dmapiops(VHEAD(v), p))
+#define VFS_QUOTACTL(v, c,id,p, rv) ((rv) = vfs_quotactl(VHEAD(v), c,id,p))
+#define VFS_INIT_VNODE(v, vp,b,ul) ( vfs_init_vnode(VHEAD(v), vp,b,ul) )
+#define VFS_FORCE_SHUTDOWN(v, fl,f,l) ( vfs_force_shutdown(VHEAD(v), fl,f,l) )
+#define VFS_FREEZE(v) ( vfs_freeze(VHEAD(v)) )
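+
+/*
+ * Usage sketch (hypothetical caller): these macros return the status
+ * through their trailing argument rather than as a value, e.g.
+ *
+ *	int error;
+ *	VFS_ROOT(vfsp, &rootvp, error);
+ *	if (error)
+ *		return error;
+ */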
+
+/*
+ * PVFS's. Operates on behavior descriptor pointers.
+ */
+#define PVFS_MOUNT(b, ma,cr, rv) ((rv) = vfs_mount(b, ma,cr))
+#define PVFS_PARSEARGS(b, o,ma,f, rv) ((rv) = vfs_parseargs(b, o,ma,f))
+#define PVFS_SHOWARGS(b, m, rv) ((rv) = vfs_showargs(b, m))
+#define PVFS_UNMOUNT(b, f,cr, rv) ((rv) = vfs_unmount(b, f,cr))
+#define PVFS_MNTUPDATE(b, fl, args, rv) ((rv) = vfs_mntupdate(b, fl, args))
+#define PVFS_ROOT(b, vpp, rv) ((rv) = vfs_root(b, vpp))
+#define PVFS_STATVFS(b, sp,vp, rv) ((rv) = vfs_statvfs(b, sp,vp))
+#define PVFS_SYNC(b, flag,cr, rv) ((rv) = vfs_sync(b, flag,cr))
+#define PVFS_VGET(b, vpp,fidp, rv) ((rv) = vfs_vget(b, vpp,fidp))
+#define PVFS_DMAPIOPS(b, p, rv) ((rv) = vfs_dmapiops(b, p))
+#define PVFS_QUOTACTL(b, c,id,p, rv) ((rv) = vfs_quotactl(b, c,id,p))
+#define PVFS_INIT_VNODE(b, vp,b2,ul) ( vfs_init_vnode(b, vp,b2,ul) )
+#define PVFS_FORCE_SHUTDOWN(b, fl,f,l) ( vfs_force_shutdown(b, fl,f,l) )
+#define PVFS_FREEZE(b) ( vfs_freeze(b) )
+
+extern int vfs_mount(bhv_desc_t *, struct xfs_mount_args *, struct cred *);
+extern int vfs_parseargs(bhv_desc_t *, char *, struct xfs_mount_args *, int);
+extern int vfs_showargs(bhv_desc_t *, struct seq_file *);
+extern int vfs_unmount(bhv_desc_t *, int, struct cred *);
+extern int vfs_mntupdate(bhv_desc_t *, int *, struct xfs_mount_args *);
+extern int vfs_root(bhv_desc_t *, struct vnode **);
+extern int vfs_statvfs(bhv_desc_t *, xfs_statfs_t *, struct vnode *);
+extern int vfs_sync(bhv_desc_t *, int, struct cred *);
+extern int vfs_vget(bhv_desc_t *, struct vnode **, struct fid *);
+extern int vfs_dmapiops(bhv_desc_t *, caddr_t);
+extern int vfs_quotactl(bhv_desc_t *, int, int, caddr_t);
+extern void vfs_init_vnode(bhv_desc_t *, struct vnode *, bhv_desc_t *, int);
+extern void vfs_force_shutdown(bhv_desc_t *, int, char *, int);
+extern void vfs_freeze(bhv_desc_t *);
+
+typedef struct bhv_vfsops {
+ struct vfsops bhv_common;
+ void * bhv_custom;
+} bhv_vfsops_t;
+
+#define vfs_bhv_lookup(v, id) ( bhv_lookup_range(&(v)->vfs_bh, (id), (id)) )
+#define vfs_bhv_custom(b) ( ((bhv_vfsops_t *)BHV_OPS(b))->bhv_custom )
+#define vfs_bhv_set_custom(b,o) ( (b)->bhv_custom = (void *)(o))
+#define vfs_bhv_clr_custom(b) ( (b)->bhv_custom = NULL )
+
+extern vfs_t *vfs_allocate(void);
+extern void vfs_deallocate(vfs_t *);
+extern void vfs_insertops(vfs_t *, bhv_vfsops_t *);
+extern void vfs_insertbhv(vfs_t *, bhv_desc_t *, vfsops_t *, void *);
+
+extern void bhv_insert_all_vfsops(struct vfs *);
+extern void bhv_remove_all_vfsops(struct vfs *, int);
+extern void bhv_remove_vfsops(struct vfs *, int);
+
+#endif /* __XFS_VFS_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+
+uint64_t vn_generation; /* vnode generation number */
+spinlock_t vnumber_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Dedicated vnode inactive/reclaim sync semaphores.
+ * Prime number of hash buckets since address is used as the key.
+ */
+#define NVSYNC 37
+#define vptosync(v) (&vsync[((unsigned long)v) % NVSYNC])
+sv_t vsync[NVSYNC];
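+
+/*
+ * Note: vnode addresses are typically aligned to a power of two, so a
+ * prime bucket count (37) spreads them across all buckets instead of
+ * clustering them on the low-order zero bits.
+ */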
+
+/*
+ * Translate stat(2) file types to vnode types and vice versa.
+ * Aware of numeric order of S_IFMT and vnode type values.
+ */
+enum vtype iftovt_tab[] = {
+ VNON, VFIFO, VCHR, VNON, VDIR, VNON, VBLK, VNON,
+ VREG, VNON, VLNK, VNON, VSOCK, VNON, VNON, VNON
+};
+
+u_short vttoif_tab[] = {
+ 0, S_IFREG, S_IFDIR, S_IFBLK, S_IFCHR, S_IFLNK, S_IFIFO, 0, S_IFSOCK
+};
+
+
+void
+vn_init(void)
+{
+ register sv_t *svp;
+ register int i;
+
+ for (svp = vsync, i = 0; i < NVSYNC; i++, svp++)
+ init_sv(svp, SV_DEFAULT, "vsy", i);
+}
+
+/*
+ * Clean a vnode of filesystem-specific data and prepare it for reuse.
+ */
+STATIC int
+vn_reclaim(
+ struct vnode *vp)
+{
+ int error;
+
+ XFS_STATS_INC(vn_reclaim);
+ vn_trace_entry(vp, "vn_reclaim", (inst_t *)__return_address);
+
+ /*
+ * Only make the VOP_RECLAIM call if there are behaviors
+ * to call.
+ */
+ if (vp->v_fbhv) {
+ VOP_RECLAIM(vp, error);
+ if (error)
+ return -error;
+ }
+ ASSERT(vp->v_fbhv == NULL);
+
+ VN_LOCK(vp);
+ vp->v_flag &= (VRECLM|VWAIT);
+ VN_UNLOCK(vp, 0);
+
+ vp->v_type = VNON;
+ vp->v_fbhv = NULL;
+
+#ifdef XFS_VNODE_TRACE
+ ktrace_free(vp->v_trace);
+ vp->v_trace = NULL;
+#endif
+
+ return 0;
+}
+
+STATIC void
+vn_wakeup(
+ struct vnode *vp)
+{
+ VN_LOCK(vp);
+ if (vp->v_flag & VWAIT)
+ sv_broadcast(vptosync(vp));
+ vp->v_flag &= ~(VRECLM|VWAIT|VMODIFIED);
+ VN_UNLOCK(vp, 0);
+}
+
+int
+vn_wait(
+ struct vnode *vp)
+{
+ VN_LOCK(vp);
+ if (vp->v_flag & (VINACT | VRECLM)) {
+ vp->v_flag |= VWAIT;
+ sv_wait(vptosync(vp), PINOD, &vp->v_lock, 0);
+ return 1;
+ }
+ VN_UNLOCK(vp, 0);
+ return 0;
+}
+
+struct vnode *
+vn_initialize(
+ struct inode *inode)
+{
+ struct vnode *vp = LINVFS_GET_VP(inode);
+
+ XFS_STATS_INC(vn_active);
+ XFS_STATS_INC(vn_alloc);
+
+ vp->v_flag = VMODIFIED;
+ spinlock_init(&vp->v_lock, "v_lock");
+
+ spin_lock(&vnumber_lock);
+ if (!++vn_generation) /* v_number shouldn't be zero */
+ vn_generation++;
+ vp->v_number = vn_generation;
+ spin_unlock(&vnumber_lock);
+
+ ASSERT(VN_CACHED(vp) == 0);
+
+ /* Initialize the first behavior and the behavior chain head. */
+ vn_bhv_head_init(VN_BHV_HEAD(vp), "vnode");
+
+#ifdef XFS_VNODE_TRACE
+ vp->v_trace = ktrace_alloc(VNODE_TRACE_SIZE, KM_SLEEP);
+ printk("Allocated VNODE_TRACE at 0x%p\n", vp->v_trace);
+#endif /* XFS_VNODE_TRACE */
+
+ vn_trace_exit(vp, "vn_initialize", (inst_t *)__return_address);
+ return vp;
+}
+
+/*
+ * Get a reference on a vnode.
+ */
+vnode_t *
+vn_get(
+ struct vnode *vp,
+ vmap_t *vmap)
+{
+ struct inode *inode;
+
+ XFS_STATS_INC(vn_get);
+ inode = LINVFS_GET_IP(vp);
+ if (inode->i_state & I_FREEING)
+ return NULL;
+
+ inode = ilookup(vmap->v_vfsp->vfs_super, vmap->v_ino);
+ if (!inode) /* Inode not present */
+ return NULL;
+
+ vn_trace_exit(vp, "vn_get", (inst_t *)__return_address);
+
+ return vp;
+}
+
+/*
+ * Revalidate the Linux inode from the vnode.
+ */
+int
+vn_revalidate(
+ struct vnode *vp)
+{
+ struct inode *inode;
+ vattr_t va;
+ int error;
+
+ vn_trace_entry(vp, "vn_revalidate", (inst_t *)__return_address);
+ ASSERT(vp->v_fbhv != NULL);
+
+ va.va_mask = XFS_AT_STAT|XFS_AT_XFLAGS;
+ VOP_GETATTR(vp, &va, 0, NULL, error);
+ if (!error) {
+ inode = LINVFS_GET_IP(vp);
+ inode->i_mode = VTTOIF(va.va_type) | va.va_mode;
+ inode->i_nlink = va.va_nlink;
+ inode->i_uid = va.va_uid;
+ inode->i_gid = va.va_gid;
+ inode->i_blocks = va.va_nblocks;
+ inode->i_mtime = va.va_mtime;
+ inode->i_ctime = va.va_ctime;
+ inode->i_atime = va.va_atime;
+ if (va.va_xflags & XFS_XFLAG_IMMUTABLE)
+ inode->i_flags |= S_IMMUTABLE;
+ else
+ inode->i_flags &= ~S_IMMUTABLE;
+ if (va.va_xflags & XFS_XFLAG_APPEND)
+ inode->i_flags |= S_APPEND;
+ else
+ inode->i_flags &= ~S_APPEND;
+ if (va.va_xflags & XFS_XFLAG_SYNC)
+ inode->i_flags |= S_SYNC;
+ else
+ inode->i_flags &= ~S_SYNC;
+ if (va.va_xflags & XFS_XFLAG_NOATIME)
+ inode->i_flags |= S_NOATIME;
+ else
+ inode->i_flags &= ~S_NOATIME;
+ VUNMODIFY(vp);
+ }
+ return -error;
+}
+
+/*
+ * Purge a vnode from the cache. At this point the vnode is
+ * guaranteed to have no references (vn_count == 0). The caller has
+ * to make sure that there are no ways someone could get a handle
+ * (via vn_get) on the vnode (usually done via a mount/vfs lock).
+ */
+void
+vn_purge(
+ struct vnode *vp,
+ vmap_t *vmap)
+{
+ vn_trace_entry(vp, "vn_purge", (inst_t *)__return_address);
+
+again:
+ /*
+ * Check whether vp has already been reclaimed since our caller
+ * sampled its version while holding a filesystem cache lock that
+ * its VOP_RECLAIM function acquires.
+ */
+ VN_LOCK(vp);
+ if (vp->v_number != vmap->v_number) {
+ VN_UNLOCK(vp, 0);
+ return;
+ }
+
+ /*
+ * If vp is being reclaimed or inactivated, wait until it is inert,
+ * then proceed. Can't assume that vnode is actually reclaimed
+ * just because the reclaimed flag is asserted -- a vn_alloc
+ * reclaim can fail.
+ */
+ if (vp->v_flag & (VINACT | VRECLM)) {
+ ASSERT(vn_count(vp) == 0);
+ vp->v_flag |= VWAIT;
+ sv_wait(vptosync(vp), PINOD, &vp->v_lock, 0);
+ goto again;
+ }
+
+ /*
+ * Another process could have raced in and gotten this vnode...
+ */
+ if (vn_count(vp) > 0) {
+ VN_UNLOCK(vp, 0);
+ return;
+ }
+
+ XFS_STATS_DEC(vn_active);
+ vp->v_flag |= VRECLM;
+ VN_UNLOCK(vp, 0);
+
+ /*
+ * Call VOP_RECLAIM and clean vp. The FSYNC_INVAL flag tells
+ * vp's filesystem to flush and invalidate all cached resources.
+ * When vn_reclaim returns, vp should have no private data,
+ * either in a system cache or attached to v_data.
+ */
+ if (vn_reclaim(vp) != 0)
+ panic("vn_purge: cannot reclaim");
+
+ /*
+ * Wakeup anyone waiting for vp to be reclaimed.
+ */
+ vn_wakeup(vp);
+}
+
+/*
+ * Add a reference to a referenced vnode.
+ */
+struct vnode *
+vn_hold(
+ struct vnode *vp)
+{
+ struct inode *inode;
+
+ XFS_STATS_INC(vn_hold);
+
+ VN_LOCK(vp);
+ inode = igrab(LINVFS_GET_IP(vp));
+ ASSERT(inode);
+ VN_UNLOCK(vp, 0);
+
+ return vp;
+}
+
+/*
+ * Call VOP_INACTIVE on last reference.
+ */
+void
+vn_rele(
+ struct vnode *vp)
+{
+ int vcnt;
+ int cache;
+
+ XFS_STATS_INC(vn_rele);
+
+ VN_LOCK(vp);
+
+ vn_trace_entry(vp, "vn_rele", (inst_t *)__return_address);
+ vcnt = vn_count(vp);
+
+ /*
+ * Since we always get called from put_inode we know
+ * that i_count won't be decremented after we
+ * return.
+ */
+ if (!vcnt) {
+ /*
+ * As soon as we turn this on, no one can find us in vn_get
+ * until we turn off VINACT or VRECLM.
+ */
+ vp->v_flag |= VINACT;
+ VN_UNLOCK(vp, 0);
+
+ /*
+ * Do not make the VOP_INACTIVE call if there
+ * are no behaviors attached to the vnode to call.
+ */
+ if (vp->v_fbhv)
+ VOP_INACTIVE(vp, NULL, cache);
+
+ VN_LOCK(vp);
+ if (vp->v_flag & VWAIT)
+ sv_broadcast(vptosync(vp));
+
+ vp->v_flag &= ~(VINACT|VWAIT|VRECLM|VMODIFIED);
+ }
+
+ VN_UNLOCK(vp, 0);
+
+ vn_trace_exit(vp, "vn_rele", (inst_t *)__return_address);
+}
+
+/*
+ * Finish the removal of a vnode.
+ */
+void
+vn_remove(
+ struct vnode *vp)
+{
+ vmap_t vmap;
+
+ /* Make sure we don't do this to the same vnode twice */
+ if (!(vp->v_fbhv))
+ return;
+
+ XFS_STATS_INC(vn_remove);
+ vn_trace_exit(vp, "vn_remove", (inst_t *)__return_address);
+
+ /*
+ * After the following purge the vnode
+ * will no longer exist.
+ */
+ VMAP(vp, vmap);
+ vn_purge(vp, &vmap);
+}
+
+
+#ifdef XFS_VNODE_TRACE
+
+#define KTRACE_ENTER(vp, vk, s, line, ra) \
+ ktrace_enter( (vp)->v_trace, \
+/* 0 */ (void *)(__psint_t)(vk), \
+/* 1 */ (void *)(s), \
+/* 2 */ (void *)(__psint_t) line, \
+/* 3 */ (void *)(vn_count(vp)), \
+/* 4 */ (void *)(ra), \
+/* 5 */ (void *)(__psunsigned_t)(vp)->v_flag, \
+/* 6 */ (void *)(__psint_t)smp_processor_id(), \
+/* 7 */ (void *)(__psint_t)(current->pid), \
+/* 8 */ (void *)__return_address, \
+/* 9 */ 0, 0, 0, 0, 0, 0, 0)
+
+/*
+ * Vnode tracing code.
+ */
+void
+vn_trace_entry(vnode_t *vp, char *func, inst_t *ra)
+{
+ KTRACE_ENTER(vp, VNODE_KTRACE_ENTRY, func, 0, ra);
+}
+
+void
+vn_trace_exit(vnode_t *vp, char *func, inst_t *ra)
+{
+ KTRACE_ENTER(vp, VNODE_KTRACE_EXIT, func, 0, ra);
+}
+
+void
+vn_trace_hold(vnode_t *vp, char *file, int line, inst_t *ra)
+{
+ KTRACE_ENTER(vp, VNODE_KTRACE_HOLD, file, line, ra);
+}
+
+void
+vn_trace_ref(vnode_t *vp, char *file, int line, inst_t *ra)
+{
+ KTRACE_ENTER(vp, VNODE_KTRACE_REF, file, line, ra);
+}
+
+void
+vn_trace_rele(vnode_t *vp, char *file, int line, inst_t *ra)
+{
+ KTRACE_ENTER(vp, VNODE_KTRACE_RELE, file, line, ra);
+}
+#endif /* XFS_VNODE_TRACE */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ *
+ * Portions Copyright (c) 1989, 1993
+ * The Regents of the University of California. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#ifndef __XFS_VNODE_H__
+#define __XFS_VNODE_H__
+
+struct uio;
+struct file;
+struct vattr;
+struct xfs_iomap;
+struct attrlist_cursor_kern;
+
+/*
+ * Vnode types. VNON means no type.
+ */
+enum vtype { VNON, VREG, VDIR, VBLK, VCHR, VLNK, VFIFO, VBAD, VSOCK };
+
+typedef xfs_ino_t vnumber_t;
+typedef struct dentry vname_t;
+typedef bhv_head_t vn_bhv_head_t;
+
+/*
+ * MP locking protocols:
+ * v_flag, v_vfsp VN_LOCK/VN_UNLOCK
+ * v_type read-only or fs-dependent
+ */
+typedef struct vnode {
+ __u32 v_flag; /* vnode flags (see below) */
+ enum vtype v_type; /* vnode type */
+ struct vfs *v_vfsp; /* ptr to containing VFS */
+ vnumber_t v_number; /* in-core vnode number */
+ vn_bhv_head_t v_bh; /* behavior head */
+ spinlock_t v_lock; /* VN_LOCK/VN_UNLOCK */
+ struct inode v_inode; /* Linux inode */
+#ifdef XFS_VNODE_TRACE
+ struct ktrace *v_trace; /* trace header structure */
+#endif
+} vnode_t;
+
+#define v_fbhv v_bh.bh_first /* first behavior */
+#define v_fops v_bh.bh_first->bd_ops /* first behavior ops */
+
+#define VNODE_POSITION_BASE BHV_POSITION_BASE /* chain bottom */
+#define VNODE_POSITION_TOP BHV_POSITION_TOP /* chain top */
+#define VNODE_POSITION_INVALID BHV_POSITION_INVALID /* invalid pos. num */
+
+typedef enum {
+ VN_BHV_UNKNOWN, /* not specified */
+ VN_BHV_XFS, /* xfs */
+ VN_BHV_DM, /* data migration */
+ VN_BHV_QM, /* quota manager */
+ VN_BHV_IO, /* IO path */
+ VN_BHV_END /* housekeeping end-of-range */
+} vn_bhv_t;
+
+#define VNODE_POSITION_XFS (VNODE_POSITION_BASE)
+#define VNODE_POSITION_DM (VNODE_POSITION_BASE+10)
+#define VNODE_POSITION_QM (VNODE_POSITION_BASE+20)
+#define VNODE_POSITION_IO (VNODE_POSITION_BASE+30)
+
+/*
+ * Macros for dealing with the behavior descriptor inside of the vnode.
+ */
+#define BHV_TO_VNODE(bdp) ((vnode_t *)BHV_VOBJ(bdp))
+#define BHV_TO_VNODE_NULL(bdp) ((vnode_t *)BHV_VOBJNULL(bdp))
+
+#define VN_BHV_HEAD(vp) ((bhv_head_t *)(&((vp)->v_bh)))
+#define vn_bhv_head_init(bhp,name) bhv_head_init(bhp,name)
+#define vn_bhv_remove(bhp,bdp) bhv_remove(bhp,bdp)
+#define vn_bhv_lookup(bhp,ops) bhv_lookup(bhp,ops)
+#define vn_bhv_lookup_unlocked(bhp,ops) bhv_lookup_unlocked(bhp,ops)
+
+/*
+ * Vnode to Linux inode mapping.
+ */
+#define LINVFS_GET_VP(inode) ((vnode_t *)list_entry(inode, vnode_t, v_inode))
+#define LINVFS_GET_IP(vp) (&(vp)->v_inode)
+
+/*
+ * Convert between vnode types and inode formats (since POSIX.1
+ * defines mode word of stat structure in terms of inode formats).
+ */
+extern enum vtype iftovt_tab[];
+extern u_short vttoif_tab[];
+#define IFTOVT(mode) (iftovt_tab[((mode) & S_IFMT) >> 12])
+#define VTTOIF(indx) (vttoif_tab[(int)(indx)])
+#define MAKEIMODE(indx, mode) (int)(VTTOIF(indx) | (mode))
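+
+/*
+ * Example: IFTOVT(S_IFDIR | 0755) masks off the S_IFMT bits, shifts
+ * them down by 12 (giving 4), and yields VDIR; VTTOIF(VDIR) maps back
+ * to S_IFDIR, so the two tables round-trip for the standard types.
+ */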
+
+
+/*
+ * Vnode flags.
+ */
+#define VINACT 0x1 /* vnode is being inactivated */
+#define VRECLM 0x2 /* vnode is being reclaimed */
+#define VWAIT 0x4 /* waiting for VINACT/VRECLM to end */
+#define VMODIFIED 0x8 /* XFS inode state possibly differs */
+ /* from the Linux inode state. */
+
+/*
+ * Values for the VOP_RWLOCK and VOP_RWUNLOCK flags parameter.
+ */
+typedef enum vrwlock {
+ VRWLOCK_NONE,
+ VRWLOCK_READ,
+ VRWLOCK_WRITE,
+ VRWLOCK_WRITE_DIRECT,
+ VRWLOCK_TRY_READ,
+ VRWLOCK_TRY_WRITE
+} vrwlock_t;
+
+/*
+ * Return values for VOP_INACTIVE. A return value of
+ * VN_INACTIVE_NOCACHE implies that the file system behavior
+ * has disassociated its state and bhv_desc_t from the vnode.
+ */
+#define VN_INACTIVE_CACHE 0
+#define VN_INACTIVE_NOCACHE 1
+
+/*
+ * Values for the cmd code given to VOP_VNODE_CHANGE.
+ */
+typedef enum vchange {
+ VCHANGE_FLAGS_FRLOCKS = 0,
+ VCHANGE_FLAGS_ENF_LOCKING = 1,
+ VCHANGE_FLAGS_TRUNCATED = 2,
+ VCHANGE_FLAGS_PAGE_DIRTY = 3,
+ VCHANGE_FLAGS_IOEXCL_COUNT = 4
+} vchange_t;
+
+
+typedef int (*vop_open_t)(bhv_desc_t *, struct cred *);
+typedef ssize_t (*vop_read_t)(bhv_desc_t *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+typedef ssize_t (*vop_write_t)(bhv_desc_t *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+typedef ssize_t (*vop_sendfile_t)(bhv_desc_t *, struct file *,
+ loff_t *, int, size_t, read_actor_t,
+ void *, struct cred *);
+typedef int (*vop_ioctl_t)(bhv_desc_t *, struct inode *, struct file *,
+ int, unsigned int, unsigned long);
+typedef int (*vop_getattr_t)(bhv_desc_t *, struct vattr *, int,
+ struct cred *);
+typedef int (*vop_setattr_t)(bhv_desc_t *, struct vattr *, int,
+ struct cred *);
+typedef int (*vop_access_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vop_lookup_t)(bhv_desc_t *, vname_t *, vnode_t **,
+ int, vnode_t *, struct cred *);
+typedef int (*vop_create_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ vnode_t **, struct cred *);
+typedef int (*vop_remove_t)(bhv_desc_t *, vname_t *, struct cred *);
+typedef int (*vop_link_t)(bhv_desc_t *, vnode_t *, vname_t *,
+ struct cred *);
+typedef int (*vop_rename_t)(bhv_desc_t *, vname_t *, vnode_t *, vname_t *,
+ struct cred *);
+typedef int (*vop_mkdir_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ vnode_t **, struct cred *);
+typedef int (*vop_rmdir_t)(bhv_desc_t *, vname_t *, struct cred *);
+typedef int (*vop_readdir_t)(bhv_desc_t *, struct uio *, struct cred *,
+ int *);
+typedef int (*vop_symlink_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ char *, vnode_t **, struct cred *);
+typedef int (*vop_readlink_t)(bhv_desc_t *, struct uio *, int,
+ struct cred *);
+typedef int (*vop_fsync_t)(bhv_desc_t *, int, struct cred *,
+ xfs_off_t, xfs_off_t);
+typedef int (*vop_inactive_t)(bhv_desc_t *, struct cred *);
+typedef int (*vop_fid2_t)(bhv_desc_t *, struct fid *);
+typedef int (*vop_release_t)(bhv_desc_t *);
+typedef int (*vop_rwlock_t)(bhv_desc_t *, vrwlock_t);
+typedef void (*vop_rwunlock_t)(bhv_desc_t *, vrwlock_t);
+typedef int (*vop_bmap_t)(bhv_desc_t *, xfs_off_t, ssize_t, int,
+ struct xfs_iomap *, int *);
+typedef int (*vop_reclaim_t)(bhv_desc_t *);
+typedef int (*vop_attr_get_t)(bhv_desc_t *, char *, char *, int *, int,
+ struct cred *);
+typedef int (*vop_attr_set_t)(bhv_desc_t *, char *, char *, int, int,
+ struct cred *);
+typedef int (*vop_attr_remove_t)(bhv_desc_t *, char *, int, struct cred *);
+typedef int (*vop_attr_list_t)(bhv_desc_t *, char *, int, int,
+ struct attrlist_cursor_kern *, struct cred *);
+typedef void (*vop_link_removed_t)(bhv_desc_t *, vnode_t *, int);
+typedef void (*vop_vnode_change_t)(bhv_desc_t *, vchange_t, __psint_t);
+typedef void (*vop_ptossvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+typedef void (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+typedef int (*vop_pflushvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t,
+ uint64_t, int);
+typedef int (*vop_iflush_t)(bhv_desc_t *, int);
+
+
+typedef struct vnodeops {
+ bhv_position_t vn_position; /* position within behavior chain */
+ vop_open_t vop_open;
+ vop_read_t vop_read;
+ vop_write_t vop_write;
+ vop_sendfile_t vop_sendfile;
+ vop_ioctl_t vop_ioctl;
+ vop_getattr_t vop_getattr;
+ vop_setattr_t vop_setattr;
+ vop_access_t vop_access;
+ vop_lookup_t vop_lookup;
+ vop_create_t vop_create;
+ vop_remove_t vop_remove;
+ vop_link_t vop_link;
+ vop_rename_t vop_rename;
+ vop_mkdir_t vop_mkdir;
+ vop_rmdir_t vop_rmdir;
+ vop_readdir_t vop_readdir;
+ vop_symlink_t vop_symlink;
+ vop_readlink_t vop_readlink;
+ vop_fsync_t vop_fsync;
+ vop_inactive_t vop_inactive;
+ vop_fid2_t vop_fid2;
+ vop_rwlock_t vop_rwlock;
+ vop_rwunlock_t vop_rwunlock;
+ vop_bmap_t vop_bmap;
+ vop_reclaim_t vop_reclaim;
+ vop_attr_get_t vop_attr_get;
+ vop_attr_set_t vop_attr_set;
+ vop_attr_remove_t vop_attr_remove;
+ vop_attr_list_t vop_attr_list;
+ vop_link_removed_t vop_link_removed;
+ vop_vnode_change_t vop_vnode_change;
+ vop_ptossvp_t vop_tosspages;
+ vop_pflushinvalvp_t vop_flushinval_pages;
+ vop_pflushvp_t vop_flush_pages;
+ vop_release_t vop_release;
+ vop_iflush_t vop_iflush;
+} vnodeops_t;
+
+/*
+ * VOP's.
+ */
+#define _VOP_(op, vp) (*((vnodeops_t *)(vp)->v_fops)->op)
+
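+/*
+ * Every VOP_* macro below passes the vnode's first behavior
+ * descriptor (v_fbhv); _VOP_ above resolves the operation through
+ * that first behavior's ops table, and the behavior itself may
+ * forward the call further down the chain.
+ */
+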
+#define VOP_READ(vp,file,iov,segs,offset,ioflags,cr,rv) \
+ rv = _VOP_(vop_read, vp)((vp)->v_fbhv,file,iov,segs,offset,ioflags,cr)
+#define VOP_WRITE(vp,file,iov,segs,offset,ioflags,cr,rv) \
+ rv = _VOP_(vop_write, vp)((vp)->v_fbhv,file,iov,segs,offset,ioflags,cr)
+#define VOP_SENDFILE(vp,f,off,ioflags,cnt,act,targ,cr,rv) \
+ rv = _VOP_(vop_sendfile, vp)((vp)->v_fbhv,f,off,ioflags,cnt,act,targ,cr)
+#define VOP_BMAP(vp,of,sz,rw,b,n,rv) \
+ rv = _VOP_(vop_bmap, vp)((vp)->v_fbhv,of,sz,rw,b,n)
+#define VOP_OPEN(vp, cr, rv) \
+ rv = _VOP_(vop_open, vp)((vp)->v_fbhv, cr)
+#define VOP_GETATTR(vp, vap, f, cr, rv) \
+ rv = _VOP_(vop_getattr, vp)((vp)->v_fbhv, vap, f, cr)
+#define VOP_SETATTR(vp, vap, f, cr, rv) \
+ rv = _VOP_(vop_setattr, vp)((vp)->v_fbhv, vap, f, cr)
+#define VOP_ACCESS(vp, mode, cr, rv) \
+ rv = _VOP_(vop_access, vp)((vp)->v_fbhv, mode, cr)
+#define VOP_LOOKUP(vp,d,vpp,f,rdir,cr,rv) \
+ rv = _VOP_(vop_lookup, vp)((vp)->v_fbhv,d,vpp,f,rdir,cr)
+#define VOP_CREATE(dvp,d,vap,vpp,cr,rv) \
+ rv = _VOP_(vop_create, dvp)((dvp)->v_fbhv,d,vap,vpp,cr)
+#define VOP_REMOVE(dvp,d,cr,rv) \
+ rv = _VOP_(vop_remove, dvp)((dvp)->v_fbhv,d,cr)
+#define VOP_LINK(tdvp,fvp,d,cr,rv) \
+ rv = _VOP_(vop_link, tdvp)((tdvp)->v_fbhv,fvp,d,cr)
+#define VOP_RENAME(fvp,fnm,tdvp,tnm,cr,rv) \
+ rv = _VOP_(vop_rename, fvp)((fvp)->v_fbhv,fnm,tdvp,tnm,cr)
+#define VOP_MKDIR(dp,d,vap,vpp,cr,rv) \
+ rv = _VOP_(vop_mkdir, dp)((dp)->v_fbhv,d,vap,vpp,cr)
+#define VOP_RMDIR(dp,d,cr,rv) \
+ rv = _VOP_(vop_rmdir, dp)((dp)->v_fbhv,d,cr)
+#define VOP_READDIR(vp,uiop,cr,eofp,rv) \
+ rv = _VOP_(vop_readdir, vp)((vp)->v_fbhv,uiop,cr,eofp)
+#define VOP_SYMLINK(dvp,d,vap,tnm,vpp,cr,rv) \
+ rv = _VOP_(vop_symlink, dvp) ((dvp)->v_fbhv,d,vap,tnm,vpp,cr)
+#define VOP_READLINK(vp,uiop,fl,cr,rv) \
+ rv = _VOP_(vop_readlink, vp)((vp)->v_fbhv,uiop,fl,cr)
+#define VOP_FSYNC(vp,f,cr,b,e,rv) \
+ rv = _VOP_(vop_fsync, vp)((vp)->v_fbhv,f,cr,b,e)
+#define VOP_INACTIVE(vp, cr, rv) \
+ rv = _VOP_(vop_inactive, vp)((vp)->v_fbhv, cr)
+#define VOP_RELEASE(vp, rv) \
+ rv = _VOP_(vop_release, vp)((vp)->v_fbhv)
+#define VOP_FID2(vp, fidp, rv) \
+ rv = _VOP_(vop_fid2, vp)((vp)->v_fbhv, fidp)
+#define VOP_RWLOCK(vp,i) \
+ (void)_VOP_(vop_rwlock, vp)((vp)->v_fbhv, i)
+#define VOP_RWLOCK_TRY(vp,i) \
+ _VOP_(vop_rwlock, vp)((vp)->v_fbhv, i)
+#define VOP_RWUNLOCK(vp,i) \
+ (void)_VOP_(vop_rwunlock, vp)((vp)->v_fbhv, i)
+#define VOP_FRLOCK(vp,c,fl,flags,offset,fr,rv) \
+ rv = _VOP_(vop_frlock, vp)((vp)->v_fbhv,c,fl,flags,offset,fr)
+#define VOP_RECLAIM(vp, rv) \
+ rv = _VOP_(vop_reclaim, vp)((vp)->v_fbhv)
+#define VOP_ATTR_GET(vp, name, val, vallenp, fl, cred, rv) \
+ rv = _VOP_(vop_attr_get, vp)((vp)->v_fbhv,name,val,vallenp,fl,cred)
+#define VOP_ATTR_SET(vp, name, val, vallen, fl, cred, rv) \
+ rv = _VOP_(vop_attr_set, vp)((vp)->v_fbhv,name,val,vallen,fl,cred)
+#define VOP_ATTR_REMOVE(vp, name, flags, cred, rv) \
+ rv = _VOP_(vop_attr_remove, vp)((vp)->v_fbhv,name,flags,cred)
+#define VOP_ATTR_LIST(vp, buf, buflen, fl, cursor, cred, rv) \
+ rv = _VOP_(vop_attr_list, vp)((vp)->v_fbhv,buf,buflen,fl,cursor,cred)
+#define VOP_LINK_REMOVED(vp, dvp, linkzero) \
+ (void)_VOP_(vop_link_removed, vp)((vp)->v_fbhv, dvp, linkzero)
+#define VOP_VNODE_CHANGE(vp, cmd, val) \
+ (void)_VOP_(vop_vnode_change, vp)((vp)->v_fbhv,cmd,val)
+/*
+ * These are page cache functions that now go through VOPs.
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_TOSS_PAGES(vp, first, last, fiopt) \
+ _VOP_(vop_tosspages, vp)((vp)->v_fbhv,first, last, fiopt)
+/*
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_FLUSHINVAL_PAGES(vp, first, last, fiopt) \
+ _VOP_(vop_flushinval_pages, vp)((vp)->v_fbhv,first,last,fiopt)
+/*
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_FLUSH_PAGES(vp, first, last, flags, fiopt, rv) \
+ rv = _VOP_(vop_flush_pages, vp)((vp)->v_fbhv,first,last,flags,fiopt)
+#define VOP_IOCTL(vp, inode, filp, fl, cmd, arg, rv) \
+ rv = _VOP_(vop_ioctl, vp)((vp)->v_fbhv,inode,filp,fl,cmd,arg)
+#define VOP_IFLUSH(vp, flags, rv) \
+ rv = _VOP_(vop_iflush, vp)((vp)->v_fbhv, flags)
+
+/*
+ * Flags for read/write calls - same values as IRIX
+ */
+#define IO_ISDIRECT 0x00004 /* bypass page cache */
+#define IO_INVIS 0x00020 /* don't update inode timestamps */
+
+/*
+ * Flags for VOP_IFLUSH call
+ */
+#define FLUSH_SYNC 1 /* wait for flush to complete */
+#define FLUSH_INODE 2 /* flush the inode itself */
+#define FLUSH_LOG 4 /* force the last log entry for
+ * this inode out to disk */
+
+/*
+ * Flush/Invalidate options for VOP_TOSS_PAGES, VOP_FLUSHINVAL_PAGES and
+ * VOP_FLUSH_PAGES.
+ */
+#define FI_NONE 0 /* none */
+#define FI_REMAPF 1 /* Do a remapf prior to the operation */
+#define FI_REMAPF_LOCKED 2 /* Do a remapf prior to the operation.
+ Prevent VM access to the pages until
+ the operation completes. */
+
+/*
+ * Vnode attributes. va_mask indicates those attributes the caller
+ * wants to set or extract.
+ */
+typedef struct vattr {
+ int va_mask; /* bit-mask of attributes present */
+ enum vtype va_type; /* vnode type (for create) */
+ mode_t va_mode; /* file access mode and type */
+ nlink_t va_nlink; /* number of references to file */
+ uid_t va_uid; /* owner user id */
+ gid_t va_gid; /* owner group id */
+ xfs_ino_t va_nodeid; /* file id */
+ xfs_off_t va_size; /* file size in bytes */
+ u_long va_blocksize; /* blocksize preferred for i/o */
+ struct timespec va_atime; /* time of last access */
+ struct timespec va_mtime; /* time of last modification */
+ struct timespec va_ctime; /* time file changed */
+ u_int va_gen; /* generation number of file */
+ xfs_dev_t va_rdev; /* device the special file represents */
+ __int64_t va_nblocks; /* number of blocks allocated */
+ u_long va_xflags; /* random extended file flags */
+ u_long va_extsize; /* file extent size */
+ u_long va_nextents; /* number of extents in file */
+ u_long va_anextents; /* number of attr extents in file */
+ int va_projid; /* project id */
+} vattr_t;
+
+/*
+ * setattr or getattr attributes
+ */
+#define XFS_AT_TYPE 0x00000001
+#define XFS_AT_MODE 0x00000002
+#define XFS_AT_UID 0x00000004
+#define XFS_AT_GID 0x00000008
+#define XFS_AT_FSID 0x00000010
+#define XFS_AT_NODEID 0x00000020
+#define XFS_AT_NLINK 0x00000040
+#define XFS_AT_SIZE 0x00000080
+#define XFS_AT_ATIME 0x00000100
+#define XFS_AT_MTIME 0x00000200
+#define XFS_AT_CTIME 0x00000400
+#define XFS_AT_RDEV 0x00000800
+#define XFS_AT_BLKSIZE 0x00001000
+#define XFS_AT_NBLOCKS 0x00002000
+#define XFS_AT_VCODE 0x00004000
+#define XFS_AT_MAC 0x00008000
+#define XFS_AT_UPDATIME 0x00010000
+#define XFS_AT_UPDMTIME 0x00020000
+#define XFS_AT_UPDCTIME 0x00040000
+#define XFS_AT_ACL 0x00080000
+#define XFS_AT_CAP 0x00100000
+#define XFS_AT_INF 0x00200000
+#define XFS_AT_XFLAGS 0x00400000
+#define XFS_AT_EXTSIZE 0x00800000
+#define XFS_AT_NEXTENTS 0x01000000
+#define XFS_AT_ANEXTENTS 0x02000000
+#define XFS_AT_PROJID 0x04000000
+#define XFS_AT_SIZE_NOPERM 0x08000000
+#define XFS_AT_GENCOUNT 0x10000000
+
+#define XFS_AT_ALL (XFS_AT_TYPE|XFS_AT_MODE|XFS_AT_UID|XFS_AT_GID|\
+ XFS_AT_FSID|XFS_AT_NODEID|XFS_AT_NLINK|XFS_AT_SIZE|\
+ XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME|XFS_AT_RDEV|\
+ XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_VCODE|XFS_AT_MAC|\
+ XFS_AT_ACL|XFS_AT_CAP|XFS_AT_INF|XFS_AT_XFLAGS|XFS_AT_EXTSIZE|\
+ XFS_AT_NEXTENTS|XFS_AT_ANEXTENTS|XFS_AT_PROJID|XFS_AT_GENCOUNT)
+
+#define XFS_AT_STAT (XFS_AT_TYPE|XFS_AT_MODE|XFS_AT_UID|XFS_AT_GID|\
+ XFS_AT_FSID|XFS_AT_NODEID|XFS_AT_NLINK|XFS_AT_SIZE|\
+ XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME|XFS_AT_RDEV|\
+ XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_PROJID)
+
+#define XFS_AT_TIMES (XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME)
+
+#define XFS_AT_UPDTIMES (XFS_AT_UPDATIME|XFS_AT_UPDMTIME|XFS_AT_UPDCTIME)
+
+#define XFS_AT_NOSET (XFS_AT_NLINK|XFS_AT_RDEV|XFS_AT_FSID|XFS_AT_NODEID|\
+ XFS_AT_TYPE|XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_VCODE|\
+ XFS_AT_NEXTENTS|XFS_AT_ANEXTENTS|XFS_AT_GENCOUNT)
+
+/*
+ * Modes.
+ */
+#define VSUID S_ISUID /* set user id on execution */
+#define VSGID S_ISGID /* set group id on execution */
+#define VSVTX S_ISVTX /* save swapped text even after use */
+#define VREAD S_IRUSR /* read, write, execute permissions */
+#define VWRITE S_IWUSR
+#define VEXEC S_IXUSR
+
+#define MODEMASK S_IALLUGO /* mode bits plus permission bits */
+
+/*
+ * Check whether mandatory file locking is enabled.
+ */
+#define MANDLOCK(vp, mode) \
+ ((vp)->v_type == VREG && ((mode) & (VSGID|(VEXEC>>3))) == VSGID)
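+
+/*
+ * That is: a regular file with the setgid bit set but group-execute
+ * (VEXEC>>3 == S_IXGRP) clear, the traditional System V convention
+ * for marking a file for mandatory locking.
+ */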
+
+extern void vn_init(void);
+extern int vn_wait(struct vnode *);
+extern vnode_t *vn_initialize(struct inode *);
+
+/*
+ * Acquiring and invalidating vnodes:
+ *
+ * if (vn_get(vp, &vmap))
+ * ...;
+ * vn_purge(vp, &vmap);
+ *
+ * vn_get and vn_purge must be called with a vmap_t argument, sampled
+ * while a lock that the vnode's VOP_RECLAIM function acquires is
+ * held, to ensure that the vnode sampled with the lock held isn't
+ * recycled (VOP_RECLAIMed) or deallocated between the release of the
+ * lock and the subsequent vn_get or vn_purge.
+ */
+
+/*
+ * vnode_map structures _must_ match vn_epoch and vnode structure sizes.
+ */
+typedef struct vnode_map {
+ vfs_t *v_vfsp;
+ vnumber_t v_number; /* in-core vnode number */
+ xfs_ino_t v_ino; /* inode # */
+} vmap_t;
+
+#define VMAP(vp, vmap) {(vmap).v_vfsp = (vp)->v_vfsp, \
+ (vmap).v_number = (vp)->v_number, \
+ (vmap).v_ino = (vp)->v_inode.i_ino; }
+
+extern void vn_purge(struct vnode *, vmap_t *);
+extern vnode_t *vn_get(struct vnode *, vmap_t *);
+extern int vn_revalidate(struct vnode *);
+extern void vn_remove(struct vnode *);
+
+static inline int vn_count(struct vnode *vp)
+{
+ return atomic_read(&LINVFS_GET_IP(vp)->i_count);
+}
+
+/*
+ * Vnode reference counting functions (and macros for compatibility).
+ */
+extern vnode_t *vn_hold(struct vnode *);
+extern void vn_rele(struct vnode *);
+
+#if defined(XFS_VNODE_TRACE)
+#define VN_HOLD(vp) \
+ ((void)vn_hold(vp), \
+ vn_trace_hold(vp, __FILE__, __LINE__, (inst_t *)__return_address))
+#define VN_RELE(vp) \
+ (vn_trace_rele(vp, __FILE__, __LINE__, (inst_t *)__return_address), \
+ iput(LINVFS_GET_IP(vp)))
+#else
+#define VN_HOLD(vp) ((void)vn_hold(vp))
+#define VN_RELE(vp) (iput(LINVFS_GET_IP(vp)))
+#endif
+
+/*
+ * Vname handling macros.
+ */
+#define VNAME(dentry) ((char *) (dentry)->d_name.name)
+#define VNAMELEN(dentry) ((dentry)->d_name.len)
+#define VNAME_TO_VNODE(dentry) (LINVFS_GET_VP((dentry)->d_inode))
+
+/*
+ * Vnode spinlock manipulation.
+ */
+#define VN_LOCK(vp) mutex_spinlock(&(vp)->v_lock)
+#define VN_UNLOCK(vp, s) mutex_spinunlock(&(vp)->v_lock, s)
+#define VN_FLAGSET(vp,b) vn_flagset(vp,b)
+#define VN_FLAGCLR(vp,b) vn_flagclr(vp,b)
+
+static __inline__ void vn_flagset(struct vnode *vp, uint flag)
+{
+ spin_lock(&vp->v_lock);
+ vp->v_flag |= flag;
+ spin_unlock(&vp->v_lock);
+}
+
+static __inline__ void vn_flagclr(struct vnode *vp, uint flag)
+{
+ spin_lock(&vp->v_lock);
+ vp->v_flag &= ~flag;
+ spin_unlock(&vp->v_lock);
+}
+
+/*
+ * Update modify/access/change times on the vnode
+ */
+#define VN_MTIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_mtime = *(tvp))
+#define VN_ATIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_atime = *(tvp))
+#define VN_CTIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_ctime = *(tvp))
+
+/*
+ * Some useful predicates.
+ */
+#define VN_MAPPED(vp) mapping_mapped(LINVFS_GET_IP(vp)->i_mapping)
+#define VN_CACHED(vp) (LINVFS_GET_IP(vp)->i_mapping->nrpages)
+#define VN_DIRTY(vp) mapping_tagged(LINVFS_GET_IP(vp)->i_mapping, \
+ PAGECACHE_TAG_DIRTY)
+#define VMODIFY(vp) VN_FLAGSET(vp, VMODIFIED)
+#define VUNMODIFY(vp) VN_FLAGCLR(vp, VMODIFIED)
+
+/*
+ * Flags to VOP_SETATTR/VOP_GETATTR.
+ */
+#define ATTR_UTIME 0x01 /* non-default utime(2) request */
+#define ATTR_DMI 0x08 /* invocation from a DMI function */
+#define ATTR_LAZY 0x80 /* set/get attributes lazily */
+#define ATTR_NONBLOCK 0x100 /* return EAGAIN if operation would block */
+
+/*
+ * Flags to VOP_FSYNC and VOP_RECLAIM.
+ */
+#define FSYNC_NOWAIT 0 /* asynchronous flush */
+#define FSYNC_WAIT 0x1 /* synchronous fsync or forced reclaim */
+#define FSYNC_INVAL 0x2 /* flush and invalidate cached data */
+#define FSYNC_DATA 0x4 /* synchronous fsync of data only */
+
+/*
+ * Tracking vnode activity.
+ */
+#if defined(XFS_VNODE_TRACE)
+
+#define VNODE_TRACE_SIZE 16 /* number of trace entries */
+#define VNODE_KTRACE_ENTRY 1
+#define VNODE_KTRACE_EXIT 2
+#define VNODE_KTRACE_HOLD 3
+#define VNODE_KTRACE_REF 4
+#define VNODE_KTRACE_RELE 5
+
+extern void vn_trace_entry(struct vnode *, char *, inst_t *);
+extern void vn_trace_exit(struct vnode *, char *, inst_t *);
+extern void vn_trace_hold(struct vnode *, char *, int, inst_t *);
+extern void vn_trace_ref(struct vnode *, char *, int, inst_t *);
+extern void vn_trace_rele(struct vnode *, char *, int, inst_t *);
+
+#define VN_TRACE(vp) \
+ vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address)
+#else
+#define vn_trace_entry(a,b,c)
+#define vn_trace_exit(a,b,c)
+#define vn_trace_hold(a,b,c,d)
+#define vn_trace_ref(a,b,c,d)
+#define vn_trace_rele(a,b,c,d)
+#define VN_TRACE(vp)
+#endif
+
+#endif /* __XFS_VNODE_H__ */
--- /dev/null
+#ifndef __ALPHA_SETUP_H
+#define __ALPHA_SETUP_H
+
+#define COMMAND_LINE_SIZE 256
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/dma.h
+ *
+ * Copyright (C) 2001-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#ifndef __ASM_ARCH_DMA_H
+#define __ASM_ARCH_DMA_H
+
+#include <linux/config.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <asm/page.h>
+#include <asm/sizes.h>
+#include <asm/hardware.h>
+
+#define MAX_DMA_ADDRESS (PAGE_OFFSET + SZ_64M)
+
+/* No DMA */
+#define MAX_DMA_CHANNELS 0
+
+/*
+ * Only the first 64MB of memory can be accessed via PCI.
+ * We use GFP_DMA to allocate safe buffers for map/unmap operations.
+ * This is really ugly and we need a better way of specifying
+ * DMA-capable regions of memory.
+ */
+static inline void __arch_adjust_zones(int node, unsigned long *zone_size,
+ unsigned long *zhole_size)
+{
+ unsigned int sz = SZ_64M >> PAGE_SHIFT;
+
+ /*
+ * Only adjust if > 64M on current system
+ */
+ if (node || (zone_size[0] <= sz))
+ return;
+
+ zone_size[1] = zone_size[0] - sz;
+ zone_size[0] = sz;
+ zhole_size[1] = zhole_size[0];
+ zhole_size[0] = 0;
+}
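+
+/*
+ * Worked example: with 4K pages on a 128MB system, zone_size[0]
+ * arrives as 32768 pages; sz is 16384, so the DMA zone keeps the
+ * first 16384 pages (64MB) and the remaining 16384 move to the
+ * next zone (any memory holes move with them).
+ */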
+
+#define arch_adjust_zones(node, size, holes) \
+ __arch_adjust_zones(node, size, holes)
+
+#endif /* __ASM_ARCH_DMA_H */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ixp4xx/io.h
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (C) 2002-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+#include <asm/hardware.h>
+
+#define IO_SPACE_LIMIT 0xffff0000
+
+#define BIT(x) ((1)<<(x))
+
+
+extern int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data);
+extern int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data);
+
+
+/*
+ * IXP4xx provides two methods of accessing PCI memory space:
+ *
+ * 1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
+ * To access PCI via this space, we simply ioremap() the BAR
+ * into the kernel and we can use the standard read[bwl]/write[bwl]
+ * macros. This is the preferred method due to speed but it
+ * limits the system to just 64MB of PCI memory. This can be
+ * problematic if using video cards and other memory-heavy
+ * targets.
+ *
+ * 2) If > 64MB of memory space is required, the IXP4xx can be configured
+ * to use indirect registers to access PCI (as we do below for I/O
+ * transactions). This allows for up to 128MB (0x48000000 to 0x4fffffff)
+ * of memory on the bus. The disadvantage of this is that every
+ * PCI access requires three local register accesses plus a spinlock,
+ * but in some cases the performance hit is acceptable. In addition,
+ * you cannot mmap() PCI devices in this case.
+ *
+ */
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+
+#define __mem_pci(a) ((unsigned long)(a))
+
+#else
+
+#include <linux/mm.h>
+
+/*
+ * In the case of using indirect PCI, we simply return the actual PCI
+ * address and our read/write implementations use that to drive the
+ * access registers. If something outside of PCI is ioremap'd, we
+ * fall back to the default.
+ */
+static inline void *
+__ixp4xx_ioremap(unsigned long addr, size_t size, unsigned long flags, unsigned long align)
+{
+ extern void * __ioremap(unsigned long, size_t, unsigned long, unsigned long);
+ if ((addr < 0x48000000) || (addr > 0x4fffffff))
+ return __ioremap(addr, size, flags, align);
+
+ return (void *)addr;
+}
+
+static inline void
+__ixp4xx_iounmap(void *addr)
+{
+ extern void __iounmap(void *addr);
+
+ if ((u32)addr > VMALLOC_START)
+ __iounmap(addr);
+}
+
+#define __arch_ioremap(a, s, f, x) __ixp4xx_ioremap(a, s, f, x)
+#define __arch_iounmap(a) __ixp4xx_iounmap(a)
+
+#define writeb(p, v) __ixp4xx_writeb(p, v)
+#define writew(p, v) __ixp4xx_writew(p, v)
+#define writel(p, v) __ixp4xx_writel(p, v)
+
+#define writesb(p, v, l) __ixp4xx_writesb(p, v, l)
+#define writesw(p, v, l) __ixp4xx_writesw(p, v, l)
+#define writesl(p, v, l) __ixp4xx_writesl(p, v, l)
+
+#define readb(p) __ixp4xx_readb(p)
+#define readw(p) __ixp4xx_readw(p)
+#define readl(p) __ixp4xx_readl(p)
+
+#define readsb(p, v, l) __ixp4xx_readsb(p, v, l)
+#define readsw(p, v, l) __ixp4xx_readsw(p, v, l)
+#define readsl(p, v, l) __ixp4xx_readsl(p, v, l)
+
+static inline void
+__ixp4xx_writeb(u8 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr > VMALLOC_START) {
+ __raw_writeb(value, addr);
+ return;
+ }
+
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_MEMWRITE, data);
+}
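+
+/*
+ * Worked example: a byte write to bus address 0x48000002 gives n = 2,
+ * so BIT(2) is cleared in the (active low) byte enables, selecting
+ * byte lane 2 only, and the value is shifted into bits 23:16 of the
+ * 32-bit data word.
+ */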
+
+static inline void
+__ixp4xx_writesb(u32 bus_addr, u8 *vaddr, int count)
+{
+ while (count--)
+ writeb(*vaddr++, bus_addr);
+}
+
+static inline void
+__ixp4xx_writew(u16 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr > VMALLOC_START) {
+ __raw_writew(value, addr);
+ return;
+ }
+
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_MEMWRITE, data);
+}
+
+static inline void
+__ixp4xx_writesw(u32 bus_addr, u16 *vaddr, int count)
+{
+ while (count--)
+ writew(*vaddr++, bus_addr);
+}
+
+static inline void
+__ixp4xx_writel(u32 value, u32 addr)
+{
+ if (addr > VMALLOC_START) {
+ __raw_writel(value, addr);
+ return;
+ }
+
+ ixp4xx_pci_write(addr, NP_CMD_MEMWRITE, value);
+}
+
+static inline void
+__ixp4xx_writesl(u32 bus_addr, u32 *vaddr, int count)
+{
+ while (count--)
+ writel(*vaddr++, bus_addr);
+}
+
+static inline unsigned char
+__ixp4xx_readb(u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr > VMALLOC_START)
+ return __raw_readb(addr);
+
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_MEMREAD, &data))
+ return 0xff;
+
+ return data >> (8*n);
+}
+
+static inline void
+__ixp4xx_readsb(u32 bus_addr, u8 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readb(bus_addr);
+}
+
+static inline unsigned short
+__ixp4xx_readw(u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr > VMALLOC_START)
+ return __raw_readw(addr);
+
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_MEMREAD, &data))
+ return 0xffff;
+
+ return data>>(8*n);
+}
+
+static inline void
+__ixp4xx_readsw(u32 bus_addr, u16 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readw(bus_addr);
+}
+
+static inline unsigned long
+__ixp4xx_readl(u32 addr)
+{
+ u32 data;
+
+ if (addr > VMALLOC_START)
+ return __raw_readl(addr);
+
+ if (ixp4xx_pci_read(addr, NP_CMD_MEMREAD, &data))
+ return 0xffffffff;
+
+ return data;
+}
+
+static inline void
+__ixp4xx_readsl(u32 bus_addr, u32 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readl(bus_addr);
+}
+
+
+/*
+ * We can use the built-in functions because they end up calling writeb/readb
+ */
+#define memset_io(c,v,l) _memset_io((c),(v),(l))
+#define memcpy_fromio(a,c,l) _memcpy_fromio((a),(c),(l))
+#define memcpy_toio(c,a,l) _memcpy_toio((c),(a),(l))
+
+#define eth_io_copy_and_sum(s,c,l,b) \
+ eth_copy_and_sum((s),__mem_pci(c),(l),(b))
+
+static inline int
+check_signature(unsigned long bus_addr, const unsigned char *signature,
+ int length)
+{
+ int retval = 0;
+ do {
+ if (readb(bus_addr) != *signature)
+ goto out;
+ bus_addr++;
+ signature++;
+ length--;
+ } while (length);
+ retval = 1;
+out:
+ return retval;
+}
+
+#endif
+
+/*
+ * IXP4xx does not have a transparent cpu -> PCI I/O translation
+ * window. Instead, it has a set of registers that must be tweaked
+ * with the proper byte lanes, command types, and address for the
+ * transaction. This means that we need to override the default
+ * I/O functions.
+ */
+#define outb(p, v) __ixp4xx_outb(p, v)
+#define outw(p, v) __ixp4xx_outw(p, v)
+#define outl(p, v) __ixp4xx_outl(p, v)
+
+#define outsb(p, v, l) __ixp4xx_outsb(p, v, l)
+#define outsw(p, v, l) __ixp4xx_outsw(p, v, l)
+#define outsl(p, v, l) __ixp4xx_outsl(p, v, l)
+
+#define inb(p) __ixp4xx_inb(p)
+#define inw(p) __ixp4xx_inw(p)
+#define inl(p) __ixp4xx_inl(p)
+
+#define insb(p, v, l) __ixp4xx_insb(p, v, l)
+#define insw(p, v, l) __ixp4xx_insw(p, v, l)
+#define insl(p, v, l) __ixp4xx_insl(p, v, l)
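+
+/*
+ * With the overrides above, ordinary driver code such as
+ *
+ *	outb(0x12, port);
+ *	status = inb(port);
+ *
+ * is transparently routed through ixp4xx_pci_write()/ixp4xx_pci_read()
+ * with the proper byte enables, so drivers need no IXP4xx-specific
+ * changes. (Illustrative sketch only; "port" is a hypothetical I/O
+ * port address.)
+ */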
+
+
+static inline void
+__ixp4xx_outb(u8 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_IOWRITE, data);
+}
+
+static inline void
+__ixp4xx_outsb(u32 io_addr, u8 *vaddr, u32 count)
+{
+ while (count--)
+ outb(*vaddr++, io_addr);
+}
+
+static inline void
+__ixp4xx_outw(u16 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_IOWRITE, data);
+}
+
+static inline void
+__ixp4xx_outsw(u32 io_addr, u16 *vaddr, u32 count)
+{
+ while (count--)
+ outw(cpu_to_le16(*vaddr++), io_addr);
+}
+
+static inline void
+__ixp4xx_outl(u32 value, u32 addr)
+{
+ ixp4xx_pci_write(addr, NP_CMD_IOWRITE, value);
+}
+
+static inline void
+__ixp4xx_outsl(u32 io_addr, u32 *vaddr, u32 count)
+{
+ while (count--)
+ outl(*vaddr++, io_addr);
+}
+
+static inline u8
+__ixp4xx_inb(u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_IOREAD, &data))
+ return 0xff;
+
+ return data >> (8*n);
+}
+
+static inline void
+__ixp4xx_insb(u32 io_addr, u8 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = inb(io_addr);
+}
+
+static inline u16
+__ixp4xx_inw(u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_IOREAD, &data))
+ return 0xffff;
+
+ return data>>(8*n);
+}
+
+static inline void
+__ixp4xx_insw(u32 io_addr, u16 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = le16_to_cpu(inw(io_addr));
+}
+
+static inline u32
+__ixp4xx_inl(u32 addr)
+{
+ u32 data;
+ if (ixp4xx_pci_read(addr, NP_CMD_IOREAD, &data))
+ return 0xffffffff;
+
+ return data;
+}
+
+static inline void
+__ixp4xx_insl(u32 io_addr, u32 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = inl(io_addr);
+}
+
+
+#endif // __ASM_ARM_ARCH_IO_H
+
--- /dev/null
+/*
+ * irq.h
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define fixup_irq(irq) (irq)
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ixp4xx/memory.h
+ *
+ * Copyright (c) 2001-2004 MontaVista Software, Inc.
+ */
+
+#ifndef __ASM_ARCH_MEMORY_H
+#define __ASM_ARCH_MEMORY_H
+
+/*
+ * Physical DRAM offset.
+ */
+#define PHYS_OFFSET (0x00000000UL)
+
+/*
+ * Virtual view <-> DMA view memory address translations
+ * virt_to_bus: Used to translate the virtual address to an
+ * address suitable to be passed to set_dma_addr
+ * bus_to_virt: Used to convert an address for DMA operations
+ * to an address that the kernel can use.
+ *
+ * These are dummies for now.
+ */
+#define __virt_to_bus(x) __virt_to_phys(x)
+#define __bus_to_virt(x) __phys_to_virt(x)
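+
+/*
+ * Since PHYS_OFFSET is 0, bus addresses equal physical addresses here;
+ * e.g. for a direct-mapped kernel address "vaddr" (a sketch):
+ *
+ *	dma_addr = __virt_to_bus(vaddr);	/- == __virt_to_phys(vaddr) -/
+ */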
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ixp4xx/param.h
+ */
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/platform.h
+ *
+ * Constants and functions that are useful to IXP4xx platform-specific code
+ * and device drivers.
+ *
+ * Copyright (C) 2004 MontaVista Software, Inc.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H__
+#error "Do not include this directly, instead #include <asm/hardware.h>"
+#endif
+
+#ifndef __ASSEMBLY__
+
+#include <asm/types.h>
+
+/*
+ * Expansion bus memory regions
+ */
+#define IXP4XX_EXP_BUS_BASE_PHYS (0x50000000)
+
+#define IXP4XX_EXP_BUS_CSX_REGION_SIZE (0x01000000)
+
+#define IXP4XX_EXP_BUS_CS0_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x00000000)
+#define IXP4XX_EXP_BUS_CS1_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x01000000)
+#define IXP4XX_EXP_BUS_CS2_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x02000000)
+#define IXP4XX_EXP_BUS_CS3_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x03000000)
+#define IXP4XX_EXP_BUS_CS4_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x04000000)
+#define IXP4XX_EXP_BUS_CS5_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x05000000)
+#define IXP4XX_EXP_BUS_CS6_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x06000000)
+#define IXP4XX_EXP_BUS_CS7_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x07000000)
+
+#define IXP4XX_FLASH_WRITABLE (0x2)
+#define IXP4XX_FLASH_DEFAULT (0xbcd23c40)
+#define IXP4XX_FLASH_WRITE (0xbcd23c42)
+
+/*
+ * Clock Speed Definitions.
+ */
+#define IXP4XX_PERIPHERAL_BUS_CLOCK (66) /* 66MHz APB bus */
+#define IXP4XX_UART_XTAL 14745600
+
+/*
+ * The IXP4xx chips do not have an on-chip I2C unit, so I2C is bit-banged
+ * over GPIO lines instead. This structure is passed as platform_data to
+ * provide the GPIO pin information to the ixp42x I2C driver.
+ */
+struct ixp4xx_i2c_pins {
+ unsigned long sda_pin;
+ unsigned long scl_pin;
+};
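+
+/*
+ * A board support file might provide this as platform_data along these
+ * lines (a sketch; the pin numbers are hypothetical):
+ *
+ *	static struct ixp4xx_i2c_pins my_board_i2c_gpio_pins = {
+ *		.sda_pin	= 7,
+ *		.scl_pin	= 6,
+ *	};
+ */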
+
+
+/*
+ * Functions used by platform-level setup code
+ */
+extern void ixp4xx_map_io(void);
+extern void ixp4xx_init_irq(void);
+extern void ixp4xx_pci_preinit(void);
+struct pci_sys_data;
+extern int ixp4xx_setup(int nr, struct pci_sys_data *sys);
+extern struct pci_bus *ixp4xx_scan_bus(int nr, struct pci_sys_data *sys);
+
+/*
+ * GPIO functions
+ *
+ * The following style constants are converted to the real hardware
+ * bits by gpio_line_config().
+ */
+/* GPIO pin types */
+#define IXP4XX_GPIO_OUT 0x1
+#define IXP4XX_GPIO_IN 0x2
+
+#define IXP4XX_GPIO_INTSTYLE_MASK 0x7C /* Bits [6:2] define interrupt style */
+
+/*
+ * GPIO interrupt types.
+ */
+#define IXP4XX_GPIO_ACTIVE_HIGH 0x4 /* Default */
+#define IXP4XX_GPIO_ACTIVE_LOW 0x8
+#define IXP4XX_GPIO_RISING_EDGE 0x10
+#define IXP4XX_GPIO_FALLING_EDGE 0x20
+#define IXP4XX_GPIO_TRANSITIONAL 0x40
+
+/* GPIO signal types */
+#define IXP4XX_GPIO_LOW 0
+#define IXP4XX_GPIO_HIGH 1
+
+/* GPIO Clocks */
+#define IXP4XX_GPIO_CLK_0 14
+#define IXP4XX_GPIO_CLK_1 15
+
+extern void gpio_line_config(u8 line, u32 style);
+
+static inline void gpio_line_get(u8 line, int *value)
+{
+ *value = (*IXP4XX_GPIO_GPINR >> line) & 0x1;
+}
+
+static inline void gpio_line_set(u8 line, int value)
+{
+ if (value == IXP4XX_GPIO_HIGH)
+ *IXP4XX_GPIO_GPOUTR |= (1 << line);
+ else if (value == IXP4XX_GPIO_LOW)
+ *IXP4XX_GPIO_GPOUTR &= ~(1 << line);
+}
+
+static inline void gpio_line_isr_clear(u8 line)
+{
+ *IXP4XX_GPIO_GPISR = (1 << line);
+}
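+
+/*
+ * Typical usage (a sketch; the line numbers are hypothetical):
+ *
+ *	gpio_line_config(5, IXP4XX_GPIO_OUT);
+ *	gpio_line_set(5, IXP4XX_GPIO_HIGH);
+ *
+ *	gpio_line_config(3, IXP4XX_GPIO_IN | IXP4XX_GPIO_FALLING_EDGE);
+ *	...
+ *	gpio_line_isr_clear(3);		in the ISR, ack the edge
+ */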
+
+#endif // __ASSEMBLY__
+
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/serial.h
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (C) 2002-2004 MontaVista Software, Inc.
+ *
+ */
+
+#ifndef _ARCH_SERIAL_H_
+#define _ARCH_SERIAL_H_
+
+/*
+ * We don't hardcode our serial port information but instead
+ * fill it in dynamically based on our platform in arch->map_io.
+ * This allows for per-board serial ports w/o a bunch of
+ * #ifdefs in this file.
+ */
+#define STD_SERIAL_PORT_DEFNS
+#define EXTRA_SERIAL_PORT_DEFNS
+
+/*
+ * IXP4xx uses a 14.7456MHz crystal for the UARTs, so
+ * BASE_BAUD = 14745600 / 16 = 921600.
+ */
+#define BASE_BAUD ( IXP4XX_UART_XTAL / 16 )
+
+#endif // _ARCH_SERIAL_H_
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/system.h
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <asm/hardware.h>
+
+static inline void arch_idle(void)
+{
+#if 0
+ if (!hlt_counter)
+ cpu_do_idle(0);
+#endif
+}
+
+
+static inline void arch_reset(char mode)
+{
+ if (mode == 's') {
+ /* Jump into ROM at address 0 */
+ cpu_reset(0);
+ } else {
+ /* Use on-chip reset capability */
+
+ /* set the "key" register to enable access to
+ * "timer" and "enable" registers
+ */
+ *IXP4XX_OSWK = 0x482e;
+
+ /* write 0 to the timer register for an immediate reset */
+ *IXP4XX_OSWT = 0;
+
+ /* disable watchdog interrupt, enable reset, enable count */
+ *IXP4XX_OSWE = 0x3;
+ }
+}
+
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ixp4xx/time.h
+ *
+ * We implement timer code in arch/arm/mach-ixp4xx/time.c
+ *
+ */
+
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/uncompress.h
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef _ARCH_UNCOMPRESS_H_
+#define _ARCH_UNCOMPRESS_H_
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <linux/serial_reg.h>
+
+#define TX_DONE (UART_LSR_TEMT|UART_LSR_THRE)
+
+static volatile u32* uart_base;
+
+static __inline__ void putc(char c)
+{
+ /* Check THRE and TEMT bits before we transmit the character. */
+ while ((uart_base[UART_LSR] & TX_DONE) != TX_DONE);
+ *uart_base = c;
+}
+
+/*
+ * This does not append a newline
+ */
+static void puts(const char *s)
+{
+ while (*s)
+ {
+ putc(*s);
+ if (*s == '\n')
+ putc('\r');
+ s++;
+ }
+}
+
+static __inline__ void __arch_decomp_setup(unsigned long arch_id)
+{
+ /*
+ * Coyote only has UART2 connected
+ */
+ if (machine_is_adi_coyote())
+ uart_base = (volatile u32*) IXP4XX_UART2_BASE_PHYS;
+ else
+ uart_base = (volatile u32*) IXP4XX_UART1_BASE_PHYS;
+}
+
+/*
+ * arch_id is a variable in decompress_kernel()
+ */
+#define arch_decomp_setup() __arch_decomp_setup(arch_id)
+
+#define arch_decomp_wdog()
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-pxa/pxafb.h
+ *
+ * Support for the xscale frame buffer.
+ *
+ * Author: Jean-Frederic Clere
+ * Created: Sep 22, 2003
+ * Copyright: jfclere@sinix.net
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * This structure describes the machine which we are running on.
+ * It is set in linux/arch/arm/mach-pxa/machine_name.c and used in the probe routine
+ * of linux/drivers/video/pxafb.c
+ */
+struct pxafb_mach_info {
+ u_long pixclock;
+
+ u_short xres;
+ u_short yres;
+
+ u_char bpp;
+ u_char hsync_len;
+ u_char left_margin;
+ u_char right_margin;
+
+ u_char vsync_len;
+ u_char upper_margin;
+ u_char lower_margin;
+ u_char sync;
+
+ u_int cmap_greyscale:1,
+ cmap_inverse:1,
+ cmap_static:1,
+ unused:29;
+
+ /* The following should be defined in LCCR0
+ * LCCR0_Act or LCCR0_Pas Active or Passive
+ * LCCR0_Sngl or LCCR0_Dual Single/Dual panel
+ * LCCR0_Mono or LCCR0_Color Mono/Color
+ * LCCR0_4PixMono or LCCR0_8PixMono (in mono single mode)
+ * LCCR0_DMADel(Tcpu) (optional) DMA request delay
+ *
+ * The following should not be defined in LCCR0:
+ * LCCR0_OUM, LCCR0_BM, LCCR0_QDM, LCCR0_DIS, LCCR0_EFM
+ * LCCR0_IUM, LCCR0_SFM, LCCR0_LDM, LCCR0_ENB
+ */
+ u_int lccr0;
+ /* The following should be defined in LCCR3
+ * LCCR3_OutEnH or LCCR3_OutEnL Output enable polarity
+ * LCCR3_PixRsEdg or LCCR3_PixFlEdg Pixel clock edge type
+ * LCCR3_Acb(X) AB Bias pin frequency
+ * LCCR3_DPC (optional) Double Pixel Clock mode (untested)
+ *
+ * The following should not be defined in LCCR3
+ * LCCR3_HSP, LCCR3_VSP, LCCR0_Pcd(x), LCCR3_Bpp
+ */
+ u_int lccr3;
+
+ void (*pxafb_backlight_power)(int);
+ void (*pxafb_lcd_power)(int);
+
+};
+void set_pxa_fb_info(struct pxafb_mach_info *hard_pxa_fb_info);
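+
+/*
+ * A machine file might describe its panel roughly like this (a sketch
+ * with made-up timings, not taken from any real board):
+ *
+ *	static struct pxafb_mach_info my_fb_info = {
+ *		.pixclock	= 171521,
+ *		.xres		= 240,
+ *		.yres		= 320,
+ *		.bpp		= 16,
+ *		.hsync_len	= 6,
+ *		.vsync_len	= 2,
+ *		...
+ *	};
+ *
+ *	set_pxa_fb_info(&my_fb_info);
+ */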
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-sa1100/collie.h
+ *
+ * This file contains the hardware specific definitions for Collie
+ * Only include this file from SA1100-specific files.
+ *
+ * ChangeLog:
+ * 04-06-2001 Lineo Japan, Inc.
+ * 04-16-2001 SHARP Corporation
+ * 07-07-2002 Chris Larson <clarson@digi.com>
+ *
+ */
+#ifndef __ASM_ARCH_COLLIE_H
+#define __ASM_ARCH_COLLIE_H
+
+#include <linux/config.h>
+
+#define CF_BUF_CTRL_BASE 0xF0800000
+#define COLLIE_SCP_REG(adr) (*(volatile unsigned short*)(CF_BUF_CTRL_BASE+(adr)))
+#define COLLIE_SCP_MCR 0x00
+#define COLLIE_SCP_CDR 0x04
+#define COLLIE_SCP_CSR 0x08
+#define COLLIE_SCP_CPR 0x0C
+#define COLLIE_SCP_CCR 0x10
+#define COLLIE_SCP_IRR 0x14
+#define COLLIE_SCP_IRM 0x14
+#define COLLIE_SCP_IMR 0x18
+#define COLLIE_SCP_ISR 0x1C
+#define COLLIE_SCP_GPCR 0x20
+#define COLLIE_SCP_GPWR 0x24
+#define COLLIE_SCP_GPRR 0x28
+#define COLLIE_SCP_REG_MCR COLLIE_SCP_REG(COLLIE_SCP_MCR)
+#define COLLIE_SCP_REG_CDR COLLIE_SCP_REG(COLLIE_SCP_CDR)
+#define COLLIE_SCP_REG_CSR COLLIE_SCP_REG(COLLIE_SCP_CSR)
+#define COLLIE_SCP_REG_CPR COLLIE_SCP_REG(COLLIE_SCP_CPR)
+#define COLLIE_SCP_REG_CCR COLLIE_SCP_REG(COLLIE_SCP_CCR)
+#define COLLIE_SCP_REG_IRR COLLIE_SCP_REG(COLLIE_SCP_IRR)
+#define COLLIE_SCP_REG_IRM COLLIE_SCP_REG(COLLIE_SCP_IRM)
+#define COLLIE_SCP_REG_IMR COLLIE_SCP_REG(COLLIE_SCP_IMR)
+#define COLLIE_SCP_REG_ISR COLLIE_SCP_REG(COLLIE_SCP_ISR)
+#define COLLIE_SCP_REG_GPCR COLLIE_SCP_REG(COLLIE_SCP_GPCR)
+#define COLLIE_SCP_REG_GPWR COLLIE_SCP_REG(COLLIE_SCP_GPWR)
+#define COLLIE_SCP_REG_GPRR COLLIE_SCP_REG(COLLIE_SCP_GPRR)
+
+#define COLLIE_SCP_GPCR_PA19 ( 1 << 9 )
+#define COLLIE_SCP_GPCR_PA18 ( 1 << 8 )
+#define COLLIE_SCP_GPCR_PA17 ( 1 << 7 )
+#define COLLIE_SCP_GPCR_PA16 ( 1 << 6 )
+#define COLLIE_SCP_GPCR_PA15 ( 1 << 5 )
+#define COLLIE_SCP_GPCR_PA14 ( 1 << 4 )
+#define COLLIE_SCP_GPCR_PA13 ( 1 << 3 )
+#define COLLIE_SCP_GPCR_PA12 ( 1 << 2 )
+#define COLLIE_SCP_GPCR_PA11 ( 1 << 1 )
+
+#define COLLIE_SCP_CHARGE_ON COLLIE_SCP_GPCR_PA11
+#define COLLIE_SCP_DIAG_BOOT1 COLLIE_SCP_GPCR_PA12
+#define COLLIE_SCP_DIAG_BOOT2 COLLIE_SCP_GPCR_PA13
+#define COLLIE_SCP_MUTE_L COLLIE_SCP_GPCR_PA14
+#define COLLIE_SCP_MUTE_R COLLIE_SCP_GPCR_PA15
+#define COLLIE_SCP_5VON COLLIE_SCP_GPCR_PA16
+#define COLLIE_SCP_AMP_ON COLLIE_SCP_GPCR_PA17
+#define COLLIE_SCP_VPEN COLLIE_SCP_GPCR_PA18
+#define COLLIE_SCP_LB_VOL_CHG COLLIE_SCP_GPCR_PA19
+
+#define COLLIE_SCP_IO_DIR ( COLLIE_SCP_CHARGE_ON | COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | \
+ COLLIE_SCP_5VON | COLLIE_SCP_AMP_ON | COLLIE_SCP_VPEN | \
+ COLLIE_SCP_LB_VOL_CHG )
+#define COLLIE_SCP_IO_OUT ( COLLIE_SCP_MUTE_L | COLLIE_SCP_MUTE_R | COLLIE_SCP_VPEN | \
+ COLLIE_SCP_CHARGE_ON )
+
+/* GPIOs for which the generic definition doesn't say much */
+
+#define COLLIE_GPIO_ON_KEY GPIO_GPIO (0)
+#define COLLIE_GPIO_AC_IN GPIO_GPIO (1)
+#define COLLIE_GPIO_CF_IRQ GPIO_GPIO (14)
+#define COLLIE_GPIO_nREMOCON_INT GPIO_GPIO (15)
+#define COLLIE_GPIO_UCB1x00_RESET GPIO_GPIO (16)
+#define COLLIE_GPIO_CO GPIO_GPIO (20)
+#define COLLIE_GPIO_MCP_CLK GPIO_GPIO (21)
+#define COLLIE_GPIO_CF_CD GPIO_GPIO (22)
+#define COLLIE_GPIO_UCB1x00_IRQ GPIO_GPIO (23)
+#define COLLIE_GPIO_WAKEUP GPIO_GPIO (24)
+#define COLLIE_GPIO_GA_INT GPIO_GPIO (25)
+#define COLLIE_GPIO_MAIN_BAT_LOW GPIO_GPIO (26)
+
+/* Interrupts */
+
+#define COLLIE_IRQ_GPIO_ON_KEY IRQ_GPIO0
+#define COLLIE_IRQ_GPIO_AC_IN IRQ_GPIO1
+#define COLLIE_IRQ_GPIO_CF_IRQ IRQ_GPIO14
+#define COLLIE_IRQ_GPIO_nREMOCON_INT IRQ_GPIO15
+#define COLLIE_IRQ_GPIO_CO IRQ_GPIO20
+#define COLLIE_IRQ_GPIO_CF_CD IRQ_GPIO22
+#define COLLIE_IRQ_GPIO_UCB1x00_IRQ IRQ_GPIO23
+#define COLLIE_IRQ_GPIO_WAKEUP IRQ_GPIO24
+#define COLLIE_IRQ_GPIO_GA_INT IRQ_GPIO25
+#define COLLIE_IRQ_GPIO_MAIN_BAT_LOW IRQ_GPIO26
+
+#define COLLIE_LCM_IRQ_GPIO_RTS IRQ_LOCOMO_GPIO0
+#define COLLIE_LCM_IRQ_GPIO_CTS IRQ_LOCOMO_GPIO1
+#define COLLIE_LCM_IRQ_GPIO_DSR IRQ_LOCOMO_GPIO2
+#define COLLIE_LCM_IRQ_GPIO_DTR IRQ_LOCOMO_GPIO3
+#define COLLIE_LCM_IRQ_GPIO_nSD_DETECT IRQ_LOCOMO_GPIO13
+#define COLLIE_LCM_IRQ_GPIO_nSD_WP IRQ_LOCOMO_GPIO14
+
+/*
+ * Flash Memory mappings
+ *
+ */
+
+#define FLASH_MEM_BASE 0xe8ffc000
+#define FLASH_DATA(adr) (*(volatile unsigned int*)(FLASH_MEM_BASE+(adr)))
+#define FLASH_DATA_F(adr) (*(volatile float32 *)(FLASH_MEM_BASE+(adr)))
+#define FLASH_MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 ) | ( b << 8 ) | a )
+
+// COMADJ
+#define FLASH_COMADJ_MAJIC FLASH_MAGIC_CHG('C','M','A','D')
+#define FLASH_COMADJ_MAGIC_ADR 0x00
+#define FLASH_COMADJ_DATA_ADR 0x04
+
+// TOUCH PANEL
+#define FLASH_TOUCH_MAJIC FLASH_MAGIC_CHG('T','U','C','H')
+#define FLASH_TOUCH_MAGIC_ADR 0x1C
+#define FLASH_TOUCH_XP_DATA_ADR 0x20
+#define FLASH_TOUCH_YP_DATA_ADR 0x24
+#define FLASH_TOUCH_XD_DATA_ADR 0x28
+#define FLASH_TOUCH_YD_DATA_ADR 0x2C
+
+// AD
+#define FLASH_AD_MAJIC FLASH_MAGIC_CHG('B','V','A','D')
+#define FLASH_AD_MAGIC_ADR 0x30
+#define FLASH_AD_DATA_ADR 0x34
+
+/* GPIO's on the TC35143AF (Toshiba Analog Frontend) */
+#define COLLIE_TC35143_GPIO_VERSION0 UCB_IO_0 /* GPIO0=Version */
+#define COLLIE_TC35143_GPIO_TBL_CHK UCB_IO_1 /* GPIO1=TBL_CHK */
+#define COLLIE_TC35143_GPIO_VPEN_ON UCB_IO_2 /* GPIO2=VPEN_ON */
+#define COLLIE_TC35143_GPIO_IR_ON UCB_IO_3 /* GPIO3=IR_ON */
+#define COLLIE_TC35143_GPIO_AMP_ON UCB_IO_4 /* GPIO4=AMP_ON */
+#define COLLIE_TC35143_GPIO_VERSION1 UCB_IO_5 /* GPIO5=Version */
+#define COLLIE_TC35143_GPIO_FS8KLPF UCB_IO_5 /* GPIO5=fs 8k LPF */
+#define COLLIE_TC35143_GPIO_BUZZER_BIAS UCB_IO_6 /* GPIO6=BUZZER BIAS */
+#define COLLIE_TC35143_GPIO_MBAT_ON UCB_IO_7 /* GPIO7=MBAT_ON */
+#define COLLIE_TC35143_GPIO_BBAT_ON UCB_IO_8 /* GPIO8=BBAT_ON */
+#define COLLIE_TC35143_GPIO_TMP_ON UCB_IO_9 /* GPIO9=TMP_ON */
+#define COLLIE_TC35143_GPIO_IN ( UCB_IO_0 | UCB_IO_2 | UCB_IO_5 )
+#define COLLIE_TC35143_GPIO_OUT ( UCB_IO_1 | UCB_IO_3 | UCB_IO_4 | UCB_IO_6 | \
+ UCB_IO_7 | UCB_IO_8 | UCB_IO_9 )
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/hardware/clock.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef ASMARM_CLOCK_H
+#define ASMARM_CLOCK_H
+
+struct device;
+
+/*
+ * The base API.
+ */
+
+
+/*
+ * struct clk - a machine class defined object / cookie.
+ */
+struct clk;
+
+/**
+ * clk_get - lookup and obtain a reference to a clock producer.
+ * @dev: device for clock "consumer"
+ * @id: device ID
+ *
+ * Returns a struct clk corresponding to the clock producer, or
+ * valid IS_ERR() condition containing errno.
+ */
+struct clk *clk_get(struct device *dev, const char *id);
+
+/**
+ * clk_enable - inform the system when the clock source should be running.
+ * @clk: clock source
+ *
+ * If the clock cannot be enabled/disabled, this should return success.
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_enable(struct clk *clk);
+
+/**
+ * clk_disable - inform the system when the clock source is no longer required.
+ * @clk: clock source
+ */
+void clk_disable(struct clk *clk);
+
+/**
+ * clk_use - increment the use count
+ * @clk: clock source
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_use(struct clk *clk);
+
+/**
+ * clk_unuse - decrement the use count
+ * @clk: clock source
+ */
+void clk_unuse(struct clk *clk);
+
+/**
+ * clk_get_rate - obtain the current clock rate for a clock source.
+ * @clk: clock source
+ *
+ * This is only valid once the clock source has been enabled.
+ */
+unsigned long clk_get_rate(struct clk *clk);
+
+/**
+ * clk_put - "free" the clock source
+ * @clk: clock source
+ */
+void clk_put(struct clk *clk);
+
+
+/*
+ * The remaining APIs are optional for machine class support.
+ */
+
+
+/**
+ * clk_round_rate - adjust a rate to the exact rate a clock can provide
+ * @clk: clock source
+ * @rate: desired clock rate in kHz
+ *
+ * Returns rounded clock rate, or negative errno.
+ */
+long clk_round_rate(struct clk *clk, unsigned long rate);
+
+/**
+ * clk_set_rate - set the clock rate for a clock source
+ * @clk: clock source
+ * @rate: desired clock rate in kHz
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_set_rate(struct clk *clk, unsigned long rate);
+
+/**
+ * clk_set_parent - set the parent clock source for this clock
+ * @clk: clock source
+ * @parent: parent clock source
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_set_parent(struct clk *clk, struct clk *parent);
+
+/**
+ * clk_get_parent - get the parent clock source for this clock
+ * @clk: clock source
+ *
+ * Returns struct clk corresponding to parent clock source, or
+ * valid IS_ERR() condition containing errno.
+ */
+struct clk *clk_get_parent(struct clk *clk);
+
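+/*
+ * Typical consumer usage (a sketch; "dev" and the clock id are
+ * hypothetical):
+ *
+ *	struct clk *clk = clk_get(dev, "UARTCLK");
+ *	if (!IS_ERR(clk)) {
+ *		unsigned long rate;
+ *
+ *		clk_use(clk);
+ *		clk_enable(clk);
+ *		rate = clk_get_rate(clk);
+ *		...
+ *		clk_disable(clk);
+ *		clk_unuse(clk);
+ *		clk_put(clk);
+ *	}
+ */
+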
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/hardware/locomo.h
+ *
+ * This file contains the definitions for the LoCoMo G/A Chip
+ *
+ * (C) Copyright 2004 John Lenz
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Based on sa1111.h
+ */
+#ifndef _ASM_ARCH_LOCOMO
+#define _ASM_ARCH_LOCOMO
+
+#define locomo_writel(val,addr) ({ *(volatile u16 *)(addr) = (val); })
+#define locomo_readl(addr) (*(volatile u16 *)(addr))
+
+/* LOCOMO version */
+#define LOCOMO_VER 0x00
+
+/* Pin status */
+#define LOCOMO_ST 0x04
+
+/* 32kHz clock control */
+#define LOCOMO_C32K 0x08
+
+/* Interrupt controller */
+#define LOCOMO_ICR 0x0C
+
+/* MCS decoder for boot selecting */
+#define LOCOMO_MCSX0 0x10
+#define LOCOMO_MCSX1 0x14
+#define LOCOMO_MCSX2 0x18
+#define LOCOMO_MCSX3 0x1c
+
+/* Touch panel controller */
+#define LOCOMO_ASD 0x20 /* AD start delay */
+#define LOCOMO_HSD 0x28 /* HSYS delay */
+#define LOCOMO_HSC 0x2c /* HSYS period */
+#define LOCOMO_TADC 0x30 /* tablet ADC clock */
+
+/* TFT signal */
+#define LOCOMO_TC 0x38 /* TFT control signal */
+#define LOCOMO_CPSD 0x3c /* CPS delay */
+
+/* Key controller */
+#define LOCOMO_KIB 0x40 /* KIB level */
+#define LOCOMO_KSC 0x44 /* KSTRB control */
+#define LOCOMO_KCMD 0x48 /* KSTRB command */
+#define LOCOMO_KIC 0x4c /* Key interrupt */
+
+/* Audio clock */
+#define LOCOMO_ACC 0x54
+
+/* SPI interface */
+#define LOCOMO_SPIMD 0x60 /* SPI mode setting */
+#define LOCOMO_SPICT 0x64 /* SPI mode control */
+#define LOCOMO_SPIST 0x68 /* SPI status */
+#define LOCOMO_SPIIS 0x70 /* SPI interrupt status */
+#define LOCOMO_SPIWE 0x74 /* SPI interrupt status write enable */
+#define LOCOMO_SPIIE 0x78 /* SPI interrupt enable */
+#define LOCOMO_SPIIR 0x7c /* SPI interrupt request */
+#define LOCOMO_SPITD 0x80 /* SPI transfer data write */
+#define LOCOMO_SPIRD 0x84 /* SPI receive data read */
+#define LOCOMO_SPITS 0x88 /* SPI transfer data shift */
+#define LOCOMO_SPIRS 0x8C /* SPI receive data shift */
+
+#define LOCOMO_SPI_TEND (1 << 3) /* Transfer end bit */
+#define LOCOMO_SPI_OVRN (1 << 2) /* Over Run bit */
+#define LOCOMO_SPI_RFW (1 << 1) /* write buffer bit */
+#define LOCOMO_SPI_RFR (1) /* read buffer bit */
+
+/* GPIO */
+#define LOCOMO_GPD 0x90 /* GPIO direction */
+#define LOCOMO_GPE 0x94 /* GPIO input enable */
+#define LOCOMO_GPL 0x98 /* GPIO level */
+#define LOCOMO_GPO 0x9c /* GPIO out data setting */
+#define LOCOMO_GRIE 0xa0 /* GPIO rise detection */
+#define LOCOMO_GFIE 0xa4 /* GPIO fall detection */
+#define LOCOMO_GIS 0xa8 /* GPIO edge detection status */
+#define LOCOMO_GWE 0xac /* GPIO status write enable */
+#define LOCOMO_GIE 0xb0 /* GPIO interrupt enable */
+#define LOCOMO_GIR 0xb4 /* GPIO interrupt request */
+
+#define LOCOMO_GPIO0 (1<<0)
+#define LOCOMO_GPIO1 (1<<1)
+#define LOCOMO_GPIO2 (1<<2)
+#define LOCOMO_GPIO3 (1<<3)
+#define LOCOMO_GPIO4 (1<<4)
+#define LOCOMO_GPIO5 (1<<5)
+#define LOCOMO_GPIO6 (1<<6)
+#define LOCOMO_GPIO7 (1<<7)
+#define LOCOMO_GPIO8 (1<<8)
+#define LOCOMO_GPIO9 (1<<9)
+#define LOCOMO_GPIO10 (1<<10)
+#define LOCOMO_GPIO11 (1<<11)
+#define LOCOMO_GPIO12 (1<<12)
+#define LOCOMO_GPIO13 (1<<13)
+#define LOCOMO_GPIO14 (1<<14)
+#define LOCOMO_GPIO15 (1<<15)
+
+/* Front light adjustment controller */
+#define LOCOMO_ALS 0xc8 /* Adjust light cycle */
+#define LOCOMO_ALD 0xcc /* Adjust light duty */
+
+/* PCM audio interface */
+#define LOCOMO_PAIF 0xd0
+
+/* Long time timer */
+#define LOCOMO_LTC 0xd8 /* LTC interrupt setting */
+#define LOCOMO_LTINT 0xdc /* LTC interrupt */
+
+/* DAC control signal for LCD (COMADJ ) */
+#define LOCOMO_DAC 0xe0
+
+/* DAC control */
+#define LOCOMO_DAC_SCLOEB 0x08 /* SCL pin output data */
+#define LOCOMO_DAC_TEST 0x04 /* Test bit */
+#define LOCOMO_DAC_SDA 0x02 /* SDA pin level (read-only) */
+#define LOCOMO_DAC_SDAOEB 0x01 /* SDA pin output data */
+
+/* LED controller */
+#define LOCOMO_LPT0 0xe8 /* LEDPWM0 timer */
+#define LOCOMO_LPT1 0xec /* LEDPWM1 timer */
+
+#define LOCOMO_LPT_TOFH 0x80 /* */
+#define LOCOMO_LPT_TOFL 0x08 /* */
+#define LOCOMO_LPT_TOH(TOH) ((TOH & 0x7) << 4) /* */
+#define LOCOMO_LPT_TOL(TOL) ((TOL & 0x7)) /* */
+
+/* Audio clock */
+#define LOCOMO_ACC_XON 0x80 /* */
+#define LOCOMO_ACC_XEN 0x40 /* */
+#define LOCOMO_ACC_XSEL0 0x00 /* */
+#define LOCOMO_ACC_XSEL1 0x20 /* */
+#define LOCOMO_ACC_MCLKEN 0x10 /* */
+#define LOCOMO_ACC_64FSEN 0x08 /* */
+#define LOCOMO_ACC_CLKSEL000 0x00 /* mclk 2 */
+#define LOCOMO_ACC_CLKSEL001 0x01 /* mclk 3 */
+#define LOCOMO_ACC_CLKSEL010 0x02 /* mclk 4 */
+#define LOCOMO_ACC_CLKSEL011 0x03 /* mclk 6 */
+#define LOCOMO_ACC_CLKSEL100 0x04 /* mclk 8 */
+#define LOCOMO_ACC_CLKSEL101 0x05 /* mclk 12 */
+
+/* PCM audio interface */
+#define LOCOMO_PAIF_SCINV 0x20 /* */
+#define LOCOMO_PAIF_SCEN 0x10 /* */
+#define LOCOMO_PAIF_LRCRST 0x08 /* */
+#define LOCOMO_PAIF_LRCEVE 0x04 /* */
+#define LOCOMO_PAIF_LRCINV 0x02 /* */
+#define LOCOMO_PAIF_LRCEN 0x01 /* */
+
+/* GPIO */
+#define LOCOMO_GPIO(Nb) (0x01 << (Nb)) /* LoCoMo GPIO [0...15] */
+#define LOCOMO_GPIO_RTS LOCOMO_GPIO(0) /* LoCoMo GPIO [0] */
+#define LOCOMO_GPIO_CTS LOCOMO_GPIO(1) /* LoCoMo GPIO [1] */
+#define LOCOMO_GPIO_DSR LOCOMO_GPIO(2) /* LoCoMo GPIO [2] */
+#define LOCOMO_GPIO_DTR LOCOMO_GPIO(3) /* LoCoMo GPIO [3] */
+#define LOCOMO_GPIO_LCD_VSHA_ON LOCOMO_GPIO(4) /* LoCoMo GPIO [4] */
+#define LOCOMO_GPIO_LCD_VSHD_ON LOCOMO_GPIO(5) /* LoCoMo GPIO [5] */
+#define LOCOMO_GPIO_LCD_VEE_ON LOCOMO_GPIO(6) /* LoCoMo GPIO [6] */
+#define LOCOMO_GPIO_LCD_MOD LOCOMO_GPIO(7) /* LoCoMo GPIO [7] */
+#define LOCOMO_GPIO_DAC_ON LOCOMO_GPIO(8) /* LoCoMo GPIO [8] */
+#define LOCOMO_GPIO_FL_VR LOCOMO_GPIO(9) /* LoCoMo GPIO [9] */
+#define LOCOMO_GPIO_DAC_SDATA LOCOMO_GPIO(10) /* LoCoMo GPIO [10] */
+#define LOCOMO_GPIO_DAC_SCK LOCOMO_GPIO(11) /* LoCoMo GPIO [11] */
+#define LOCOMO_GPIO_DAC_SLOAD LOCOMO_GPIO(12) /* LoCoMo GPIO [12] */
+
+extern struct bus_type locomo_bus_type;
+
+struct locomo_dev {
+ struct device dev;
+ unsigned int devid;
+ struct resource res;
+ void *mapbase;
+ unsigned int irq[1];
+ u64 dma_mask;
+};
+
+#define LOCOMO_DEV(_d) container_of((_d), struct locomo_dev, dev)
+
+#define locomo_get_drvdata(d) dev_get_drvdata(&(d)->dev)
+#define locomo_set_drvdata(d,p) dev_set_drvdata(&(d)->dev, p)
+
+struct locomo_driver {
+ struct device_driver drv;
+ unsigned int devid;
+ int (*probe)(struct locomo_dev *);
+ int (*remove)(struct locomo_dev *);
+ int (*suspend)(struct locomo_dev *, u32);
+ int (*resume)(struct locomo_dev *);
+};
+
+#define LOCOMO_DRV(_d) container_of((_d), struct locomo_driver, drv)
+
+#define LOCOMO_DRIVER_NAME(_ldev) ((_ldev)->dev.driver->name)
+
+void locomo_lcd_power(struct locomo_dev *, int, unsigned int);
+
+int locomo_driver_register(struct locomo_driver *);
+void locomo_driver_unregister(struct locomo_driver *);
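+
+/*
+ * A LoCoMo sub-device driver would be registered roughly like this
+ * (a sketch; the name, devid and callbacks are hypothetical):
+ *
+ *	static struct locomo_driver my_drv = {
+ *		.drv	= { .name = "my-locomo-dev" },
+ *		.devid	= MY_LOCOMO_DEVID,
+ *		.probe	= my_probe,
+ *		.remove	= my_remove,
+ *	};
+ *
+ *	locomo_driver_register(&my_drv);
+ */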
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/mach/time.h
+ *
+ * Copyright (C) 2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __ASM_ARM_MACH_TIME_H
+#define __ASM_ARM_MACH_TIME_H
+
+extern void (*init_arch_time)(void);
+
+extern int (*set_rtc)(void);
+extern unsigned long (*gettimeoffset)(void);
+
+void timer_tick(struct pt_regs *);
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/vfp.h
+ *
+ * VFP register definitions.
+ * First, the standard VFP set.
+ */
+
+#define FPSID cr0
+#define FPSCR cr1
+#define FPEXC cr8
+
+/* FPSID bits */
+#define FPSID_IMPLEMENTER_BIT (24)
+#define FPSID_IMPLEMENTER_MASK (0xff << FPSID_IMPLEMENTER_BIT)
+#define FPSID_SOFTWARE (1<<23)
+#define FPSID_FORMAT_BIT (21)
+#define FPSID_FORMAT_MASK (0x3 << FPSID_FORMAT_BIT)
+#define FPSID_NODOUBLE (1<<20)
+#define FPSID_ARCH_BIT (16)
+#define FPSID_ARCH_MASK (0xF << FPSID_ARCH_BIT)
+#define FPSID_PART_BIT (8)
+#define FPSID_PART_MASK (0xFF << FPSID_PART_BIT)
+#define FPSID_VARIANT_BIT (4)
+#define FPSID_VARIANT_MASK (0xF << FPSID_VARIANT_BIT)
+#define FPSID_REV_BIT (0)
+#define FPSID_REV_MASK (0xF << FPSID_REV_BIT)
+
+/* FPEXC bits */
+#define FPEXC_EXCEPTION (1<<31)
+#define FPEXC_ENABLE (1<<30)
+
+/* FPSCR bits */
+#define FPSCR_DEFAULT_NAN (1<<25)
+#define FPSCR_FLUSHTOZERO (1<<24)
+#define FPSCR_ROUND_NEAREST (0<<22)
+#define FPSCR_ROUND_PLUSINF (1<<22)
+#define FPSCR_ROUND_MINUSINF (2<<22)
+#define FPSCR_ROUND_TOZERO (3<<22)
+#define FPSCR_RMODE_BIT (22)
+#define FPSCR_RMODE_MASK (3 << FPSCR_RMODE_BIT)
+#define FPSCR_STRIDE_BIT (20)
+#define FPSCR_STRIDE_MASK (3 << FPSCR_STRIDE_BIT)
+#define FPSCR_LENGTH_BIT (16)
+#define FPSCR_LENGTH_MASK (7 << FPSCR_LENGTH_BIT)
+#define FPSCR_IOE (1<<8)
+#define FPSCR_DZE (1<<9)
+#define FPSCR_OFE (1<<10)
+#define FPSCR_UFE (1<<11)
+#define FPSCR_IXE (1<<12)
+#define FPSCR_IDE (1<<15)
+#define FPSCR_IOC (1<<0)
+#define FPSCR_DZC (1<<1)
+#define FPSCR_OFC (1<<2)
+#define FPSCR_UFC (1<<3)
+#define FPSCR_IXC (1<<4)
+#define FPSCR_IDC (1<<7)
+
+/*
+ * VFP9-S specific.
+ */
+#define FPINST cr9
+#define FPINST2 cr10
+
+/* FPEXC bits */
+#define FPEXC_FPV2 (1<<28)
+#define FPEXC_LENGTH_BIT (8)
+#define FPEXC_LENGTH_MASK (7 << FPEXC_LENGTH_BIT)
+#define FPEXC_INV (1 << 7)
+#define FPEXC_UFC (1 << 3)
+#define FPEXC_OFC (1 << 2)
+#define FPEXC_IOC (1 << 0)
+
+/* Bit patterns for decoding the packaged operation descriptors */
+#define VFPOPDESC_LENGTH_BIT (9)
+#define VFPOPDESC_LENGTH_MASK (0x07 << VFPOPDESC_LENGTH_BIT)
+#define VFPOPDESC_UNUSED_BIT (24)
+#define VFPOPDESC_UNUSED_MASK (0xFF << VFPOPDESC_UNUSED_BIT)
+#define VFPOPDESC_OPDESC_MASK (~(VFPOPDESC_LENGTH_MASK | VFPOPDESC_UNUSED_MASK))
--- /dev/null
+/*
+ * linux/include/asm-arm/vfpmacros.h
+ *
+ * Assembler-only file containing VFP macros and register definitions.
+ */
+#include "vfp.h"
+
+@ Macros to allow building with old toolkits (with no VFP support)
+ .macro VFPFMRX, rd, sysreg, cond
+ MRC\cond p10, 7, \rd, \sysreg, cr0, 0 @ FMRX \rd, \sysreg
+ .endm
+
+ .macro VFPFMXR, sysreg, rd, cond
+ MCR\cond p10, 7, \rd, \sysreg, cr0, 0 @ FMXR \sysreg, \rd
+ .endm
+
+ @ read all the working registers back into the VFP
+ .macro VFPFLDMIA, base
+ LDC p11, cr0, [\base],#33*4 @ FLDMIAX \base!, {d0-d15}
+ .endm
+
+ @ write all the working registers out of the VFP
+ .macro VFPFSTMIA, base
+ STC p11, cr0, [\base],#33*4 @ FSTMIAX \base!, {d0-d15}
+ .endm
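+
+@ Example (a sketch): with the macros above,
+@	VFPFMRX	r1, FPEXC	@ assembles to MRC p10, 7, r1, cr8, cr0, 0
+@	VFPFMXR	FPEXC, r1	@ assembles to MCR p10, 7, r1, cr8, cr0, 0
+@ so the file builds even on assemblers without VFP mnemonics.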
--- /dev/null
+#ifndef _I386_PGTABLE_2LEVEL_DEFS_H
+#define _I386_PGTABLE_2LEVEL_DEFS_H
+
+/*
+ * traditional i386 two-level paging structure:
+ */
+
+#define PGDIR_SHIFT 22
+#define PTRS_PER_PGD 1024
+
+/*
+ * the i386 is two-level, so we don't really have any
+ * PMD directory physically.
+ */
+#define PMD_SHIFT 22
+#define PTRS_PER_PMD 1
+
+#define PTRS_PER_PTE 1024
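+
+/*
+ * So a 32-bit virtual address decomposes as (illustrative):
+ *
+ *	bits 31..22  PGD index   (10 bits -> 1024 PGD entries)
+ *	bits 21..12  PTE index   (10 bits -> 1024 PTE entries)
+ *	bits 11..0   page offset (4KB pages)
+ */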
+
+#endif /* _I386_PGTABLE_2LEVEL_DEFS_H */
--- /dev/null
+#ifndef _I386_PGTABLE_3LEVEL_DEFS_H
+#define _I386_PGTABLE_3LEVEL_DEFS_H
+
+/*
+ * PGDIR_SHIFT determines what a top-level page table entry can map
+ */
+#define PGDIR_SHIFT 30
+#define PTRS_PER_PGD 4
+
+/*
+ * PMD_SHIFT determines the size of the area a middle-level
+ * page table can map
+ */
+#define PMD_SHIFT 21
+#define PTRS_PER_PMD 512
+
+/*
+ * entries per page directory level
+ */
+#define PTRS_PER_PTE 512
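+
+/*
+ * With three levels this gives (illustrative):
+ *
+ *	bits 31..30  PGD index   (2 bits  -> 4 PGD entries)
+ *	bits 29..21  PMD index   (9 bits  -> 512 PMD entries)
+ *	bits 20..12  PTE index   (9 bits  -> 512 PTE entries)
+ *	bits 11..0   page offset (4KB pages)
+ */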
+
+#endif /* _I386_PGTABLE_3LEVEL_DEFS_H */
--- /dev/null
+#ifndef __IA64_SETUP_H
+#define __IA64_SETUP_H
+
+#define COMMAND_LINE_SIZE 512
+
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright - Galileo technology.
+ * Copyright (C) 2004 by Ralf Baechle
+ */
+#ifndef __ASM_MIPS_MV64240_H
+#define __ASM_MIPS_MV64240_H
+
+#include <asm/addrspace.h>
+#include <asm/marvell.h>
+
+/*
+ * CPU Control Registers
+ */
+
+#define CPU_CONFIGURATION 0x000
+#define CPU_MODE 0x120
+#define CPU_READ_RESPONSE_CROSSBAR_LOW 0x170
+#define CPU_READ_RESPONSE_CROSSBAR_HIGH 0x178
+
+/*
+ * Processor Address Space
+ */
+
+/* SDRAM BARs */
+#define SCS_0_LOW_DECODE_ADDRESS 0x008
+#define SCS_0_HIGH_DECODE_ADDRESS 0x010
+#define SCS_1_LOW_DECODE_ADDRESS 0x208
+#define SCS_1_HIGH_DECODE_ADDRESS 0x210
+#define SCS_2_LOW_DECODE_ADDRESS 0x018
+#define SCS_2_HIGH_DECODE_ADDRESS 0x020
+#define SCS_3_LOW_DECODE_ADDRESS 0x218
+#define SCS_3_HIGH_DECODE_ADDRESS 0x220
+/* Device BARs */
+#define CS_0_LOW_DECODE_ADDRESS 0x028
+#define CS_0_HIGH_DECODE_ADDRESS 0x030
+#define CS_1_LOW_DECODE_ADDRESS 0x228
+#define CS_1_HIGH_DECODE_ADDRESS 0x230
+#define CS_2_LOW_DECODE_ADDRESS 0x248
+#define CS_2_HIGH_DECODE_ADDRESS 0x250
+#define CS_3_LOW_DECODE_ADDRESS 0x038
+#define CS_3_HIGH_DECODE_ADDRESS 0x040
+#define BOOTCS_LOW_DECODE_ADDRESS 0x238
+#define BOOTCS_HIGH_DECODE_ADDRESS 0x240
+
+#define PCI_0I_O_LOW_DECODE_ADDRESS 0x048
+#define PCI_0I_O_HIGH_DECODE_ADDRESS 0x050
+#define PCI_0MEMORY0_LOW_DECODE_ADDRESS 0x058
+#define PCI_0MEMORY0_HIGH_DECODE_ADDRESS 0x060
+#define PCI_0MEMORY1_LOW_DECODE_ADDRESS 0x080
+#define PCI_0MEMORY1_HIGH_DECODE_ADDRESS 0x088
+#define PCI_0MEMORY2_LOW_DECODE_ADDRESS 0x258
+#define PCI_0MEMORY2_HIGH_DECODE_ADDRESS 0x260
+#define PCI_0MEMORY3_LOW_DECODE_ADDRESS 0x280
+#define PCI_0MEMORY3_HIGH_DECODE_ADDRESS 0x288
+
+#define PCI_1I_O_LOW_DECODE_ADDRESS 0x090
+#define PCI_1I_O_HIGH_DECODE_ADDRESS 0x098
+#define PCI_1MEMORY0_LOW_DECODE_ADDRESS 0x0a0
+#define PCI_1MEMORY0_HIGH_DECODE_ADDRESS 0x0a8
+#define PCI_1MEMORY1_LOW_DECODE_ADDRESS 0x0b0
+#define PCI_1MEMORY1_HIGH_DECODE_ADDRESS 0x0b8
+#define PCI_1MEMORY2_LOW_DECODE_ADDRESS 0x2a0
+#define PCI_1MEMORY2_HIGH_DECODE_ADDRESS 0x2a8
+#define PCI_1MEMORY3_LOW_DECODE_ADDRESS 0x2b0
+#define PCI_1MEMORY3_HIGH_DECODE_ADDRESS 0x2b8
+
+#define INTERNAL_SPACE_DECODE 0x068
+
+#define CPU_0_LOW_DECODE_ADDRESS 0x290
+#define CPU_0_HIGH_DECODE_ADDRESS 0x298
+#define CPU_1_LOW_DECODE_ADDRESS 0x2c0
+#define CPU_1_HIGH_DECODE_ADDRESS 0x2c8
+
+#define PCI_0I_O_ADDRESS_REMAP 0x0f0
+#define PCI_0MEMORY0_ADDRESS_REMAP 0x0f8
+#define PCI_0MEMORY0_HIGH_ADDRESS_REMAP 0x320
+#define PCI_0MEMORY1_ADDRESS_REMAP 0x100
+#define PCI_0MEMORY1_HIGH_ADDRESS_REMAP 0x328
+#define PCI_0MEMORY2_ADDRESS_REMAP 0x2f8
+#define PCI_0MEMORY2_HIGH_ADDRESS_REMAP 0x330
+#define PCI_0MEMORY3_ADDRESS_REMAP 0x300
+#define PCI_0MEMORY3_HIGH_ADDRESS_REMAP 0x338
+
+#define PCI_1I_O_ADDRESS_REMAP 0x108
+#define PCI_1MEMORY0_ADDRESS_REMAP 0x110
+#define PCI_1MEMORY0_HIGH_ADDRESS_REMAP 0x340
+#define PCI_1MEMORY1_ADDRESS_REMAP 0x118
+#define PCI_1MEMORY1_HIGH_ADDRESS_REMAP 0x348
+#define PCI_1MEMORY2_ADDRESS_REMAP 0x310
+#define PCI_1MEMORY2_HIGH_ADDRESS_REMAP 0x350
+#define PCI_1MEMORY3_ADDRESS_REMAP 0x318
+#define PCI_1MEMORY3_HIGH_ADDRESS_REMAP 0x358
+
+/*
+ * CPU Sync Barrier
+ */
+
+#define PCI_0SYNC_BARIER_VIRTUAL_REGISTER 0x0c0
+#define PCI_1SYNC_BARIER_VIRTUAL_REGISTER 0x0c8
+
+
+/*
+ * CPU Access Protect
+ */
+
+#define CPU_LOW_PROTECT_ADDRESS_0 0x180
+#define CPU_HIGH_PROTECT_ADDRESS_0 0x188
+#define CPU_LOW_PROTECT_ADDRESS_1 0x190
+#define CPU_HIGH_PROTECT_ADDRESS_1 0x198
+#define CPU_LOW_PROTECT_ADDRESS_2 0x1a0
+#define CPU_HIGH_PROTECT_ADDRESS_2 0x1a8
+#define CPU_LOW_PROTECT_ADDRESS_3 0x1b0
+#define CPU_HIGH_PROTECT_ADDRESS_3 0x1b8
+#define CPU_LOW_PROTECT_ADDRESS_4 0x1c0
+#define CPU_HIGH_PROTECT_ADDRESS_4 0x1c8
+#define CPU_LOW_PROTECT_ADDRESS_5 0x1d0
+#define CPU_HIGH_PROTECT_ADDRESS_5 0x1d8
+#define CPU_LOW_PROTECT_ADDRESS_6 0x1e0
+#define CPU_HIGH_PROTECT_ADDRESS_6 0x1e8
+#define CPU_LOW_PROTECT_ADDRESS_7 0x1f0
+#define CPU_HIGH_PROTECT_ADDRESS_7 0x1f8
+
+
+/*
+ * Snoop Control
+ */
+
+#define SNOOP_BASE_ADDRESS_0 0x380
+#define SNOOP_TOP_ADDRESS_0 0x388
+#define SNOOP_BASE_ADDRESS_1 0x390
+#define SNOOP_TOP_ADDRESS_1 0x398
+#define SNOOP_BASE_ADDRESS_2 0x3a0
+#define SNOOP_TOP_ADDRESS_2 0x3a8
+#define SNOOP_BASE_ADDRESS_3 0x3b0
+#define SNOOP_TOP_ADDRESS_3 0x3b8
+
+/*
+ * CPU Error Report
+ */
+
+#define CPU_ERROR_ADDRESS_LOW 0x070
+#define CPU_ERROR_ADDRESS_HIGH 0x078
+#define CPU_ERROR_DATA_LOW 0x128
+#define CPU_ERROR_DATA_HIGH 0x130
+#define CPU_ERROR_PARITY 0x138
+#define CPU_ERROR_CAUSE 0x140
+#define CPU_ERROR_MASK 0x148
+
+/*
+ * Pslave Debug
+ */
+
+#define X_0_ADDRESS 0x360
+#define X_0_COMMAND_ID 0x368
+#define X_1_ADDRESS 0x370
+#define X_1_COMMAND_ID 0x378
+#define WRITE_DATA_LOW 0x3c0
+#define WRITE_DATA_HIGH 0x3c8
+#define WRITE_BYTE_ENABLE 0x3e0
+#define READ_DATA_LOW 0x3d0
+#define READ_DATA_HIGH 0x3d8
+#define READ_ID 0x3e8
+
+
+/*
+ * SDRAM and Device Address Space
+ */
+
+
+/*
+ * SDRAM Configuration
+ */
+
+#define SDRAM_CONFIGURATION 0x448
+#define SDRAM_OPERATION_MODE 0x474
+#define SDRAM_ADDRESS_DECODE 0x47C
+#define SDRAM_TIMING_PARAMETERS 0x4b4
+#define SDRAM_UMA_CONTROL 0x4a4
+#define SDRAM_CROSS_BAR_CONTROL_LOW 0x4a8
+#define SDRAM_CROSS_BAR_CONTROL_HIGH 0x4ac
+#define SDRAM_CROSS_BAR_TIMEOUT 0x4b0
+
+
+/*
+ * SDRAM Parameters
+ */
+
+#define SDRAM_BANK0PARAMETERS 0x44C
+#define SDRAM_BANK1PARAMETERS 0x450
+#define SDRAM_BANK2PARAMETERS 0x454
+#define SDRAM_BANK3PARAMETERS 0x458
+
+
+/*
+ * SDRAM Error Report
+ */
+
+#define SDRAM_ERROR_DATA_LOW 0x484
+#define SDRAM_ERROR_DATA_HIGH 0x480
+#define SDRAM_AND_DEVICE_ERROR_ADDRESS 0x490
+#define SDRAM_RECEIVED_ECC 0x488
+#define SDRAM_CALCULATED_ECC 0x48c
+#define SDRAM_ECC_CONTROL 0x494
+#define SDRAM_ECC_ERROR_COUNTER 0x498
+
+
+/*
+ * SDunit Debug (for internal use)
+ */
+
+#define X0_ADDRESS 0x500
+#define X0_COMMAND_AND_ID 0x504
+#define X0_WRITE_DATA_LOW 0x508
+#define X0_WRITE_DATA_HIGH 0x50c
+#define X0_WRITE_BYTE_ENABLE 0x518
+#define X0_READ_DATA_LOW 0x510
+#define X0_READ_DATA_HIGH 0x514
+#define X0_READ_ID 0x51c
+#define X1_ADDRESS 0x520
+#define X1_COMMAND_AND_ID 0x524
+#define X1_WRITE_DATA_LOW 0x528
+#define X1_WRITE_DATA_HIGH 0x52c
+#define X1_WRITE_BYTE_ENABLE 0x538
+#define X1_READ_DATA_LOW 0x530
+#define X1_READ_DATA_HIGH 0x534
+#define X1_READ_ID 0x53c
+#define X0_SNOOP_ADDRESS 0x540
+#define X0_SNOOP_COMMAND 0x544
+#define X1_SNOOP_ADDRESS 0x548
+#define X1_SNOOP_COMMAND 0x54c
+
+
+/*
+ * Device Parameters
+ */
+
+#define DEVICE_BANK0PARAMETERS 0x45c
+#define DEVICE_BANK1PARAMETERS 0x460
+#define DEVICE_BANK2PARAMETERS 0x464
+#define DEVICE_BANK3PARAMETERS 0x468
+#define DEVICE_BOOT_BANK_PARAMETERS 0x46c
+#define DEVICE_CONTROL 0x4c0
+#define DEVICE_CROSS_BAR_CONTROL_LOW 0x4c8
+#define DEVICE_CROSS_BAR_CONTROL_HIGH 0x4cc
+#define DEVICE_CROSS_BAR_TIMEOUT 0x4c4
+
+
+/*
+ * Device Interrupt
+ */
+
+#define DEVICE_INTERRUPT_CAUSE 0x4d0
+#define DEVICE_INTERRUPT_MASK 0x4d4
+#define DEVICE_ERROR_ADDRESS 0x4d8
+
+/*
+ * DMA Record
+ */
+
+#define CHANNEL0_DMA_BYTE_COUNT 0x800
+#define CHANNEL1_DMA_BYTE_COUNT 0x804
+#define CHANNEL2_DMA_BYTE_COUNT 0x808
+#define CHANNEL3_DMA_BYTE_COUNT 0x80C
+#define CHANNEL4_DMA_BYTE_COUNT 0x900
+#define CHANNEL5_DMA_BYTE_COUNT 0x904
+#define CHANNEL6_DMA_BYTE_COUNT 0x908
+#define CHANNEL7_DMA_BYTE_COUNT 0x90C
+#define CHANNEL0_DMA_SOURCE_ADDRESS 0x810
+#define CHANNEL1_DMA_SOURCE_ADDRESS 0x814
+#define CHANNEL2_DMA_SOURCE_ADDRESS 0x818
+#define CHANNEL3_DMA_SOURCE_ADDRESS 0x81C
+#define CHANNEL4_DMA_SOURCE_ADDRESS 0x910
+#define CHANNEL5_DMA_SOURCE_ADDRESS 0x914
+#define CHANNEL6_DMA_SOURCE_ADDRESS 0x918
+#define CHANNEL7_DMA_SOURCE_ADDRESS 0x91C
+#define CHANNEL0_DMA_DESTINATION_ADDRESS 0x820
+#define CHANNEL1_DMA_DESTINATION_ADDRESS 0x824
+#define CHANNEL2_DMA_DESTINATION_ADDRESS 0x828
+#define CHANNEL3_DMA_DESTINATION_ADDRESS 0x82C
+#define CHANNEL4_DMA_DESTINATION_ADDRESS 0x920
+#define CHANNEL5_DMA_DESTINATION_ADDRESS 0x924
+#define CHANNEL6_DMA_DESTINATION_ADDRESS 0x928
+#define CHANNEL7_DMA_DESTINATION_ADDRESS 0x92C
+#define CHANNEL0NEXT_RECORD_POINTER 0x830
+#define CHANNEL1NEXT_RECORD_POINTER 0x834
+#define CHANNEL2NEXT_RECORD_POINTER 0x838
+#define CHANNEL3NEXT_RECORD_POINTER 0x83C
+#define CHANNEL4NEXT_RECORD_POINTER 0x930
+#define CHANNEL5NEXT_RECORD_POINTER 0x934
+#define CHANNEL6NEXT_RECORD_POINTER 0x938
+#define CHANNEL7NEXT_RECORD_POINTER 0x93C
+#define CHANNEL0CURRENT_DESCRIPTOR_POINTER 0x870
+#define CHANNEL1CURRENT_DESCRIPTOR_POINTER 0x874
+#define CHANNEL2CURRENT_DESCRIPTOR_POINTER 0x878
+#define CHANNEL3CURRENT_DESCRIPTOR_POINTER 0x87C
+#define CHANNEL4CURRENT_DESCRIPTOR_POINTER 0x970
+#define CHANNEL5CURRENT_DESCRIPTOR_POINTER 0x974
+#define CHANNEL6CURRENT_DESCRIPTOR_POINTER 0x978
+#define CHANNEL7CURRENT_DESCRIPTOR_POINTER 0x97C
+#define CHANNEL0_DMA_SOURCE_HIGH_PCI_ADDRESS 0x890
+#define CHANNEL1_DMA_SOURCE_HIGH_PCI_ADDRESS 0x894
+#define CHANNEL2_DMA_SOURCE_HIGH_PCI_ADDRESS 0x898
+#define CHANNEL3_DMA_SOURCE_HIGH_PCI_ADDRESS 0x89c
+#define CHANNEL4_DMA_SOURCE_HIGH_PCI_ADDRESS 0x990
+#define CHANNEL5_DMA_SOURCE_HIGH_PCI_ADDRESS 0x994
+#define CHANNEL6_DMA_SOURCE_HIGH_PCI_ADDRESS 0x998
+#define CHANNEL7_DMA_SOURCE_HIGH_PCI_ADDRESS 0x99c
+#define CHANNEL0_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x8a0
+#define CHANNEL1_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x8a4
+#define CHANNEL2_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x8a8
+#define CHANNEL3_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x8ac
+#define CHANNEL4_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x9a0
+#define CHANNEL5_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x9a4
+#define CHANNEL6_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x9a8
+#define CHANNEL7_DMA_DESTINATION_HIGH_PCI_ADDRESS 0x9ac
+#define CHANNEL0_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x8b0
+#define CHANNEL1_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x8b4
+#define CHANNEL2_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x8b8
+#define CHANNEL3_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x8bc
+#define CHANNEL4_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x9b0
+#define CHANNEL5_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x9b4
+#define CHANNEL6_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x9b8
+#define CHANNEL7_DMA_NEXT_RECORD_POINTER_HIGH_PCI_ADDRESS 0x9bc
+
+/*
+ * DMA Channel Control
+ */
+
+#define CHANNEL0CONTROL 0x840
+#define CHANNEL0CONTROL_HIGH 0x880
+
+#define CHANNEL1CONTROL 0x844
+#define CHANNEL1CONTROL_HIGH 0x884
+
+#define CHANNEL2CONTROL 0x848
+#define CHANNEL2CONTROL_HIGH 0x888
+
+#define CHANNEL3CONTROL 0x84C
+#define CHANNEL3CONTROL_HIGH 0x88C
+
+#define CHANNEL4CONTROL 0x940
+#define CHANNEL4CONTROL_HIGH 0x980
+
+#define CHANNEL5CONTROL 0x944
+#define CHANNEL5CONTROL_HIGH 0x984
+
+#define CHANNEL6CONTROL 0x948
+#define CHANNEL6CONTROL_HIGH 0x988
+
+#define CHANNEL7CONTROL 0x94C
+#define CHANNEL7CONTROL_HIGH 0x98C
+
+
+/*
+ * DMA Arbiter
+ */
+
+#define ARBITER_CONTROL_0_3 0x860
+#define ARBITER_CONTROL_4_7 0x960
+
+
+/*
+ * DMA Interrupt
+ */
+
+#define CHANELS0_3_INTERRUPT_CAUSE 0x8c0
+#define CHANELS0_3_INTERRUPT_MASK 0x8c4
+#define CHANELS0_3_ERROR_ADDRESS 0x8c8
+#define CHANELS0_3_ERROR_SELECT 0x8cc
+#define CHANELS4_7_INTERRUPT_CAUSE 0x9c0
+#define CHANELS4_7_INTERRUPT_MASK 0x9c4
+#define CHANELS4_7_ERROR_ADDRESS 0x9c8
+#define CHANELS4_7_ERROR_SELECT 0x9cc
+
+
+/*
+ * DMA Debug (for internal use)
+ */
+
+#define DMA_X0_ADDRESS 0x8e0
+#define DMA_X0_COMMAND_AND_ID 0x8e4
+#define DMA_X0_WRITE_DATA_LOW 0x8e8
+#define DMA_X0_WRITE_DATA_HIGH 0x8ec
+#define DMA_X0_WRITE_BYTE_ENABLE 0x8f8
+#define DMA_X0_READ_DATA_LOW 0x8f0
+#define DMA_X0_READ_DATA_HIGH 0x8f4
+#define DMA_X0_READ_ID 0x8fc
+#define DMA_X1_ADDRESS 0x9e0
+#define DMA_X1_COMMAND_AND_ID 0x9e4
+#define DMA_X1_WRITE_DATA_LOW 0x9e8
+#define DMA_X1_WRITE_DATA_HIGH 0x9ec
+#define DMA_X1_WRITE_BYTE_ENABLE 0x9f8
+#define DMA_X1_READ_DATA_LOW 0x9f0
+#define DMA_X1_READ_DATA_HIGH 0x9f4
+#define DMA_X1_READ_ID 0x9fc
+
+/*
+ * Timer_Counter
+ */
+
+#define TIMER_COUNTER0 0x850
+#define TIMER_COUNTER1 0x854
+#define TIMER_COUNTER2 0x858
+#define TIMER_COUNTER3 0x85C
+#define TIMER_COUNTER_0_3_CONTROL 0x864
+#define TIMER_COUNTER_0_3_INTERRUPT_CAUSE 0x868
+#define TIMER_COUNTER_0_3_INTERRUPT_MASK 0x86c
+#define TIMER_COUNTER4 0x950
+#define TIMER_COUNTER5 0x954
+#define TIMER_COUNTER6 0x958
+#define TIMER_COUNTER7 0x95C
+#define TIMER_COUNTER_4_7_CONTROL 0x964
+#define TIMER_COUNTER_4_7_INTERRUPT_CAUSE 0x968
+#define TIMER_COUNTER_4_7_INTERRUPT_MASK 0x96c
+
+/*
+ * PCI Slave Address Decoding
+ */
+
+#define PCI_0SCS_0_BANK_SIZE 0xc08
+#define PCI_1SCS_0_BANK_SIZE 0xc88
+#define PCI_0SCS_1_BANK_SIZE 0xd08
+#define PCI_1SCS_1_BANK_SIZE 0xd88
+#define PCI_0SCS_2_BANK_SIZE 0xc0c
+#define PCI_1SCS_2_BANK_SIZE 0xc8c
+#define PCI_0SCS_3_BANK_SIZE 0xd0c
+#define PCI_1SCS_3_BANK_SIZE 0xd8c
+#define PCI_0CS_0_BANK_SIZE 0xc10
+#define PCI_1CS_0_BANK_SIZE 0xc90
+#define PCI_0CS_1_BANK_SIZE 0xd10
+#define PCI_1CS_1_BANK_SIZE 0xd90
+#define PCI_0CS_2_BANK_SIZE 0xd18
+#define PCI_1CS_2_BANK_SIZE 0xd98
+#define PCI_0CS_3_BANK_SIZE 0xc14
+#define PCI_1CS_3_BANK_SIZE 0xc94
+#define PCI_0CS_BOOT_BANK_SIZE 0xd14
+#define PCI_1CS_BOOT_BANK_SIZE 0xd94
+#define PCI_0P2P_MEM0_BAR_SIZE 0xd1c
+#define PCI_1P2P_MEM0_BAR_SIZE 0xd9c
+#define PCI_0P2P_MEM1_BAR_SIZE 0xd20
+#define PCI_1P2P_MEM1_BAR_SIZE 0xda0
+#define PCI_0P2P_I_O_BAR_SIZE 0xd24
+#define PCI_1P2P_I_O_BAR_SIZE 0xda4
+#define PCI_0CPU_BAR_SIZE 0xd28
+#define PCI_1CPU_BAR_SIZE 0xda8
+#define PCI_0DAC_SCS_0_BANK_SIZE 0xe00
+#define PCI_1DAC_SCS_0_BANK_SIZE 0xe80
+#define PCI_0DAC_SCS_1_BANK_SIZE 0xe04
+#define PCI_1DAC_SCS_1_BANK_SIZE 0xe84
+#define PCI_0DAC_SCS_2_BANK_SIZE 0xe08
+#define PCI_1DAC_SCS_2_BANK_SIZE 0xe88
+#define PCI_0DAC_SCS_3_BANK_SIZE 0xe0c
+#define PCI_1DAC_SCS_3_BANK_SIZE 0xe8c
+#define PCI_0DAC_CS_0_BANK_SIZE 0xe10
+#define PCI_1DAC_CS_0_BANK_SIZE 0xe90
+#define PCI_0DAC_CS_1_BANK_SIZE 0xe14
+#define PCI_1DAC_CS_1_BANK_SIZE 0xe94
+#define PCI_0DAC_CS_2_BANK_SIZE 0xe18
+#define PCI_1DAC_CS_2_BANK_SIZE 0xe98
+#define PCI_0DAC_CS_3_BANK_SIZE 0xe1c
+#define PCI_1DAC_CS_3_BANK_SIZE 0xe9c
+#define PCI_0DAC_BOOTCS_BANK_SIZE 0xe20
+#define PCI_1DAC_BOOTCS_BANK_SIZE 0xea0
+#define PCI_0DAC_P2P_MEM0_BAR_SIZE 0xe24
+#define PCI_1DAC_P2P_MEM0_BAR_SIZE 0xea4
+#define PCI_0DAC_P2P_MEM1_BAR_SIZE 0xe28
+#define PCI_1DAC_P2P_MEM1_BAR_SIZE 0xea8
+#define PCI_0DAC_CPU_BAR_SIZE 0xe2c
+#define PCI_1DAC_CPU_BAR_SIZE 0xeac
+#define PCI_0EXPANSION_ROM_BAR_SIZE 0xd2c
+#define PCI_1EXPANSION_ROM_BAR_SIZE 0xdac
+#define PCI_0BASE_ADDRESS_REGISTERS_ENABLE 0xc3c
+#define PCI_1BASE_ADDRESS_REGISTERS_ENABLE 0xcbc
+#define PCI_0SCS_0_BASE_ADDRESS_REMAP 0xc48
+#define PCI_1SCS_0_BASE_ADDRESS_REMAP 0xcc8
+#define PCI_0SCS_1_BASE_ADDRESS_REMAP 0xd48
+#define PCI_1SCS_1_BASE_ADDRESS_REMAP 0xdc8
+#define PCI_0SCS_2_BASE_ADDRESS_REMAP 0xc4c
+#define PCI_1SCS_2_BASE_ADDRESS_REMAP 0xccc
+#define PCI_0SCS_3_BASE_ADDRESS_REMAP 0xd4c
+#define PCI_1SCS_3_BASE_ADDRESS_REMAP 0xdcc
+#define PCI_0CS_0_BASE_ADDRESS_REMAP 0xc50
+#define PCI_1CS_0_BASE_ADDRESS_REMAP 0xcd0
+#define PCI_0CS_1_BASE_ADDRESS_REMAP 0xd50
+#define PCI_1CS_1_BASE_ADDRESS_REMAP 0xdd0
+#define PCI_0CS_2_BASE_ADDRESS_REMAP 0xd58
+#define PCI_1CS_2_BASE_ADDRESS_REMAP 0xdd8
+#define PCI_0CS_3_BASE_ADDRESS_REMAP 0xc54
+#define PCI_1CS_3_BASE_ADDRESS_REMAP 0xcd4
+#define PCI_0CS_BOOTCS_BASE_ADDRESS_REMAP 0xd54
+#define PCI_1CS_BOOTCS_BASE_ADDRESS_REMAP 0xdd4
+#define PCI_0P2P_MEM0_BASE_ADDRESS_REMAP_LOW 0xd5c
+#define PCI_1P2P_MEM0_BASE_ADDRESS_REMAP_LOW 0xddc
+#define PCI_0P2P_MEM0_BASE_ADDRESS_REMAP_HIGH 0xd60
+#define PCI_1P2P_MEM0_BASE_ADDRESS_REMAP_HIGH 0xde0
+#define PCI_0P2P_MEM1_BASE_ADDRESS_REMAP_LOW 0xd64
+#define PCI_1P2P_MEM1_BASE_ADDRESS_REMAP_LOW 0xde4
+#define PCI_0P2P_MEM1_BASE_ADDRESS_REMAP_HIGH 0xd68
+#define PCI_1P2P_MEM1_BASE_ADDRESS_REMAP_HIGH 0xde8
+#define PCI_0P2P_I_O_BASE_ADDRESS_REMAP 0xd6c
+#define PCI_1P2P_I_O_BASE_ADDRESS_REMAP 0xdec
+#define PCI_0CPU_BASE_ADDRESS_REMAP 0xd70
+#define PCI_1CPU_BASE_ADDRESS_REMAP 0xdf0
+#define PCI_0DAC_SCS_0_BASE_ADDRESS_REMAP 0xf00
+#define PCI_1DAC_SCS_0_BASE_ADDRESS_REMAP 0xf80
+#define PCI_0DAC_SCS_1_BASE_ADDRESS_REMAP 0xf04
+#define PCI_1DAC_SCS_1_BASE_ADDRESS_REMAP 0xf84
+#define PCI_0DAC_SCS_2_BASE_ADDRESS_REMAP 0xf08
+#define PCI_1DAC_SCS_2_BASE_ADDRESS_REMAP 0xf88
+#define PCI_0DAC_SCS_3_BASE_ADDRESS_REMAP 0xf0c
+#define PCI_1DAC_SCS_3_BASE_ADDRESS_REMAP 0xf8c
+#define PCI_0DAC_CS_0_BASE_ADDRESS_REMAP 0xf10
+#define PCI_1DAC_CS_0_BASE_ADDRESS_REMAP 0xf90
+#define PCI_0DAC_CS_1_BASE_ADDRESS_REMAP 0xf14
+#define PCI_1DAC_CS_1_BASE_ADDRESS_REMAP 0xf94
+#define PCI_0DAC_CS_2_BASE_ADDRESS_REMAP 0xf18
+#define PCI_1DAC_CS_2_BASE_ADDRESS_REMAP 0xf98
+#define PCI_0DAC_CS_3_BASE_ADDRESS_REMAP 0xf1c
+#define PCI_1DAC_CS_3_BASE_ADDRESS_REMAP 0xf9c
+#define PCI_0DAC_BOOTCS_BASE_ADDRESS_REMAP 0xf20
+#define PCI_1DAC_BOOTCS_BASE_ADDRESS_REMAP 0xfa0
+#define PCI_0DAC_P2P_MEM0_BASE_ADDRESS_REMAP_LOW 0xf24
+#define PCI_1DAC_P2P_MEM0_BASE_ADDRESS_REMAP_LOW 0xfa4
+#define PCI_0DAC_P2P_MEM0_BASE_ADDRESS_REMAP_HIGH 0xf28
+#define PCI_1DAC_P2P_MEM0_BASE_ADDRESS_REMAP_HIGH 0xfa8
+#define PCI_0DAC_P2P_MEM1_BASE_ADDRESS_REMAP_LOW 0xf2c
+#define PCI_1DAC_P2P_MEM1_BASE_ADDRESS_REMAP_LOW 0xfac
+#define PCI_0DAC_P2P_MEM1_BASE_ADDRESS_REMAP_HIGH 0xf30
+#define PCI_1DAC_P2P_MEM1_BASE_ADDRESS_REMAP_HIGH 0xfb0
+#define PCI_0DAC_CPU_BASE_ADDRESS_REMAP 0xf34
+#define PCI_1DAC_CPU_BASE_ADDRESS_REMAP 0xfb4
+#define PCI_0EXPANSION_ROM_BASE_ADDRESS_REMAP 0xf38
+#define PCI_1EXPANSION_ROM_BASE_ADDRESS_REMAP 0xfb8
+#define PCI_0ADDRESS_DECODE_CONTROL 0xd3c
+#define PCI_1ADDRESS_DECODE_CONTROL 0xdbc
+
+/*
+ * PCI Control
+ */
+
+#define PCI_0COMMAND 0xc00
+#define PCI_1COMMAND 0xc80
+#define PCI_0MODE 0xd00
+#define PCI_1MODE 0xd80
+#define PCI_0TIMEOUT_RETRY 0xc04
+#define PCI_1TIMEOUT_RETRY 0xc84
+#define PCI_0READ_BUFFER_DISCARD_TIMER 0xd04
+#define PCI_1READ_BUFFER_DISCARD_TIMER 0xd84
+#define MSI_0TRIGGER_TIMER 0xc38
+#define MSI_1TRIGGER_TIMER 0xcb8
+#define PCI_0ARBITER_CONTROL 0x1d00
+#define PCI_1ARBITER_CONTROL 0x1d80
+#define PCI_0CROSS_BAR_CONTROL_LOW 0x1d08
+#define PCI_0CROSS_BAR_CONTROL_HIGH 0x1d0c
+#define PCI_0CROSS_BAR_TIMEOUT 0x1d04
+#define PCI_0READ_RESPONSE_CROSS_BAR_CONTROL_LOW 0x1d18
+#define PCI_0READ_RESPONSE_CROSS_BAR_CONTROL_HIGH 0x1d1c
+#define PCI_0SYNC_BARRIER_VIRTUAL_REGISTER 0x1d10
+#define PCI_0P2P_CONFIGURATION 0x1d14
+#define PCI_0ACCESS_CONTROL_BASE_0_LOW 0x1e00
+#define PCI_0ACCESS_CONTROL_BASE_0_HIGH 0x1e04
+#define PCI_0ACCESS_CONTROL_TOP_0 0x1e08
+#define PCI_0ACCESS_CONTROL_BASE_1_LOW 0x1e10
+#define PCI_0ACCESS_CONTROL_BASE_1_HIGH 0x1e14
+#define PCI_0ACCESS_CONTROL_TOP_1 0x1e18
+#define PCI_0ACCESS_CONTROL_BASE_2_LOW 0x1e20
+#define PCI_0ACCESS_CONTROL_BASE_2_HIGH 0x1e24
+#define PCI_0ACCESS_CONTROL_TOP_2 0x1e28
+#define PCI_0ACCESS_CONTROL_BASE_3_LOW 0x1e30
+#define PCI_0ACCESS_CONTROL_BASE_3_HIGH 0x1e34
+#define PCI_0ACCESS_CONTROL_TOP_3 0x1e38
+#define PCI_0ACCESS_CONTROL_BASE_4_LOW 0x1e40
+#define PCI_0ACCESS_CONTROL_BASE_4_HIGH 0x1e44
+#define PCI_0ACCESS_CONTROL_TOP_4 0x1e48
+#define PCI_0ACCESS_CONTROL_BASE_5_LOW 0x1e50
+#define PCI_0ACCESS_CONTROL_BASE_5_HIGH 0x1e54
+#define PCI_0ACCESS_CONTROL_TOP_5 0x1e58
+#define PCI_0ACCESS_CONTROL_BASE_6_LOW 0x1e60
+#define PCI_0ACCESS_CONTROL_BASE_6_HIGH 0x1e64
+#define PCI_0ACCESS_CONTROL_TOP_6 0x1e68
+#define PCI_0ACCESS_CONTROL_BASE_7_LOW 0x1e70
+#define PCI_0ACCESS_CONTROL_BASE_7_HIGH 0x1e74
+#define PCI_0ACCESS_CONTROL_TOP_7 0x1e78
+#define PCI_1CROSS_BAR_CONTROL_LOW 0x1d88
+#define PCI_1CROSS_BAR_CONTROL_HIGH 0x1d8c
+#define PCI_1CROSS_BAR_TIMEOUT 0x1d84
+#define PCI_1READ_RESPONSE_CROSS_BAR_CONTROL_LOW 0x1d98
+#define PCI_1READ_RESPONSE_CROSS_BAR_CONTROL_HIGH 0x1d9c
+#define PCI_1SYNC_BARRIER_VIRTUAL_REGISTER 0x1d90
+#define PCI_1P2P_CONFIGURATION 0x1d94
+#define PCI_1ACCESS_CONTROL_BASE_0_LOW 0x1e80
+#define PCI_1ACCESS_CONTROL_BASE_0_HIGH 0x1e84
+#define PCI_1ACCESS_CONTROL_TOP_0 0x1e88
+#define PCI_1ACCESS_CONTROL_BASE_1_LOW 0x1e90
+#define PCI_1ACCESS_CONTROL_BASE_1_HIGH 0x1e94
+#define PCI_1ACCESS_CONTROL_TOP_1 0x1e98
+#define PCI_1ACCESS_CONTROL_BASE_2_LOW 0x1ea0
+#define PCI_1ACCESS_CONTROL_BASE_2_HIGH 0x1ea4
+#define PCI_1ACCESS_CONTROL_TOP_2 0x1ea8
+#define PCI_1ACCESS_CONTROL_BASE_3_LOW 0x1eb0
+#define PCI_1ACCESS_CONTROL_BASE_3_HIGH 0x1eb4
+#define PCI_1ACCESS_CONTROL_TOP_3 0x1eb8
+#define PCI_1ACCESS_CONTROL_BASE_4_LOW 0x1ec0
+#define PCI_1ACCESS_CONTROL_BASE_4_HIGH 0x1ec4
+#define PCI_1ACCESS_CONTROL_TOP_4 0x1ec8
+#define PCI_1ACCESS_CONTROL_BASE_5_LOW 0x1ed0
+#define PCI_1ACCESS_CONTROL_BASE_5_HIGH 0x1ed4
+#define PCI_1ACCESS_CONTROL_TOP_5 0x1ed8
+#define PCI_1ACCESS_CONTROL_BASE_6_LOW 0x1ee0
+#define PCI_1ACCESS_CONTROL_BASE_6_HIGH 0x1ee4
+#define PCI_1ACCESS_CONTROL_TOP_6 0x1ee8
+#define PCI_1ACCESS_CONTROL_BASE_7_LOW 0x1ef0
+#define PCI_1ACCESS_CONTROL_BASE_7_HIGH 0x1ef4
+#define PCI_1ACCESS_CONTROL_TOP_7 0x1ef8
+
+/*
+ * PCI Snoop Control
+ */
+
+#define PCI_0SNOOP_CONTROL_BASE_0_LOW 0x1f00
+#define PCI_0SNOOP_CONTROL_BASE_0_HIGH 0x1f04
+#define PCI_0SNOOP_CONTROL_TOP_0 0x1f08
+#define PCI_0SNOOP_CONTROL_BASE_1_0_LOW 0x1f10
+#define PCI_0SNOOP_CONTROL_BASE_1_0_HIGH 0x1f14
+#define PCI_0SNOOP_CONTROL_TOP_1 0x1f18
+#define PCI_0SNOOP_CONTROL_BASE_2_0_LOW 0x1f20
+#define PCI_0SNOOP_CONTROL_BASE_2_0_HIGH 0x1f24
+#define PCI_0SNOOP_CONTROL_TOP_2 0x1f28
+#define PCI_0SNOOP_CONTROL_BASE_3_0_LOW 0x1f30
+#define PCI_0SNOOP_CONTROL_BASE_3_0_HIGH 0x1f34
+#define PCI_0SNOOP_CONTROL_TOP_3 0x1f38
+#define PCI_1SNOOP_CONTROL_BASE_0_LOW 0x1f80
+#define PCI_1SNOOP_CONTROL_BASE_0_HIGH 0x1f84
+#define PCI_1SNOOP_CONTROL_TOP_0 0x1f88
+#define PCI_1SNOOP_CONTROL_BASE_1_0_LOW 0x1f90
+#define PCI_1SNOOP_CONTROL_BASE_1_0_HIGH 0x1f94
+#define PCI_1SNOOP_CONTROL_TOP_1 0x1f98
+#define PCI_1SNOOP_CONTROL_BASE_2_0_LOW 0x1fa0
+#define PCI_1SNOOP_CONTROL_BASE_2_0_HIGH 0x1fa4
+#define PCI_1SNOOP_CONTROL_TOP_2 0x1fa8
+#define PCI_1SNOOP_CONTROL_BASE_3_0_LOW 0x1fb0
+#define PCI_1SNOOP_CONTROL_BASE_3_0_HIGH 0x1fb4
+#define PCI_1SNOOP_CONTROL_TOP_3 0x1fb8
+
+/*
+ * PCI Configuration Address
+ */
+
+#define PCI_0CONFIGURATION_ADDRESS 0xcf8
+#define PCI_0CONFIGURATION_DATA_VIRTUAL_REGISTER 0xcfc
+#define PCI_1CONFIGURATION_ADDRESS 0xc78
+#define PCI_1CONFIGURATION_DATA_VIRTUAL_REGISTER 0xc7c
+#define PCI_0INTERRUPT_ACKNOWLEDGE_VIRTUAL_REGISTER 0xc34
+#define PCI_1INTERRUPT_ACKNOWLEDGE_VIRTUAL_REGISTER 0xcb4
+
+/*
+ * PCI Error Report
+ */
+
+#define PCI_0SERR_MASK 0xc28
+#define PCI_0ERROR_ADDRESS_LOW 0x1d40
+#define PCI_0ERROR_ADDRESS_HIGH 0x1d44
+#define PCI_0ERROR_DATA_LOW 0x1d48
+#define PCI_0ERROR_DATA_HIGH 0x1d4c
+#define PCI_0ERROR_COMMAND 0x1d50
+#define PCI_0ERROR_CAUSE 0x1d58
+#define PCI_0ERROR_MASK 0x1d5c
+
+#define PCI_1SERR_MASK 0xca8
+#define PCI_1ERROR_ADDRESS_LOW 0x1dc0
+#define PCI_1ERROR_ADDRESS_HIGH 0x1dc4
+#define PCI_1ERROR_DATA_LOW 0x1dc8
+#define PCI_1ERROR_DATA_HIGH 0x1dcc
+#define PCI_1ERROR_COMMAND 0x1dd0
+#define PCI_1ERROR_CAUSE 0x1dd8
+#define PCI_1ERROR_MASK 0x1ddc
+
+
+/*
+ * Lslave Debug (for internal use)
+ */
+
+#define L_SLAVE_X0_ADDRESS 0x1d20
+#define L_SLAVE_X0_COMMAND_AND_ID 0x1d24
+#define L_SLAVE_X1_ADDRESS 0x1d28
+#define L_SLAVE_X1_COMMAND_AND_ID 0x1d2c
+#define L_SLAVE_WRITE_DATA_LOW 0x1d30
+#define L_SLAVE_WRITE_DATA_HIGH 0x1d34
+#define L_SLAVE_WRITE_BYTE_ENABLE 0x1d60
+#define L_SLAVE_READ_DATA_LOW 0x1d38
+#define L_SLAVE_READ_DATA_HIGH 0x1d3c
+#define L_SLAVE_READ_ID 0x1d64
+
+#if 0 /* Disabled because PCI_* namespace belongs to PCI subsystem ... */
+
+/*
+ * PCI Configuration Function 0
+ */
+
+#define PCI_DEVICE_AND_VENDOR_ID 0x000
+#define PCI_STATUS_AND_COMMAND 0x004
+#define PCI_CLASS_CODE_AND_REVISION_ID 0x008
+#define PCI_BIST_HEADER_TYPE_LATENCY_TIMER_CACHE_LINE 0x00C
+#define PCI_SCS_0_BASE_ADDRESS 0x010
+#define PCI_SCS_1_BASE_ADDRESS 0x014
+#define PCI_SCS_2_BASE_ADDRESS 0x018
+#define PCI_SCS_3_BASE_ADDRESS 0x01C
+#define PCI_INTERNAL_REGISTERS_MEMORY_MAPPED_BASE_ADDRESS 0x020
+#define PCI_INTERNAL_REGISTERS_I_OMAPPED_BASE_ADDRESS 0x024
+#define PCI_SUBSYSTEM_ID_AND_SUBSYSTEM_VENDOR_ID 0x02C
+#define PCI_EXPANSION_ROM_BASE_ADDRESS_REGISTER 0x030
+#define PCI_CAPABILTY_LIST_POINTER 0x034
+#define PCI_INTERRUPT_PIN_AND_LINE 0x03C
+#define PCI_POWER_MANAGEMENT_CAPABILITY 0x040
+#define PCI_POWER_MANAGEMENT_STATUS_AND_CONTROL 0x044
+#define PCI_VPD_ADDRESS 0x048
+#define PCI_VPD_DATA 0x04c
+#define PCI_MSI_MESSAGE_CONTROL 0x050
+#define PCI_MSI_MESSAGE_ADDRESS 0x054
+#define PCI_MSI_MESSAGE_UPPER_ADDRESS 0x058
+#define PCI_MSI_MESSAGE_DATA 0x05c
+#define PCI_COMPACT_PCI_HOT_SWAP_CAPABILITY 0x058
+
+/*
+ * PCI Configuration Function 1
+ */
+
+#define PCI_CS_0_BASE_ADDRESS 0x110
+#define PCI_CS_1_BASE_ADDRESS 0x114
+#define PCI_CS_2_BASE_ADDRESS 0x118
+#define PCI_CS_3_BASE_ADDRESS 0x11c
+#define PCI_BOOTCS_BASE_ADDRESS 0x120
+
+/*
+ * PCI Configuration Function 2
+ */
+
+#define PCI_P2P_MEM0_BASE_ADDRESS 0x210
+#define PCI_P2P_MEM1_BASE_ADDRESS 0x214
+#define PCI_P2P_I_O_BASE_ADDRESS 0x218
+#define PCI_CPU_BASE_ADDRESS 0x21c
+
+/*
+ * PCI Configuration Function 4
+ */
+
+#define PCI_DAC_SCS_0_BASE_ADDRESS_LOW 0x410
+#define PCI_DAC_SCS_0_BASE_ADDRESS_HIGH 0x414
+#define PCI_DAC_SCS_1_BASE_ADDRESS_LOW 0x418
+#define PCI_DAC_SCS_1_BASE_ADDRESS_HIGH 0x41c
+#define PCI_DAC_P2P_MEM0_BASE_ADDRESS_LOW 0x420
+#define PCI_DAC_P2P_MEM0_BASE_ADDRESS_HIGH 0x424
+
+
+/*
+ * PCI Configuration Function 5
+ */
+
+#define PCI_DAC_SCS_2_BASE_ADDRESS_LOW 0x510
+#define PCI_DAC_SCS_2_BASE_ADDRESS_HIGH 0x514
+#define PCI_DAC_SCS_3_BASE_ADDRESS_LOW 0x518
+#define PCI_DAC_SCS_3_BASE_ADDRESS_HIGH 0x51c
+#define PCI_DAC_P2P_MEM1_BASE_ADDRESS_LOW 0x520
+#define PCI_DAC_P2P_MEM1_BASE_ADDRESS_HIGH 0x524
+
+
+/*
+ * PCI Configuration Function 6
+ */
+
+#define PCI_DAC_CS_0_BASE_ADDRESS_LOW 0x610
+#define PCI_DAC_CS_0_BASE_ADDRESS_HIGH 0x614
+#define PCI_DAC_CS_1_BASE_ADDRESS_LOW 0x618
+#define PCI_DAC_CS_1_BASE_ADDRESS_HIGH 0x61c
+#define PCI_DAC_CS_2_BASE_ADDRESS_LOW 0x620
+#define PCI_DAC_CS_2_BASE_ADDRESS_HIGH 0x624
+
+/*
+ * PCI Configuration Function 7
+ */
+
+#define PCI_DAC_CS_3_BASE_ADDRESS_LOW 0x710
+#define PCI_DAC_CS_3_BASE_ADDRESS_HIGH 0x714
+#define PCI_DAC_BOOTCS_BASE_ADDRESS_LOW 0x718
+#define PCI_DAC_BOOTCS_BASE_ADDRESS_HIGH 0x71c
+#define PCI_DAC_CPU_BASE_ADDRESS_LOW 0x720
+#define PCI_DAC_CPU_BASE_ADDRESS_HIGH 0x724
+#endif
+
+/*
+ * Interrupts
+ */
+
+#define LOW_INTERRUPT_CAUSE_REGISTER 0xc18
+#define HIGH_INTERRUPT_CAUSE_REGISTER 0xc68
+#define CPU_INTERRUPT_MASK_REGISTER_LOW 0xc1c
+#define CPU_INTERRUPT_MASK_REGISTER_HIGH 0xc6c
+#define CPU_SELECT_CAUSE_REGISTER 0xc70
+#define PCI_0INTERRUPT_CAUSE_MASK_REGISTER_LOW 0xc24
+#define PCI_0INTERRUPT_CAUSE_MASK_REGISTER_HIGH 0xc64
+#define PCI_0SELECT_CAUSE 0xc74
+#define PCI_1INTERRUPT_CAUSE_MASK_REGISTER_LOW 0xca4
+#define PCI_1INTERRUPT_CAUSE_MASK_REGISTER_HIGH 0xce4
+#define PCI_1SELECT_CAUSE 0xcf4
+#define CPU_INT_0_MASK 0xe60
+#define CPU_INT_1_MASK 0xe64
+#define CPU_INT_2_MASK 0xe68
+#define CPU_INT_3_MASK 0xe6c
+
+/*
+ * I2O Support registers
+ */
+
+#define INBOUND_MESSAGE_REGISTER0_PCI0_SIDE 0x010
+#define INBOUND_MESSAGE_REGISTER1_PCI0_SIDE 0x014
+#define OUTBOUND_MESSAGE_REGISTER0_PCI0_SIDE 0x018
+#define OUTBOUND_MESSAGE_REGISTER1_PCI0_SIDE 0x01C
+#define INBOUND_DOORBELL_REGISTER_PCI0_SIDE 0x020
+#define INBOUND_INTERRUPT_CAUSE_REGISTER_PCI0_SIDE 0x024
+#define INBOUND_INTERRUPT_MASK_REGISTER_PCI0_SIDE 0x028
+#define OUTBOUND_DOORBELL_REGISTER_PCI0_SIDE 0x02C
+#define OUTBOUND_INTERRUPT_CAUSE_REGISTER_PCI0_SIDE 0x030
+#define OUTBOUND_INTERRUPT_MASK_REGISTER_PCI0_SIDE 0x034
+#define INBOUND_QUEUE_PORT_VIRTUAL_REGISTER_PCI0_SIDE 0x040
+#define OUTBOUND_QUEUE_PORT_VIRTUAL_REGISTER_PCI0_SIDE 0x044
+#define QUEUE_CONTROL_REGISTER_PCI0_SIDE 0x050
+#define QUEUE_BASE_ADDRESS_REGISTER_PCI0_SIDE 0x054
+#define INBOUND_FREE_HEAD_POINTER_REGISTER_PCI0_SIDE 0x060
+#define INBOUND_FREE_TAIL_POINTER_REGISTER_PCI0_SIDE 0x064
+#define INBOUND_POST_HEAD_POINTER_REGISTER_PCI0_SIDE 0x068
+#define INBOUND_POST_TAIL_POINTER_REGISTER_PCI0_SIDE 0x06C
+#define OUTBOUND_FREE_HEAD_POINTER_REGISTER_PCI0_SIDE 0x070
+#define OUTBOUND_FREE_TAIL_POINTER_REGISTER_PCI0_SIDE 0x074
+#define OUTBOUND_POST_HEAD_POINTER_REGISTER_PCI0_SIDE 0x0F8
+#define OUTBOUND_POST_TAIL_POINTER_REGISTER_PCI0_SIDE 0x0FC
+
+#define INBOUND_MESSAGE_REGISTER0_PCI1_SIDE 0x090
+#define INBOUND_MESSAGE_REGISTER1_PCI1_SIDE 0x094
+#define OUTBOUND_MESSAGE_REGISTER0_PCI1_SIDE 0x098
+#define OUTBOUND_MESSAGE_REGISTER1_PCI1_SIDE 0x09C
+#define INBOUND_DOORBELL_REGISTER_PCI1_SIDE 0x0A0
+#define INBOUND_INTERRUPT_CAUSE_REGISTER_PCI1_SIDE 0x0A4
+#define INBOUND_INTERRUPT_MASK_REGISTER_PCI1_SIDE 0x0A8
+#define OUTBOUND_DOORBELL_REGISTER_PCI1_SIDE 0x0AC
+#define OUTBOUND_INTERRUPT_CAUSE_REGISTER_PCI1_SIDE 0x0B0
+#define OUTBOUND_INTERRUPT_MASK_REGISTER_PCI1_SIDE 0x0B4
+#define INBOUND_QUEUE_PORT_VIRTUAL_REGISTER_PCI1_SIDE 0x0C0
+#define OUTBOUND_QUEUE_PORT_VIRTUAL_REGISTER_PCI1_SIDE 0x0C4
+#define QUEUE_CONTROL_REGISTER_PCI1_SIDE 0x0D0
+#define QUEUE_BASE_ADDRESS_REGISTER_PCI1_SIDE 0x0D4
+#define INBOUND_FREE_HEAD_POINTER_REGISTER_PCI1_SIDE 0x0E0
+#define INBOUND_FREE_TAIL_POINTER_REGISTER_PCI1_SIDE 0x0E4
+#define INBOUND_POST_HEAD_POINTER_REGISTER_PCI1_SIDE 0x0E8
+#define INBOUND_POST_TAIL_POINTER_REGISTER_PCI1_SIDE 0x0EC
+#define OUTBOUND_FREE_HEAD_POINTER_REGISTER_PCI1_SIDE 0x0F0
+#define OUTBOUND_FREE_TAIL_POINTER_REGISTER_PCI1_SIDE 0x0F4
+#define OUTBOUND_POST_HEAD_POINTER_REGISTER_PCI1_SIDE 0x078
+#define OUTBOUND_POST_TAIL_POINTER_REGISTER_PCI1_SIDE 0x07C
+
+#define INBOUND_MESSAGE_REGISTER0_CPU0_SIDE 0x1c10
+#define INBOUND_MESSAGE_REGISTER1_CPU0_SIDE 0x1c14
+#define OUTBOUND_MESSAGE_REGISTER0_CPU0_SIDE 0x1c18
+#define OUTBOUND_MESSAGE_REGISTER1_CPU0_SIDE 0x1c1c
+#define INBOUND_DOORBELL_REGISTER_CPU0_SIDE 0x1c20
+#define INBOUND_INTERRUPT_CAUSE_REGISTER_CPU0_SIDE 0x1c24
+#define INBOUND_INTERRUPT_MASK_REGISTER_CPU0_SIDE 0x1c28
+#define OUTBOUND_DOORBELL_REGISTER_CPU0_SIDE 0x1c2c
+#define OUTBOUND_INTERRUPT_CAUSE_REGISTER_CPU0_SIDE 0x1c30
+#define OUTBOUND_INTERRUPT_MASK_REGISTER_CPU0_SIDE 0x1c34
+#define INBOUND_QUEUE_PORT_VIRTUAL_REGISTER_CPU0_SIDE 0x1c40
+#define OUTBOUND_QUEUE_PORT_VIRTUAL_REGISTER_CPU0_SIDE 0x1c44
+#define QUEUE_CONTROL_REGISTER_CPU0_SIDE 0x1c50
+#define QUEUE_BASE_ADDRESS_REGISTER_CPU0_SIDE 0x1c54
+#define INBOUND_FREE_HEAD_POINTER_REGISTER_CPU0_SIDE 0x1c60
+#define INBOUND_FREE_TAIL_POINTER_REGISTER_CPU0_SIDE 0x1c64
+#define INBOUND_POST_HEAD_POINTER_REGISTER_CPU0_SIDE 0x1c68
+#define INBOUND_POST_TAIL_POINTER_REGISTER_CPU0_SIDE 0x1c6c
+#define OUTBOUND_FREE_HEAD_POINTER_REGISTER_CPU0_SIDE 0x1c70
+#define OUTBOUND_FREE_TAIL_POINTER_REGISTER_CPU0_SIDE 0x1c74
+#define OUTBOUND_POST_HEAD_POINTER_REGISTER_CPU0_SIDE 0x1cf8
+#define OUTBOUND_POST_TAIL_POINTER_REGISTER_CPU0_SIDE 0x1cfc
+
+#define INBOUND_MESSAGE_REGISTER0_CPU1_SIDE 0x1c90
+#define INBOUND_MESSAGE_REGISTER1_CPU1_SIDE 0x1c94
+#define OUTBOUND_MESSAGE_REGISTER0_CPU1_SIDE 0x1c98
+#define OUTBOUND_MESSAGE_REGISTER1_CPU1_SIDE 0x1c9c
+#define INBOUND_DOORBELL_REGISTER_CPU1_SIDE 0x1ca0
+#define INBOUND_INTERRUPT_CAUSE_REGISTER_CPU1_SIDE 0x1ca4
+#define INBOUND_INTERRUPT_MASK_REGISTER_CPU1_SIDE 0x1ca8
+#define OUTBOUND_DOORBELL_REGISTER_CPU1_SIDE 0x1cac
+#define OUTBOUND_INTERRUPT_CAUSE_REGISTER_CPU1_SIDE 0x1cb0
+#define OUTBOUND_INTERRUPT_MASK_REGISTER_CPU1_SIDE 0x1cb4
+#define INBOUND_QUEUE_PORT_VIRTUAL_REGISTER_CPU1_SIDE 0x1cc0
+#define OUTBOUND_QUEUE_PORT_VIRTUAL_REGISTER_CPU1_SIDE 0x1cc4
+#define QUEUE_CONTROL_REGISTER_CPU1_SIDE 0x1cd0
+#define QUEUE_BASE_ADDRESS_REGISTER_CPU1_SIDE 0x1cd4
+#define INBOUND_FREE_HEAD_POINTER_REGISTER_CPU1_SIDE 0x1ce0
+#define INBOUND_FREE_TAIL_POINTER_REGISTER_CPU1_SIDE 0x1ce4
+#define INBOUND_POST_HEAD_POINTER_REGISTER_CPU1_SIDE 0x1ce8
+#define INBOUND_POST_TAIL_POINTER_REGISTER_CPU1_SIDE 0x1cec
+#define OUTBOUND_FREE_HEAD_POINTER_REGISTER_CPU1_SIDE 0x1cf0
+#define OUTBOUND_FREE_TAIL_POINTER_REGISTER_CPU1_SIDE 0x1cf4
+#define OUTBOUND_POST_HEAD_POINTER_REGISTER_CPU1_SIDE 0x1c78
+#define OUTBOUND_POST_TAIL_POINTER_REGISTER_CPU1_SIDE 0x1c7c
+
+/*
+ * Communication Unit Registers
+ */
+
+#define ETHERNET_0_ADDRESS_CONTROL_LOW 0xf200
+#define ETHERNET_0_ADDRESS_CONTROL_HIGH 0xf204
+#define ETHERNET_0_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf208
+#define ETHERNET_0_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf20c
+#define ETHERNET_0_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf210
+#define ETHERNET_0_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf214
+#define ETHERNET_0_HASH_TABLE_PCI_HIGH_ADDRESS 0xf218
+#define ETHERNET_1_ADDRESS_CONTROL_LOW 0xf220
+#define ETHERNET_1_ADDRESS_CONTROL_HIGH 0xf224
+#define ETHERNET_1_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf228
+#define ETHERNET_1_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf22c
+#define ETHERNET_1_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf230
+#define ETHERNET_1_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf234
+#define ETHERNET_1_HASH_TABLE_PCI_HIGH_ADDRESS 0xf238
+#define ETHERNET_2_ADDRESS_CONTROL_LOW 0xf240
+#define ETHERNET_2_ADDRESS_CONTROL_HIGH 0xf244
+#define ETHERNET_2_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf248
+#define ETHERNET_2_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf24c
+#define ETHERNET_2_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf250
+#define ETHERNET_2_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf254
+#define ETHERNET_2_HASH_TABLE_PCI_HIGH_ADDRESS 0xf258
+#define MPSC_0_ADDRESS_CONTROL_LOW 0xf280
+#define MPSC_0_ADDRESS_CONTROL_HIGH 0xf284
+#define MPSC_0_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf288
+#define MPSC_0_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf28c
+#define MPSC_0_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf290
+#define MPSC_0_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf294
+#define MPSC_1_ADDRESS_CONTROL_LOW 0xf2a0
+#define MPSC_1_ADDRESS_CONTROL_HIGH 0xf2a4
+#define MPSC_1_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf2a8
+#define MPSC_1_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf2ac
+#define MPSC_1_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf2b0
+#define MPSC_1_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf2b4
+#define MPSC_2_ADDRESS_CONTROL_LOW 0xf2c0
+#define MPSC_2_ADDRESS_CONTROL_HIGH 0xf2c4
+#define MPSC_2_RECEIVE_BUFFER_PCI_HIGH_ADDRESS 0xf2c8
+#define MPSC_2_TRANSMIT_BUFFER_PCI_HIGH_ADDRESS 0xf2cc
+#define MPSC_2_RECEIVE_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf2d0
+#define MPSC_2_TRANSMIT_DESCRIPTOR_PCI_HIGH_ADDRESS 0xf2d4
+#define SERIAL_INIT_PCI_HIGH_ADDRESS 0xf320
+#define SERIAL_INIT_LAST_DATA 0xf324
+#define SERIAL_INIT_STATUS_AND_CONTROL 0xf328
+#define COMM_UNIT_ARBITER_CONTROL 0xf300
+#define COMM_UNIT_CROSS_BAR_TIMEOUT 0xf304
+#define COMM_UNIT_INTERRUPT_CAUSE 0xf310
+#define COMM_UNIT_INTERRUPT_MASK 0xf314
+#define COMM_UNIT_ERROR_ADDRESS 0xf318
+
+/*
+ * Cunit Debug (for internal use)
+ */
+
+#define CUNIT_ADDRESS 0xf340
+#define CUNIT_COMMAND_AND_ID 0xf344
+#define CUNIT_WRITE_DATA_LOW 0xf348
+#define CUNIT_WRITE_DATA_HIGH 0xf34c
+#define CUNIT_WRITE_BYTE_ENABLE 0xf358
+#define CUNIT_READ_DATA_LOW 0xf350
+#define CUNIT_READ_DATA_HIGH 0xf354
+#define CUNIT_READ_ID 0xf35c
+
+/*
+ * Fast Ethernet Unit Registers
+ */
+
+/* Ethernet */
+
+#define ETHERNET_PHY_ADDRESS_REGISTER 0x2000
+#define ETHERNET_SMI_REGISTER 0x2010
+
+/* Ethernet 0 */
+
+#define ETHERNET0_PORT_CONFIGURATION_REGISTER 0x2400
+#define ETHERNET0_PORT_CONFIGURATION_EXTEND_REGISTER 0x2408
+#define ETHERNET0_PORT_COMMAND_REGISTER 0x2410
+#define ETHERNET0_PORT_STATUS_REGISTER 0x2418
+#define ETHERNET0_SERIAL_PARAMETRS_REGISTER 0x2420
+#define ETHERNET0_HASH_TABLE_POINTER_REGISTER 0x2428
+#define ETHERNET0_FLOW_CONTROL_SOURCE_ADDRESS_LOW 0x2430
+#define ETHERNET0_FLOW_CONTROL_SOURCE_ADDRESS_HIGH 0x2438
+#define ETHERNET0_SDMA_CONFIGURATION_REGISTER 0x2440
+#define ETHERNET0_SDMA_COMMAND_REGISTER 0x2448
+#define ETHERNET0_INTERRUPT_CAUSE_REGISTER 0x2450
+#define ETHERNET0_INTERRUPT_MASK_REGISTER 0x2458
+#define ETHERNET0_FIRST_RX_DESCRIPTOR_POINTER0 0x2480
+#define ETHERNET0_FIRST_RX_DESCRIPTOR_POINTER1 0x2484
+#define ETHERNET0_FIRST_RX_DESCRIPTOR_POINTER2 0x2488
+#define ETHERNET0_FIRST_RX_DESCRIPTOR_POINTER3 0x248c
+#define ETHERNET0_CURRENT_RX_DESCRIPTOR_POINTER0 0x24a0
+#define ETHERNET0_CURRENT_RX_DESCRIPTOR_POINTER1 0x24a4
+#define ETHERNET0_CURRENT_RX_DESCRIPTOR_POINTER2 0x24a8
+#define ETHERNET0_CURRENT_RX_DESCRIPTOR_POINTER3 0x24ac
+#define ETHERNET0_CURRENT_TX_DESCRIPTOR_POINTER0 0x24e0
+#define ETHERNET0_CURRENT_TX_DESCRIPTOR_POINTER1 0x24e4
+#define ETHERNET0_MIB_COUNTER_BASE 0x2500
+
+/* Ethernet 1 */
+
+#define ETHERNET1_PORT_CONFIGURATION_REGISTER 0x2800
+#define ETHERNET1_PORT_CONFIGURATION_EXTEND_REGISTER 0x2808
+#define ETHERNET1_PORT_COMMAND_REGISTER 0x2810
+#define ETHERNET1_PORT_STATUS_REGISTER 0x2818
+#define ETHERNET1_SERIAL_PARAMETRS_REGISTER 0x2820
+#define ETHERNET1_HASH_TABLE_POINTER_REGISTER 0x2828
+#define ETHERNET1_FLOW_CONTROL_SOURCE_ADDRESS_LOW 0x2830
+#define ETHERNET1_FLOW_CONTROL_SOURCE_ADDRESS_HIGH 0x2838
+#define ETHERNET1_SDMA_CONFIGURATION_REGISTER 0x2840
+#define ETHERNET1_SDMA_COMMAND_REGISTER 0x2848
+#define ETHERNET1_INTERRUPT_CAUSE_REGISTER 0x2850
+#define ETHERNET1_INTERRUPT_MASK_REGISTER 0x2858
+#define ETHERNET1_FIRST_RX_DESCRIPTOR_POINTER0 0x2880
+#define ETHERNET1_FIRST_RX_DESCRIPTOR_POINTER1 0x2884
+#define ETHERNET1_FIRST_RX_DESCRIPTOR_POINTER2 0x2888
+#define ETHERNET1_FIRST_RX_DESCRIPTOR_POINTER3 0x288c
+#define ETHERNET1_CURRENT_RX_DESCRIPTOR_POINTER0 0x28a0
+#define ETHERNET1_CURRENT_RX_DESCRIPTOR_POINTER1 0x28a4
+#define ETHERNET1_CURRENT_RX_DESCRIPTOR_POINTER2 0x28a8
+#define ETHERNET1_CURRENT_RX_DESCRIPTOR_POINTER3 0x28ac
+#define ETHERNET1_CURRENT_TX_DESCRIPTOR_POINTER0 0x28e0
+#define ETHERNET1_CURRENT_TX_DESCRIPTOR_POINTER1 0x28e4
+#define ETHERNET1_MIB_COUNTER_BASE 0x2900
+
+/* Ethernet 2 */
+
+#define ETHERNET2_PORT_CONFIGURATION_REGISTER 0x2c00
+#define ETHERNET2_PORT_CONFIGURATION_EXTEND_REGISTER 0x2c08
+#define ETHERNET2_PORT_COMMAND_REGISTER 0x2c10
+#define ETHERNET2_PORT_STATUS_REGISTER 0x2c18
+#define ETHERNET2_SERIAL_PARAMETRS_REGISTER 0x2c20
+#define ETHERNET2_HASH_TABLE_POINTER_REGISTER 0x2c28
+#define ETHERNET2_FLOW_CONTROL_SOURCE_ADDRESS_LOW 0x2c30
+#define ETHERNET2_FLOW_CONTROL_SOURCE_ADDRESS_HIGH 0x2c38
+#define ETHERNET2_SDMA_CONFIGURATION_REGISTER 0x2c40
+#define ETHERNET2_SDMA_COMMAND_REGISTER 0x2c48
+#define ETHERNET2_INTERRUPT_CAUSE_REGISTER 0x2c50
+#define ETHERNET2_INTERRUPT_MASK_REGISTER 0x2c58
+#define ETHERNET2_FIRST_RX_DESCRIPTOR_POINTER0 0x2c80
+#define ETHERNET2_FIRST_RX_DESCRIPTOR_POINTER1 0x2c84
+#define ETHERNET2_FIRST_RX_DESCRIPTOR_POINTER2 0x2c88
+#define ETHERNET2_FIRST_RX_DESCRIPTOR_POINTER3 0x2c8c
+#define ETHERNET2_CURRENT_RX_DESCRIPTOR_POINTER0 0x2ca0
+#define ETHERNET2_CURRENT_RX_DESCRIPTOR_POINTER1 0x2ca4
+#define ETHERNET2_CURRENT_RX_DESCRIPTOR_POINTER2 0x2ca8
+#define ETHERNET2_CURRENT_RX_DESCRIPTOR_POINTER3 0x2cac
+#define ETHERNET2_CURRENT_TX_DESCRIPTOR_POINTER0 0x2ce0
+#define ETHERNET2_CURRENT_TX_DESCRIPTOR_POINTER1 0x2ce4
+#define ETHERNET2_MIB_COUNTER_BASE 0x2d00
+
+/*
+ * SDMA Registers
+ */
+
+#define SDMA_GROUP_CONFIGURATION_REGISTER 0xb1f0
+#define CHANNEL0_CONFIGURATION_REGISTER 0x4000
+#define CHANNEL0_COMMAND_REGISTER 0x4008
+#define CHANNEL0_RX_CMD_STATUS 0x4800
+#define CHANNEL0_RX_PACKET_AND_BUFFER_SIZES 0x4804
+#define CHANNEL0_RX_BUFFER_POINTER 0x4808
+#define CHANNEL0_RX_NEXT_POINTER 0x480c
+#define CHANNEL0_CURRENT_RX_DESCRIPTOR_POINTER 0x4810
+#define CHANNEL0_TX_CMD_STATUS 0x4c00
+#define CHANNEL0_TX_PACKET_SIZE 0x4c04
+#define CHANNEL0_TX_BUFFER_POINTER 0x4c08
+#define CHANNEL0_TX_NEXT_POINTER 0x4c0c
+#define CHANNEL0_CURRENT_TX_DESCRIPTOR_POINTER 0x4c10
+#define CHANNEL0_FIRST_TX_DESCRIPTOR_POINTER 0x4c14
+#define CHANNEL1_CONFIGURATION_REGISTER 0x6000
+#define CHANNEL1_COMMAND_REGISTER 0x6008
+#define CHANNEL1_RX_CMD_STATUS 0x6800
+#define CHANNEL1_RX_PACKET_AND_BUFFER_SIZES 0x6804
+#define CHANNEL1_RX_BUFFER_POINTER 0x6808
+#define CHANNEL1_RX_NEXT_POINTER 0x680c
+#define CHANNEL1_CURRENT_RX_DESCRIPTOR_POINTER 0x6810
+#define CHANNEL1_TX_CMD_STATUS 0x6c00
+#define CHANNEL1_TX_PACKET_SIZE 0x6c04
+#define CHANNEL1_TX_BUFFER_POINTER 0x6c08
+#define CHANNEL1_TX_NEXT_POINTER 0x6c0c
+#define CHANNEL1_CURRENT_TX_DESCRIPTOR_POINTER 0x6c10
+#define CHANNEL1_FIRST_TX_DESCRIPTOR_POINTER 0x6c14
+
+/* SDMA Interrupt */
+
+#define SDMA_CAUSE 0xb820
+#define SDMA_MASK 0xb8a0
+
+
+/*
+ * Baud Rate Generator Registers
+ */
+
+/* BRG 0 */
+
+#define BRG0_CONFIGURATION_REGISTER 0xb200
+#define BRG0_BAUDE_TUNING_REGISTER 0xb204
+
+/* BRG 1 */
+
+#define BRG1_CONFIGURATION_REGISTER 0xb208
+#define BRG1_BAUDE_TUNING_REGISTER 0xb20c
+
+/* BRG 2 */
+
+#define BRG2_CONFIGURATION_REGISTER 0xb210
+#define BRG2_BAUDE_TUNING_REGISTER 0xb214
+
+/* BRG Interrupts */
+
+#define BRG_CAUSE_REGISTER 0xb834
+#define BRG_MASK_REGISTER 0xb8b4
+
+/* MISC */
+
+#define MAIN_ROUTING_REGISTER 0xb400
+#define RECEIVE_CLOCK_ROUTING_REGISTER 0xb404
+#define TRANSMIT_CLOCK_ROUTING_REGISTER 0xb408
+#define COMM_UNIT_ARBITER_CONFIGURATION_REGISTER 0xb40c
+#define WATCHDOG_CONFIGURATION_REGISTER 0xb410
+#define WATCHDOG_VALUE_REGISTER 0xb414
+
+
+/*
+ * Flex TDM Registers
+ */
+
+/* FTDM Port */
+
+#define FLEXTDM_TRANSMIT_READ_POINTER 0xa800
+#define FLEXTDM_RECEIVE_READ_POINTER 0xa804
+#define FLEXTDM_CONFIGURATION_REGISTER 0xa808
+#define FLEXTDM_AUX_CHANNELA_TX_REGISTER 0xa80c
+#define FLEXTDM_AUX_CHANNELA_RX_REGISTER 0xa810
+#define FLEXTDM_AUX_CHANNELB_TX_REGISTER 0xa814
+#define FLEXTDM_AUX_CHANNELB_RX_REGISTER 0xa818
+
+/* FTDM Interrupts */
+
+#define FTDM_CAUSE_REGISTER 0xb830
+#define FTDM_MASK_REGISTER 0xb8b0
+
+
+/*
+ * GPP Interface Registers
+ */
+
+#define GPP_IO_CONTROL 0xf100
+#define GPP_LEVEL_CONTROL 0xf110
+#define GPP_VALUE 0xf104
+#define GPP_INTERRUPT_CAUSE 0xf108
+#define GPP_INTERRUPT_MASK 0xf10c
+
+#define MPP_CONTROL0 0xf000
+#define MPP_CONTROL1 0xf004
+#define MPP_CONTROL2 0xf008
+#define MPP_CONTROL3 0xf00c
+#define DEBUG_PORT_MULTIPLEX 0xf014
+#define SERIAL_PORT_MULTIPLEX 0xf010
+
+/*
+ * I2C Registers
+ */
+
+#define I2C_SLAVE_ADDRESS 0xc000
+#define I2C_EXTENDED_SLAVE_ADDRESS 0xc040
+#define I2C_DATA 0xc004
+#define I2C_CONTROL 0xc008
+#define I2C_STATUS_BAUDE_RATE 0xc00C
+#define I2C_SOFT_RESET 0xc01c
+
+/*
+ * MPSC Registers
+ */
+
+/*
+ * MPSC0
+ */
+
+#define MPSC0_MAIN_CONFIGURATION_LOW 0x8000
+#define MPSC0_MAIN_CONFIGURATION_HIGH 0x8004
+#define MPSC0_PROTOCOL_CONFIGURATION 0x8008
+#define CHANNEL0_REGISTER1 0x800c
+#define CHANNEL0_REGISTER2 0x8010
+#define CHANNEL0_REGISTER3 0x8014
+#define CHANNEL0_REGISTER4 0x8018
+#define CHANNEL0_REGISTER5 0x801c
+#define CHANNEL0_REGISTER6 0x8020
+#define CHANNEL0_REGISTER7 0x8024
+#define CHANNEL0_REGISTER8 0x8028
+#define CHANNEL0_REGISTER9 0x802c
+#define CHANNEL0_REGISTER10 0x8030
+#define CHANNEL0_REGISTER11 0x8034
+
+/*
+ * MPSC1
+ */
+
+#define MPSC1_MAIN_CONFIGURATION_LOW 0x9000
+#define MPSC1_MAIN_CONFIGURATION_HIGH 0x9004
+#define MPSC1_PROTOCOL_CONFIGURATION 0x9008
+#define CHANNEL1_REGISTER1 0x900c
+#define CHANNEL1_REGISTER2 0x9010
+#define CHANNEL1_REGISTER3 0x9014
+#define CHANNEL1_REGISTER4 0x9018
+#define CHANNEL1_REGISTER5 0x901c
+#define CHANNEL1_REGISTER6 0x9020
+#define CHANNEL1_REGISTER7 0x9024
+#define CHANNEL1_REGISTER8 0x9028
+#define CHANNEL1_REGISTER9 0x902c
+#define CHANNEL1_REGISTER10 0x9030
+#define CHANNEL1_REGISTER11 0x9034
+
+/*
+ * MPSC Interrupts
+ */
+
+#define MPSC0_CAUSE 0xb804
+#define MPSC0_MASK 0xb884
+#define MPSC1_CAUSE 0xb80c
+#define MPSC1_MASK 0xb88c
+
+#endif /* __ASM_MIPS_MV64240_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003, 2004 Ralf Baechle
+ */
+#ifndef __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H
+#define __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H
+
+/*
+ * The PMC-Sierra Yosemite board always has an RM9000 processor.
+ */
+#define cpu_has_watch 1
+#define cpu_has_mips16 0
+#define cpu_has_divec 0
+#define cpu_has_vce 0
+#define cpu_has_cache_cdex_p 0
+#define cpu_has_cache_cdex_s 0
+#define cpu_has_prefetch 1
+#define cpu_has_mcheck 0
+#define cpu_has_ejtag 0
+
+#define cpu_has_llsc 1
+#define cpu_has_vtag_icache 0
+#define cpu_has_dc_aliases 0
+#define cpu_has_ic_fills_f_dc 0
+
+#define cpu_has_nofpuex 0
+#define cpu_has_64bits 1
+
+#define cpu_has_subset_pcaches 0
+
+#define cpu_dcache_line_size() 32
+#define cpu_icache_line_size() 32
+#define cpu_scache_line_size() 32
+
+#endif /* __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle
+ */
+#ifndef __ASM_MIPS_MARVELL_H
+#define __ASM_MIPS_MARVELL_H
+
+#include <linux/pci.h>
+
+#include <asm/byteorder.h>
+#include <asm/pci_channel.h>
+
+extern unsigned long marvell_base;
+
+/*
+ * Because of an error/peculiarity in the Galileo chip, we need to swap the
+ * bytes when running big-endian.
+ */
+#define __MV_READ(ofs) \
+ (*(volatile u32 *)(marvell_base+(ofs)))
+#define __MV_WRITE(ofs, data) \
+ do { *(volatile u32 *)(marvell_base+(ofs)) = (data); } while (0)
+
+#define MV_READ(ofs) le32_to_cpu(__MV_READ(ofs))
+#define MV_WRITE(ofs, data) __MV_WRITE(ofs, cpu_to_le32(data))
+
+#define MV_READ_16(ofs) \
+ le16_to_cpu(*(volatile u16 *)(marvell_base+(ofs)))
+#define MV_WRITE_16(ofs, data) \
+ *(volatile u16 *)(marvell_base+(ofs)) = cpu_to_le16(data)
+
+#define MV_READ_8(ofs) \
+ *(volatile u8 *)(marvell_base+(ofs))
+#define MV_WRITE_8(ofs, data) \
+ *(volatile u8 *)(marvell_base+(ofs)) = data
+
+#define MV_SET_REG_BITS(ofs, bits) \
+ (*((volatile u32 *)(marvell_base + (ofs)))) |= ((u32)cpu_to_le32(bits))
+#define MV_RESET_REG_BITS(ofs, bits) \
+ (*((volatile u32 *)(marvell_base + (ofs)))) &= ~((u32)cpu_to_le32(bits))
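+
+/*
+ * Usage sketch (illustrative only, not part of the original header):
+ * reading and modifying a controller register through the accessors
+ * above.  GPP_VALUE and GPP_IO_CONTROL are offsets taken from the
+ * companion register-map header (mv64240.h); marvell_base is assumed
+ * to have been set up by platform code.
+ */
+#if 0 /* example only */
+u32 pins = MV_READ(GPP_VALUE); /* current GPP pin state */
+MV_SET_REG_BITS(GPP_IO_CONTROL, 1 << 4); /* drive pin 4 as output */
+#endif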
+
+extern struct pci_ops mv_pci_ops;
+
+struct mv_pci_controller {
+ struct pci_controller pcic;
+
+ /*
+ * GT-64240/MV-64340 specific, per host bus information
+ */
+ unsigned long config_addr;
+ unsigned long config_vreg;
+};
+
+#endif /* __ASM_MIPS_MARVELL_H */
--- /dev/null
+#ifdef __KERNEL__
+#ifndef _MIPS_SETUP_H
+#define _MIPS_SETUP_H
+
+#define COMMAND_LINE_SIZE 256
+
+#endif /* _MIPS_SETUP_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * tb0219.h, Include file for TANBAC TB0219.
+ *
+ * Copyright (C) 2002-2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * Modified for TANBAC TB0219:
+ * Copyright (C) 2003 Megasolution Inc. <matsu@megasolution.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#ifndef __TANBAC_TB0219_H
+#define __TANBAC_TB0219_H
+
+#include <asm/vr41xx/vr41xx.h>
+
+/*
+ * General-Purpose I/O Pin Number
+ */
+#define TB0219_PCI_SLOT1_PIN 2
+#define TB0219_PCI_SLOT2_PIN 3
+#define TB0219_PCI_SLOT3_PIN 4
+
+/*
+ * Interrupt Number
+ */
+#define TB0219_PCI_SLOT1_IRQ GIU_IRQ(TB0219_PCI_SLOT1_PIN)
+#define TB0219_PCI_SLOT2_IRQ GIU_IRQ(TB0219_PCI_SLOT2_PIN)
+#define TB0219_PCI_SLOT3_IRQ GIU_IRQ(TB0219_PCI_SLOT3_PIN)
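+
+/*
+ * Usage sketch (illustrative only): hooking one of the slot
+ * interrupts.  The handler and dev_id below are placeholders; a real
+ * driver supplies its own and includes <linux/interrupt.h> for
+ * request_irq().
+ */
+#if 0 /* example only */
+retval = request_irq(TB0219_PCI_SLOT1_IRQ, slot1_interrupt, SA_SHIRQ,
+                     "tb0219-slot1", dev_id);
+#endif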
+
+#endif /* __TANBAC_TB0219_H */
--- /dev/null
+#ifndef _ASM_MAX_NUMNODES_H
+#define _ASM_MAX_NUMNODES_H
+
+#include <linux/config.h>
+
+/* Max 8 Nodes */
+#define NODES_SHIFT 3
+
+#endif /* _ASM_MAX_NUMNODES_H */
--- /dev/null
+/*
+ * Communication Processor Module v2.
+ *
+ * This file contains structures and information for the communication
+ * processor channels found in the dual port RAM or parameter RAM.
+ * All CPM control and status is available through the CPM2 internal
+ * memory map. See immap_cpm2.h for details.
+ */
+#ifdef __KERNEL__
+#ifndef __CPM2__
+#define __CPM2__
+
+#include <asm/immap_cpm2.h>
+
+/* CPM Command register.
+*/
+#define CPM_CR_RST ((uint)0x80000000)
+#define CPM_CR_PAGE ((uint)0x7c000000)
+#define CPM_CR_SBLOCK ((uint)0x03e00000)
+#define CPM_CR_FLG ((uint)0x00010000)
+#define CPM_CR_MCN ((uint)0x00003fc0)
+#define CPM_CR_OPCODE ((uint)0x0000000f)
+
+/* Device sub-block and page codes.
+*/
+#define CPM_CR_SCC1_SBLOCK (0x04)
+#define CPM_CR_SCC2_SBLOCK (0x05)
+#define CPM_CR_SCC3_SBLOCK (0x06)
+#define CPM_CR_SCC4_SBLOCK (0x07)
+#define CPM_CR_SMC1_SBLOCK (0x08)
+#define CPM_CR_SMC2_SBLOCK (0x09)
+#define CPM_CR_SPI_SBLOCK (0x0a)
+#define CPM_CR_I2C_SBLOCK (0x0b)
+#define CPM_CR_TIMER_SBLOCK (0x0f)
+#define CPM_CR_RAND_SBLOCK (0x0e)
+#define CPM_CR_FCC1_SBLOCK (0x10)
+#define CPM_CR_FCC2_SBLOCK (0x11)
+#define CPM_CR_FCC3_SBLOCK (0x12)
+#define CPM_CR_IDMA1_SBLOCK (0x14)
+#define CPM_CR_IDMA2_SBLOCK (0x15)
+#define CPM_CR_IDMA3_SBLOCK (0x16)
+#define CPM_CR_IDMA4_SBLOCK (0x17)
+#define CPM_CR_MCC1_SBLOCK (0x1c)
+
+#define CPM_CR_SCC1_PAGE (0x00)
+#define CPM_CR_SCC2_PAGE (0x01)
+#define CPM_CR_SCC3_PAGE (0x02)
+#define CPM_CR_SCC4_PAGE (0x03)
+#define CPM_CR_SMC1_PAGE (0x07)
+#define CPM_CR_SMC2_PAGE (0x08)
+#define CPM_CR_SPI_PAGE (0x09)
+#define CPM_CR_I2C_PAGE (0x0a)
+#define CPM_CR_TIMER_PAGE (0x0a)
+#define CPM_CR_RAND_PAGE (0x0a)
+#define CPM_CR_FCC1_PAGE (0x04)
+#define CPM_CR_FCC2_PAGE (0x05)
+#define CPM_CR_FCC3_PAGE (0x06)
+#define CPM_CR_IDMA1_PAGE (0x07)
+#define CPM_CR_IDMA2_PAGE (0x08)
+#define CPM_CR_IDMA3_PAGE (0x09)
+#define CPM_CR_IDMA4_PAGE (0x0a)
+#define CPM_CR_MCC1_PAGE (0x07)
+#define CPM_CR_MCC2_PAGE (0x08)
+
+/* Some opcodes (there are more...later)
+*/
+#define CPM_CR_INIT_TRX ((ushort)0x0000)
+#define CPM_CR_INIT_RX ((ushort)0x0001)
+#define CPM_CR_INIT_TX ((ushort)0x0002)
+#define CPM_CR_HUNT_MODE ((ushort)0x0003)
+#define CPM_CR_STOP_TX ((ushort)0x0004)
+#define CPM_CR_RESTART_TX ((ushort)0x0006)
+#define CPM_CR_SET_GADDR ((ushort)0x0008)
+#define CPM_CR_START_IDMA ((ushort)0x0009)
+#define CPM_CR_STOP_IDMA ((ushort)0x000b)
+
+#define mk_cr_cmd(PG, SBC, MCN, OP) \
+ ((PG << 26) | (SBC << 21) | (MCN << 6) | OP)
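+
+/*
+ * Usage sketch (illustrative only): composing a command word with
+ * mk_cr_cmd().  Software ORs in CPM_CR_FLG when writing the word to
+ * the CPM command register (in the internal memory map, see
+ * immap_cpm2.h) and then polls until the CPM clears the flag.
+ */
+#if 0 /* example only */
+uint cmd = mk_cr_cmd(CPM_CR_SCC1_PAGE, CPM_CR_SCC1_SBLOCK, 0,
+                     CPM_CR_INIT_TRX) | CPM_CR_FLG;
+#endif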
+
+/* Dual Port RAM addresses. The first 16K is available for almost
+ * any CPM use, so we put the BDs there. The first 128 bytes are
+ * used for SMC1 and SMC2 parameter RAM, so we start allocating
+ * BDs above that. All of this must change when we start
+ * downloading RAM microcode.
+ */
+#define CPM_DATAONLY_BASE ((uint)128)
+#define CPM_DP_NOSPACE ((uint)0x7fffffff)
+#ifdef CONFIG_8272
+#define CPM_DATAONLY_SIZE ((uint)(8 * 1024) - CPM_DATAONLY_BASE)
+#define CPM_FCC_SPECIAL_BASE ((uint)0x00009000)
+#else
+#define CPM_DATAONLY_SIZE ((uint)(16 * 1024) - CPM_DATAONLY_BASE)
+#define CPM_FCC_SPECIAL_BASE ((uint)0x0000b000)
+#endif
+
+/* The number of pages of host memory we allocate for CPM. This is
+ * done early in kernel initialization to get physically contiguous
+ * pages.
+ */
+#define NUM_CPM_HOST_PAGES 2
+
+
+/* Export the base address of the communication processor registers
+ * and dual port ram.
+ */
+extern cpm_cpm2_t *cpmp; /* Pointer to comm processor */
+extern void *cpm2_dpalloc(uint size, uint align);
+extern int cpm2_dpfree(void *addr);
+extern void *cpm2_dpalloc_fixed(void *addr, uint size, uint align);
+extern void cpm2_dpdump(void);
+extern unsigned int cpm2_dpram_offset(void *addr);
+extern void *cpm2_dpram_addr(int offset);
+extern void cpm2_setbrg(uint brg, uint rate);
+extern void cpm2_fastbrg(uint brg, uint rate, int div16);
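+
+/*
+ * Usage sketch (illustrative only): carving a block out of the
+ * data-only DPRAM region, e.g. for buffer descriptors.  The size and
+ * alignment are arbitrary; cpm2_dpram_offset() converts the host
+ * pointer back into the offset form that parameter RAM expects.
+ */
+#if 0 /* example only */
+void *bd_mem = cpm2_dpalloc(128, 8);
+uint bd_off = cpm2_dpram_offset(bd_mem);
+#endif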
+
+/* Buffer descriptors used by many of the CPM protocols.
+*/
+typedef struct cpm_buf_desc {
+ ushort cbd_sc; /* Status and Control */
+ ushort cbd_datlen; /* Data length in buffer */
+ uint cbd_bufaddr; /* Buffer address in host memory */
+} cbd_t;
+
+#define BD_SC_EMPTY ((ushort)0x8000) /* Receive is empty */
+#define BD_SC_READY ((ushort)0x8000) /* Transmit is ready */
+#define BD_SC_WRAP ((ushort)0x2000) /* Last buffer descriptor */
+#define BD_SC_INTRPT ((ushort)0x1000) /* Interrupt on change */
+#define BD_SC_LAST ((ushort)0x0800) /* Last buffer in frame */
+#define BD_SC_CM ((ushort)0x0200) /* Continuous mode */
+#define BD_SC_ID ((ushort)0x0100) /* Rec'd too many idles */
+#define BD_SC_P ((ushort)0x0100) /* xmt preamble */
+#define BD_SC_BR ((ushort)0x0020) /* Break received */
+#define BD_SC_FR ((ushort)0x0010) /* Framing error */
+#define BD_SC_PR ((ushort)0x0008) /* Parity error */
+#define BD_SC_OV ((ushort)0x0002) /* Overrun */
+#define BD_SC_CD ((ushort)0x0001) /* ?? */
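+
+/*
+ * Usage sketch (illustrative only): handing a buffer to the CPM for
+ * transmission.  The descriptor and the physical buffer address are
+ * assumed to have been set up by the caller.
+ */
+#if 0 /* example only */
+static void cpm2_bd_xmit(cbd_t *bdp, uint paddr, ushort len)
+{
+ bdp->cbd_bufaddr = paddr; /* buffer in host memory */
+ bdp->cbd_datlen = len; /* bytes to send */
+ /* last buffer of the frame, interrupt when done, now ready */
+ bdp->cbd_sc |= BD_SC_LAST | BD_SC_INTRPT | BD_SC_READY;
+}
+#endif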
+
+/* Function code bits, usually generic to devices.
+*/
+#define CPMFCR_GBL ((u_char)0x20) /* Set memory snooping */
+#define CPMFCR_EB ((u_char)0x10) /* Set big endian byte order */
+#define CPMFCR_TC2 ((u_char)0x04) /* Transfer code 2 value */
+#define CPMFCR_DTB ((u_char)0x02) /* Use local bus for data when set */
+#define CPMFCR_BDB ((u_char)0x01) /* Use local bus for BD when set */
+
+/* Parameter RAM offsets from the base.
+*/
+#define PROFF_SCC1 ((uint)0x8000)
+#define PROFF_SCC2 ((uint)0x8100)
+#define PROFF_SCC3 ((uint)0x8200)
+#define PROFF_SCC4 ((uint)0x8300)
+#define PROFF_FCC1 ((uint)0x8400)
+#define PROFF_FCC2 ((uint)0x8500)
+#define PROFF_FCC3 ((uint)0x8600)
+#define PROFF_MCC1 ((uint)0x8700)
+#define PROFF_SMC1_BASE ((uint)0x87fc)
+#define PROFF_IDMA1_BASE ((uint)0x87fe)
+#define PROFF_MCC2 ((uint)0x8800)
+#define PROFF_SMC2_BASE ((uint)0x88fc)
+#define PROFF_IDMA2_BASE ((uint)0x88fe)
+#define PROFF_SPI_BASE ((uint)0x89fc)
+#define PROFF_IDMA3_BASE ((uint)0x89fe)
+#define PROFF_TIMERS ((uint)0x8ae0)
+#define PROFF_REVNUM ((uint)0x8af0)
+#define PROFF_RAND ((uint)0x8af8)
+#define PROFF_I2C_BASE ((uint)0x8afc)
+#define PROFF_IDMA4_BASE ((uint)0x8afe)
+
+/* The SMCs are relocated to any of the first eight DPRAM pages.
+ * We will fix these at the first locations of DPRAM, until we
+ * get some microcode patches :-).
+ * The parameter ram space for the SMCs is fifty-some bytes, and
+ * they are required to start on a 64 byte boundary.
+ */
+#define PROFF_SMC1 (0)
+#define PROFF_SMC2 (64)
+
+
+/* Define enough so I can at least use the serial port as a UART.
+ */
+typedef struct smc_uart {
+ ushort smc_rbase; /* Rx Buffer descriptor base address */
+ ushort smc_tbase; /* Tx Buffer descriptor base address */
+ u_char smc_rfcr; /* Rx function code */
+ u_char smc_tfcr; /* Tx function code */
+ ushort smc_mrblr; /* Max receive buffer length */
+ uint smc_rstate; /* Internal */
+ uint smc_idp; /* Internal */
+ ushort smc_rbptr; /* Internal */
+ ushort smc_ibc; /* Internal */
+ uint smc_rxtmp; /* Internal */
+ uint smc_tstate; /* Internal */
+ uint smc_tdp; /* Internal */
+ ushort smc_tbptr; /* Internal */
+ ushort smc_tbc; /* Internal */
+ uint smc_txtmp; /* Internal */
+ ushort smc_maxidl; /* Maximum idle characters */
+ ushort smc_tmpidl; /* Temporary idle counter */
+ ushort smc_brklen; /* Last received break length */
+ ushort smc_brkec; /* rcv'd break condition counter */
+ ushort smc_brkcr; /* xmt break count register */
+ ushort smc_rmask; /* Temporary bit mask */
+ uint smc_stmp; /* SDMA Temp */
+} smc_uart_t;
+
+/* SMC uart mode register (Internal memory map).
+*/
+#define SMCMR_REN ((ushort)0x0001)
+#define SMCMR_TEN ((ushort)0x0002)
+#define SMCMR_DM ((ushort)0x000c)
+#define SMCMR_SM_GCI ((ushort)0x0000)
+#define SMCMR_SM_UART ((ushort)0x0020)
+#define SMCMR_SM_TRANS ((ushort)0x0030)
+#define SMCMR_SM_MASK ((ushort)0x0030)
+#define SMCMR_PM_EVEN ((ushort)0x0100) /* Even parity, else odd */
+#define SMCMR_REVD SMCMR_PM_EVEN
+#define SMCMR_PEN ((ushort)0x0200) /* Parity enable */
+#define SMCMR_BS SMCMR_PEN
+#define SMCMR_SL ((ushort)0x0400) /* Two stops, else one */
+#define SMCR_CLEN_MASK ((ushort)0x7800) /* Character length */
+#define smcr_mk_clen(C) (((C) << 11) & SMCR_CLEN_MASK)
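+
+/*
+ * Usage sketch (illustrative only): an SMC UART mode word.  The
+ * character-length value is a placeholder -- CLEN counts start, data,
+ * parity and stop bits, so consult the CPM2 manual for the encoding
+ * matching a given line format.
+ */
+#if 0 /* example only */
+ushort smcmr = smcr_mk_clen(9) | SMCMR_SM_UART | SMCMR_TEN | SMCMR_REN;
+#endif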
+
+/* SMC Event and Mask register.
+*/
+#define SMCM_BRKE ((unsigned char)0x40) /* When in UART Mode */
+#define SMCM_BRK ((unsigned char)0x10) /* When in UART Mode */
+#define SMCM_TXE ((unsigned char)0x10)
+#define SMCM_BSY ((unsigned char)0x04)
+#define SMCM_TX ((unsigned char)0x02)
+#define SMCM_RX ((unsigned char)0x01)
+
+/* Baud rate generators.
+*/
+#define CPM_BRG_RST ((uint)0x00020000)
+#define CPM_BRG_EN ((uint)0x00010000)
+#define CPM_BRG_EXTC_INT ((uint)0x00000000)
+#define CPM_BRG_EXTC_CLK3_9 ((uint)0x00004000)
+#define CPM_BRG_EXTC_CLK5_15 ((uint)0x00008000)
+#define CPM_BRG_ATB ((uint)0x00002000)
+#define CPM_BRG_CD_MASK ((uint)0x00001ffe)
+#define CPM_BRG_DIV16 ((uint)0x00000001)
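+
+/*
+ * Usage sketch (illustrative only): a BRG configuration word.  The
+ * divider (64 here) is a placeholder -- real code derives it from the
+ * input clock and the desired rate, which is what cpm2_setbrg() and
+ * cpm2_fastbrg() above do.
+ */
+#if 0 /* example only */
+uint brgc = CPM_BRG_EN | ((64 << 1) & CPM_BRG_CD_MASK);
+#endif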
+
+/* SCCs.
+*/
+#define SCC_GSMRH_IRP ((uint)0x00040000)
+#define SCC_GSMRH_GDE ((uint)0x00010000)
+#define SCC_GSMRH_TCRC_CCITT ((uint)0x00008000)
+#define SCC_GSMRH_TCRC_BISYNC ((uint)0x00004000)
+#define SCC_GSMRH_TCRC_HDLC ((uint)0x00000000)
+#define SCC_GSMRH_REVD ((uint)0x00002000)
+#define SCC_GSMRH_TRX ((uint)0x00001000)
+#define SCC_GSMRH_TTX ((uint)0x00000800)
+#define SCC_GSMRH_CDP ((uint)0x00000400)
+#define SCC_GSMRH_CTSP ((uint)0x00000200)
+#define SCC_GSMRH_CDS ((uint)0x00000100)
+#define SCC_GSMRH_CTSS ((uint)0x00000080)
+#define SCC_GSMRH_TFL ((uint)0x00000040)
+#define SCC_GSMRH_RFW ((uint)0x00000020)
+#define SCC_GSMRH_TXSY ((uint)0x00000010)
+#define SCC_GSMRH_SYNL16 ((uint)0x0000000c)
+#define SCC_GSMRH_SYNL8 ((uint)0x00000008)
+#define SCC_GSMRH_SYNL4 ((uint)0x00000004)
+#define SCC_GSMRH_RTSM ((uint)0x00000002)
+#define SCC_GSMRH_RSYN ((uint)0x00000001)
+
+#define SCC_GSMRL_SIR ((uint)0x80000000) /* SCC2 only */
+#define SCC_GSMRL_EDGE_NONE ((uint)0x60000000)
+#define SCC_GSMRL_EDGE_NEG ((uint)0x40000000)
+#define SCC_GSMRL_EDGE_POS ((uint)0x20000000)
+#define SCC_GSMRL_EDGE_BOTH ((uint)0x00000000)
+#define SCC_GSMRL_TCI ((uint)0x10000000)
+#define SCC_GSMRL_TSNC_3 ((uint)0x0c000000)
+#define SCC_GSMRL_TSNC_4 ((uint)0x08000000)
+#define SCC_GSMRL_TSNC_14 ((uint)0x04000000)
+#define SCC_GSMRL_TSNC_INF ((uint)0x00000000)
+#define SCC_GSMRL_RINV ((uint)0x02000000)
+#define SCC_GSMRL_TINV ((uint)0x01000000)
+#define SCC_GSMRL_TPL_128 ((uint)0x00c00000)
+#define SCC_GSMRL_TPL_64 ((uint)0x00a00000)
+#define SCC_GSMRL_TPL_48 ((uint)0x00800000)
+#define SCC_GSMRL_TPL_32 ((uint)0x00600000)
+#define SCC_GSMRL_TPL_16 ((uint)0x00400000)
+#define SCC_GSMRL_TPL_8 ((uint)0x00200000)
+#define SCC_GSMRL_TPL_NONE ((uint)0x00000000)
+#define SCC_GSMRL_TPP_ALL1 ((uint)0x00180000)
+#define SCC_GSMRL_TPP_01 ((uint)0x00100000)
+#define SCC_GSMRL_TPP_10 ((uint)0x00080000)
+#define SCC_GSMRL_TPP_ZEROS ((uint)0x00000000)
+#define SCC_GSMRL_TEND ((uint)0x00040000)
+#define SCC_GSMRL_TDCR_32 ((uint)0x00030000)
+#define SCC_GSMRL_TDCR_16 ((uint)0x00020000)
+#define SCC_GSMRL_TDCR_8 ((uint)0x00010000)
+#define SCC_GSMRL_TDCR_1 ((uint)0x00000000)
+#define SCC_GSMRL_RDCR_32 ((uint)0x0000c000)
+#define SCC_GSMRL_RDCR_16 ((uint)0x00008000)
+#define SCC_GSMRL_RDCR_8 ((uint)0x00004000)
+#define SCC_GSMRL_RDCR_1 ((uint)0x00000000)
+#define SCC_GSMRL_RENC_DFMAN ((uint)0x00003000)
+#define SCC_GSMRL_RENC_MANCH ((uint)0x00002000)
+#define SCC_GSMRL_RENC_FM0 ((uint)0x00001000)
+#define SCC_GSMRL_RENC_NRZI ((uint)0x00000800)
+#define SCC_GSMRL_RENC_NRZ ((uint)0x00000000)
+#define SCC_GSMRL_TENC_DFMAN ((uint)0x00000600)
+#define SCC_GSMRL_TENC_MANCH ((uint)0x00000400)
+#define SCC_GSMRL_TENC_FM0 ((uint)0x00000200)
+#define SCC_GSMRL_TENC_NRZI ((uint)0x00000100)
+#define SCC_GSMRL_TENC_NRZ ((uint)0x00000000)
+#define SCC_GSMRL_DIAG_LE ((uint)0x000000c0) /* Loop and echo */
+#define SCC_GSMRL_DIAG_ECHO ((uint)0x00000080)
+#define SCC_GSMRL_DIAG_LOOP ((uint)0x00000040)
+#define SCC_GSMRL_DIAG_NORM ((uint)0x00000000)
+#define SCC_GSMRL_ENR ((uint)0x00000020)
+#define SCC_GSMRL_ENT ((uint)0x00000010)
+#define SCC_GSMRL_MODE_ENET ((uint)0x0000000c)
+#define SCC_GSMRL_MODE_DDCMP ((uint)0x00000009)
+#define SCC_GSMRL_MODE_BISYNC ((uint)0x00000008)
+#define SCC_GSMRL_MODE_V14 ((uint)0x00000007)
+#define SCC_GSMRL_MODE_AHDLC ((uint)0x00000006)
+#define SCC_GSMRL_MODE_PROFIBUS ((uint)0x00000005)
+#define SCC_GSMRL_MODE_UART ((uint)0x00000004)
+#define SCC_GSMRL_MODE_SS7 ((uint)0x00000003)
+#define SCC_GSMRL_MODE_ATALK ((uint)0x00000002)
+#define SCC_GSMRL_MODE_HDLC ((uint)0x00000000)
+
+#define SCC_TODR_TOD ((ushort)0x8000)
+
+/* SCC Event and Mask register.
+*/
+#define SCCM_TXE ((unsigned char)0x10)
+#define SCCM_BSY ((unsigned char)0x04)
+#define SCCM_TX ((unsigned char)0x02)
+#define SCCM_RX ((unsigned char)0x01)
+
+typedef struct scc_param {
+ ushort scc_rbase; /* Rx Buffer descriptor base address */
+ ushort scc_tbase; /* Tx Buffer descriptor base address */
+ u_char scc_rfcr; /* Rx function code */
+ u_char scc_tfcr; /* Tx function code */
+ ushort scc_mrblr; /* Max receive buffer length */
+ uint scc_rstate; /* Internal */
+ uint scc_idp; /* Internal */
+ ushort scc_rbptr; /* Internal */
+ ushort scc_ibc; /* Internal */
+ uint scc_rxtmp; /* Internal */
+ uint scc_tstate; /* Internal */
+ uint scc_tdp; /* Internal */
+ ushort scc_tbptr; /* Internal */
+ ushort scc_tbc; /* Internal */
+ uint scc_txtmp; /* Internal */
+ uint scc_rcrc; /* Internal */
+ uint scc_tcrc; /* Internal */
+} sccp_t;
+
+/* CPM Ethernet through SCC1.
+ */
+typedef struct scc_enet {
+ sccp_t sen_genscc;
+ uint sen_cpres; /* Preset CRC */
+ uint sen_cmask; /* Constant mask for CRC */
+ uint sen_crcec; /* CRC Error counter */
+ uint sen_alec; /* alignment error counter */
+ uint sen_disfc; /* discard frame counter */
+ ushort sen_pads; /* Tx short frame pad character */
+ ushort sen_retlim; /* Retry limit threshold */
+ ushort sen_retcnt; /* Retry limit counter */
+ ushort sen_maxflr; /* maximum frame length register */
+ ushort sen_minflr; /* minimum frame length register */
+ ushort sen_maxd1; /* maximum DMA1 length */
+ ushort sen_maxd2; /* maximum DMA2 length */
+ ushort sen_maxd; /* Rx max DMA */
+ ushort sen_dmacnt; /* Rx DMA counter */
+ ushort sen_maxb; /* Max BD byte count */
+ ushort sen_gaddr1; /* Group address filter */
+ ushort sen_gaddr2;
+ ushort sen_gaddr3;
+ ushort sen_gaddr4;
+ uint sen_tbuf0data0; /* Save area 0 - current frame */
+ uint sen_tbuf0data1; /* Save area 1 - current frame */
+ uint sen_tbuf0rba; /* Internal */
+ uint sen_tbuf0crc; /* Internal */
+ ushort sen_tbuf0bcnt; /* Internal */
+ ushort sen_paddrh; /* physical address (MSB) */
+ ushort sen_paddrm;
+ ushort sen_paddrl; /* physical address (LSB) */
+ ushort sen_pper; /* persistence */
+ ushort sen_rfbdptr; /* Rx first BD pointer */
+ ushort sen_tfbdptr; /* Tx first BD pointer */
+ ushort sen_tlbdptr; /* Tx last BD pointer */
+ uint sen_tbuf1data0; /* Save area 0 - current frame */
+ uint sen_tbuf1data1; /* Save area 1 - current frame */
+ uint sen_tbuf1rba; /* Internal */
+ uint sen_tbuf1crc; /* Internal */
+ ushort sen_tbuf1bcnt; /* Internal */
+ ushort sen_txlen; /* Tx Frame length counter */
+ ushort sen_iaddr1; /* Individual address filter */
+ ushort sen_iaddr2;
+ ushort sen_iaddr3;
+ ushort sen_iaddr4;
+ ushort sen_boffcnt; /* Backoff counter */
+
+ /* NOTE: Some versions of the manual have the following items
+ * incorrectly documented. Below is the proper order.
+ */
+ ushort sen_taddrh; /* temp address (MSB) */
+ ushort sen_taddrm;
+ ushort sen_taddrl; /* temp address (LSB) */
+} scc_enet_t;
+
+
+/* SCC Event register as used by Ethernet.
+*/
+#define SCCE_ENET_GRA ((ushort)0x0080) /* Graceful stop complete */
+#define SCCE_ENET_TXE ((ushort)0x0010) /* Transmit Error */
+#define SCCE_ENET_RXF ((ushort)0x0008) /* Full frame received */
+#define SCCE_ENET_BSY ((ushort)0x0004) /* All incoming buffers full */
+#define SCCE_ENET_TXB ((ushort)0x0002) /* A buffer was transmitted */
+#define SCCE_ENET_RXB ((ushort)0x0001) /* A buffer was received */
+
+/* SCC Mode Register (PSMR) as used by Ethernet.
+*/
+#define SCC_PSMR_HBC ((ushort)0x8000) /* Enable heartbeat */
+#define SCC_PSMR_FC ((ushort)0x4000) /* Force collision */
+#define SCC_PSMR_RSH ((ushort)0x2000) /* Receive short frames */
+#define SCC_PSMR_IAM ((ushort)0x1000) /* Check individual hash */
+#define SCC_PSMR_ENCRC ((ushort)0x0800) /* Ethernet CRC mode */
+#define SCC_PSMR_PRO ((ushort)0x0200) /* Promiscuous mode */
+#define SCC_PSMR_BRO ((ushort)0x0100) /* Catch broadcast pkts */
+#define SCC_PSMR_SBT ((ushort)0x0080) /* Special backoff timer */
+#define SCC_PSMR_LPB ((ushort)0x0040) /* Set Loopback mode */
+#define SCC_PSMR_SIP ((ushort)0x0020) /* Sample Input Pins */
+#define SCC_PSMR_LCW ((ushort)0x0010) /* Late collision window */
+#define SCC_PSMR_NIB22 ((ushort)0x000a) /* Start frame search */
+#define SCC_PSMR_FDE ((ushort)0x0001) /* Full duplex enable */
+
+/* Buffer descriptor control/status used by Ethernet receive.
+ * Common to SCC and FCC.
+ */
+#define BD_ENET_RX_EMPTY ((ushort)0x8000)
+#define BD_ENET_RX_WRAP ((ushort)0x2000)
+#define BD_ENET_RX_INTR ((ushort)0x1000)
+#define BD_ENET_RX_LAST ((ushort)0x0800)
+#define BD_ENET_RX_FIRST ((ushort)0x0400)
+#define BD_ENET_RX_MISS ((ushort)0x0100)
+#define BD_ENET_RX_BC ((ushort)0x0080) /* FCC Only */
+#define BD_ENET_RX_MC ((ushort)0x0040) /* FCC Only */
+#define BD_ENET_RX_LG ((ushort)0x0020)
+#define BD_ENET_RX_NO ((ushort)0x0010)
+#define BD_ENET_RX_SH ((ushort)0x0008)
+#define BD_ENET_RX_CR ((ushort)0x0004)
+#define BD_ENET_RX_OV ((ushort)0x0002)
+#define BD_ENET_RX_CL ((ushort)0x0001)
+#define BD_ENET_RX_STATS ((ushort)0x01ff) /* All status bits */
+
+/* Buffer descriptor control/status used by Ethernet transmit.
+ * Common to SCC and FCC.
+ */
+#define BD_ENET_TX_READY ((ushort)0x8000)
+#define BD_ENET_TX_PAD ((ushort)0x4000)
+#define BD_ENET_TX_WRAP ((ushort)0x2000)
+#define BD_ENET_TX_INTR ((ushort)0x1000)
+#define BD_ENET_TX_LAST ((ushort)0x0800)
+#define BD_ENET_TX_TC ((ushort)0x0400)
+#define BD_ENET_TX_DEF ((ushort)0x0200)
+#define BD_ENET_TX_HB ((ushort)0x0100)
+#define BD_ENET_TX_LC ((ushort)0x0080)
+#define BD_ENET_TX_RL ((ushort)0x0040)
+#define BD_ENET_TX_RCMASK ((ushort)0x003c)
+#define BD_ENET_TX_UN ((ushort)0x0002)
+#define BD_ENET_TX_CSL ((ushort)0x0001)
+#define BD_ENET_TX_STATS ((ushort)0x03ff) /* All status bits */
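+
+/*
+ * Usage sketch (illustrative only): deciding whether a received frame
+ * is usable.  A frame is good once the CPM has filled the descriptor
+ * (EMPTY clear) and none of the error status bits are set.
+ */
+#if 0 /* example only */
+static int rx_bd_frame_ok(cbd_t *bdp)
+{
+ return !(bdp->cbd_sc & BD_ENET_RX_EMPTY) &&
+        !(bdp->cbd_sc & (BD_ENET_RX_LG | BD_ENET_RX_NO |
+                         BD_ENET_RX_SH | BD_ENET_RX_CR |
+                         BD_ENET_RX_OV | BD_ENET_RX_CL));
+}
+#endif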
+
+/* SCC as UART
+*/
+typedef struct scc_uart {
+ sccp_t scc_genscc;
+ uint scc_res1; /* Reserved */
+ uint scc_res2; /* Reserved */
+ ushort scc_maxidl; /* Maximum idle chars */
+ ushort scc_idlc; /* temp idle counter */
+ ushort scc_brkcr; /* Break count register */
+ ushort scc_parec; /* receive parity error counter */
+ ushort scc_frmec; /* receive framing error counter */
+ ushort scc_nosec; /* receive noise counter */
+ ushort scc_brkec; /* receive break condition counter */
+ ushort scc_brkln; /* last received break length */
+ ushort scc_uaddr1; /* UART address character 1 */
+ ushort scc_uaddr2; /* UART address character 2 */
+ ushort scc_rtemp; /* Temp storage */
+ ushort scc_toseq; /* Transmit out of sequence char */
+ ushort scc_char1; /* control character 1 */
+ ushort scc_char2; /* control character 2 */
+ ushort scc_char3; /* control character 3 */
+ ushort scc_char4; /* control character 4 */
+ ushort scc_char5; /* control character 5 */
+ ushort scc_char6; /* control character 6 */
+ ushort scc_char7; /* control character 7 */
+ ushort scc_char8; /* control character 8 */
+ ushort scc_rccm; /* receive control character mask */
+ ushort scc_rccr; /* receive control character register */
+ ushort scc_rlbc; /* receive last break character */
+} scc_uart_t;
+
+/* SCC Event and Mask registers when it is used as a UART.
+*/
+#define UART_SCCM_GLR ((ushort)0x1000)
+#define UART_SCCM_GLT ((ushort)0x0800)
+#define UART_SCCM_AB ((ushort)0x0200)
+#define UART_SCCM_IDL ((ushort)0x0100)
+#define UART_SCCM_GRA ((ushort)0x0080)
+#define UART_SCCM_BRKE ((ushort)0x0040)
+#define UART_SCCM_BRKS ((ushort)0x0020)
+#define UART_SCCM_CCR ((ushort)0x0008)
+#define UART_SCCM_BSY ((ushort)0x0004)
+#define UART_SCCM_TX ((ushort)0x0002)
+#define UART_SCCM_RX ((ushort)0x0001)
+
+/* The SCC PSMR when used as a UART.
+*/
+#define SCU_PSMR_FLC ((ushort)0x8000)
+#define SCU_PSMR_SL ((ushort)0x4000)
+#define SCU_PSMR_CL ((ushort)0x3000)
+#define SCU_PSMR_UM ((ushort)0x0c00)
+#define SCU_PSMR_FRZ ((ushort)0x0200)
+#define SCU_PSMR_RZS ((ushort)0x0100)
+#define SCU_PSMR_SYN ((ushort)0x0080)
+#define SCU_PSMR_DRT ((ushort)0x0040)
+#define SCU_PSMR_PEN ((ushort)0x0010)
+#define SCU_PSMR_RPM ((ushort)0x000c)
+#define SCU_PSMR_REVP ((ushort)0x0008)
+#define SCU_PSMR_TPM ((ushort)0x0003)
+#define SCU_PSMR_TEVP ((ushort)0x0002)
+
+/* CPM Transparent mode SCC.
+ */
+typedef struct scc_trans {
+ sccp_t st_genscc;
+ uint st_cpres; /* Preset CRC */
+ uint st_cmask; /* Constant mask for CRC */
+} scc_trans_t;
+
+#define BD_SCC_TX_LAST ((ushort)0x0800)
+
+/* How about some FCCs.....
+*/
+#define FCC_GFMR_DIAG_NORM ((uint)0x00000000)
+#define FCC_GFMR_DIAG_LE ((uint)0x40000000)
+#define FCC_GFMR_DIAG_AE ((uint)0x80000000)
+#define FCC_GFMR_DIAG_ALE ((uint)0xc0000000)
+#define FCC_GFMR_TCI ((uint)0x20000000)
+#define FCC_GFMR_TRX ((uint)0x10000000)
+#define FCC_GFMR_TTX ((uint)0x08000000)
+#define FCC_GFMR_CDP ((uint)0x04000000)
+#define FCC_GFMR_CTSP ((uint)0x02000000)
+#define FCC_GFMR_CDS ((uint)0x01000000)
+#define FCC_GFMR_CTSS ((uint)0x00800000)
+#define FCC_GFMR_SYNL_NONE ((uint)0x00000000)
+#define FCC_GFMR_SYNL_AUTO ((uint)0x00004000)
+#define FCC_GFMR_SYNL_8 ((uint)0x00008000)
+#define FCC_GFMR_SYNL_16 ((uint)0x0000c000)
+#define FCC_GFMR_RTSM ((uint)0x00002000)
+#define FCC_GFMR_RENC_NRZ ((uint)0x00000000)
+#define FCC_GFMR_RENC_NRZI ((uint)0x00000800)
+#define FCC_GFMR_REVD ((uint)0x00000400)
+#define FCC_GFMR_TENC_NRZ ((uint)0x00000000)
+#define FCC_GFMR_TENC_NRZI ((uint)0x00000100)
+#define FCC_GFMR_TCRC_16 ((uint)0x00000000)
+#define FCC_GFMR_TCRC_32 ((uint)0x00000080)
+#define FCC_GFMR_ENR ((uint)0x00000020)
+#define FCC_GFMR_ENT ((uint)0x00000010)
+#define FCC_GFMR_MODE_ENET ((uint)0x0000000c)
+#define FCC_GFMR_MODE_ATM ((uint)0x0000000a)
+#define FCC_GFMR_MODE_HDLC ((uint)0x00000000)
+
+/* Generic FCC parameter ram.
+*/
+typedef struct fcc_param {
+ ushort fcc_riptr; /* Rx Internal temp pointer */
+ ushort fcc_tiptr; /* Tx Internal temp pointer */
+ ushort fcc_res1;
+ ushort fcc_mrblr; /* Max receive buffer length, mod 32 bytes */
+ uint fcc_rstate; /* Upper byte is Func code, must be set */
+ uint fcc_rbase; /* Receive BD base */
+ ushort fcc_rbdstat; /* RxBD status */
+ ushort fcc_rbdlen; /* RxBD down counter */
+ uint fcc_rdptr; /* RxBD internal data pointer */
+ uint fcc_tstate; /* Upper byte is Func code, must be set */
+ uint fcc_tbase; /* Transmit BD base */
+ ushort fcc_tbdstat; /* TxBD status */
+ ushort fcc_tbdlen; /* TxBD down counter */
+ uint fcc_tdptr; /* TxBD internal data pointer */
+ uint fcc_rbptr; /* Rx BD Internal buf pointer */
+ uint fcc_tbptr; /* Tx BD Internal buf pointer */
+ uint fcc_rcrc; /* Rx temp CRC */
+ uint fcc_res2;
+ uint fcc_tcrc; /* Tx temp CRC */
+} fccp_t;
+
+
+/* Ethernet controller through FCC.
+*/
+typedef struct fcc_enet {
+ fccp_t fen_genfcc;
+ uint fen_statbuf; /* Internal status buffer */
+ uint fen_camptr; /* CAM address */
+ uint fen_cmask; /* Constant mask for CRC */
+ uint fen_cpres; /* Preset CRC */
+ uint fen_crcec; /* CRC Error counter */
+ uint fen_alec; /* alignment error counter */
+ uint fen_disfc; /* discard frame counter */
+ ushort fen_retlim; /* Retry limit */
+ ushort fen_retcnt; /* Retry counter */
+ ushort fen_pper; /* Persistence */
+ ushort fen_boffcnt; /* backoff counter */
+ uint fen_gaddrh; /* Group address filter, high 32-bits */
+ uint fen_gaddrl; /* Group address filter, low 32-bits */
+ ushort fen_tfcstat; /* out of sequence TxBD */
+ ushort fen_tfclen;
+ uint fen_tfcptr;
+ ushort fen_mflr; /* Maximum frame length (1518) */
+ ushort fen_paddrh; /* MAC address */
+ ushort fen_paddrm;
+ ushort fen_paddrl;
+ ushort fen_ibdcount; /* Internal BD counter */
+ ushort fen_ibdstart; /* Internal BD start pointer */
+ ushort fen_ibdend; /* Internal BD end pointer */
+ ushort fen_txlen; /* Internal Tx frame length counter */
+ uint fen_ibdbase[8]; /* Internal use */
+ uint fen_iaddrh; /* Individual address filter */
+ uint fen_iaddrl;
+ ushort fen_minflr; /* Minimum frame length (64) */
+ ushort fen_taddrh; /* Filter transfer MAC address */
+ ushort fen_taddrm;
+ ushort fen_taddrl;
+ ushort fen_padptr; /* Pointer to pad byte buffer */
+ ushort fen_cftype; /* control frame type */
+ ushort fen_cfrange; /* control frame range */
+ ushort fen_maxb; /* maximum BD count */
+ ushort fen_maxd1; /* Max DMA1 length (1520) */
+ ushort fen_maxd2; /* Max DMA2 length (1520) */
+ ushort fen_maxd; /* internal max DMA count */
+ ushort fen_dmacnt; /* internal DMA counter */
+ uint fen_octc; /* Total octet counter */
+ uint fen_colc; /* Total collision counter */
+ uint fen_broc; /* Total broadcast packet counter */
+ uint fen_mulc; /* Total multicast packet count */
+ uint fen_uspc; /* Total packets < 64 bytes */
+ uint fen_frgc; /* Total packets < 64 bytes with errors */
+ uint fen_ospc; /* Total packets > 1518 */
+ uint fen_jbrc; /* Total packets > 1518 with errors */
+ uint fen_p64c; /* Total packets == 64 bytes */
+ uint fen_p65c; /* Total packets 64 < bytes <= 127 */
+ uint fen_p128c; /* Total packets 127 < bytes <= 255 */
+ uint fen_p256c; /* Total packets 256 < bytes <= 511 */
+ uint fen_p512c; /* Total packets 512 < bytes <= 1023 */
+ uint fen_p1024c; /* Total packets 1024 < bytes <= 1518 */
+ uint fen_cambuf; /* Internal CAM buffer pointer */
+ ushort fen_rfthr; /* Received frames threshold */
+ ushort fen_rfcnt; /* Received frames count */
+} fcc_enet_t;
+
+/* FCC Event/Mask register as used by Ethernet.
+*/
+#define FCC_ENET_GRA ((ushort)0x0080) /* Graceful stop complete */
+#define FCC_ENET_RXC ((ushort)0x0040) /* Control Frame Received */
+#define FCC_ENET_TXC ((ushort)0x0020) /* Out of seq. Tx sent */
+#define FCC_ENET_TXE ((ushort)0x0010) /* Transmit Error */
+#define FCC_ENET_RXF ((ushort)0x0008) /* Full frame received */
+#define FCC_ENET_BSY ((ushort)0x0004) /* Busy. Rx Frame dropped */
+#define FCC_ENET_TXB ((ushort)0x0002) /* A buffer was transmitted */
+#define FCC_ENET_RXB ((ushort)0x0001) /* A buffer was received */
+
+/* FCC Mode Register (FPSMR) as used by Ethernet.
+*/
+#define FCC_PSMR_HBC ((uint)0x80000000) /* Enable heartbeat */
+#define FCC_PSMR_FC ((uint)0x40000000) /* Force Collision */
+#define FCC_PSMR_SBT ((uint)0x20000000) /* Stop backoff timer */
+#define FCC_PSMR_LPB ((uint)0x10000000) /* Local protect. 1 = FDX */
+#define FCC_PSMR_LCW ((uint)0x08000000) /* Late collision select */
+#define FCC_PSMR_FDE ((uint)0x04000000) /* Full Duplex Enable */
+#define FCC_PSMR_MON ((uint)0x02000000) /* RMON Enable */
+#define FCC_PSMR_PRO ((uint)0x00400000) /* Promiscuous Enable */
+#define FCC_PSMR_FCE ((uint)0x00200000) /* Flow Control Enable */
+#define FCC_PSMR_RSH ((uint)0x00100000) /* Receive Short Frames */
+#define FCC_PSMR_CAM ((uint)0x00000400) /* CAM enable */
+#define FCC_PSMR_BRO ((uint)0x00000200) /* Broadcast pkt discard */
+#define FCC_PSMR_ENCRC ((uint)0x00000080) /* Use 32-bit CRC */
+
+/* IIC parameter RAM.
+*/
+typedef struct iic {
+ ushort iic_rbase; /* Rx Buffer descriptor base address */
+ ushort iic_tbase; /* Tx Buffer descriptor base address */
+ u_char iic_rfcr; /* Rx function code */
+ u_char iic_tfcr; /* Tx function code */
+ ushort iic_mrblr; /* Max receive buffer length */
+ uint iic_rstate; /* Internal */
+ uint iic_rdp; /* Internal */
+ ushort iic_rbptr; /* Internal */
+ ushort iic_rbc; /* Internal */
+ uint iic_rxtmp; /* Internal */
+ uint iic_tstate; /* Internal */
+ uint iic_tdp; /* Internal */
+ ushort iic_tbptr; /* Internal */
+ ushort iic_tbc; /* Internal */
+ uint iic_txtmp; /* Internal */
+} iic_t;
+
+/* SPI parameter RAM.
+*/
+typedef struct spi {
+ ushort spi_rbase; /* Rx Buffer descriptor base address */
+ ushort spi_tbase; /* Tx Buffer descriptor base address */
+ u_char spi_rfcr; /* Rx function code */
+ u_char spi_tfcr; /* Tx function code */
+ ushort spi_mrblr; /* Max receive buffer length */
+ uint spi_rstate; /* Internal */
+ uint spi_rdp; /* Internal */
+ ushort spi_rbptr; /* Internal */
+ ushort spi_rbc; /* Internal */
+ uint spi_rxtmp; /* Internal */
+ uint spi_tstate; /* Internal */
+ uint spi_tdp; /* Internal */
+ ushort spi_tbptr; /* Internal */
+ ushort spi_tbc; /* Internal */
+ uint spi_txtmp; /* Internal */
+ uint spi_res; /* Tx temp. */
+ uint spi_res1[4]; /* SDMA temp. */
+} spi_t;
+
+/* SPI Mode register.
+*/
+#define SPMODE_LOOP ((ushort)0x4000) /* Loopback */
+#define SPMODE_CI ((ushort)0x2000) /* Clock Invert */
+#define SPMODE_CP ((ushort)0x1000) /* Clock Phase */
+#define SPMODE_DIV16 ((ushort)0x0800) /* BRG/16 mode */
+#define SPMODE_REV ((ushort)0x0400) /* Reversed Data */
+#define SPMODE_MSTR ((ushort)0x0200) /* SPI Master */
+#define SPMODE_EN ((ushort)0x0100) /* Enable */
+#define SPMODE_LENMSK ((ushort)0x00f0) /* character length */
+#define SPMODE_PMMSK ((ushort)0x000f) /* prescale modulus */
+
+#define SPMODE_LEN(x) ((((x)-1)&0xF)<<4)
+#define SPMODE_PM(x) ((x) &0xF)
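+
+/*
+ * Usage sketch (illustrative only): an SPI master mode word for 8-bit
+ * characters.  The prescale modulus (3) is a placeholder chosen for
+ * the example.
+ */
+#if 0 /* example only */
+ushort spmode = SPMODE_EN | SPMODE_MSTR | SPMODE_LEN(8) | SPMODE_PM(3);
+#endif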
+
+#define SPI_EB ((u_char)0x10) /* big endian byte order */
+
+#define BD_IIC_START ((ushort)0x0400)
+
+/* IDMA parameter RAM
+*/
+typedef struct idma {
+ ushort ibase; /* IDMA buffer descriptor table base address */
+ ushort dcm; /* DMA channel mode */
+ ushort ibdptr; /* IDMA current buffer descriptor pointer */
+ ushort dpr_buf; /* IDMA transfer buffer base address */
+ ushort buf_inv; /* internal buffer inventory */
+ ushort ss_max; /* steady-state maximum transfer size */
+ ushort dpr_in_ptr; /* write pointer inside the internal buffer */
+ ushort sts; /* source transfer size */
+ ushort dpr_out_ptr; /* read pointer inside the internal buffer */
+ ushort seob; /* source end of burst */
+ ushort deob; /* destination end of burst */
+ ushort dts; /* destination transfer size */
+ ushort ret_add; /* return address when working in ERM=1 mode */
+ ushort res0; /* reserved */
+ uint bd_cnt; /* internal byte count */
+ uint s_ptr; /* source internal data pointer */
+ uint d_ptr; /* destination internal data pointer */
+ uint istate; /* internal state */
+ u_char res1[20]; /* pad to 64-byte length */
+} idma_t;
+
+/* DMA channel mode bit fields
+*/
+#define IDMA_DCM_FB ((ushort)0x8000) /* fly-by mode */
+#define IDMA_DCM_LP ((ushort)0x4000) /* low priority */
+#define IDMA_DCM_TC2 ((ushort)0x0400) /* value driven on TC[2] */
+#define IDMA_DCM_DMA_WRAP_MASK ((ushort)0x01c0) /* mask for DMA wrap */
+#define IDMA_DCM_DMA_WRAP_64 ((ushort)0x0000) /* 64-byte DMA xfer buffer */
+#define IDMA_DCM_DMA_WRAP_128 ((ushort)0x0040) /* 128-byte DMA xfer buffer */
+#define IDMA_DCM_DMA_WRAP_256 ((ushort)0x0080) /* 256-byte DMA xfer buffer */
+#define IDMA_DCM_DMA_WRAP_512 ((ushort)0x00c0) /* 512-byte DMA xfer buffer */
+#define IDMA_DCM_DMA_WRAP_1024 ((ushort)0x0100) /* 1024-byte DMA xfer buffer */
+#define IDMA_DCM_DMA_WRAP_2048 ((ushort)0x0140) /* 2048-byte DMA xfer buffer */
+#define IDMA_DCM_SINC ((ushort)0x0020) /* source inc addr */
+#define IDMA_DCM_DINC ((ushort)0x0010) /* destination inc addr */
+#define IDMA_DCM_ERM ((ushort)0x0008) /* external request mode */
+#define IDMA_DCM_DT ((ushort)0x0004) /* DONE treatment */
+#define IDMA_DCM_SD_MASK ((ushort)0x0003) /* mask for SD bit field */
+#define IDMA_DCM_SD_MEM2MEM ((ushort)0x0000) /* memory-to-memory xfer */
+#define IDMA_DCM_SD_PER2MEM ((ushort)0x0002) /* peripheral-to-memory xfer */
+#define IDMA_DCM_SD_MEM2PER ((ushort)0x0001) /* memory-to-peripheral xfer */
+
+/* IDMA Buffer Descriptors
+*/
+typedef struct idma_bd {
+ uint flags;
+ uint len; /* data length */
+ uint src; /* source data buffer pointer */
+ uint dst; /* destination data buffer pointer */
+} idma_bd_t;
+
+/* IDMA buffer descriptor flag bit fields
+*/
+#define IDMA_BD_V ((uint)0x80000000) /* valid */
+#define IDMA_BD_W ((uint)0x20000000) /* wrap */
+#define IDMA_BD_I ((uint)0x10000000) /* interrupt */
+#define IDMA_BD_L ((uint)0x08000000) /* last */
+#define IDMA_BD_CM ((uint)0x02000000) /* continuous mode */
+#define IDMA_BD_SDN ((uint)0x00400000) /* source done */
+#define IDMA_BD_DDN ((uint)0x00200000) /* destination done */
+#define IDMA_BD_DGBL ((uint)0x00100000) /* destination global */
+#define IDMA_BD_DBO_LE ((uint)0x00040000) /* little-end dest byte order */
+#define IDMA_BD_DBO_BE ((uint)0x00080000) /* big-end dest byte order */
+#define IDMA_BD_DDTB ((uint)0x00010000) /* destination data bus */
+#define IDMA_BD_SGBL ((uint)0x00002000) /* source global */
+#define IDMA_BD_SBO_LE ((uint)0x00000800) /* little-end src byte order */
+#define IDMA_BD_SBO_BE ((uint)0x00001000) /* big-end src byte order */
+#define IDMA_BD_SDTB ((uint)0x00000200) /* source data bus */
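+
+/*
+ * Usage sketch (illustrative only): a single memory-to-memory IDMA
+ * descriptor that interrupts on completion and wraps the ring.
+ */
+#if 0 /* example only */
+static void idma_bd_init(idma_bd_t *bd, uint src, uint dst, uint len)
+{
+ bd->src = src; /* source address */
+ bd->dst = dst; /* destination address */
+ bd->len = len; /* bytes to transfer */
+ bd->flags = IDMA_BD_V | IDMA_BD_L | IDMA_BD_I | IDMA_BD_W;
+}
+#endif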
+
+/* per-channel IDMA registers
+*/
+typedef struct im_idma {
+ u_char idsr; /* IDMAn event status register */
+ u_char res0[3];
+ u_char idmr; /* IDMAn event mask register */
+ u_char res1[3];
+} im_idma_t;
+
+/* IDMA event register bit fields
+*/
+#define IDMA_EVENT_SC ((unsigned char)0x08) /* stop completed */
+#define IDMA_EVENT_OB ((unsigned char)0x04) /* out of buffers */
+#define IDMA_EVENT_EDN ((unsigned char)0x02) /* external DONE asserted */
+#define IDMA_EVENT_BC ((unsigned char)0x01) /* buffer descriptor complete */
+
+/* RISC Controller Configuration Register (RCCR) bit fields
+*/
+#define RCCR_TIME ((uint)0x80000000) /* timer enable */
+#define RCCR_TIMEP_MASK ((uint)0x3f000000) /* mask for timer period bit field */
+#define RCCR_DR0M ((uint)0x00800000) /* IDMA0 request mode */
+#define RCCR_DR1M ((uint)0x00400000) /* IDMA1 request mode */
+#define RCCR_DR2M ((uint)0x00000080) /* IDMA2 request mode */
+#define RCCR_DR3M ((uint)0x00000040) /* IDMA3 request mode */
+#define RCCR_DR0QP_MASK ((uint)0x00300000) /* mask for IDMA0 req priority */
+#define RCCR_DR0QP_HIGH ((uint)0x00000000) /* IDMA0 has high req priority */
+#define RCCR_DR0QP_MED ((uint)0x00100000) /* IDMA0 has medium req priority */
+#define RCCR_DR0QP_LOW ((uint)0x00200000) /* IDMA0 has low req priority */
+#define RCCR_DR1QP_MASK ((uint)0x00030000) /* mask for IDMA1 req priority */
+#define RCCR_DR1QP_HIGH ((uint)0x00000000) /* IDMA1 has high req priority */
+#define RCCR_DR1QP_MED ((uint)0x00010000) /* IDMA1 has medium req priority */
+#define RCCR_DR1QP_LOW ((uint)0x00020000) /* IDMA1 has low req priority */
+#define RCCR_DR2QP_MASK ((uint)0x00000030) /* mask for IDMA2 req priority */
+#define RCCR_DR2QP_HIGH ((uint)0x00000000) /* IDMA2 has high req priority */
+#define RCCR_DR2QP_MED ((uint)0x00000010) /* IDMA2 has medium req priority */
+#define RCCR_DR2QP_LOW ((uint)0x00000020) /* IDMA2 has low req priority */
+#define RCCR_DR3QP_MASK ((uint)0x00000003) /* mask for IDMA3 req priority */
+#define RCCR_DR3QP_HIGH ((uint)0x00000000) /* IDMA3 has high req priority */
+#define RCCR_DR3QP_MED ((uint)0x00000001) /* IDMA3 has medium req priority */
+#define RCCR_DR3QP_LOW ((uint)0x00000002) /* IDMA3 has low req priority */
+#define RCCR_EIE ((uint)0x00080000) /* external interrupt enable */
+#define RCCR_SCD ((uint)0x00040000) /* scheduler configuration */
+#define RCCR_ERAM_MASK ((uint)0x0000e000) /* mask for enable RAM microcode */
+#define RCCR_ERAM_0KB ((uint)0x00000000) /* use 0KB of dpram for microcode */
+#define RCCR_ERAM_2KB ((uint)0x00002000) /* use 2KB of dpram for microcode */
+#define RCCR_ERAM_4KB ((uint)0x00004000) /* use 4KB of dpram for microcode */
+#define RCCR_ERAM_6KB ((uint)0x00006000) /* use 6KB of dpram for microcode */
+#define RCCR_ERAM_8KB ((uint)0x00008000) /* use 8KB of dpram for microcode */
+#define RCCR_ERAM_10KB ((uint)0x0000a000) /* use 10KB of dpram for microcode */
+#define RCCR_ERAM_12KB ((uint)0x0000c000) /* use 12KB of dpram for microcode */
+#define RCCR_EDM0 ((uint)0x00000800) /* DREQ0 edge detect mode */
+#define RCCR_EDM1 ((uint)0x00000400) /* DREQ1 edge detect mode */
+#define RCCR_EDM2 ((uint)0x00000200) /* DREQ2 edge detect mode */
+#define RCCR_EDM3 ((uint)0x00000100) /* DREQ3 edge detect mode */
+#define RCCR_DEM01 ((uint)0x00000008) /* DONE0/DONE1 edge detect mode */
+#define RCCR_DEM23 ((uint)0x00000004) /* DONE2/DONE3 edge detect mode */
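+/* A sketch of composing RCCR (editorial; leaves all other fields zero):
+ * enabling the RISC timer and giving IDMA0 medium request priority would be
+ *
+ *	uint rccr = RCCR_TIME | RCCR_DR0QP_MED;
+ */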
+
+/*-----------------------------------------------------------------------
+ * CMXFCR - CMX FCC Clock Route Register
+ */
+#define CMXFCR_FC1 0x40000000 /* FCC1 connection */
+#define CMXFCR_RF1CS_MSK 0x38000000 /* Receive FCC1 Clock Source Mask */
+#define CMXFCR_TF1CS_MSK 0x07000000 /* Transmit FCC1 Clock Source Mask */
+#define CMXFCR_FC2 0x00400000 /* FCC2 connection */
+#define CMXFCR_RF2CS_MSK 0x00380000 /* Receive FCC2 Clock Source Mask */
+#define CMXFCR_TF2CS_MSK 0x00070000 /* Transmit FCC2 Clock Source Mask */
+#define CMXFCR_FC3 0x00004000 /* FCC3 connection */
+#define CMXFCR_RF3CS_MSK 0x00003800 /* Receive FCC3 Clock Source Mask */
+#define CMXFCR_TF3CS_MSK 0x00000700 /* Transmit FCC3 Clock Source Mask */
+
+#define CMXFCR_RF1CS_BRG5 0x00000000 /* Receive FCC1 Clock Source is BRG5 */
+#define CMXFCR_RF1CS_BRG6 0x08000000 /* Receive FCC1 Clock Source is BRG6 */
+#define CMXFCR_RF1CS_BRG7 0x10000000 /* Receive FCC1 Clock Source is BRG7 */
+#define CMXFCR_RF1CS_BRG8 0x18000000 /* Receive FCC1 Clock Source is BRG8 */
+#define CMXFCR_RF1CS_CLK9 0x20000000 /* Receive FCC1 Clock Source is CLK9 */
+#define CMXFCR_RF1CS_CLK10 0x28000000 /* Receive FCC1 Clock Source is CLK10 */
+#define CMXFCR_RF1CS_CLK11 0x30000000 /* Receive FCC1 Clock Source is CLK11 */
+#define CMXFCR_RF1CS_CLK12 0x38000000 /* Receive FCC1 Clock Source is CLK12 */
+
+#define CMXFCR_TF1CS_BRG5 0x00000000 /* Transmit FCC1 Clock Source is BRG5 */
+#define CMXFCR_TF1CS_BRG6 0x01000000 /* Transmit FCC1 Clock Source is BRG6 */
+#define CMXFCR_TF1CS_BRG7 0x02000000 /* Transmit FCC1 Clock Source is BRG7 */
+#define CMXFCR_TF1CS_BRG8 0x03000000 /* Transmit FCC1 Clock Source is BRG8 */
+#define CMXFCR_TF1CS_CLK9 0x04000000 /* Transmit FCC1 Clock Source is CLK9 */
+#define CMXFCR_TF1CS_CLK10 0x05000000 /* Transmit FCC1 Clock Source is CLK10 */
+#define CMXFCR_TF1CS_CLK11 0x06000000 /* Transmit FCC1 Clock Source is CLK11 */
+#define CMXFCR_TF1CS_CLK12 0x07000000 /* Transmit FCC1 Clock Source is CLK12 */
+
+#define CMXFCR_RF2CS_BRG5 0x00000000 /* Receive FCC2 Clock Source is BRG5 */
+#define CMXFCR_RF2CS_BRG6 0x00080000 /* Receive FCC2 Clock Source is BRG6 */
+#define CMXFCR_RF2CS_BRG7 0x00100000 /* Receive FCC2 Clock Source is BRG7 */
+#define CMXFCR_RF2CS_BRG8 0x00180000 /* Receive FCC2 Clock Source is BRG8 */
+#define CMXFCR_RF2CS_CLK13 0x00200000 /* Receive FCC2 Clock Source is CLK13 */
+#define CMXFCR_RF2CS_CLK14 0x00280000 /* Receive FCC2 Clock Source is CLK14 */
+#define CMXFCR_RF2CS_CLK15 0x00300000 /* Receive FCC2 Clock Source is CLK15 */
+#define CMXFCR_RF2CS_CLK16 0x00380000 /* Receive FCC2 Clock Source is CLK16 */
+
+#define CMXFCR_TF2CS_BRG5 0x00000000 /* Transmit FCC2 Clock Source is BRG5 */
+#define CMXFCR_TF2CS_BRG6 0x00010000 /* Transmit FCC2 Clock Source is BRG6 */
+#define CMXFCR_TF2CS_BRG7 0x00020000 /* Transmit FCC2 Clock Source is BRG7 */
+#define CMXFCR_TF2CS_BRG8 0x00030000 /* Transmit FCC2 Clock Source is BRG8 */
+#define CMXFCR_TF2CS_CLK13 0x00040000 /* Transmit FCC2 Clock Source is CLK13 */
+#define CMXFCR_TF2CS_CLK14 0x00050000 /* Transmit FCC2 Clock Source is CLK14 */
+#define CMXFCR_TF2CS_CLK15 0x00060000 /* Transmit FCC2 Clock Source is CLK15 */
+#define CMXFCR_TF2CS_CLK16 0x00070000 /* Transmit FCC2 Clock Source is CLK16 */
+
+#define CMXFCR_RF3CS_BRG5 0x00000000 /* Receive FCC3 Clock Source is BRG5 */
+#define CMXFCR_RF3CS_BRG6 0x00000800 /* Receive FCC3 Clock Source is BRG6 */
+#define CMXFCR_RF3CS_BRG7 0x00001000 /* Receive FCC3 Clock Source is BRG7 */
+#define CMXFCR_RF3CS_BRG8 0x00001800 /* Receive FCC3 Clock Source is BRG8 */
+#define CMXFCR_RF3CS_CLK13 0x00002000 /* Receive FCC3 Clock Source is CLK13 */
+#define CMXFCR_RF3CS_CLK14 0x00002800 /* Receive FCC3 Clock Source is CLK14 */
+#define CMXFCR_RF3CS_CLK15 0x00003000 /* Receive FCC3 Clock Source is CLK15 */
+#define CMXFCR_RF3CS_CLK16 0x00003800 /* Receive FCC3 Clock Source is CLK16 */
+
+#define CMXFCR_TF3CS_BRG5 0x00000000 /* Transmit FCC3 Clock Source is BRG5 */
+#define CMXFCR_TF3CS_BRG6 0x00000100 /* Transmit FCC3 Clock Source is BRG6 */
+#define CMXFCR_TF3CS_BRG7 0x00000200 /* Transmit FCC3 Clock Source is BRG7 */
+#define CMXFCR_TF3CS_BRG8 0x00000300 /* Transmit FCC3 Clock Source is BRG8 */
+#define CMXFCR_TF3CS_CLK13 0x00000400 /* Transmit FCC3 Clock Source is CLK13 */
+#define CMXFCR_TF3CS_CLK14 0x00000500 /* Transmit FCC3 Clock Source is CLK14 */
+#define CMXFCR_TF3CS_CLK15 0x00000600 /* Transmit FCC3 Clock Source is CLK15 */
+#define CMXFCR_TF3CS_CLK16 0x00000700 /* Transmit FCC3 Clock Source is CLK16 */
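+/* Illustrative use (editorial): routing FCC1 receive to CLK9 and transmit
+ * to CLK10 is a read-modify-write of the CMXFCR register, here reached via
+ * a hypothetical pointer `cmxfcr':
+ *
+ *	uint v = in_be32(cmxfcr);
+ *	v &= ~(CMXFCR_RF1CS_MSK | CMXFCR_TF1CS_MSK);
+ *	v |= CMXFCR_RF1CS_CLK9 | CMXFCR_TF1CS_CLK10;
+ *	out_be32(cmxfcr, v);
+ */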
+
+/*-----------------------------------------------------------------------
+ * CMXSCR - CMX SCC Clock Route Register
+ */
+#define CMXSCR_GR1 0x80000000 /* Grant Support of SCC1 */
+#define CMXSCR_SC1 0x40000000 /* SCC1 connection */
+#define CMXSCR_RS1CS_MSK 0x38000000 /* Receive SCC1 Clock Source Mask */
+#define CMXSCR_TS1CS_MSK 0x07000000 /* Transmit SCC1 Clock Source Mask */
+#define CMXSCR_GR2 0x00800000 /* Grant Support of SCC2 */
+#define CMXSCR_SC2 0x00400000 /* SCC2 connection */
+#define CMXSCR_RS2CS_MSK 0x00380000 /* Receive SCC2 Clock Source Mask */
+#define CMXSCR_TS2CS_MSK 0x00070000 /* Transmit SCC2 Clock Source Mask */
+#define CMXSCR_GR3 0x00008000 /* Grant Support of SCC3 */
+#define CMXSCR_SC3 0x00004000 /* SCC3 connection */
+#define CMXSCR_RS3CS_MSK 0x00003800 /* Receive SCC3 Clock Source Mask */
+#define CMXSCR_TS3CS_MSK 0x00000700 /* Transmit SCC3 Clock Source Mask */
+#define CMXSCR_GR4 0x00000080 /* Grant Support of SCC4 */
+#define CMXSCR_SC4 0x00000040 /* SCC4 connection */
+#define CMXSCR_RS4CS_MSK 0x00000038 /* Receive SCC4 Clock Source Mask */
+#define CMXSCR_TS4CS_MSK 0x00000007 /* Transmit SCC4 Clock Source Mask */
+
+#define CMXSCR_RS1CS_BRG1 0x00000000 /* SCC1 Rx Clock Source is BRG1 */
+#define CMXSCR_RS1CS_BRG2 0x08000000 /* SCC1 Rx Clock Source is BRG2 */
+#define CMXSCR_RS1CS_BRG3 0x10000000 /* SCC1 Rx Clock Source is BRG3 */
+#define CMXSCR_RS1CS_BRG4 0x18000000 /* SCC1 Rx Clock Source is BRG4 */
+#define CMXSCR_RS1CS_CLK11 0x20000000 /* SCC1 Rx Clock Source is CLK11 */
+#define CMXSCR_RS1CS_CLK12 0x28000000 /* SCC1 Rx Clock Source is CLK12 */
+#define CMXSCR_RS1CS_CLK3 0x30000000 /* SCC1 Rx Clock Source is CLK3 */
+#define CMXSCR_RS1CS_CLK4 0x38000000 /* SCC1 Rx Clock Source is CLK4 */
+
+#define CMXSCR_TS1CS_BRG1 0x00000000 /* SCC1 Tx Clock Source is BRG1 */
+#define CMXSCR_TS1CS_BRG2 0x01000000 /* SCC1 Tx Clock Source is BRG2 */
+#define CMXSCR_TS1CS_BRG3 0x02000000 /* SCC1 Tx Clock Source is BRG3 */
+#define CMXSCR_TS1CS_BRG4 0x03000000 /* SCC1 Tx Clock Source is BRG4 */
+#define CMXSCR_TS1CS_CLK11 0x04000000 /* SCC1 Tx Clock Source is CLK11 */
+#define CMXSCR_TS1CS_CLK12 0x05000000 /* SCC1 Tx Clock Source is CLK12 */
+#define CMXSCR_TS1CS_CLK3 0x06000000 /* SCC1 Tx Clock Source is CLK3 */
+#define CMXSCR_TS1CS_CLK4 0x07000000 /* SCC1 Tx Clock Source is CLK4 */
+
+#define CMXSCR_RS2CS_BRG1 0x00000000 /* SCC2 Rx Clock Source is BRG1 */
+#define CMXSCR_RS2CS_BRG2 0x00080000 /* SCC2 Rx Clock Source is BRG2 */
+#define CMXSCR_RS2CS_BRG3 0x00100000 /* SCC2 Rx Clock Source is BRG3 */
+#define CMXSCR_RS2CS_BRG4 0x00180000 /* SCC2 Rx Clock Source is BRG4 */
+#define CMXSCR_RS2CS_CLK11 0x00200000 /* SCC2 Rx Clock Source is CLK11 */
+#define CMXSCR_RS2CS_CLK12 0x00280000 /* SCC2 Rx Clock Source is CLK12 */
+#define CMXSCR_RS2CS_CLK3 0x00300000 /* SCC2 Rx Clock Source is CLK3 */
+#define CMXSCR_RS2CS_CLK4 0x00380000 /* SCC2 Rx Clock Source is CLK4 */
+
+#define CMXSCR_TS2CS_BRG1 0x00000000 /* SCC2 Tx Clock Source is BRG1 */
+#define CMXSCR_TS2CS_BRG2 0x00010000 /* SCC2 Tx Clock Source is BRG2 */
+#define CMXSCR_TS2CS_BRG3 0x00020000 /* SCC2 Tx Clock Source is BRG3 */
+#define CMXSCR_TS2CS_BRG4 0x00030000 /* SCC2 Tx Clock Source is BRG4 */
+#define CMXSCR_TS2CS_CLK11 0x00040000 /* SCC2 Tx Clock Source is CLK11 */
+#define CMXSCR_TS2CS_CLK12 0x00050000 /* SCC2 Tx Clock Source is CLK12 */
+#define CMXSCR_TS2CS_CLK3 0x00060000 /* SCC2 Tx Clock Source is CLK3 */
+#define CMXSCR_TS2CS_CLK4 0x00070000 /* SCC2 Tx Clock Source is CLK4 */
+
+#define CMXSCR_RS3CS_BRG1 0x00000000 /* SCC3 Rx Clock Source is BRG1 */
+#define CMXSCR_RS3CS_BRG2 0x00000800 /* SCC3 Rx Clock Source is BRG2 */
+#define CMXSCR_RS3CS_BRG3 0x00001000 /* SCC3 Rx Clock Source is BRG3 */
+#define CMXSCR_RS3CS_BRG4 0x00001800 /* SCC3 Rx Clock Source is BRG4 */
+#define CMXSCR_RS3CS_CLK5 0x00002000 /* SCC3 Rx Clock Source is CLK5 */
+#define CMXSCR_RS3CS_CLK6 0x00002800 /* SCC3 Rx Clock Source is CLK6 */
+#define CMXSCR_RS3CS_CLK7 0x00003000 /* SCC3 Rx Clock Source is CLK7 */
+#define CMXSCR_RS3CS_CLK8 0x00003800 /* SCC3 Rx Clock Source is CLK8 */
+
+#define CMXSCR_TS3CS_BRG1 0x00000000 /* SCC3 Tx Clock Source is BRG1 */
+#define CMXSCR_TS3CS_BRG2 0x00000100 /* SCC3 Tx Clock Source is BRG2 */
+#define CMXSCR_TS3CS_BRG3 0x00000200 /* SCC3 Tx Clock Source is BRG3 */
+#define CMXSCR_TS3CS_BRG4 0x00000300 /* SCC3 Tx Clock Source is BRG4 */
+#define CMXSCR_TS3CS_CLK5 0x00000400 /* SCC3 Tx Clock Source is CLK5 */
+#define CMXSCR_TS3CS_CLK6 0x00000500 /* SCC3 Tx Clock Source is CLK6 */
+#define CMXSCR_TS3CS_CLK7 0x00000600 /* SCC3 Tx Clock Source is CLK7 */
+#define CMXSCR_TS3CS_CLK8 0x00000700 /* SCC3 Tx Clock Source is CLK8 */
+
+#define CMXSCR_RS4CS_BRG1 0x00000000 /* SCC4 Rx Clock Source is BRG1 */
+#define CMXSCR_RS4CS_BRG2 0x00000008 /* SCC4 Rx Clock Source is BRG2 */
+#define CMXSCR_RS4CS_BRG3 0x00000010 /* SCC4 Rx Clock Source is BRG3 */
+#define CMXSCR_RS4CS_BRG4 0x00000018 /* SCC4 Rx Clock Source is BRG4 */
+#define CMXSCR_RS4CS_CLK5 0x00000020 /* SCC4 Rx Clock Source is CLK5 */
+#define CMXSCR_RS4CS_CLK6 0x00000028 /* SCC4 Rx Clock Source is CLK6 */
+#define CMXSCR_RS4CS_CLK7 0x00000030 /* SCC4 Rx Clock Source is CLK7 */
+#define CMXSCR_RS4CS_CLK8 0x00000038 /* SCC4 Rx Clock Source is CLK8 */
+
+#define CMXSCR_TS4CS_BRG1 0x00000000 /* SCC4 Tx Clock Source is BRG1 */
+#define CMXSCR_TS4CS_BRG2 0x00000001 /* SCC4 Tx Clock Source is BRG2 */
+#define CMXSCR_TS4CS_BRG3 0x00000002 /* SCC4 Tx Clock Source is BRG3 */
+#define CMXSCR_TS4CS_BRG4 0x00000003 /* SCC4 Tx Clock Source is BRG4 */
+#define CMXSCR_TS4CS_CLK5 0x00000004 /* SCC4 Tx Clock Source is CLK5 */
+#define CMXSCR_TS4CS_CLK6 0x00000005 /* SCC4 Tx Clock Source is CLK6 */
+#define CMXSCR_TS4CS_CLK7 0x00000006 /* SCC4 Tx Clock Source is CLK7 */
+#define CMXSCR_TS4CS_CLK8 0x00000007 /* SCC4 Tx Clock Source is CLK8 */
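+/* SCC clock routing follows the same pattern as the FCC example above,
+ * e.g. (illustrative) clearing CMXSCR_RS1CS_MSK | CMXSCR_TS1CS_MSK and
+ * OR-ing in CMXSCR_RS1CS_BRG1 | CMXSCR_TS1CS_BRG1.
+ */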
+
+#endif /* __CPM2__ */
+#endif /* __KERNEL__ */
+
+
--- /dev/null
+/*
+ * include/asm-ppc/fsl_ocp.h
+ *
+ * Definitions for the on-chip peripherals on Freescale PPC processors
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com)
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_FS_OCP_H__
+#define __ASM_FS_OCP_H__
+
+/* A table of information for supporting the Gianfar Ethernet Controller.
+ * This helps identify which enet controller we are dealing with
+ * and what type of enet controller it is.
+ */
+struct ocp_gfar_data {
+ uint interruptTransmit;
+ uint interruptError;
+ uint interruptReceive;
+ uint interruptPHY;
+ uint flags;
+ uint phyid;
+ uint phyregidx;
+ unsigned char mac_addr[6];
+};
+
+/* Flags in the flags field */
+#define GFAR_HAS_COALESCE 0x20
+#define GFAR_HAS_RMON 0x10
+#define GFAR_HAS_MULTI_INTR 0x08
+#define GFAR_FIRM_SET_MACADDR 0x04
+#define GFAR_HAS_PHY_INTR 0x02 /* if not set use a timer */
+#define GFAR_HAS_GIGABIT 0x01
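+/* A hypothetical board description using the flags above (editorial
+ * example; the IRQ numbers and PHY id are made up):
+ *
+ *	static struct ocp_gfar_data enet0_def = {
+ *		.interruptTransmit = 13,
+ *		.interruptError    = 14,
+ *		.interruptReceive  = 15,
+ *		.interruptPHY      = 16,
+ *		.flags = GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_RMON,
+ *		.phyid = 0,
+ *		.phyregidx = 0,
+ *	};
+ */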
+
+/* Data structure for I2C support. It just contains a couple of flags
+ * to distinguish the various I2C implementations.
+ */
+struct ocp_fs_i2c_data {
+ uint flags;
+};
+
+/* Flags for I2C */
+#define FS_I2C_SEPARATE_DFSRR 0x02
+#define FS_I2C_32BIT 0x01
+
+#endif /* __ASM_FS_OCP_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/immap_85xx.h
+ *
+ * MPC85xx Internal Memory Map
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_IMMAP_85XX_H__
+#define __ASM_IMMAP_85XX_H__
+
+/* Eventually this should define all the IO block registers in 85xx */
+
+/* PCI Registers */
+typedef struct ccsr_pci {
+ uint cfg_addr; /* 0x.000 - PCI Configuration Address Register */
+ uint cfg_data; /* 0x.004 - PCI Configuration Data Register */
+ uint int_ack; /* 0x.008 - PCI Interrupt Acknowledge Register */
+ char res1[3060];
+ uint potar0; /* 0x.c00 - PCI Outbound Transaction Address Register 0 */
+ uint potear0; /* 0x.c04 - PCI Outbound Translation Extended Address Register 0 */
+ uint powbar0; /* 0x.c08 - PCI Outbound Window Base Address Register 0 */
+ char res2[4];
+ uint powar0; /* 0x.c10 - PCI Outbound Window Attributes Register 0 */
+ char res3[12];
+ uint potar1; /* 0x.c20 - PCI Outbound Transaction Address Register 1 */
+ uint potear1; /* 0x.c24 - PCI Outbound Translation Extended Address Register 1 */
+ uint powbar1; /* 0x.c28 - PCI Outbound Window Base Address Register 1 */
+ char res4[4];
+ uint powar1; /* 0x.c30 - PCI Outbound Window Attributes Register 1 */
+ char res5[12];
+ uint potar2; /* 0x.c40 - PCI Outbound Transaction Address Register 2 */
+ uint potear2; /* 0x.c44 - PCI Outbound Translation Extended Address Register 2 */
+ uint powbar2; /* 0x.c48 - PCI Outbound Window Base Address Register 2 */
+ char res6[4];
+ uint powar2; /* 0x.c50 - PCI Outbound Window Attributes Register 2 */
+ char res7[12];
+ uint potar3; /* 0x.c60 - PCI Outbound Transaction Address Register 3 */
+ uint potear3; /* 0x.c64 - PCI Outbound Translation Extended Address Register 3 */
+ uint powbar3; /* 0x.c68 - PCI Outbound Window Base Address Register 3 */
+ char res8[4];
+ uint powar3; /* 0x.c70 - PCI Outbound Window Attributes Register 3 */
+ char res9[12];
+ uint potar4; /* 0x.c80 - PCI Outbound Transaction Address Register 4 */
+ uint potear4; /* 0x.c84 - PCI Outbound Translation Extended Address Register 4 */
+ uint powbar4; /* 0x.c88 - PCI Outbound Window Base Address Register 4 */
+ char res10[4];
+ uint powar4; /* 0x.c90 - PCI Outbound Window Attributes Register 4 */
+ char res11[268];
+ uint pitar3; /* 0x.da0 - PCI Inbound Translation Address Register 3 */
+ char res12[4];
+ uint piwbar3; /* 0x.da8 - PCI Inbound Window Base Address Register 3 */
+ uint piwbear3; /* 0x.dac - PCI Inbound Window Base Extended Address Register 3 */
+ uint piwar3; /* 0x.db0 - PCI Inbound Window Attributes Register 3 */
+ char res13[12];
+ uint pitar2; /* 0x.dc0 - PCI Inbound Translation Address Register 2 */
+ char res14[4];
+ uint piwbar2; /* 0x.dc8 - PCI Inbound Window Base Address Register 2 */
+ uint piwbear2; /* 0x.dcc - PCI Inbound Window Base Extended Address Register 2 */
+ uint piwar2; /* 0x.dd0 - PCI Inbound Window Attributes Register 2 */
+ char res15[12];
+ uint pitar1; /* 0x.de0 - PCI Inbound Translation Address Register 1 */
+ char res16[4];
+ uint piwbar1; /* 0x.de8 - PCI Inbound Window Base Address Register 1 */
+ char res17[4];
+ uint piwar1; /* 0x.df0 - PCI Inbound Window Attributes Register 1 */
+ char res18[12];
+ uint err_dr; /* 0x.e00 - PCI Error Detect Register */
+ uint err_cap_dr; /* 0x.e04 - PCI Error Capture Disable Register */
+ uint err_en; /* 0x.e08 - PCI Error Enable Register */
+ uint err_attrib; /* 0x.e0c - PCI Error Attributes Capture Register */
+ uint err_addr; /* 0x.e10 - PCI Error Address Capture Register */
+ uint err_ext_addr; /* 0x.e14 - PCI Error Extended Address Capture Register */
+ uint err_dl; /* 0x.e18 - PCI Error Data Low Capture Register */
+ uint err_dh; /* 0x.e1c - PCI Error Data High Capture Register */
+ uint gas_timr; /* 0x.e20 - PCI Gasket Timer Register */
+ uint pci_timr; /* 0x.e24 - PCI Timer Register */
+ char res19[472];
+} ccsr_pci_t;
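+/* Editorial sketch: with a `pci' pointer mapped over this block, outbound
+ * window 0 is programmed through the potar0/powbar0/powar0 trio, roughly
+ * as below (the 12-bit shift reflects the 4 KiB window granularity; `attr'
+ * is a hypothetical attributes value):
+ *
+ *	out_be32(&pci->potar0,  pci_addr   >> 12);
+ *	out_be32(&pci->powbar0, local_addr >> 12);
+ *	out_be32(&pci->powar0,  attr);
+ */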
+
+/* Global Utility Registers */
+typedef struct ccsr_guts {
+ uint porpllsr; /* 0x.0000 - POR PLL Ratio Status Register */
+ uint porbmsr; /* 0x.0004 - POR Boot Mode Status Register */
+ uint porimpscr; /* 0x.0008 - POR I/O Impedance Status and Control Register */
+ uint pordevsr; /* 0x.000c - POR I/O Device Status Register */
+ uint pordbgmsr; /* 0x.0010 - POR Debug Mode Status Register */
+ char res1[12];
+ uint gpporcr; /* 0x.0020 - General-Purpose POR Configuration Register */
+ char res2[12];
+ uint gpiocr; /* 0x.0030 - GPIO Control Register */
+ char res3[12];
+ uint gpoutdr; /* 0x.0040 - General-Purpose Output Data Register */
+ char res4[12];
+ uint gpindr; /* 0x.0050 - General-Purpose Input Data Register */
+ char res5[12];
+ uint pmuxcr; /* 0x.0060 - Alternate Function Signal Multiplex Control */
+ char res6[12];
+ uint devdisr; /* 0x.0070 - Device Disable Control */
+ char res7[12];
+ uint powmgtcsr; /* 0x.0080 - Power Management Status and Control Register */
+ char res8[12];
+ uint mcpsumr; /* 0x.0090 - Machine Check Summary Register */
+ char res9[12];
+ uint pvr; /* 0x.00a0 - Processor Version Register */
+ uint svr; /* 0x.00a4 - System Version Register */
+ char res10[3416];
+ uint clkocr; /* 0x.0e00 - Clock Out Select Register */
+ char res11[12];
+ uint ddrdllcr; /* 0x.0e10 - DDR DLL Control Register */
+ char res12[12];
+ uint lbcdllcr; /* 0x.0e20 - LBC DLL Control Register */
+ char res13[61916];
+} ccsr_guts_t;
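+/* For example (editorial), the system and processor versions can be read
+ * through a mapped `guts' pointer:
+ *
+ *	uint svr = in_be32(&guts->svr);
+ *	uint pvr = in_be32(&guts->pvr);
+ */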
+
+#endif /* __ASM_IMMAP_85XX_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * CPM2 Internal Memory Map
+ * Copyright (c) 1999 Dan Malek (dmalek@jlc.net)
+ *
+ * The Internal Memory Map for devices with CPM2 on them. This
+ * is the superset of all CPM2 devices (8260, 8266, 8280, 8272,
+ * 8560).
+ */
+#ifdef __KERNEL__
+#ifndef __IMMAP_CPM2__
+#define __IMMAP_CPM2__
+
+/* System configuration registers.
+*/
+typedef struct sys_82xx_conf {
+ u32 sc_siumcr;
+ u32 sc_sypcr;
+ u8 res1[6];
+ u16 sc_swsr;
+ u8 res2[20];
+ u32 sc_bcr;
+ u8 sc_ppc_acr;
+ u8 res3[3];
+ u32 sc_ppc_alrh;
+ u32 sc_ppc_alrl;
+ u8 sc_lcl_acr;
+ u8 res4[3];
+ u32 sc_lcl_alrh;
+ u32 sc_lcl_alrl;
+ u32 sc_tescr1;
+ u32 sc_tescr2;
+ u32 sc_ltescr1;
+ u32 sc_ltescr2;
+ u32 sc_pdtea;
+ u8 sc_pdtem;
+ u8 res5[3];
+ u32 sc_ldtea;
+ u8 sc_ldtem;
+ u8 res6[163];
+} sysconf_82xx_cpm2_t;
+
+typedef struct sys_85xx_conf {
+ u32 sc_cear;
+ u16 sc_ceer;
+ u16 sc_cemr;
+ u8 res1[70];
+ u32 sc_smaer;
+ u8 res2[4];
+ u32 sc_smevr;
+ u32 sc_smctr;
+ u32 sc_lmaer;
+ u8 res3[4];
+ u32 sc_lmevr;
+ u32 sc_lmctr;
+ u8 res4[144];
+} sysconf_85xx_cpm2_t;
+
+typedef union sys_conf {
+ sysconf_82xx_cpm2_t siu_82xx;
+ sysconf_85xx_cpm2_t siu_85xx;
+} sysconf_cpm2_t;
+
+
+
+/* Memory controller registers.
+*/
+typedef struct mem_ctlr {
+ u32 memc_br0;
+ u32 memc_or0;
+ u32 memc_br1;
+ u32 memc_or1;
+ u32 memc_br2;
+ u32 memc_or2;
+ u32 memc_br3;
+ u32 memc_or3;
+ u32 memc_br4;
+ u32 memc_or4;
+ u32 memc_br5;
+ u32 memc_or5;
+ u32 memc_br6;
+ u32 memc_or6;
+ u32 memc_br7;
+ u32 memc_or7;
+ u32 memc_br8;
+ u32 memc_or8;
+ u32 memc_br9;
+ u32 memc_or9;
+ u32 memc_br10;
+ u32 memc_or10;
+ u32 memc_br11;
+ u32 memc_or11;
+ u8 res1[8];
+ u32 memc_mar;
+ u8 res2[4];
+ u32 memc_mamr;
+ u32 memc_mbmr;
+ u32 memc_mcmr;
+ u8 res3[8];
+ u16 memc_mptpr;
+ u8 res4[2];
+ u32 memc_mdr;
+ u8 res5[4];
+ u32 memc_psdmr;
+ u32 memc_lsdmr;
+ u8 memc_purt;
+ u8 res6[3];
+ u8 memc_psrt;
+ u8 res7[3];
+ u8 memc_lurt;
+ u8 res8[3];
+ u8 memc_lsrt;
+ u8 res9[3];
+ u32 memc_immr;
+ u32 memc_pcibr0;
+ u32 memc_pcibr1;
+ u8 res10[16];
+ u32 memc_pcimsk0;
+ u32 memc_pcimsk1;
+ u8 res11[52];
+} memctl_cpm2_t;
+
+/* System Integration Timers.
+*/
+typedef struct sys_int_timers {
+ u8 res1[32];
+ u16 sit_tmcntsc;
+ u8 res2[2];
+ u32 sit_tmcnt;
+ u8 res3[4];
+ u32 sit_tmcntal;
+ u8 res4[16];
+ u16 sit_piscr;
+ u8 res5[2];
+ u32 sit_pitc;
+ u32 sit_pitr;
+ u8 res6[92];
+ u8 res7[390];
+} sit_cpm2_t;
+
+#define PISCR_PIRQ_MASK ((u16)0xff00)
+#define PISCR_PS ((u16)0x0080)
+#define PISCR_PIE ((u16)0x0004)
+#define PISCR_PTF ((u16)0x0002)
+#define PISCR_PTE ((u16)0x0001)
+
+/* PCI Controller.
+*/
+typedef struct pci_ctlr {
+ u32 pci_omisr;
+ u32 pci_omimr;
+ u8 res1[8];
+ u32 pci_ifqpr;
+ u32 pci_ofqpr;
+ u8 res2[8];
+ u32 pci_imr0;
+ u32 pci_imr1;
+ u32 pci_omr0;
+ u32 pci_omr1;
+ u32 pci_odr;
+ u8 res3[4];
+ u32 pci_idr;
+ u8 res4[20];
+ u32 pci_imisr;
+ u32 pci_imimr;
+ u8 res5[24];
+ u32 pci_ifhpr;
+ u8 res6[4];
+ u32 pci_iftpr;
+ u8 res7[4];
+ u32 pci_iphpr;
+ u8 res8[4];
+ u32 pci_iptpr;
+ u8 res9[4];
+ u32 pci_ofhpr;
+ u8 res10[4];
+ u32 pci_oftpr;
+ u8 res11[4];
+ u32 pci_ophpr;
+ u8 res12[4];
+ u32 pci_optpr;
+ u8 res13[8];
+ u32 pci_mucr;
+ u8 res14[8];
+ u32 pci_qbar;
+ u8 res15[12];
+ u32 pci_dmamr0;
+ u32 pci_dmasr0;
+ u32 pci_dmacdar0;
+ u8 res16[4];
+ u32 pci_dmasar0;
+ u8 res17[4];
+ u32 pci_dmadar0;
+ u8 res18[4];
+ u32 pci_dmabcr0;
+ u32 pci_dmandar0;
+ u8 res19[86];
+ u32 pci_dmamr1;
+ u32 pci_dmasr1;
+ u32 pci_dmacdar1;
+ u8 res20[4];
+ u32 pci_dmasar1;
+ u8 res21[4];
+ u32 pci_dmadar1;
+ u8 res22[4];
+ u32 pci_dmabcr1;
+ u32 pci_dmandar1;
+ u8 res23[88];
+ u32 pci_dmamr2;
+ u32 pci_dmasr2;
+ u32 pci_dmacdar2;
+ u8 res24[4];
+ u32 pci_dmasar2;
+ u8 res25[4];
+ u32 pci_dmadar2;
+ u8 res26[4];
+ u32 pci_dmabcr2;
+ u32 pci_dmandar2;
+ u8 res27[88];
+ u32 pci_dmamr3;
+ u32 pci_dmasr3;
+ u32 pci_dmacdar3;
+ u8 res28[4];
+ u32 pci_dmasar3;
+ u8 res29[4];
+ u32 pci_dmadar3;
+ u8 res30[4];
+ u32 pci_dmabcr3;
+ u32 pci_dmandar3;
+ u8 res31[344];
+ u32 pci_potar0;
+ u8 res32[4];
+ u32 pci_pobar0;
+ u8 res33[4];
+ u32 pci_pocmr0;
+ u8 res34[4];
+ u32 pci_potar1;
+ u8 res35[4];
+ u32 pci_pobar1;
+ u8 res36[4];
+ u32 pci_pocmr1;
+ u8 res37[4];
+ u32 pci_potar2;
+ u8 res38[4];
+ u32 pci_pobar2;
+ u8 res39[4];
+ u32 pci_pocmr2;
+ u8 res40[50];
+ u32 pci_ptcr;
+ u32 pci_gpcr;
+ u32 pci_gcr;
+ u32 pci_esr;
+ u32 pci_emr;
+ u32 pci_ecr;
+ u32 pci_eacr;
+ u8 res41[4];
+ u32 pci_edcr;
+ u8 res42[4];
+ u32 pci_eccr;
+ u8 res43[44];
+ u32 pci_pitar1;
+ u8 res44[4];
+ u32 pci_pibar1;
+ u8 res45[4];
+ u32 pci_picmr1;
+ u8 res46[4];
+ u32 pci_pitar0;
+ u8 res47[4];
+ u32 pci_pibar0;
+ u8 res48[4];
+ u32 pci_picmr0;
+ u8 res49[4];
+ u32 pci_cfg_addr;
+ u32 pci_cfg_data;
+ u32 pci_int_ack;
+ u8 res50[756];
+} pci_cpm2_t;
+
+/* Interrupt Controller.
+*/
+typedef struct interrupt_controller {
+ u16 ic_sicr;
+ u8 res1[2];
+ u32 ic_sivec;
+ u32 ic_sipnrh;
+ u32 ic_sipnrl;
+ u32 ic_siprr;
+ u32 ic_scprrh;
+ u32 ic_scprrl;
+ u32 ic_simrh;
+ u32 ic_simrl;
+ u32 ic_siexr;
+ u8 res2[88];
+} intctl_cpm2_t;
+
+/* Clocks and Reset.
+*/
+typedef struct clk_and_reset {
+ u32 car_sccr;
+ u8 res1[4];
+ u32 car_scmr;
+ u8 res2[4];
+ u32 car_rsr;
+ u32 car_rmr;
+ u8 res[104];
+} car_cpm2_t;
+
+/* Input/Output Port control/status registers.
+ * Names are consistent with the processor manual, although they differ
+ * from the original 8xx names.
+ */
+typedef struct io_port {
+ u32 iop_pdira;
+ u32 iop_ppara;
+ u32 iop_psora;
+ u32 iop_podra;
+ u32 iop_pdata;
+ u8 res1[12];
+ u32 iop_pdirb;
+ u32 iop_pparb;
+ u32 iop_psorb;
+ u32 iop_podrb;
+ u32 iop_pdatb;
+ u8 res2[12];
+ u32 iop_pdirc;
+ u32 iop_pparc;
+ u32 iop_psorc;
+ u32 iop_podrc;
+ u32 iop_pdatc;
+ u8 res3[12];
+ u32 iop_pdird;
+ u32 iop_ppard;
+ u32 iop_psord;
+ u32 iop_podrd;
+ u32 iop_pdatd;
+ u8 res4[12];
+} iop_cpm2_t;
+
+/* Communication Processor Module Timers
+*/
+typedef struct cpm_timers {
+ u8 cpmt_tgcr1;
+ u8 res1[3];
+ u8 cpmt_tgcr2;
+ u8 res2[11];
+ u16 cpmt_tmr1;
+ u16 cpmt_tmr2;
+ u16 cpmt_trr1;
+ u16 cpmt_trr2;
+ u16 cpmt_tcr1;
+ u16 cpmt_tcr2;
+ u16 cpmt_tcn1;
+ u16 cpmt_tcn2;
+ u16 cpmt_tmr3;
+ u16 cpmt_tmr4;
+ u16 cpmt_trr3;
+ u16 cpmt_trr4;
+ u16 cpmt_tcr3;
+ u16 cpmt_tcr4;
+ u16 cpmt_tcn3;
+ u16 cpmt_tcn4;
+ u16 cpmt_ter1;
+ u16 cpmt_ter2;
+ u16 cpmt_ter3;
+ u16 cpmt_ter4;
+ u8 res3[584];
+} cpmtimer_cpm2_t;
+
+/* DMA control/status registers.
+*/
+typedef struct sdma_csr {
+ u8 res0[24];
+ u8 sdma_sdsr;
+ u8 res1[3];
+ u8 sdma_sdmr;
+ u8 res2[3];
+ u8 sdma_idsr1;
+ u8 res3[3];
+ u8 sdma_idmr1;
+ u8 res4[3];
+ u8 sdma_idsr2;
+ u8 res5[3];
+ u8 sdma_idmr2;
+ u8 res6[3];
+ u8 sdma_idsr3;
+ u8 res7[3];
+ u8 sdma_idmr3;
+ u8 res8[3];
+ u8 sdma_idsr4;
+ u8 res9[3];
+ u8 sdma_idmr4;
+ u8 res10[707];
+} sdma_cpm2_t;
+
+/* Fast controllers
+*/
+typedef struct fcc {
+ u32 fcc_gfmr;
+ u32 fcc_fpsmr;
+ u16 fcc_ftodr;
+ u8 res1[2];
+ u16 fcc_fdsr;
+ u8 res2[2];
+ u16 fcc_fcce;
+ u8 res3[2];
+ u16 fcc_fccm;
+ u8 res4[2];
+ u8 fcc_fccs;
+ u8 res5[3];
+ u8 fcc_ftirr_phy[4];
+} fcc_t;
+
+/* Fast controllers continued
+ */
+typedef struct fcc_c {
+ u32 fcc_firper;
+ u32 fcc_firer;
+ u32 fcc_firsr_hi;
+ u32 fcc_firsr_lo;
+ u8 fcc_gfemr;
+ u8 res1[15];
+} fcc_c_t;
+
+/* TC Layer
+ */
+typedef struct tclayer {
+ u16 tc_tcmode;
+ u16 tc_cdsmr;
+ u16 tc_tcer;
+ u16 tc_rcc;
+ u16 tc_tcmr;
+ u16 tc_fcc;
+ u16 tc_ccc;
+ u16 tc_icc;
+ u16 tc_tcc;
+ u16 tc_ecc;
+ u8 res1[12];
+} tclayer_t;
+
+
+/* I2C
+*/
+typedef struct i2c {
+ u8 i2c_i2mod;
+ u8 res1[3];
+ u8 i2c_i2add;
+ u8 res2[3];
+ u8 i2c_i2brg;
+ u8 res3[3];
+ u8 i2c_i2com;
+ u8 res4[3];
+ u8 i2c_i2cer;
+ u8 res5[3];
+ u8 i2c_i2cmr;
+ u8 res6[331];
+} i2c_cpm2_t;
+
+typedef struct scc { /* Serial communication channels */
+ u32 scc_gsmrl;
+ u32 scc_gsmrh;
+ u16 scc_psmr;
+ u8 res1[2];
+ u16 scc_todr;
+ u16 scc_dsr;
+ u16 scc_scce;
+ u8 res2[2];
+ u16 scc_sccm;
+ u8 res3;
+ u8 scc_sccs;
+ u8 res4[8];
+} scc_t;
+
+typedef struct smc { /* Serial management channels */
+ u8 res1[2];
+ u16 smc_smcmr;
+ u8 res2[2];
+ u8 smc_smce;
+ u8 res3[3];
+ u8 smc_smcm;
+ u8 res4[5];
+} smc_t;
+
+/* Serial Peripheral Interface.
+*/
+typedef struct spi_ctrl {
+ u16 spi_spmode;
+ u8 res1[4];
+ u8 spi_spie;
+ u8 res2[3];
+ u8 spi_spim;
+ u8 res3[2];
+ u8 spi_spcom;
+ u8 res4[82];
+} spictl_cpm2_t;
+
+/* CPM Mux.
+*/
+typedef struct cpmux {
+ u8 cmx_si1cr;
+ u8 res1;
+ u8 cmx_si2cr;
+ u8 res2;
+ u32 cmx_fcr;
+ u32 cmx_scr;
+ u8 cmx_smr;
+ u8 res3;
+ u16 cmx_uar;
+ u8 res4[16];
+} cpmux_t;
+
+/* SIRAM control
+*/
+typedef struct siram {
+ u16 si_amr;
+ u16 si_bmr;
+ u16 si_cmr;
+ u16 si_dmr;
+ u8 si_gmr;
+ u8 res1;
+ u8 si_cmdr;
+ u8 res2;
+ u8 si_str;
+ u8 res3;
+ u16 si_rsr;
+} siramctl_t;
+
+typedef struct mcc {
+ u16 mcc_mcce;
+ u8 res1[2];
+ u16 mcc_mccm;
+ u8 res2[2];
+ u8 mcc_mccf;
+ u8 res3[7];
+} mcc_t;
+
+typedef struct comm_proc {
+ u32 cp_cpcr;
+ u32 cp_rccr;
+ u8 res1[14];
+ u16 cp_rter;
+ u8 res2[2];
+ u16 cp_rtmr;
+ u16 cp_rtscr;
+ u8 res3[2];
+ u32 cp_rtsr;
+ u8 res4[12];
+} cpm_cpm2_t;
+
+/* USB Controller.
+*/
+typedef struct usb_ctlr {
+ u8 usb_usmod;
+ u8 usb_usadr;
+ u8 usb_uscom;
+ u8 res1[1];
+ u16 usb_usep1;
+ u16 usb_usep2;
+ u16 usb_usep3;
+ u16 usb_usep4;
+ u8 res2[4];
+ u16 usb_usber;
+ u8 res3[2];
+ u16 usb_usbmr;
+ u8 usb_usbs;
+ u8 res4[7];
+} usb_cpm2_t;
+
+/* ...and the whole thing wrapped up.
+*/
+
+typedef struct immap {
+ /* Some references are into the unique and known dpram spaces,
+ * others are from the generic base.
+ */
+#define im_dprambase im_dpram1
+ u8 im_dpram1[16*1024];
+ u8 res1[16*1024];
+ u8 im_dpram2[4*1024];
+ u8 res2[8*1024];
+ u8 im_dpram3[4*1024];
+ u8 res3[16*1024];
+
+ sysconf_cpm2_t im_siu_conf; /* SIU Configuration */
+ memctl_cpm2_t im_memctl; /* Memory Controller */
+ sit_cpm2_t im_sit; /* System Integration Timers */
+ pci_cpm2_t im_pci; /* PCI Controller */
+ intctl_cpm2_t im_intctl; /* Interrupt Controller */
+ car_cpm2_t im_clkrst; /* Clocks and reset */
+ iop_cpm2_t im_ioport; /* IO Port control/status */
+ cpmtimer_cpm2_t im_cpmtimer; /* CPM timers */
+ sdma_cpm2_t im_sdma; /* SDMA control/status */
+
+ fcc_t im_fcc[3]; /* Three FCCs */
+ u8 res4z[32];
+ fcc_c_t im_fcc_c[3]; /* Continued FCCs */
+
+ u8 res4[32];
+
+ tclayer_t im_tclayer[8]; /* Eight TCLayers */
+ u16 tc_tcgsr;
+ u16 tc_tcger;
+
+ /* First set of baud rate generators.
+ */
+ u8 res[236];
+ u32 im_brgc5;
+ u32 im_brgc6;
+ u32 im_brgc7;
+ u32 im_brgc8;
+
+ u8 res5[608];
+
+ i2c_cpm2_t im_i2c; /* I2C control/status */
+ cpm_cpm2_t im_cpm; /* Communication processor */
+
+ /* Second set of baud rate generators.
+ */
+ u32 im_brgc1;
+ u32 im_brgc2;
+ u32 im_brgc3;
+ u32 im_brgc4;
+
+ scc_t im_scc[4]; /* Four SCCs */
+ smc_t im_smc[2]; /* Couple of SMCs */
+ spictl_cpm2_t im_spi; /* A SPI */
+ cpmux_t im_cpmux; /* CPM clock route mux */
+ siramctl_t im_siramctl1; /* First SI RAM Control */
+ mcc_t im_mcc1; /* First MCC */
+ siramctl_t im_siramctl2; /* Second SI RAM Control */
+ mcc_t im_mcc2; /* Second MCC */
+ usb_cpm2_t im_usb; /* USB Controller */
+
+ u8 res6[1153];
+
+ u16 im_si1txram[256];
+ u8 res7[512];
+ u16 im_si1rxram[256];
+ u8 res8[512];
+ u16 im_si2txram[256];
+ u8 res9[512];
+ u16 im_si2rxram[256];
+ u8 res10[512];
+ u8 res11[4096];
+} cpm2_map_t;
+
+extern cpm2_map_t *cpm2_immr;
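+/* Everything above is then reachable through this pointer, e.g.
+ * (illustrative only):
+ *
+ *	uint fcr = in_be32(&cpm2_immr->im_cpmux.cmx_fcr);
+ *	out_be32(&cpm2_immr->im_ioport.iop_pdira, 0);
+ */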
+
+#endif /* __IMMAP_CPM2__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/m8260_pci.h
+ *
+ * Definitions for the MPC8250/MPC8265/MPC8266 integrated PCI host bridge.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __M8260_PCI_H
+#define __M8260_PCI_H
+
+#include <linux/pci_ids.h>
+
+/*
+ * Define the vendor/device ID for the MPC8265.
+ */
+#define PCI_DEVICE_ID_MPC8265 ((0x18C0 << 16) | PCI_VENDOR_ID_MOTOROLA)
+
+#define M8265_PCIBR0 0x101ac
+#define M8265_PCIBR1 0x101b0
+#define M8265_PCIMSK0 0x101c4
+#define M8265_PCIMSK1 0x101c8
+
+/* Bit definitions for PCIBR registers */
+
+#define PCIBR_ENABLE 0x00000001
+
+/* Bit definitions for PCIMSK registers */
+
+#define PCIMSK_32KiB 0xFFFF8000 /* Size of window, smallest */
+#define PCIMSK_64KiB 0xFFFF0000
+#define PCIMSK_128KiB 0xFFFE0000
+#define PCIMSK_256KiB 0xFFFC0000
+#define PCIMSK_512KiB 0xFFF80000
+#define PCIMSK_1MiB 0xFFF00000
+#define PCIMSK_2MiB 0xFFE00000
+#define PCIMSK_4MiB 0xFFC00000
+#define PCIMSK_8MiB 0xFF800000
+#define PCIMSK_16MiB 0xFF000000
+#define PCIMSK_32MiB 0xFE000000
+#define PCIMSK_64MiB 0xFC000000
+#define PCIMSK_128MiB 0xF8000000
+#define PCIMSK_256MiB 0xF0000000
+#define PCIMSK_512MiB 0xE0000000
+#define PCIMSK_1GiB 0xC0000000 /* Size of window, largest */
+
+
+#define M826X_SCCR_PCI_MODE_EN 0x100
+
+
+/*
+ * Outbound ATU registers (3 sets). These registers control how 60x bus (local)
+ * addresses are translated to PCI addresses when the MPC826x is a PCI bus
+ * master (initiator).
+ */
+
+#define POTAR_REG0 0x10800 /* PCI Outbound Translation Addr registers */
+#define POTAR_REG1 0x10818
+#define POTAR_REG2 0x10830
+
+#define POBAR_REG0 0x10808 /* PCI Outbound Base Addr registers */
+#define POBAR_REG1 0x10820
+#define POBAR_REG2 0x10838
+
+#define POCMR_REG0 0x10810 /* PCI Outbound Comparison Mask registers */
+#define POCMR_REG1 0x10828
+#define POCMR_REG2 0x10840
+
+/* Bit definitions for POCMR registers */
+
+#define POCMR_MASK_4KiB 0x000FFFFF
+#define POCMR_MASK_8KiB 0x000FFFFE
+#define POCMR_MASK_16KiB 0x000FFFFC
+#define POCMR_MASK_32KiB 0x000FFFF8
+#define POCMR_MASK_64KiB 0x000FFFF0
+#define POCMR_MASK_128KiB 0x000FFFE0
+#define POCMR_MASK_256KiB 0x000FFFC0
+#define POCMR_MASK_512KiB 0x000FFF80
+#define POCMR_MASK_1MiB 0x000FFF00
+#define POCMR_MASK_2MiB 0x000FFE00
+#define POCMR_MASK_4MiB 0x000FFC00
+#define POCMR_MASK_8MiB 0x000FF800
+#define POCMR_MASK_16MiB 0x000FF000
+#define POCMR_MASK_32MiB 0x000FE000
+#define POCMR_MASK_64MiB 0x000FC000
+#define POCMR_MASK_128MiB 0x000F8000
+#define POCMR_MASK_256MiB 0x000F0000
+#define POCMR_MASK_512MiB 0x000E0000
+#define POCMR_MASK_1GiB 0x000C0000
+
+#define POCMR_ENABLE 0x80000000
+#define POCMR_PCI_IO 0x40000000
+#define POCMR_PREFETCH_EN 0x20000000
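+/* Editorial sketch of one outbound window setup; `immap' is a hypothetical
+ * u8 * mapping of the internal register space, and the 12-bit shift matches
+ * the 20-bit address fields of the masks above:
+ *
+ *	out_be32((volatile u32 *)(immap + POTAR_REG0), pci_addr   >> 12);
+ *	out_be32((volatile u32 *)(immap + POBAR_REG0), local_addr >> 12);
+ *	out_be32((volatile u32 *)(immap + POCMR_REG0),
+ *		 POCMR_ENABLE | POCMR_MASK_64MiB);
+ */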
+
+/* Soft PCI reset */
+
+#define PCI_GCR_REG 0x10880
+
+/* Bit definitions for PCI_GCR registers */
+
+#define PCIGCR_PCI_BUS_EN 0x1
+
+#define PCI_EMR_REG 0x10888
+/*
+ * Inbound ATU registers (2 sets). These registers control how PCI addresses
+ * are translated to 60x bus (local) addresses when the MPC826x is a PCI bus target.
+ */
+
+#define PITAR_REG1 0x108D0
+#define PIBAR_REG1 0x108D8
+#define PICMR_REG1 0x108E0
+#define PITAR_REG0 0x108E8
+#define PIBAR_REG0 0x108F0
+#define PICMR_REG0 0x108F8
+
+/* Bit definitions for PCI Inbound Comparison Mask registers */
+
+#define PICMR_MASK_4KiB 0x000FFFFF
+#define PICMR_MASK_8KiB 0x000FFFFE
+#define PICMR_MASK_16KiB 0x000FFFFC
+#define PICMR_MASK_32KiB 0x000FFFF8
+#define PICMR_MASK_64KiB 0x000FFFF0
+#define PICMR_MASK_128KiB 0x000FFFE0
+#define PICMR_MASK_256KiB 0x000FFFC0
+#define PICMR_MASK_512KiB 0x000FFF80
+#define PICMR_MASK_1MiB 0x000FFF00
+#define PICMR_MASK_2MiB 0x000FFE00
+#define PICMR_MASK_4MiB 0x000FFC00
+#define PICMR_MASK_8MiB 0x000FF800
+#define PICMR_MASK_16MiB 0x000FF000
+#define PICMR_MASK_32MiB 0x000FE000
+#define PICMR_MASK_64MiB 0x000FC000
+#define PICMR_MASK_128MiB 0x000F8000
+#define PICMR_MASK_256MiB 0x000F0000
+#define PICMR_MASK_512MiB 0x000E0000
+#define PICMR_MASK_1GiB 0x000C0000
+
+#define PICMR_ENABLE 0x80000000
+#define PICMR_NO_SNOOP_EN 0x40000000
+#define PICMR_PREFETCH_EN 0x20000000
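+/* Inbound windows mirror the outbound scheme, e.g. (illustrative):
+ *
+ *	out_be32((volatile u32 *)(immap + PICMR_REG0),
+ *		 PICMR_ENABLE | PICMR_PREFETCH_EN | PICMR_MASK_256MiB);
+ */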
+
+/* PCI error Registers */
+
+#define PCI_ERROR_STATUS_REG 0x10884
+#define PCI_ERROR_MASK_REG 0x10888
+#define PCI_ERROR_CONTROL_REG 0x1088C
+#define PCI_ERROR_ADRS_CAPTURE_REG 0x10890
+#define PCI_ERROR_DATA_CAPTURE_REG 0x10898
+#define PCI_ERROR_CTRL_CAPTURE_REG 0x108A0
+
+/* PCI error Register bit defines */
+
+#define PCI_ERROR_PCI_ADDR_PAR 0x00000001
+#define PCI_ERROR_PCI_DATA_PAR_WR 0x00000002
+#define PCI_ERROR_PCI_DATA_PAR_RD 0x00000004
+#define PCI_ERROR_PCI_NO_RSP 0x00000008
+#define PCI_ERROR_PCI_TAR_ABT 0x00000010
+#define PCI_ERROR_PCI_SERR 0x00000020
+#define PCI_ERROR_PCI_PERR_RD 0x00000040
+#define PCI_ERROR_PCI_PERR_WR 0x00000080
+#define PCI_ERROR_I2O_OFQO 0x00000100
+#define PCI_ERROR_I2O_IPQO 0x00000200
+#define PCI_ERROR_IRA 0x00000400
+#define PCI_ERROR_NMI 0x00000800
+#define PCI_ERROR_I2O_DBMC 0x00001000
+
+/*
+ * Register pair used to generate configuration cycles on the PCI bus
+ * and access the MPC826x's own PCI configuration registers.
+ */
+
+#define PCI_CFG_ADDR_REG 0x10900
+#define PCI_CFG_DATA_REG 0x10904
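+/* A configuration read then follows the usual indirect scheme, roughly
+ * (editorial sketch; the byte order used for these two registers depends
+ * on how the bridge is configured):
+ *
+ *	write cfg_addr = 0x80000000 | (bus << 16) | (devfn << 8) | (offset & ~3)
+ *	read  cfg_data
+ */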
+
+/* Bus parking decides where the bus control sits when idle. */
+/* If modifying the memory controllers while PCI is in use, park on the core. */
+
+#define PPC_ACR_BUS_PARK_CORE 0x6
+#define PPC_ACR_BUS_PARK_PCI 0x3
+
+#endif /* __M8260_PCI_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc52xx.h
+ *
+ * Prototypes, etc. for the Freescale MPC52xx embedded cpu chips
+ * May need to be cleaned as the port goes on ...
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Originally written by Dale Farnsworth <dfarnsworth@mvista.com>
+ * for the 2.4 kernel.
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 MontaVista, Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#ifndef __ASM_MPC52xx_H__
+#define __ASM_MPC52xx_H__
+
+#ifndef __ASSEMBLY__
+#include <asm/ppcboot.h>
+#include <asm/types.h>
+
+struct pt_regs;
+struct ocp_def;
+#endif /* __ASSEMBLY__ */
+
+
+/* ======================================================================== */
+/* Main registers/struct addresses */
+/* ======================================================================== */
+/* These are PHYSICAL addresses! */
+/* TODO: There should be no static mapping, but that's not yet the case, */
+/* so we require a 1:1 mapping. */
+
+#define MPC52xx_MBAR 0xf0000000 /* Phys address */
+#define MPC52xx_MBAR_SIZE 0x00010000
+#define MPC52xx_MBAR_VIRT 0xf0000000 /* Virt address */
+
+#define MPC52xx_MMAP_CTL (MPC52xx_MBAR + 0x0000)
+#define MPC52xx_CDM (MPC52xx_MBAR + 0x0200)
+#define MPC52xx_SFTRST (MPC52xx_MBAR + 0x0220)
+#define MPC52xx_SFTRST_BIT 0x01000000
+#define MPC52xx_INTR (MPC52xx_MBAR + 0x0500)
+#define MPC52xx_GPTx(x) (MPC52xx_MBAR + 0x0600 + ((x)<<4))
+#define MPC52xx_RTC (MPC52xx_MBAR + 0x0800)
+#define MPC52xx_MSCAN1 (MPC52xx_MBAR + 0x0900)
+#define MPC52xx_MSCAN2 (MPC52xx_MBAR + 0x0980)
+#define MPC52xx_GPIO (MPC52xx_MBAR + 0x0b00)
+#define MPC52xx_PCI (MPC52xx_MBAR + 0x0d00)
+#define MPC52xx_USB_OHCI (MPC52xx_MBAR + 0x1000)
+#define MPC52xx_SDMA (MPC52xx_MBAR + 0x1200)
+#define MPC52xx_XLB (MPC52xx_MBAR + 0x1f00)
+#define MPC52xx_PSCx(x) (MPC52xx_MBAR + 0x2000 + ((x)<<9))
+#define MPC52xx_PSC1 (MPC52xx_MBAR + 0x2000)
+#define MPC52xx_PSC2 (MPC52xx_MBAR + 0x2200)
+#define MPC52xx_PSC3 (MPC52xx_MBAR + 0x2400)
+#define MPC52xx_PSC4 (MPC52xx_MBAR + 0x2600)
+#define MPC52xx_PSC5 (MPC52xx_MBAR + 0x2800)
+#define MPC52xx_PSC6 (MPC52xx_MBAR + 0x2C00)
+#define MPC52xx_FEC (MPC52xx_MBAR + 0x3000)
+#define MPC52xx_ATA (MPC52xx_MBAR + 0x3a00)
+#define MPC52xx_I2C1 (MPC52xx_MBAR + 0x3d00)
+#define MPC52xx_I2C_MICR (MPC52xx_MBAR + 0x3d20)
+#define MPC52xx_I2C2 (MPC52xx_MBAR + 0x3d40)
+
+/* SRAM used for SDMA */
+#define MPC52xx_SRAM (MPC52xx_MBAR + 0x8000)
+#define MPC52xx_SRAM_SIZE (16*1024)
+#define MPC52xx_SDMA_MAX_TASKS 16
+
+ /* Memory allocation block size */
+#define MPC52xx_SDRAM_UNIT 0x8000 /* 32K byte */
+
+
+/* ======================================================================== */
+/* IRQ mapping */
+/* ======================================================================== */
+/* Be sure to look at mpc52xx_pic.h if you wish, for whatever reason,
+ * to change this mapping.
+ */
+
+#define MPC52xx_CRIT_IRQ_NUM 4
+#define MPC52xx_MAIN_IRQ_NUM 17
+#define MPC52xx_SDMA_IRQ_NUM 17
+#define MPC52xx_PERP_IRQ_NUM 23
+
+#define MPC52xx_CRIT_IRQ_BASE 0
+#define MPC52xx_MAIN_IRQ_BASE (MPC52xx_CRIT_IRQ_BASE + MPC52xx_CRIT_IRQ_NUM)
+#define MPC52xx_SDMA_IRQ_BASE (MPC52xx_MAIN_IRQ_BASE + MPC52xx_MAIN_IRQ_NUM)
+#define MPC52xx_PERP_IRQ_BASE (MPC52xx_SDMA_IRQ_BASE + MPC52xx_SDMA_IRQ_NUM)
+
+#define MPC52xx_IRQ0 (MPC52xx_CRIT_IRQ_BASE + 0)
+#define MPC52xx_SLICE_TIMER_0_IRQ (MPC52xx_CRIT_IRQ_BASE + 1)
+#define MPC52xx_HI_INT_IRQ (MPC52xx_CRIT_IRQ_BASE + 2)
+#define MPC52xx_CCS_IRQ (MPC52xx_CRIT_IRQ_BASE + 3)
+
+#define MPC52xx_IRQ1 (MPC52xx_MAIN_IRQ_BASE + 1)
+#define MPC52xx_IRQ2 (MPC52xx_MAIN_IRQ_BASE + 2)
+#define MPC52xx_IRQ3 (MPC52xx_MAIN_IRQ_BASE + 3)
+
+#define MPC52xx_SDMA_IRQ (MPC52xx_PERP_IRQ_BASE + 0)
+#define MPC52xx_PSC1_IRQ (MPC52xx_PERP_IRQ_BASE + 1)
+#define MPC52xx_PSC2_IRQ (MPC52xx_PERP_IRQ_BASE + 2)
+#define MPC52xx_PSC3_IRQ (MPC52xx_PERP_IRQ_BASE + 3)
+#define MPC52xx_PSC6_IRQ (MPC52xx_PERP_IRQ_BASE + 4)
+#define MPC52xx_IRDA_IRQ (MPC52xx_PERP_IRQ_BASE + 4)
+#define MPC52xx_FEC_IRQ (MPC52xx_PERP_IRQ_BASE + 5)
+#define MPC52xx_USB_IRQ (MPC52xx_PERP_IRQ_BASE + 6)
+#define MPC52xx_ATA_IRQ (MPC52xx_PERP_IRQ_BASE + 7)
+#define MPC52xx_PCI_CNTRL_IRQ (MPC52xx_PERP_IRQ_BASE + 8)
+#define MPC52xx_PCI_SCIRX_IRQ (MPC52xx_PERP_IRQ_BASE + 9)
+#define MPC52xx_PCI_SCITX_IRQ (MPC52xx_PERP_IRQ_BASE + 10)
+#define MPC52xx_PSC4_IRQ (MPC52xx_PERP_IRQ_BASE + 11)
+#define MPC52xx_PSC5_IRQ (MPC52xx_PERP_IRQ_BASE + 12)
+#define MPC52xx_SPI_MODF_IRQ (MPC52xx_PERP_IRQ_BASE + 13)
+#define MPC52xx_SPI_SPIF_IRQ (MPC52xx_PERP_IRQ_BASE + 14)
+#define MPC52xx_I2C1_IRQ (MPC52xx_PERP_IRQ_BASE + 15)
+#define MPC52xx_I2C2_IRQ (MPC52xx_PERP_IRQ_BASE + 16)
+#define MPC52xx_CAN1_IRQ (MPC52xx_PERP_IRQ_BASE + 17)
+#define MPC52xx_CAN2_IRQ (MPC52xx_PERP_IRQ_BASE + 18)
+#define MPC52xx_IR_RX_IRQ (MPC52xx_PERP_IRQ_BASE + 19)
+#define MPC52xx_IR_TX_IRQ (MPC52xx_PERP_IRQ_BASE + 20)
+#define MPC52xx_XLB_ARB_IRQ (MPC52xx_PERP_IRQ_BASE + 21)
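+/* Drivers then hook these like any other IRQ, e.g. (illustrative, 2.6-era
+ * API; handler and dev are hypothetical):
+ *
+ *	request_irq(MPC52xx_FEC_IRQ, fec_interrupt, 0, "mpc52xx_fec", dev);
+ */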
+
+
+
+/* ======================================================================== */
+/* Structures mapping of some unit register set */
+/* ======================================================================== */
+
+#ifndef __ASSEMBLY__
+
+/* Memory Mapping Control */
+struct mpc52xx_mmap_ctl {
+ volatile u32 mbar; /* MMAP_CTRL + 0x00 */
+
+ volatile u32 cs0_start; /* MMAP_CTRL + 0x04 */
+ volatile u32 cs0_stop; /* MMAP_CTRL + 0x08 */
+ volatile u32 cs1_start; /* MMAP_CTRL + 0x0c */
+ volatile u32 cs1_stop; /* MMAP_CTRL + 0x10 */
+ volatile u32 cs2_start; /* MMAP_CTRL + 0x14 */
+ volatile u32 cs2_stop; /* MMAP_CTRL + 0x18 */
+ volatile u32 cs3_start; /* MMAP_CTRL + 0x1c */
+ volatile u32 cs3_stop; /* MMAP_CTRL + 0x20 */
+ volatile u32 cs4_start; /* MMAP_CTRL + 0x24 */
+ volatile u32 cs4_stop; /* MMAP_CTRL + 0x28 */
+ volatile u32 cs5_start; /* MMAP_CTRL + 0x2c */
+ volatile u32 cs5_stop; /* MMAP_CTRL + 0x30 */
+
+ volatile u32 sdram0; /* MMAP_CTRL + 0x34 */
+ volatile u32 sdram1; /* MMAP_CTRL + 0x38 */
+
+ volatile u32 reserved[4]; /* MMAP_CTRL + 0x3c .. 0x48 */
+
+ volatile u32 boot_start; /* MMAP_CTRL + 0x4c */
+ volatile u32 boot_stop; /* MMAP_CTRL + 0x50 */
+
+ volatile u32 ipbi_ws_ctrl; /* MMAP_CTRL + 0x54 */
+
+ volatile u32 cs6_start; /* MMAP_CTRL + 0x58 */
+ volatile u32 cs6_stop; /* MMAP_CTRL + 0x5c */
+ volatile u32 cs7_start; /* MMAP_CTRL + 0x60 */
+ volatile u32 cs7_stop; /* MMAP_CTRL + 0x64 */
+};
+
+/* Interrupt controller */
+struct mpc52xx_intr {
+ volatile u32 per_mask; /* INTR + 0x00 */
+ volatile u32 per_pri1; /* INTR + 0x04 */
+ volatile u32 per_pri2; /* INTR + 0x08 */
+ volatile u32 per_pri3; /* INTR + 0x0c */
+ volatile u32 ctrl; /* INTR + 0x10 */
+ volatile u32 main_mask; /* INTR + 0x14 */
+ volatile u32 main_pri1; /* INTR + 0x18 */
+ volatile u32 main_pri2; /* INTR + 0x1c */
+ volatile u32 reserved1; /* INTR + 0x20 */
+ volatile u32 enc_status; /* INTR + 0x24 */
+ volatile u32 crit_status; /* INTR + 0x28 */
+ volatile u32 main_status; /* INTR + 0x2c */
+ volatile u32 per_status; /* INTR + 0x30 */
+ volatile u32 reserved2; /* INTR + 0x34 */
+ volatile u32 per_error; /* INTR + 0x38 */
+};
+
+/* SDMA */
+struct mpc52xx_sdma {
+ volatile u32 taskBar; /* SDMA + 0x00 */
+ volatile u32 currentPointer; /* SDMA + 0x04 */
+ volatile u32 endPointer; /* SDMA + 0x08 */
+ volatile u32 variablePointer;/* SDMA + 0x0c */
+
+ volatile u8 IntVect1; /* SDMA + 0x10 */
+ volatile u8 IntVect2; /* SDMA + 0x11 */
+ volatile u16 PtdCntrl; /* SDMA + 0x12 */
+
+ volatile u32 IntPend; /* SDMA + 0x14 */
+ volatile u32 IntMask; /* SDMA + 0x18 */
+
+ volatile u16 tcr[16]; /* SDMA + 0x1c .. 0x3a */
+
+ volatile u8 ipr[31]; /* SDMA + 0x3c .. 5b */
+
+ volatile u32 res1; /* SDMA + 0x5c */
+ volatile u32 task_size0; /* SDMA + 0x60 */
+ volatile u32 task_size1; /* SDMA + 0x64 */
+ volatile u32 MDEDebug; /* SDMA + 0x68 */
+ volatile u32 ADSDebug; /* SDMA + 0x6c */
+ volatile u32 Value1; /* SDMA + 0x70 */
+ volatile u32 Value2; /* SDMA + 0x74 */
+ volatile u32 Control; /* SDMA + 0x78 */
+ volatile u32 Status; /* SDMA + 0x7c */
+};
+
+/* GPT */
+struct mpc52xx_gpt {
+ volatile u32 mode; /* GPTx + 0x00 */
+ volatile u32 count; /* GPTx + 0x04 */
+ volatile u32 pwm; /* GPTx + 0x08 */
+ volatile u32 status; /* GPTx + 0x0c */
+};
+
+/* RTC */
+struct mpc52xx_rtc {
+ volatile u32 time_set; /* RTC + 0x00 */
+ volatile u32 date_set; /* RTC + 0x04 */
+ volatile u32 stopwatch; /* RTC + 0x08 */
+ volatile u32 int_enable; /* RTC + 0x0c */
+ volatile u32 time; /* RTC + 0x10 */
+ volatile u32 date; /* RTC + 0x14 */
+ volatile u32 stopwatch_intr; /* RTC + 0x18 */
+ volatile u32 bus_error; /* RTC + 0x1c */
+ volatile u32 dividers; /* RTC + 0x20 */
+};
+
+/* GPIO */
+struct mpc52xx_gpio {
+ volatile u32 port_config; /* GPIO + 0x00 */
+ volatile u32 simple_gpioe; /* GPIO + 0x04 */
+ volatile u32 simple_ode; /* GPIO + 0x08 */
+ volatile u32 simple_ddr; /* GPIO + 0x0c */
+ volatile u32 simple_dvo; /* GPIO + 0x10 */
+ volatile u32 simple_ival; /* GPIO + 0x14 */
+ volatile u8 outo_gpioe; /* GPIO + 0x18 */
+ volatile u8 reserved1[3]; /* GPIO + 0x19 */
+ volatile u8 outo_dvo; /* GPIO + 0x1c */
+ volatile u8 reserved2[3]; /* GPIO + 0x1d */
+ volatile u8 sint_gpioe; /* GPIO + 0x20 */
+ volatile u8 reserved3[3]; /* GPIO + 0x21 */
+ volatile u8 sint_ode; /* GPIO + 0x24 */
+ volatile u8 reserved4[3]; /* GPIO + 0x25 */
+ volatile u8 sint_ddr; /* GPIO + 0x28 */
+ volatile u8 reserved5[3]; /* GPIO + 0x29 */
+ volatile u8 sint_dvo; /* GPIO + 0x2c */
+ volatile u8 reserved6[3]; /* GPIO + 0x2d */
+ volatile u8 sint_inten; /* GPIO + 0x30 */
+ volatile u8 reserved7[3]; /* GPIO + 0x31 */
+ volatile u16 sint_itype; /* GPIO + 0x34 */
+ volatile u16 reserved8; /* GPIO + 0x36 */
+ volatile u8 gpio_control; /* GPIO + 0x38 */
+ volatile u8 reserved9[3]; /* GPIO + 0x39 */
+ volatile u8 sint_istat; /* GPIO + 0x3c */
+ volatile u8 sint_ival; /* GPIO + 0x3d */
+ volatile u8 bus_errs; /* GPIO + 0x3e */
+ volatile u8 reserved10; /* GPIO + 0x3f */
+};
+
+#define MPC52xx_GPIO_PSC_CONFIG_UART_WITHOUT_CD 4
+#define MPC52xx_GPIO_PSC_CONFIG_UART_WITH_CD 5
+#define MPC52xx_GPIO_PCI_DIS (1<<15)
+
+/* XLB Bus control */
+struct mpc52xx_xlb {
+ volatile u8 reserved[0x40];
+ volatile u32 config; /* XLB + 0x40 */
+ volatile u32 version; /* XLB + 0x44 */
+ volatile u32 status; /* XLB + 0x48 */
+ volatile u32 int_enable; /* XLB + 0x4c */
+ volatile u32 addr_capture; /* XLB + 0x50 */
+ volatile u32 bus_sig_capture; /* XLB + 0x54 */
+ volatile u32 addr_timeout; /* XLB + 0x58 */
+ volatile u32 data_timeout; /* XLB + 0x5c */
+ volatile u32 bus_act_timeout; /* XLB + 0x60 */
+ volatile u32 master_pri_enable; /* XLB + 0x64 */
+ volatile u32 master_priority; /* XLB + 0x68 */
+ volatile u32 base_address; /* XLB + 0x6c */
+ volatile u32 snoop_window; /* XLB + 0x70 */
+};
+
+
+/* Clock Distribution control */
+struct mpc52xx_cdm {
+ volatile u32 jtag_id; /* MBAR_CDM + 0x00 reg0 read only */
+ volatile u32 rstcfg; /* MBAR_CDM + 0x04 reg1 read only */
+ volatile u32 breadcrumb; /* MBAR_CDM + 0x08 reg2 */
+
+ volatile u8 mem_clk_sel; /* MBAR_CDM + 0x0c reg3 byte0 */
+ volatile u8 xlb_clk_sel; /* MBAR_CDM + 0x0d reg3 byte1 read only */
+ volatile u8 ipb_clk_sel; /* MBAR_CDM + 0x0e reg3 byte2 */
+ volatile u8 pci_clk_sel; /* MBAR_CDM + 0x0f reg3 byte3 */
+
+ volatile u8 ext_48mhz_en; /* MBAR_CDM + 0x10 reg4 byte0 */
+ volatile u8 fd_enable; /* MBAR_CDM + 0x11 reg4 byte1 */
+ volatile u16 fd_counters; /* MBAR_CDM + 0x12 reg4 byte2,3 */
+
+ volatile u32 clk_enables; /* MBAR_CDM + 0x14 reg5 */
+
+ volatile u8 osc_disable; /* MBAR_CDM + 0x18 reg6 byte0 */
+ volatile u8 reserved0[3]; /* MBAR_CDM + 0x19 reg6 byte1,2,3 */
+
+ volatile u8 ccs_sleep_enable;/* MBAR_CDM + 0x1c reg7 byte0 */
+ volatile u8 osc_sleep_enable;/* MBAR_CDM + 0x1d reg7 byte1 */
+ volatile u8 reserved1; /* MBAR_CDM + 0x1e reg7 byte2 */
+ volatile u8 ccs_qreq_test; /* MBAR_CDM + 0x1f reg7 byte3 */
+
+ volatile u8 soft_reset; /* MBAR_CDM + 0x20 reg8 byte0 */
+ volatile u8 no_ckstp; /* MBAR_CDM + 0x21 reg8 byte1 */
+ volatile u8 reserved2[2]; /* MBAR_CDM + 0x22 reg8 byte2,3 */
+
+ volatile u8 pll_lock; /* MBAR_CDM + 0x24 reg9 byte0 */
+ volatile u8 pll_looselock; /* MBAR_CDM + 0x25 reg9 byte1 */
+ volatile u8 pll_sm_lockwin; /* MBAR_CDM + 0x26 reg9 byte2 */
+ volatile u8 reserved3; /* MBAR_CDM + 0x27 reg9 byte3 */
+
+ volatile u16 reserved4; /* MBAR_CDM + 0x28 reg10 byte0,1 */
+ volatile u16 mclken_div_psc1;/* MBAR_CDM + 0x2a reg10 byte2,3 */
+
+ volatile u16 reserved5; /* MBAR_CDM + 0x2c reg11 byte0,1 */
+ volatile u16 mclken_div_psc2;/* MBAR_CDM + 0x2e reg11 byte2,3 */
+
+ volatile u16 reserved6; /* MBAR_CDM + 0x30 reg12 byte0,1 */
+ volatile u16 mclken_div_psc3;/* MBAR_CDM + 0x32 reg12 byte2,3 */
+
+ volatile u16 reserved7; /* MBAR_CDM + 0x34 reg13 byte0,1 */
+ volatile u16 mclken_div_psc6;/* MBAR_CDM + 0x36 reg13 byte2,3 */
+};
+
+#endif /* __ASSEMBLY__ */
+
+
+/* ========================================================================= */
+/* Prototypes for MPC52xx syslib */
+/* ========================================================================= */
+
+#ifndef __ASSEMBLY__
+
+extern void mpc52xx_init_irq(void);
+extern int mpc52xx_get_irq(struct pt_regs *regs);
+
+extern unsigned long mpc52xx_find_end_of_memory(void);
+extern void mpc52xx_set_bat(void);
+extern void mpc52xx_map_io(void);
+extern void mpc52xx_restart(char *cmd);
+extern void mpc52xx_halt(void);
+extern void mpc52xx_power_off(void);
+extern void mpc52xx_progress(char *s, unsigned short hex);
+extern void mpc52xx_calibrate_decr(void);
+extern void mpc52xx_add_board_devices(struct ocp_def board_ocp[]);
+
+#endif /* __ASSEMBLY__ */
+
+
+/* ========================================================================= */
+/* Platform configuration */
+/* ========================================================================= */
+
+/* The U-Boot platform information struct */
+extern bd_t __res;
+
+/* Platform options */
+#if defined(CONFIG_LITE5200)
+#include <platforms/lite5200.h>
+#endif
+
+
+#endif /* __ASM_MPC52xx_H__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc52xx_psc.h
+ *
+ * Definitions of consts/structs to drive the Freescale MPC52xx OnChip
+ * PSCs. These are shared between multiple drivers since a PSC can be a
+ * UART, AC97, IR, I2S, ... So this header is in asm-ppc.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based/Extracted from some header of the 2.4 originally written by
+ * Dale Farnsworth <dfarnsworth@mvista.com>
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 MontaVista, Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#ifndef __MPC52xx_PSC_H__
+#define __MPC52xx_PSC_H__
+
+#include <asm/types.h>
+
+/* Max number of PSCs */
+#define MPC52xx_PSC_MAXNUM 6
+
+/* Programmable Serial Controller (PSC) status register bits */
+#define MPC52xx_PSC_SR_CDE 0x0080
+#define MPC52xx_PSC_SR_RXRDY 0x0100
+#define MPC52xx_PSC_SR_RXFULL 0x0200
+#define MPC52xx_PSC_SR_TXRDY 0x0400
+#define MPC52xx_PSC_SR_TXEMP 0x0800
+#define MPC52xx_PSC_SR_OE 0x1000
+#define MPC52xx_PSC_SR_PE 0x2000
+#define MPC52xx_PSC_SR_FE 0x4000
+#define MPC52xx_PSC_SR_RB 0x8000
+
+/* PSC Command values */
+#define MPC52xx_PSC_RX_ENABLE 0x0001
+#define MPC52xx_PSC_RX_DISABLE 0x0002
+#define MPC52xx_PSC_TX_ENABLE 0x0004
+#define MPC52xx_PSC_TX_DISABLE 0x0008
+#define MPC52xx_PSC_SEL_MODE_REG_1 0x0010
+#define MPC52xx_PSC_RST_RX 0x0020
+#define MPC52xx_PSC_RST_TX 0x0030
+#define MPC52xx_PSC_RST_ERR_STAT 0x0040
+#define MPC52xx_PSC_RST_BRK_CHG_INT 0x0050
+#define MPC52xx_PSC_START_BRK 0x0060
+#define MPC52xx_PSC_STOP_BRK 0x0070
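+/* Commands are written one at a time to the command register of the
+ * structure below, e.g. (illustrative):
+ *
+ *	out_8(&psc->command, MPC52xx_PSC_RST_TX);
+ *	out_8(&psc->command, MPC52xx_PSC_TX_ENABLE);
+ */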
+
+/* PSC TxRx FIFO status bits */
+#define MPC52xx_PSC_RXTX_FIFO_ERR 0x0040
+#define MPC52xx_PSC_RXTX_FIFO_UF 0x0020
+#define MPC52xx_PSC_RXTX_FIFO_OF 0x0010
+#define MPC52xx_PSC_RXTX_FIFO_FR 0x0008
+#define MPC52xx_PSC_RXTX_FIFO_FULL 0x0004
+#define MPC52xx_PSC_RXTX_FIFO_ALARM 0x0002
+#define MPC52xx_PSC_RXTX_FIFO_EMPTY 0x0001
+
+/* PSC interrupt mask bits */
+#define MPC52xx_PSC_IMR_TXRDY 0x0100
+#define MPC52xx_PSC_IMR_RXRDY 0x0200
+#define MPC52xx_PSC_IMR_DB 0x0400
+#define MPC52xx_PSC_IMR_IPC 0x8000
+
+/* PSC input port change bit */
+#define MPC52xx_PSC_CTS 0x01
+#define MPC52xx_PSC_DCD 0x02
+#define MPC52xx_PSC_D_CTS 0x10
+#define MPC52xx_PSC_D_DCD 0x20
+
+/* PSC mode fields */
+#define MPC52xx_PSC_MODE_5_BITS 0x00
+#define MPC52xx_PSC_MODE_6_BITS 0x01
+#define MPC52xx_PSC_MODE_7_BITS 0x02
+#define MPC52xx_PSC_MODE_8_BITS 0x03
+#define MPC52xx_PSC_MODE_BITS_MASK 0x03
+#define MPC52xx_PSC_MODE_PAREVEN 0x00
+#define MPC52xx_PSC_MODE_PARODD 0x04
+#define MPC52xx_PSC_MODE_PARFORCE 0x08
+#define MPC52xx_PSC_MODE_PARNONE 0x10
+#define MPC52xx_PSC_MODE_ERR 0x20
+#define MPC52xx_PSC_MODE_FFULL 0x40
+#define MPC52xx_PSC_MODE_RXRTS 0x80
+
+#define MPC52xx_PSC_MODE_ONE_STOP_5_BITS 0x00
+#define MPC52xx_PSC_MODE_ONE_STOP 0x07
+#define MPC52xx_PSC_MODE_TWO_STOP 0x0f
+
+#define MPC52xx_PSC_RFNUM_MASK 0x01ff
+
+
+/* Structure of the hardware registers */
+struct mpc52xx_psc {
+ volatile u8 mode; /* PSC + 0x00 */
+ volatile u8 reserved0[3];
+ union { /* PSC + 0x04 */
+ volatile u16 status;
+ volatile u16 clock_select;
+ } sr_csr;
+#define mpc52xx_psc_status sr_csr.status
+#define mpc52xx_psc_clock_select sr_csr.clock_select
+ volatile u16 reserved1;
+ volatile u8 command; /* PSC + 0x08 */
+ volatile u8 reserved2[3];
+ union { /* PSC + 0x0c */
+ volatile u8 buffer_8;
+ volatile u16 buffer_16;
+ volatile u32 buffer_32;
+ } buffer;
+#define mpc52xx_psc_buffer_8 buffer.buffer_8
+#define mpc52xx_psc_buffer_16 buffer.buffer_16
+#define mpc52xx_psc_buffer_32 buffer.buffer_32
+ union { /* PSC + 0x10 */
+ volatile u8 ipcr;
+ volatile u8 acr;
+ } ipcr_acr;
+#define mpc52xx_psc_ipcr ipcr_acr.ipcr
+#define mpc52xx_psc_acr ipcr_acr.acr
+ volatile u8 reserved3[3];
+ union { /* PSC + 0x14 */
+ volatile u16 isr;
+ volatile u16 imr;
+ } isr_imr;
+#define mpc52xx_psc_isr isr_imr.isr
+#define mpc52xx_psc_imr isr_imr.imr
+ volatile u16 reserved4;
+ volatile u8 ctur; /* PSC + 0x18 */
+ volatile u8 reserved5[3];
+ volatile u8 ctlr; /* PSC + 0x1c */
+ volatile u8 reserved6[3];
+ volatile u16 ccr; /* PSC + 0x20 */
+ volatile u8 reserved7[14];
+ volatile u8 ivr; /* PSC + 0x30 */
+ volatile u8 reserved8[3];
+ volatile u8 ip; /* PSC + 0x34 */
+ volatile u8 reserved9[3];
+ volatile u8 op1; /* PSC + 0x38 */
+ volatile u8 reserved10[3];
+ volatile u8 op0; /* PSC + 0x3c */
+ volatile u8 reserved11[3];
+ volatile u32 sicr; /* PSC + 0x40 */
+ volatile u8 ircr1; /* PSC + 0x44 */
+ volatile u8 reserved13[3];
+ volatile u8 ircr2; /* PSC + 0x48 */
+ volatile u8 reserved14[3];
+ volatile u8 irsdr; /* PSC + 0x4c */
+ volatile u8 reserved15[3];
+ volatile u8 irmdr; /* PSC + 0x50 */
+ volatile u8 reserved16[3];
+ volatile u8 irfdr; /* PSC + 0x54 */
+ volatile u8 reserved17[3];
+ volatile u16 rfnum; /* PSC + 0x58 */
+ volatile u16 reserved18;
+ volatile u16 tfnum; /* PSC + 0x5c */
+ volatile u16 reserved19;
+ volatile u32 rfdata; /* PSC + 0x60 */
+ volatile u16 rfstat; /* PSC + 0x64 */
+ volatile u16 reserved20;
+ volatile u8 rfcntl; /* PSC + 0x68 */
+ volatile u8 reserved21[5];
+ volatile u16 rfalarm; /* PSC + 0x6e */
+ volatile u16 reserved22;
+ volatile u16 rfrptr; /* PSC + 0x72 */
+ volatile u16 reserved23;
+ volatile u16 rfwptr; /* PSC + 0x76 */
+ volatile u16 reserved24;
+ volatile u16 rflrfptr; /* PSC + 0x7a */
+ volatile u16 reserved25;
+ volatile u16 rflwfptr; /* PSC + 0x7e */
+ volatile u32 tfdata; /* PSC + 0x80 */
+ volatile u16 tfstat; /* PSC + 0x84 */
+ volatile u16 reserved26;
+ volatile u8 tfcntl; /* PSC + 0x88 */
+ volatile u8 reserved27[5];
+ volatile u16 tfalarm; /* PSC + 0x8e */
+ volatile u16 reserved28;
+ volatile u16 tfrptr; /* PSC + 0x92 */
+ volatile u16 reserved29;
+ volatile u16 tfwptr; /* PSC + 0x96 */
+ volatile u16 reserved30;
+ volatile u16 tflrfptr; /* PSC + 0x9a */
+ volatile u16 reserved31;
+ volatile u16 tflwfptr; /* PSC + 0x9e */
+};
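+
+/*
+ * Usage sketch (illustrative only): a UART driver would map the PSC
+ * block and program e.g. 8N1 through the mode register, which (as on
+ * other 68681-style PSCs) is written twice in sequence, MR1 then MR2.
+ * PSC_BASE is a placeholder, and the mode-register-select command code
+ * is assumed to be defined with the other command codes earlier in
+ * this header:
+ *
+ *	struct mpc52xx_psc *psc = ioremap(PSC_BASE, sizeof(*psc));
+ *
+ *	psc->command = MPC52xx_PSC_SEL_MODE_REG_1;
+ *	psc->mode = MPC52xx_PSC_MODE_8_BITS | MPC52xx_PSC_MODE_PARNONE;
+ *	psc->mode = MPC52xx_PSC_MODE_ONE_STOP;
+ */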
+
+
+#endif /* __MPC52xx_PSC_H__ */
--- /dev/null
+/* include/asm-ppc/mpc8260_pci9.h
+ *
+ * Undefine the PCI read* and in* macros so we can define them as functions
+ * that implement the workaround for the MPC8260 device erratum PCI 9.
+ *
+ * This header file should only be included at the end of include/asm-ppc/io.h
+ * and never included directly anywhere else.
+ *
+ * Author: andy_lowe@mvista.com
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#ifndef _PPC_IO_H
+#error "Do not include mpc8260_pci9.h directly."
+#endif
+
+#ifdef __KERNEL__
+#ifndef __CONFIG_8260_PCI9_DEFS
+#define __CONFIG_8260_PCI9_DEFS
+
+#undef readb
+#undef readw
+#undef readl
+#undef insb
+#undef insw
+#undef insl
+#undef inb
+#undef inw
+#undef inl
+#undef insw_ns
+#undef insl_ns
+#undef memcpy_fromio
+
+extern int readb(volatile unsigned char *addr);
+extern int readw(volatile unsigned short *addr);
+extern unsigned readl(volatile unsigned *addr);
+extern void insb(unsigned port, void *buf, int ns);
+extern void insw(unsigned port, void *buf, int ns);
+extern void insl(unsigned port, void *buf, int nl);
+extern int inb(unsigned port);
+extern int inw(unsigned port);
+extern unsigned inl(unsigned port);
+extern void insw_ns(unsigned port, void *buf, int ns);
+extern void insl_ns(unsigned port, void *buf, int nl);
+extern void *memcpy_fromio(void *dest, unsigned long src, size_t count);
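+
+/*
+ * Note: no driver changes are needed for this workaround. A call such
+ * as readb(addr), which previously expanded to the io.h macro, now
+ * resolves to the out-of-line function declared above, which performs
+ * the access sequence required by the PCI 9 erratum.
+ */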
+
+#endif /* !__CONFIG_8260_PCI9_DEFS */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc85xx.h
+ *
+ * MPC85xx definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_MPC85xx_H__
+#define __ASM_MPC85xx_H__
+
+#include <linux/config.h>
+#include <asm/mmu.h>
+
+#ifdef CONFIG_85xx
+
+#ifdef CONFIG_MPC8540_ADS
+#include <platforms/85xx/mpc8540_ads.h>
+#endif
+
+#define _IO_BASE isa_io_base
+#define _ISA_MEM_BASE isa_mem_base
+#define PCI_DRAM_OFFSET pci_dram_offset
+
+/*
+ * The "residual" board information structure the boot loader passes
+ * into the kernel.
+ */
+extern unsigned char __res[];
+
+/* Internal IRQs on MPC85xx OpenPIC */
+/* Not all of these exist on all MPC85xx implementations */
+
+#ifndef MPC85xx_OPENPIC_IRQ_OFFSET
+#define MPC85xx_OPENPIC_IRQ_OFFSET 64
+#endif
+
+/* The 32 internal sources */
+#define MPC85xx_IRQ_L2CACHE ( 0 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_ECM ( 1 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DDR ( 2 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_LBIU ( 3 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA0 ( 4 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA1 ( 5 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA2 ( 6 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA3 ( 7 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PCI1 ( 8 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PCI2 ( 9 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_ERROR ( 9 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_BELL (10 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_TX (11 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_RX (12 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_TX (13 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_RX (14 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_ERROR (18 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_TX (19 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_RX (20 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_ERROR (24 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_FEC (25 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DUART (26 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_IIC1 (27 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PERFMON (28 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_CPM (30 + MPC85xx_OPENPIC_IRQ_OFFSET)
+
+/* The 12 external interrupt lines */
+#define MPC85xx_IRQ_EXT0 (32 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT1 (33 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT2 (34 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT3 (35 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT4 (36 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT5 (37 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT6 (38 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT7 (39 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT8 (40 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT9 (41 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT10 (42 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT11 (43 + MPC85xx_OPENPIC_IRQ_OFFSET)
+
+/* Offset from CCSRBAR */
+#define MPC85xx_CPM_OFFSET (0x80000)
+#define MPC85xx_CPM_SIZE (0x40000)
+#define MPC85xx_DMA_OFFSET (0x21000)
+#define MPC85xx_DMA_SIZE (0x01000)
+#define MPC85xx_ENET1_OFFSET (0x24000)
+#define MPC85xx_ENET1_SIZE (0x01000)
+#define MPC85xx_ENET2_OFFSET (0x25000)
+#define MPC85xx_ENET2_SIZE (0x01000)
+#define MPC85xx_ENET3_OFFSET (0x26000)
+#define MPC85xx_ENET3_SIZE (0x01000)
+#define MPC85xx_GUTS_OFFSET (0xe0000)
+#define MPC85xx_GUTS_SIZE (0x01000)
+#define MPC85xx_IIC1_OFFSET (0x03000)
+#define MPC85xx_IIC1_SIZE (0x01000)
+#define MPC85xx_OPENPIC_OFFSET (0x40000)
+#define MPC85xx_OPENPIC_SIZE (0x40000)
+#define MPC85xx_PCI1_OFFSET (0x08000)
+#define MPC85xx_PCI1_SIZE (0x01000)
+#define MPC85xx_PCI2_OFFSET (0x09000)
+#define MPC85xx_PCI2_SIZE (0x01000)
+#define MPC85xx_PERFMON_OFFSET (0xe1000)
+#define MPC85xx_PERFMON_SIZE (0x01000)
+#define MPC85xx_UART0_OFFSET (0x04500)
+#define MPC85xx_UART0_SIZE (0x00100)
+#define MPC85xx_UART1_OFFSET (0x04600)
+#define MPC85xx_UART1_SIZE (0x00100)
+
+#define MPC85xx_CCSRBAR_SIZE (1024*1024)
+
+/* Let modules/drivers get at CCSRBAR */
+extern phys_addr_t get_ccsrbar(void);
+
+#ifdef MODULE
+#define CCSRBAR get_ccsrbar()
+#else
+#define CCSRBAR BOARD_CCSRBAR
+#endif
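+
+/*
+ * Usage sketch (illustrative, not a defined interface): on-chip blocks
+ * are located by adding their offset to CCSRBAR and mapping the
+ * result, e.g. for the first DUART port:
+ *
+ *	void *uart = ioremap(CCSRBAR + MPC85xx_UART0_OFFSET,
+ *			     MPC85xx_UART0_SIZE);
+ */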
+
+#endif /* CONFIG_85xx */
+#endif /* __ASM_MPC85xx_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/ppc4xx_dma.h
+ *
+ * IBM PPC4xx DMA engine library
+ *
+ * Copyright 2000-2004 MontaVista Software Inc.
+ *
+ * Cleaned up a bit more, Matt Porter <mporter@kernel.crashing.org>
+ *
+ * Original code by Armin Kuster <akuster@mvista.com>
+ * and Pete Popov <ppopov@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASMPPC_PPC4xx_DMA_H
+#define __ASMPPC_PPC4xx_DMA_H
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/mmu.h>
+#include <asm/ibm4xx.h>
+
+#undef DEBUG_4xxDMA
+
+#define MAX_PPC4xx_DMA_CHANNELS 4
+
+/* in arch/ppc/kernel/setup.c -- Cort */
+extern unsigned long DMA_MODE_WRITE, DMA_MODE_READ;
+
+/*
+ * Function return status codes
+ * These values are used to indicate whether or not the function
+ * call was successful, or a bad/invalid parameter was passed.
+ */
+#define DMA_STATUS_GOOD 0
+#define DMA_STATUS_BAD_CHANNEL 1
+#define DMA_STATUS_BAD_HANDLE 2
+#define DMA_STATUS_BAD_MODE 3
+#define DMA_STATUS_NULL_POINTER 4
+#define DMA_STATUS_OUT_OF_MEMORY 5
+#define DMA_STATUS_SGL_LIST_EMPTY 6
+#define DMA_STATUS_GENERAL_ERROR 7
+#define DMA_STATUS_CHANNEL_NOTFREE 8
+
+#define DMA_CHANNEL_BUSY 0x80000000
+
+/*
+ * These indicate status as returned from the DMA Status Register.
+ */
+#define DMA_STATUS_NO_ERROR 0
+#define DMA_STATUS_CS 1 /* Count Status */
+#define DMA_STATUS_TS 2 /* Transfer Status */
+#define DMA_STATUS_DMA_ERROR 3 /* DMA Error Occurred */
+#define DMA_STATUS_DMA_BUSY 4 /* The channel is busy */
+
+
+/*
+ * DMA Channel Control Registers
+ */
+
+#ifdef CONFIG_44x
+#define PPC4xx_DMA_64BIT
+#define DMA_CR_OFFSET 1
+#else
+#define DMA_CR_OFFSET 0
+#endif
+
+#define DMA_CE_ENABLE (1<<31) /* DMA Channel Enable */
+#define SET_DMA_CE_ENABLE(x) (((x)&0x1)<<31)
+#define GET_DMA_CE_ENABLE(x) (((x)&DMA_CE_ENABLE)>>31)
+
+#define DMA_CIE_ENABLE (1<<30) /* DMA Channel Interrupt Enable */
+#define SET_DMA_CIE_ENABLE(x) (((x)&0x1)<<30)
+#define GET_DMA_CIE_ENABLE(x) (((x)&DMA_CIE_ENABLE)>>30)
+
+#define DMA_TD (1<<29)
+#define SET_DMA_TD(x) (((x)&0x1)<<29)
+#define GET_DMA_TD(x) (((x)&DMA_TD)>>29)
+
+#define DMA_PL (1<<28) /* Peripheral Location */
+#define SET_DMA_PL(x) (((x)&0x1)<<28)
+#define GET_DMA_PL(x) (((x)&DMA_PL)>>28)
+
+#define EXTERNAL_PERIPHERAL 0
+#define INTERNAL_PERIPHERAL 1
+
+#define SET_DMA_PW(x) (((x)&0x3)<<(26-DMA_CR_OFFSET)) /* Peripheral Width */
+#define DMA_PW_MASK SET_DMA_PW(3)
+#define PW_8 0
+#define PW_16 1
+#define PW_32 2
+#define PW_64 3
+/* FIXME: Add PW_128 support for 440GP DMA block */
+#define GET_DMA_PW(x) (((x)&DMA_PW_MASK)>>(26-DMA_CR_OFFSET))
+
+#define DMA_DAI (1<<(25-DMA_CR_OFFSET)) /* Destination Address Increment */
+#define SET_DMA_DAI(x) (((x)&0x1)<<(25-DMA_CR_OFFSET))
+
+#define DMA_SAI (1<<(24-DMA_CR_OFFSET)) /* Source Address Increment */
+#define SET_DMA_SAI(x) (((x)&0x1)<<(24-DMA_CR_OFFSET))
+
+#define DMA_BEN (1<<(23-DMA_CR_OFFSET)) /* Buffer Enable */
+#define SET_DMA_BEN(x) (((x)&0x1)<<(23-DMA_CR_OFFSET))
+
+#define SET_DMA_TM(x) (((x)&0x3)<<(21-DMA_CR_OFFSET)) /* Transfer Mode */
+#define DMA_TM_MASK SET_DMA_TM(3)
+#define TM_PERIPHERAL 0 /* Peripheral */
+#define TM_RESERVED 1 /* Reserved */
+#define TM_S_MM 2 /* Memory to Memory */
+#define TM_D_MM 3 /* Device Paced Memory to Memory */
+#define GET_DMA_TM(x) (((x)&DMA_TM_MASK)>>(21-DMA_CR_OFFSET))
+
+#define SET_DMA_PSC(x) (((x)&0x3)<<(19-DMA_CR_OFFSET)) /* Peripheral Setup Cycles */
+#define DMA_PSC_MASK SET_DMA_PSC(3)
+#define GET_DMA_PSC(x) (((x)&DMA_PSC_MASK)>>(19-DMA_CR_OFFSET))
+
+#define SET_DMA_PWC(x) (((x)&0x3F)<<(13-DMA_CR_OFFSET)) /* Peripheral Wait Cycles */
+#define DMA_PWC_MASK SET_DMA_PWC(0x3F)
+#define GET_DMA_PWC(x) (((x)&DMA_PWC_MASK)>>(13-DMA_CR_OFFSET))
+
+#define SET_DMA_PHC(x) (((x)&0x7)<<(10-DMA_CR_OFFSET)) /* Peripheral Hold Cycles */
+#define DMA_PHC_MASK SET_DMA_PHC(0x7)
+#define GET_DMA_PHC(x) (((x)&DMA_PHC_MASK)>>(10-DMA_CR_OFFSET))
+
+#define DMA_ETD_OUTPUT (1<<(9-DMA_CR_OFFSET)) /* EOT pin is a TC output */
+#define SET_DMA_ETD(x) (((x)&0x1)<<(9-DMA_CR_OFFSET))
+
+#define DMA_TCE_ENABLE (1<<(8-DMA_CR_OFFSET))
+#define SET_DMA_TCE(x) (((x)&0x1)<<(8-DMA_CR_OFFSET))
+
+#define DMA_DEC (1<<2) /* Address Decrement */
+#define SET_DMA_DEC(x) (((x)&0x1)<<2)
+#define GET_DMA_DEC(x) (((x)&DMA_DEC)>>2)
+
+/*
+ * Transfer Modes
+ * These modes are defined in a way that makes it possible to
+ * simply "or" in the value in the control register.
+ */
+
+#define DMA_MODE_MM (SET_DMA_TM(TM_S_MM)) /* memory to memory */
+
+ /* Device-paced memory to memory, */
+ /* device is at source address */
+#define DMA_MODE_MM_DEVATSRC (DMA_TD | SET_DMA_TM(TM_D_MM))
+
+ /* Device-paced memory to memory, */
+ /* device is at destination address */
+#define DMA_MODE_MM_DEVATDST (SET_DMA_TM(TM_D_MM))
+
+/* 405gp/440gp */
+#define SET_DMA_PREFETCH(x) (((x)&0x3)<<(4-DMA_CR_OFFSET)) /* Memory Read Prefetch */
+#define DMA_PREFETCH_MASK SET_DMA_PREFETCH(3)
+#define PREFETCH_1 0 /* Prefetch 1 Double Word */
+#define PREFETCH_2 1
+#define PREFETCH_4 2
+#define GET_DMA_PREFETCH(x) (((x)&DMA_PREFETCH_MASK)>>(4-DMA_CR_OFFSET))
+
+#define DMA_PCE (1<<(3-DMA_CR_OFFSET)) /* Parity Check Enable */
+#define SET_DMA_PCE(x) (((x)&0x1)<<(3-DMA_CR_OFFSET))
+#define GET_DMA_PCE(x) (((x)&DMA_PCE)>>(3-DMA_CR_OFFSET))
+
+/* stb3x */
+
+#define DMA_ECE_ENABLE (1<<5)
+#define SET_DMA_ECE(x) (((x)&0x1)<<5)
+#define GET_DMA_ECE(x) (((x)&DMA_ECE_ENABLE)>>5)
+
+#define DMA_TCD_DISABLE (1<<4)
+#define SET_DMA_TCD(x) (((x)&0x1)<<4)
+#define GET_DMA_TCD(x) (((x)&DMA_TCD_DISABLE)>>4)
+
+typedef uint32_t sgl_handle_t;
+
+#ifdef CONFIG_PPC4xx_EDMA
+
+#define SGL_LIST_SIZE 4096
+#define DMA_PPC4xx_SIZE SGL_LIST_SIZE
+
+#define SET_DMA_PRIORITY(x) (((x)&0x3)<<(6-DMA_CR_OFFSET)) /* DMA Channel Priority */
+#define DMA_PRIORITY_MASK SET_DMA_PRIORITY(3)
+#define PRIORITY_LOW 0
+#define PRIORITY_MID_LOW 1
+#define PRIORITY_MID_HIGH 2
+#define PRIORITY_HIGH 3
+#define GET_DMA_PRIORITY(x) (((x)&DMA_PRIORITY_MASK)>>(6-DMA_CR_OFFSET))
+
+/*
+ * DMA Polarity Configuration Register
+ */
+#define DMAReq_ActiveLow(chan) (1<<(31-(chan*3)))
+#define DMAAck_ActiveLow(chan) (1<<(30-(chan*3)))
+#define EOT_ActiveLow(chan) (1<<(29-(chan*3))) /* End of Transfer */
+
+/*
+ * DMA Sleep Mode Register
+ */
+#define SLEEP_MODE_ENABLE (1<<21)
+
+/*
+ * DMA Status Register
+ */
+#define DMA_CS0 (1<<31) /* Terminal Count has been reached */
+#define DMA_CS1 (1<<30)
+#define DMA_CS2 (1<<29)
+#define DMA_CS3 (1<<28)
+
+#define DMA_TS0 (1<<27) /* End of Transfer has been requested */
+#define DMA_TS1 (1<<26)
+#define DMA_TS2 (1<<25)
+#define DMA_TS3 (1<<24)
+
+#define DMA_CH0_ERR (1<<23) /* DMA Channel 0 Error */
+#define DMA_CH1_ERR (1<<22)
+#define DMA_CH2_ERR (1<<21)
+#define DMA_CH3_ERR (1<<20)
+
+#define DMA_IN_DMA_REQ0 (1<<19) /* Internal DMA Request is pending */
+#define DMA_IN_DMA_REQ1 (1<<18)
+#define DMA_IN_DMA_REQ2 (1<<17)
+#define DMA_IN_DMA_REQ3 (1<<16)
+
+#define DMA_EXT_DMA_REQ0 (1<<15) /* External DMA Request is pending */
+#define DMA_EXT_DMA_REQ1 (1<<14)
+#define DMA_EXT_DMA_REQ2 (1<<13)
+#define DMA_EXT_DMA_REQ3 (1<<12)
+
+#define DMA_CH0_BUSY (1<<11) /* DMA Channel 0 Busy */
+#define DMA_CH1_BUSY (1<<10)
+#define DMA_CH2_BUSY (1<<9)
+#define DMA_CH3_BUSY (1<<8)
+
+#define DMA_SG0 (1<<7) /* DMA Channel 0 Scatter/Gather in progress */
+#define DMA_SG1 (1<<6)
+#define DMA_SG2 (1<<5)
+#define DMA_SG3 (1<<4)
+
+/*
+ * DMA SG Command Register
+ */
+#define SSG_ENABLE(chan) (1<<(31-chan)) /* Start Scatter Gather */
+#define SSG_MASK_ENABLE(chan) (1<<(15-chan)) /* Enable writing to SSG0 bit */
+
+/*
+ * DMA Scatter/Gather Descriptor Bit fields
+ */
+#define SG_LINK (1<<31) /* Link */
+#define SG_TCI_ENABLE (1<<29) /* Enable Terminal Count Interrupt */
+#define SG_ETI_ENABLE (1<<28) /* Enable End of Transfer Interrupt */
+#define SG_ERI_ENABLE (1<<27) /* Enable Error Interrupt */
+#define SG_COUNT_MASK 0xFFFF /* Count Field */
+
+#define SET_DMA_CONTROL \
+ (SET_DMA_CIE_ENABLE(p_init->int_enable) | /* interrupt enable */ \
+ SET_DMA_BEN(p_init->buffer_enable) | /* buffer enable */\
+ SET_DMA_ETD(p_init->etd_output) | /* end of transfer pin */ \
+ SET_DMA_TCE(p_init->tce_enable) | /* terminal count enable */ \
+ SET_DMA_PL(p_init->pl) | /* peripheral location */ \
+ SET_DMA_DAI(p_init->dai) | /* dest addr increment */ \
+ SET_DMA_SAI(p_init->sai) | /* src addr increment */ \
+ SET_DMA_PRIORITY(p_init->cp) | /* channel priority */ \
+ SET_DMA_PW(p_init->pwidth) | /* peripheral/bus width */ \
+ SET_DMA_PSC(p_init->psc) | /* peripheral setup cycles */ \
+ SET_DMA_PWC(p_init->pwc) | /* peripheral wait cycles */ \
+ SET_DMA_PHC(p_init->phc) | /* peripheral hold cycles */ \
+ SET_DMA_PREFETCH(p_init->pf) /* read prefetch */)
+
+#define GET_DMA_POLARITY(chan) (DMAReq_ActiveLow(chan) | DMAAck_ActiveLow(chan) | EOT_ActiveLow(chan))
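+
+/*
+ * Note that SET_DMA_CONTROL is purely textual: it expects a
+ * ppc_dma_ch_t pointer named p_init to be in scope at the point of use
+ * (ppc_dma_ch_t is defined further down in this header), e.g. (sketch):
+ *
+ *	ppc_dma_ch_t *p_init = &dma_channels[dmanr];
+ *	uint32_t control = SET_DMA_CONTROL;
+ */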
+
+#elif defined(CONFIG_STBXXX_DMA) /* stb03xxx */
+
+#define DMA_PPC4xx_SIZE 4096
+
+/*
+ * DMA Status Register
+ */
+
+#define SET_DMA_PRIORITY(x) (((x)&0x00800001)) /* DMA Channel Priority */
+#define DMA_PRIORITY_MASK 0x00800001
+#define PRIORITY_LOW 0x00000000
+#define PRIORITY_MID_LOW 0x00000001
+#define PRIORITY_MID_HIGH 0x00800000
+#define PRIORITY_HIGH 0x00800001
+#define GET_DMA_PRIORITY(x) (((((x)&DMA_PRIORITY_MASK) &0x00800000) >> 22 ) | (((x)&DMA_PRIORITY_MASK) &0x00000001))
+
+#define DMA_CS0 (1<<31) /* Terminal Count has been reached */
+#define DMA_CS1 (1<<30)
+#define DMA_CS2 (1<<29)
+#define DMA_CS3 (1<<28)
+
+#define DMA_TS0 (1<<27) /* End of Transfer has been requested */
+#define DMA_TS1 (1<<26)
+#define DMA_TS2 (1<<25)
+#define DMA_TS3 (1<<24)
+
+#define DMA_CH0_ERR (1<<23) /* DMA Channel 0 Error */
+#define DMA_CH1_ERR (1<<22)
+#define DMA_CH2_ERR (1<<21)
+#define DMA_CH3_ERR (1<<20)
+
+#define DMA_CT0 (1<<19) /* Chained transfer */
+
+#define DMA_IN_DMA_REQ0 (1<<18) /* Internal DMA Request is pending */
+#define DMA_IN_DMA_REQ1 (1<<17)
+#define DMA_IN_DMA_REQ2 (1<<16)
+#define DMA_IN_DMA_REQ3 (1<<15)
+
+#define DMA_EXT_DMA_REQ0 (1<<14) /* External DMA Request is pending */
+#define DMA_EXT_DMA_REQ1 (1<<13)
+#define DMA_EXT_DMA_REQ2 (1<<12)
+#define DMA_EXT_DMA_REQ3 (1<<11)
+
+#define DMA_CH0_BUSY (1<<10) /* DMA Channel 0 Busy */
+#define DMA_CH1_BUSY (1<<9)
+#define DMA_CH2_BUSY (1<<8)
+#define DMA_CH3_BUSY (1<<7)
+
+#define DMA_CT1 (1<<6) /* Chained transfer */
+#define DMA_CT2 (1<<5)
+#define DMA_CT3 (1<<4)
+
+#define DMA_CH_ENABLE (1<<7)
+#define SET_DMA_CH(x) (((x)&0x1)<<7)
+#define GET_DMA_CH(x) (((x)&DMA_CH_ENABLE)>>7)
+
+/* STBx25xx DMA unique */
+/* Device port numbers, used with ppc4xx_map_dma_port() to enable a
+ * device port on a DMA channel, e.g. external port 0 on DMA channel 1.
+ */
+
+#define SSP0_RECV 15
+#define SSP0_XMIT 14
+#define EXT_DMA_0 12
+#define SC1_XMIT 11
+#define SC1_RECV 10
+#define EXT_DMA_2 9
+#define EXT_DMA_3 8
+#define SERIAL2_XMIT 7
+#define SERIAL2_RECV 6
+#define SC0_XMIT 5
+#define SC0_RECV 4
+#define SERIAL1_XMIT 3
+#define SERIAL1_RECV 2
+#define SERIAL0_XMIT 1
+#define SERIAL0_RECV 0
+
+#define DMA_CHAN_0 1
+#define DMA_CHAN_1 2
+#define DMA_CHAN_2 3
+#define DMA_CHAN_3 4
+
+/* end STBx25xx */
+
+/*
+ * Bit 30 must be one for Redwoods, otherwise transfers may receive errors.
+ */
+#define DMA_CR_MB0 0x2
+
+#define SET_DMA_CONTROL \
+ (SET_DMA_CIE_ENABLE(p_init->int_enable) | /* interrupt enable */ \
+ SET_DMA_ETD(p_init->etd_output) | /* end of transfer pin */ \
+ SET_DMA_TCE(p_init->tce_enable) | /* terminal count enable */ \
+ SET_DMA_PL(p_init->pl) | /* peripheral location */ \
+ SET_DMA_DAI(p_init->dai) | /* dest addr increment */ \
+ SET_DMA_SAI(p_init->sai) | /* src addr increment */ \
+ SET_DMA_PRIORITY(p_init->cp) | /* channel priority */ \
+ SET_DMA_PW(p_init->pwidth) | /* peripheral/bus width */ \
+ SET_DMA_PSC(p_init->psc) | /* peripheral setup cycles */ \
+ SET_DMA_PWC(p_init->pwc) | /* peripheral wait cycles */ \
+ SET_DMA_PHC(p_init->phc) | /* peripheral hold cycles */ \
+ SET_DMA_TCD(p_init->tcd_disable) | /* TC chain mode disable */ \
+ SET_DMA_ECE(p_init->ece_enable) | /* ECE chain mode enable */ \
+ SET_DMA_CH(p_init->ch_enable) | /* Chain enable */ \
+ DMA_CR_MB0 /* must be one */)
+
+#define GET_DMA_POLARITY(chan) chan
+
+#endif
+
+typedef struct {
+ unsigned short in_use; /* set when channel is being used, clr when
+ * available.
+ */
+ /*
+ * Valid polarity settings:
+ * DMAReq_ActiveLow(n)
+ * DMAAck_ActiveLow(n)
+ * EOT_ActiveLow(n)
+ *
+ * n is 0 to max dma chans
+ */
+ unsigned int polarity;
+
+ char buffer_enable; /* Boolean: buffer enable */
+ char tce_enable; /* Boolean: terminal count enable */
+ char etd_output; /* Boolean: eot pin is a tc output */
+ char pce; /* Boolean: parity check enable */
+
+ /*
+ * Peripheral location:
+ * INTERNAL_PERIPHERAL (UART0 on the 405GP)
+ * EXTERNAL_PERIPHERAL
+ */
+ char pl; /* internal/external peripheral */
+
+ /*
+ * Valid pwidth settings:
+ * PW_8
+ * PW_16
+ * PW_32
+ * PW_64
+ */
+ unsigned int pwidth;
+
+ char dai; /* Boolean: dst address increment */
+ char sai; /* Boolean: src address increment */
+
+ /*
+ * Valid psc settings: 0-3
+ */
+ unsigned int psc; /* Peripheral Setup Cycles */
+
+ /*
+ * Valid pwc settings:
+ * 0-63
+ */
+ unsigned int pwc; /* Peripheral Wait Cycles */
+
+ /*
+ * Valid phc settings:
+ * 0-7
+ */
+ unsigned int phc; /* Peripheral Hold Cycles */
+
+ /*
+ * Valid cp (channel priority) settings:
+ * PRIORITY_LOW
+ * PRIORITY_MID_LOW
+ * PRIORITY_MID_HIGH
+ * PRIORITY_HIGH
+ */
+ unsigned int cp; /* channel priority */
+
+ /*
+ * Valid pf (memory read prefetch) settings:
+ *
+ * PREFETCH_1
+ * PREFETCH_2
+ * PREFETCH_4
+ */
+ unsigned int pf; /* memory read prefetch */
+
+ /*
+ * Boolean: channel interrupt enable
+ * NOTE: for sgl transfers, only the last descriptor will be setup to
+ * interrupt.
+ */
+ char int_enable;
+
+ char shift; /* easy access to byte_count shift, based on */
+ /* the width of the channel */
+
+ uint32_t control; /* channel control word */
+
+ /* These variables are used ONLY in single DMA transfers */
+ unsigned int mode; /* transfer mode */
+ phys_addr_t addr;
+ char ce; /* channel enable */
+#ifdef CONFIG_STB03xxx
+ char ch_enable;
+ char tcd_disable;
+ char ece_enable;
+ char td; /* transfer direction */
+#endif
+
+} ppc_dma_ch_t;
+
+/*
+ * PPC44x DMA implementations have a slightly different
+ * descriptor layout. Probably moved about due to the
+ * change to 64-bit addresses and link pointer. I don't
+ * know why they didn't just leave control_count after
+ * the dst_addr.
+ */
+#ifdef PPC4xx_DMA_64BIT
+typedef struct {
+ uint32_t control;
+ uint32_t control_count;
+ phys_addr_t src_addr;
+ phys_addr_t dst_addr;
+ phys_addr_t next;
+} ppc_sgl_t;
+#else
+typedef struct {
+ uint32_t control;
+ phys_addr_t src_addr;
+ phys_addr_t dst_addr;
+ uint32_t control_count;
+ uint32_t next;
+} ppc_sgl_t;
+#endif
+
+typedef struct {
+ unsigned int dmanr;
+ uint32_t control; /* channel ctrl word; loaded from each descriptor */
+ uint32_t sgl_control; /* LK, TCI, ETI, and ERI bits in sgl descriptor */
+ dma_addr_t dma_addr; /* dma (physical) address of this list */
+ ppc_sgl_t *phead;
+ dma_addr_t phead_dma;
+ ppc_sgl_t *ptail;
+ dma_addr_t ptail_dma;
+} sgl_list_info_t;
+
+typedef struct {
+ phys_addr_t *src_addr;
+ phys_addr_t *dst_addr;
+ phys_addr_t dma_src_addr;
+ phys_addr_t dma_dst_addr;
+} pci_alloc_desc_t;
+
+extern ppc_dma_ch_t dma_channels[];
+
+/*
+ * The DMA API are in ppc4xx_dma.c and ppc4xx_sgdma.c
+ */
+extern int ppc4xx_init_dma_channel(unsigned int, ppc_dma_ch_t *);
+extern int ppc4xx_get_channel_config(unsigned int, ppc_dma_ch_t *);
+extern int ppc4xx_set_channel_priority(unsigned int, unsigned int);
+extern unsigned int ppc4xx_get_peripheral_width(unsigned int);
+extern void ppc4xx_set_sg_addr(int, phys_addr_t);
+extern int ppc4xx_add_dma_sgl(sgl_handle_t, phys_addr_t, phys_addr_t, unsigned int);
+extern void ppc4xx_enable_dma_sgl(sgl_handle_t);
+extern void ppc4xx_disable_dma_sgl(sgl_handle_t);
+extern int ppc4xx_get_dma_sgl_residue(sgl_handle_t, phys_addr_t *, phys_addr_t *);
+extern int ppc4xx_delete_dma_sgl_element(sgl_handle_t, phys_addr_t *, phys_addr_t *);
+extern int ppc4xx_alloc_dma_handle(sgl_handle_t *, unsigned int, unsigned int);
+extern void ppc4xx_free_dma_handle(sgl_handle_t);
+extern int ppc4xx_get_dma_status(void);
+extern void ppc4xx_set_src_addr(int dmanr, phys_addr_t src_addr);
+extern void ppc4xx_set_dst_addr(int dmanr, phys_addr_t dst_addr);
+extern void ppc4xx_enable_dma(unsigned int dmanr);
+extern void ppc4xx_disable_dma(unsigned int dmanr);
+extern void ppc4xx_set_dma_count(unsigned int dmanr, unsigned int count);
+extern int ppc4xx_get_dma_residue(unsigned int dmanr);
+extern void ppc4xx_set_dma_addr2(unsigned int dmanr, phys_addr_t src_dma_addr,
+ phys_addr_t dst_dma_addr);
+extern int ppc4xx_enable_dma_interrupt(unsigned int dmanr);
+extern int ppc4xx_disable_dma_interrupt(unsigned int dmanr);
+extern int ppc4xx_clr_dma_status(unsigned int dmanr);
+extern int ppc4xx_map_dma_port(unsigned int dmanr, unsigned int ocp_dma,short dma_chan);
+extern int ppc4xx_disable_dma_port(unsigned int dmanr, unsigned int ocp_dma,short dma_chan);
+extern int ppc4xx_set_dma_mode(unsigned int dmanr, unsigned int mode);
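+
+/*
+ * Usage sketch for a simple single (non-scatter/gather) transfer;
+ * dmanr, src_phys, dst_phys and count are placeholders, and error
+ * handling is omitted:
+ *
+ *	ppc_dma_ch_t ch;
+ *
+ *	memset(&ch, 0, sizeof(ch));
+ *	ch.pwidth = PW_32;
+ *	if (ppc4xx_init_dma_channel(dmanr, &ch) != DMA_STATUS_GOOD)
+ *		return -EBUSY;
+ *	ppc4xx_set_dma_mode(dmanr, DMA_MODE_MM);
+ *	ppc4xx_set_dma_addr2(dmanr, src_phys, dst_phys);
+ *	ppc4xx_set_dma_count(dmanr, count);
+ *	ppc4xx_enable_dma(dmanr);
+ *
+ * (Whether 'count' is in bytes or in transfer-width units depends on
+ * the implementation; the 'shift' field of ppc_dma_ch_t exists to
+ * convert between the two.)
+ */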
+
+/* These are in kernel/dma.c: */
+
+/* reserve a DMA channel */
+extern int request_dma(unsigned int dmanr, const char *device_id);
+/* release it again */
+extern void free_dma(unsigned int dmanr);
+#endif
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/rheap.h
+ *
+ * Header file for the implementation of a remote heap.
+ *
+ * Author: Pantelis Antoniou <panto@intracom.gr>
+ *
+ * 2004 (c) INTRACOM S.A. Greece. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __ASM_PPC_RHEAP_H__
+#define __ASM_PPC_RHEAP_H__
+
+#include <linux/list.h>
+
+typedef struct _rh_block {
+ struct list_head list;
+ void *start;
+ int size;
+ const char *owner;
+} rh_block_t;
+
+typedef struct _rh_info {
+ unsigned int alignment;
+ int max_blocks;
+ int empty_slots;
+ rh_block_t *block;
+ struct list_head empty_list;
+ struct list_head free_list;
+ struct list_head taken_list;
+ unsigned int flags;
+} rh_info_t;
+
+#define RHIF_STATIC_INFO 0x1
+#define RHIF_STATIC_BLOCK 0x2
+
+typedef struct rh_stats_t {
+ void *start;
+ int size;
+ const char *owner;
+} rh_stats_t;
+
+#define RHGS_FREE 0
+#define RHGS_TAKEN 1
+
+/* Create a remote heap dynamically */
+extern rh_info_t *rh_create(unsigned int alignment);
+
+/* Destroy a remote heap, created by rh_create() */
+extern void rh_destroy(rh_info_t * info);
+
+/* Initialize in place a remote info block */
+extern void rh_init(rh_info_t * info, unsigned int alignment, int max_blocks,
+ rh_block_t * block);
+
+/* Attach a free region to manage */
+extern int rh_attach_region(rh_info_t * info, void *start, int size);
+
+/* Detach a free region */
+extern void *rh_detach_region(rh_info_t * info, void *start, int size);
+
+/* Allocate the given size from the remote heap */
+extern void *rh_alloc(rh_info_t * info, int size, const char *owner);
+
+/* Allocate the given size from the given address */
+extern void *rh_alloc_fixed(rh_info_t * info, void *start, int size,
+ const char *owner);
+
+/* Free the allocated area */
+extern int rh_free(rh_info_t * info, void *start);
+
+/* Get stats for debugging purposes */
+extern int rh_get_stats(rh_info_t * info, int what, int max_stats,
+ rh_stats_t * stats);
+
+/* Simple dump of remote heap info */
+extern void rh_dump(rh_info_t * info);
+
+/* Set owner of taken block */
+extern int rh_set_owner(rh_info_t * info, void *start, const char *owner);
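+
+/*
+ * Usage sketch (error handling omitted): the remote heap manages
+ * memory, such as CPM dual-port RAM, that the allocator itself never
+ * needs to dereference. dpram_base and dpram_size are placeholders:
+ *
+ *	rh_info_t *rh = rh_create(8);
+ *	rh_attach_region(rh, dpram_base, dpram_size);
+ *	void *buf = rh_alloc(rh, 64, "mydriver");
+ *	...
+ *	rh_free(rh, buf);
+ *	rh_destroy(rh);
+ */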
+
+#endif /* __ASM_PPC_RHEAP_H__ */
--- /dev/null
+/*
+ * hvcserver.h
+ * Copyright (C) 2004 Ryan S Arnold, IBM Corporation
+ *
+ * PPC64 virtual I/O console server support.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _PPC64_HVCSERVER_H
+#define _PPC64_HVCSERVER_H
+
+#include <linux/list.h>
+
+/* Converged Location Code length */
+#define HVCS_CLC_LENGTH 79
+
+struct hvcs_partner_info {
+ struct list_head node;
+ unsigned int unit_address;
+ unsigned int partition_ID;
+ char location_code[HVCS_CLC_LENGTH + 1]; /* CLC + 1 null-term char */
+};
+
+extern int hvcs_free_partner_info(struct list_head *head);
+extern int hvcs_get_partner_info(unsigned int unit_address,
+ struct list_head *head, unsigned long *pi_buff);
+extern int hvcs_register_connection(unsigned int unit_address,
+ unsigned int p_partition_ID, unsigned int p_unit_address);
+extern int hvcs_free_connection(unsigned int unit_address);
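+
+/*
+ * Usage sketch: the caller provides a list_head that
+ * hvcs_get_partner_info() fills with hvcs_partner_info entries
+ * (pi_buff is a caller-supplied scratch buffer, assumed page-sized,
+ * and do_something() is a placeholder); the list must later be
+ * released with hvcs_free_partner_info():
+ *
+ *	LIST_HEAD(head);
+ *	struct hvcs_partner_info *pi;
+ *
+ *	if (!hvcs_get_partner_info(unit_address, &head, pi_buff))
+ *		list_for_each_entry(pi, &head, node)
+ *			do_something(pi->partition_ID, pi->location_code);
+ *	hvcs_free_partner_info(&head);
+ */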
+
+#endif /* _PPC64_HVCSERVER_H */
--- /dev/null
+#ifndef __ASM_ADC_H
+#define __ASM_ADC_H
+
+/*
+ * Copyright (C) 2004 Andriy Skulysh
+ */
+
+#include <asm/cpu/adc.h>
+
+int adc_single(unsigned int channel);
+
+#endif /* __ASM_ADC_H */
--- /dev/null
+/*
+ * include/asm-sh/bus-sh.h
+ *
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#ifndef __ASM_SH_BUS_SH_H
+#define __ASM_SH_BUS_SH_H
+
+extern struct bus_type sh_bus_types[];
+
+struct sh_dev {
+ struct device dev;
+ char *name;
+ unsigned int dev_id;
+ unsigned int bus_id;
+ struct resource res;
+ void *mapbase;
+ unsigned int irq[6];
+ u64 *dma_mask;
+};
+
+#define to_sh_dev(d) container_of((d), struct sh_dev, dev)
+
+#define sh_get_drvdata(d) dev_get_drvdata(&(d)->dev)
+#define sh_set_drvdata(d,p) dev_set_drvdata(&(d)->dev, (p))
+
+struct sh_driver {
+ struct device_driver drv;
+ unsigned int dev_id;
+ unsigned int bus_id;
+ int (*probe)(struct sh_dev *);
+ int (*remove)(struct sh_dev *);
+ int (*suspend)(struct sh_dev *, u32);
+ int (*resume)(struct sh_dev *);
+};
+
+#define to_sh_driver(d) container_of((d), struct sh_driver, drv)
+#define sh_name(d) ((d)->dev.driver->name)
+
+/*
+ * Device ID numbers for bus types
+ */
+enum {
+ SH_DEV_ID_USB_OHCI,
+};
+
+#define SH_NR_BUSES 1
+#define SH_BUS_NAME_VIRT "shbus"
+
+enum {
+ SH_BUS_VIRT,
+};
+
+/* arch/sh/kernel/cpu/bus.c */
+extern int sh_device_register(struct sh_dev *dev);
+extern void sh_device_unregister(struct sh_dev *dev);
+extern int sh_driver_register(struct sh_driver *drv);
+extern void sh_driver_unregister(struct sh_driver *drv);
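+
+/*
+ * Registration sketch for a driver on the virtual SH bus (my_probe and
+ * my_remove are placeholders):
+ *
+ *	static struct sh_driver my_driver = {
+ *		.drv = {
+ *			.name = "mydev",
+ *		},
+ *		.dev_id = SH_DEV_ID_USB_OHCI,
+ *		.bus_id = SH_BUS_VIRT,
+ *		.probe  = my_probe,
+ *		.remove = my_remove,
+ *	};
+ *
+ *	sh_driver_register(&my_driver);
+ */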
+
+#endif /* __ASM_SH_BUS_SH_H */
+
--- /dev/null
+#ifndef __ASM_CPU_SH3_ADC_H
+#define __ASM_CPU_SH3_ADC_H
+
+/*
+ * Copyright (C) 2004 Andriy Skulysh
+ */
+
+
+#define ADDRAH 0xa4000080
+#define ADDRAL 0xa4000082
+#define ADDRBH 0xa4000084
+#define ADDRBL 0xa4000086
+#define ADDRCH 0xa4000088
+#define ADDRCL 0xa400008a
+#define ADDRDH 0xa400008c
+#define ADDRDL 0xa400008e
+#define ADCSR 0xa4000090
+
+#define ADCSR_ADF 0x80
+#define ADCSR_ADIE 0x40
+#define ADCSR_ADST 0x20
+#define ADCSR_MULTI 0x10
+#define ADCSR_CKS 0x08
+#define ADCSR_CH_MASK 0x07
+
+#define ADCR 0xa4000092
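+
+/*
+ * Conversion sketch (illustrative; adc_single() declared in asm/adc.h
+ * is the real interface): select a channel, start a single conversion
+ * and poll the ADF flag, assuming the usual SH ctrl_inb/ctrl_outb
+ * accessors:
+ *
+ *	ctrl_outb(ADCSR_ADST | (ch & ADCSR_CH_MASK), ADCSR);
+ *	while (!(ctrl_inb(ADCSR) & ADCSR_ADF))
+ *		cpu_relax();
+ */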
+
+#endif /* __ASM_CPU_SH3_ADC_H */
--- /dev/null
+/*
+ * fixmap.h: compile-time virtual memory allocation
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1998 Ingo Molnar
+ *
+ * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
+ */
+
+#ifndef _ASM_FIXMAP_H
+#define _ASM_FIXMAP_H
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <asm/page.h>
+#ifdef CONFIG_HIGHMEM
+#include <linux/threads.h>
+#include <asm/kmap_types.h>
+#endif
+
+/*
+ * Here we define all the compile-time 'special' virtual
+ * addresses. The point is to have a constant address at
+ * compile time, but to set the physical address only
+ * in the boot process. We allocate these special addresses
+ * from the end of virtual memory (0xfffff000) backwards.
+ * Also this lets us do fail-safe vmalloc(), we
+ * can guarantee that these special addresses and
+ * vmalloc()-ed addresses never overlap.
+ *
+ * These 'compile-time allocated' memory buffers are
+ * fixed-size 4k pages (or larger if used with an increment
+ * higher than 1). Use set_fixmap(idx, phys) to associate
+ * physical memory with fixmap indices.
+ *
+ * TLB entries of such buffers will not be flushed across
+ * task switches.
+ */
+
+/*
+ * On UP currently we will have no trace of the fixmap mechanism,
+ * no page table allocations, etc. This might change in the
+ * future, say framebuffers for the console driver(s) could be
+ * fix-mapped?
+ */
+enum fixed_addresses {
+#ifdef CONFIG_HIGHMEM
+ FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
+ FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
+#endif
+ __end_of_fixed_addresses
+};
+
+extern void __set_fixmap (enum fixed_addresses idx,
+ unsigned long phys, pgprot_t flags);
+
+#define set_fixmap(idx, phys) \
+ __set_fixmap(idx, phys, PAGE_KERNEL)
+/*
+ * Some hardware wants to get fixmapped without caching.
+ */
+#define set_fixmap_nocache(idx, phys) \
+ __set_fixmap(idx, phys, PAGE_KERNEL_NOCACHE)
+/*
+ * used by vmalloc.c.
+ *
+ * Leave one empty page between vmalloc'ed areas and
+ * the start of the fixmap, and leave one page empty
+ * at the top of mem..
+ */
+#define FIXADDR_TOP (P4SEG - PAGE_SIZE)
+#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
+
+#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT))
+#define __virt_to_fix(x) ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
+
+extern void __this_fixmap_does_not_exist(void);
+
+/*
+ * 'index to address' translation. If anyone tries to use the idx
+ * directly without translation, we catch the bug with a NULL-dereference
+ * kernel oops. Illegal ranges of incoming indices are caught too.
+ */
+static inline unsigned long fix_to_virt(const unsigned int idx)
+{
+ /*
+ * this branch gets completely eliminated after inlining,
+ * except when someone tries to use fixaddr indices in an
+ * illegal way. (such as mixing up address types or using
+ * out-of-range indices).
+ *
+ * If it doesn't get removed, the linker will complain
+ * loudly with a reasonably clear error message..
+ */
+ if (idx >= __end_of_fixed_addresses)
+ __this_fixmap_does_not_exist();
+
+ return __fix_to_virt(idx);
+}
+
+static inline unsigned long virt_to_fix(const unsigned long vaddr)
+{
+ BUG_ON(vaddr >= FIXADDR_TOP || vaddr < FIXADDR_START);
+ return __virt_to_fix(vaddr);
+}
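+
+/*
+ * Usage sketch: a subsystem that owns a fixmap slot (FIX_MYDEV is
+ * hypothetical; only the HIGHMEM kmap slots exist in the enum so far)
+ * would map and address it like this:
+ *
+ *	set_fixmap_nocache(FIX_MYDEV, phys_addr);
+ *	regs = (void *)fix_to_virt(FIX_MYDEV);
+ */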
+
+#endif
--- /dev/null
+#ifndef __ASM_SH_HP6XX_IDE_H
+#define __ASM_SH_HP6XX_IDE_H
+
+#define IRQ_CFCARD 93
+#define IRQ_PCMCIA 94
+
+#endif /* __ASM_SH_HP6XX_IDE_H */
+
--- /dev/null
+#ifndef __ASM_SH_RENESAS_HS7751RVOIP_H
+#define __ASM_SH_RENESAS_HS7751RVOIP_H
+
+/*
+ * linux/include/asm-sh/hs7751rvoip/hs7751rvoip.h
+ *
+ * Copyright (C) 2000 Atom Create Engineering Co., Ltd.
+ *
+ * Renesas Technology Sales HS7751RVoIP support
+ */
+
+/* Box specific addresses. */
+
+#define PA_BCR 0xa4000000 /* FPGA */
+#define PA_SLICCNTR1 0xa4000006 /* SLIC PIO Control 1 */
+#define PA_SLICCNTR2 0xa4000008 /* SLIC PIO Control 2 */
+#define PA_DMACNTR 0xa400000a /* USB DMA Control */
+#define PA_INPORTR 0xa400000c /* Input Port Register */
+#define PA_OUTPORTR 0xa400000e /* Output Port Register */
+#define PA_VERREG 0xa4000014 /* FPGA Version Register */
+
+#define PA_AREA5_IO 0xb4000000 /* Area 5 IO Memory */
+#define PA_AREA6_IO 0xb8000000 /* Area 6 IO Memory */
+#define PA_IDE_OFFSET 0x1f0 /* CF IDE Offset */
+
+#define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
+#define IRLCNTR2 (PA_BCR + 2) /* Interrupt Control Register2 */
+#define IRLCNTR3 (PA_BCR + 4) /* Interrupt Control Register3 */
+#define IRLCNTR4 (PA_BCR + 16) /* Interrupt Control Register4 */
+#define IRLCNTR5 (PA_BCR + 18) /* Interrupt Control Register5 */
+
+#define IRQ_PCIETH 6 /* PCI Ethernet IRQ */
+#define IRQ_PCIHUB 7 /* PCI Ethernet Hub IRQ */
+#define IRQ_USBCOM 8 /* USB Communication IRQ */
+#define IRQ_USBCON 9 /* USB Connect IRQ */
+#define IRQ_USBDMA 10 /* USB DMA IRQ */
+#define IRQ_CFCARD 11 /* CF Card IRQ */
+#define IRQ_PCMCIA 12 /* PCMCIA IRQ */
+#define IRQ_PCISLOT 13 /* PCI Slot #1 IRQ */
+#define IRQ_ONHOOK1 0 /* ON HOOK1 IRQ */
+#define IRQ_OFFHOOK1 1 /* OFF HOOK1 IRQ */
+#define IRQ_ONHOOK2 2 /* ON HOOK2 IRQ */
+#define IRQ_OFFHOOK2 3 /* OFF HOOK2 IRQ */
+#define IRQ_RINGING 4 /* Ringing IRQ */
+#define IRQ_CODEC 5 /* CODEC IRQ */
+
+#endif /* __ASM_SH_RENESAS_HS7751RVOIP_H */
--- /dev/null
+#ifndef __ASM_SH_HS7751RVOIP_IDE_H
+#define __ASM_SH_HS7751RVOIP_IDE_H
+
+/* Nothing to see here.. */
+#include <asm/hs7751rvoip/hs7751rvoip.h>
+
+#endif /* __ASM_SH_HS7751RVOIP_IDE_H */
+
--- /dev/null
+/*
+ * include/asm-sh/hs7751rvoip/io.h
+ *
+ * Modified version of io_se.h for the hs7751rvoip-specific functions.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * IO functions for a Renesas Technology Sales HS7751RVoIP
+ */
+
+#ifndef _ASM_SH_IO_HS7751RVOIP_H
+#define _ASM_SH_IO_HS7751RVOIP_H
+
+#include <asm/io_generic.h>
+
+extern unsigned char hs7751rvoip_inb(unsigned long port);
+extern unsigned short hs7751rvoip_inw(unsigned long port);
+extern unsigned int hs7751rvoip_inl(unsigned long port);
+
+extern void hs7751rvoip_outb(unsigned char value, unsigned long port);
+extern void hs7751rvoip_outw(unsigned short value, unsigned long port);
+extern void hs7751rvoip_outl(unsigned int value, unsigned long port);
+
+extern unsigned char hs7751rvoip_inb_p(unsigned long port);
+extern void hs7751rvoip_outb_p(unsigned char value, unsigned long port);
+
+extern void hs7751rvoip_insb(unsigned long port, void *addr, unsigned long count);
+extern void hs7751rvoip_insw(unsigned long port, void *addr, unsigned long count);
+extern void hs7751rvoip_insl(unsigned long port, void *addr, unsigned long count);
+extern void hs7751rvoip_outsb(unsigned long port, const void *addr, unsigned long count);
+extern void hs7751rvoip_outsw(unsigned long port, const void *addr, unsigned long count);
+extern void hs7751rvoip_outsl(unsigned long port, const void *addr, unsigned long count);
+
+extern void *hs7751rvoip_ioremap(unsigned long offset, unsigned long size);
+
+extern unsigned long hs7751rvoip_isa_port2addr(unsigned long offset);
+
+#endif /* _ASM_SH_IO_HS7751RVOIP_H */
--- /dev/null
+#ifndef __ASM_SH_RTS7751R2D_IDE_H
+#define __ASM_SH_RTS7751R2D_IDE_H
+
+/* Nothing to see here.. */
+#include <asm/rts7751r2d/rts7751r2d.h>
+
+#endif /* __ASM_SH_RTS7751R2D_IDE_H */
+
--- /dev/null
+/*
+ * include/asm-sh/io_rts7751r2d.h
+ *
+ * Modified version of io_se.h for the rts7751r2d-specific functions.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * IO functions for a Renesas Technology Sales RTS7751R2D
+ */
+
+#ifndef _ASM_SH_IO_RTS7751R2D_H
+#define _ASM_SH_IO_RTS7751R2D_H
+
+extern unsigned char rts7751r2d_inb(unsigned long port);
+extern unsigned short rts7751r2d_inw(unsigned long port);
+extern unsigned int rts7751r2d_inl(unsigned long port);
+
+extern void rts7751r2d_outb(unsigned char value, unsigned long port);
+extern void rts7751r2d_outw(unsigned short value, unsigned long port);
+extern void rts7751r2d_outl(unsigned int value, unsigned long port);
+
+extern unsigned char rts7751r2d_inb_p(unsigned long port);
+extern void rts7751r2d_outb_p(unsigned char value, unsigned long port);
+
+extern void rts7751r2d_insb(unsigned long port, void *addr, unsigned long count);
+extern void rts7751r2d_insw(unsigned long port, void *addr, unsigned long count);
+extern void rts7751r2d_insl(unsigned long port, void *addr, unsigned long count);
+extern void rts7751r2d_outsb(unsigned long port, const void *addr, unsigned long count);
+extern void rts7751r2d_outsw(unsigned long port, const void *addr, unsigned long count);
+extern void rts7751r2d_outsl(unsigned long port, const void *addr, unsigned long count);
+
+extern void *rts7751r2d_ioremap(unsigned long offset, unsigned long size);
+
+extern unsigned long rts7751r2d_isa_port2addr(unsigned long offset);
+
+#endif /* _ASM_SH_IO_RTS7751R2D_H */
--- /dev/null
+#ifndef __ASM_SH_RENESAS_RTS7751R2D_H
+#define __ASM_SH_RENESAS_RTS7751R2D_H
+
+/*
+ * linux/include/asm-sh/rts7751r2d/rts7751r2d.h
+ *
+ * Copyright (C) 2000 Atom Create Engineering Co., Ltd.
+ *
+ * Renesas Technology Sales RTS7751R2D support
+ */
+
+/* Box specific addresses. */
+
+#define PA_BCR 0xa4000000 /* FPGA */
+#define PA_IRLMON 0xa4000002 /* Interrupt Status control */
+#define PA_CFCTL 0xa4000004 /* CF Timing control */
+#define PA_CFPOW 0xa4000006 /* CF Power control */
+#define PA_DISPCTL 0xa4000008 /* Display Timing control */
+#define PA_SDMPOW 0xa400000a /* SD Power control */
+#define PA_RTCCE 0xa400000c /* RTC(9701) Enable control */
+#define PA_PCICD 0xa400000e /* PCI Extension detect control */
+#define PA_VOYAGERRTS 0xa4000020 /* VOYAGER Reset control */
+#if defined(CONFIG_RTS7751R2D_REV11)
+#define PA_AXRST 0xa4000022 /* AX_LAN Reset control */
+#define PA_CFRST 0xa4000024 /* CF Reset control */
+#define PA_ADMRTS 0xa4000026 /* SD Reset control */
+#define PA_EXTRST 0xa4000028 /* Extension Reset control */
+#define PA_CFCDINTCLR 0xa400002a /* CF Insert Interrupt clear */
+#else
+#define PA_CFRST 0xa4000022 /* CF Reset control */
+#define PA_ADMRTS 0xa4000024 /* SD Reset control */
+#define PA_EXTRST 0xa4000026 /* Extension Reset control */
+#define PA_CFCDINTCLR 0xa4000028 /* CF Insert Interrupt clear */
+#define PA_KEYCTLCLR 0xa400002a /* Key Interrupt clear */
+#endif
+#define PA_POWOFF 0xa4000030 /* Board Power OFF control */
+#define PA_VERREG 0xa4000032 /* FPGA Version Register */
+#define PA_INPORT 0xa4000034 /* KEY Input Port control */
+#define PA_OUTPORT 0xa4000036 /* LED control */
+#define PA_DMPORT 0xa4000038 /* DM270 Output Port control */
+
+#define PA_AX88796L 0xaa000400 /* AX88796L Area */
+#define PA_VOYAGER 0xab000000 /* VOYAGER GX Area */
+#define PA_AREA5_IO 0xb4000000 /* Area 5 IO Memory */
+#define PA_AREA6_IO 0xb8000000 /* Area 6 IO Memory */
+#define PA_IDE_OFFSET 0x1f0 /* CF IDE Offset */
+#define AX88796L_IO_BASE 0x1000 /* AX88796L IO Base Address */
+
+#define IRLCNTR1 (PA_BCR + 0) /* Interrupt Control Register1 */
+
+#if defined(CONFIG_RTS7751R2D_REV11)
+#define IRQ_PCIETH 0 /* PCI Ethernet IRQ */
+#define IRQ_CFCARD 1 /* CF Card IRQ */
+#define IRQ_CFINST 2 /* CF Card Insert IRQ */
+#define IRQ_PCMCIA 3 /* PCMCIA IRQ */
+#define IRQ_VOYAGER 4 /* VOYAGER IRQ */
+#define IRQ_ONETH 5 /* On board Ethernet IRQ */
+#else
+#define IRQ_KEYIN 0 /* Key Input IRQ */
+#define IRQ_PCIETH 1 /* PCI Ethernet IRQ */
+#define IRQ_CFCARD 2 /* CF Card IRQ */
+#define IRQ_CFINST 3 /* CF Card Insert IRQ */
+#define IRQ_PCMCIA 4 /* PCMCIA IRQ */
+#define IRQ_VOYAGER 5 /* VOYAGER IRQ */
+#endif
+#define IRQ_RTCALM 6 /* RTC Alarm IRQ */
+#define IRQ_RTCTIME 7 /* RTC Timer IRQ */
+#define IRQ_SDCARD 8 /* SD Card IRQ */
+#define IRQ_PCISLOT1 9 /* PCI Slot #1 IRQ */
+#define IRQ_PCISLOT2 10 /* PCI Slot #2 IRQ */
+#define IRQ_EXTENTION 11 /* EXTn IRQ */
+
+#endif /* __ASM_SH_RENESAS_RTS7751R2D_H */
--- /dev/null
+/* -------------------------------------------------------------------- */
+/* voyagergx_reg.h */
+/* -------------------------------------------------------------------- */
+/* This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+ Copyright 2003 (c) Lineo uSolutions, Inc.
+*/
+/* -------------------------------------------------------------------- */
+
+#ifndef _VOYAGER_GX_REG_H
+#define _VOYAGER_GX_REG_H
+
+#define VOYAGER_BASE 0xb3e00000
+#define VOYAGER_USBH_BASE (0x40000 + VOYAGER_BASE)
+#define VOYAGER_UART_BASE (0x30000 + VOYAGER_BASE)
+#define VOYAGER_AC97_BASE (0xa0000 + VOYAGER_BASE)
+
+#define VOYAGER_IRQ_NUM 32
+#define VOYAGER_IRQ_BASE 50
+#define VOYAGER_USBH_IRQ (VOYAGER_IRQ_BASE + 6)
+#define VOYAGER_8051_IRQ (VOYAGER_IRQ_BASE + 10)
+#define VOYAGER_UART0_IRQ (VOYAGER_IRQ_BASE + 12)
+#define VOYAGER_UART1_IRQ (VOYAGER_IRQ_BASE + 13)
+#define VOYAGER_AC97_IRQ (VOYAGER_IRQ_BASE + 17)
+
+/* ----- MISC control register ------------------------------- */
+#define MISC_CTRL (0x000004 + VOYAGER_BASE)
+#define MISC_CTRL_USBCLK_48 (3 << 28)
+#define MISC_CTRL_USBCLK_96 (2 << 28)
+#define MISC_CTRL_USBCLK_CRYSTAL (1 << 28)
+
+/* ----- GPIO[31:0] register --------------------------------- */
+#define GPIO_MUX_LOW (0x000008 + VOYAGER_BASE)
+#define GPIO_MUX_LOW_AC97 0x1F000000
+#define GPIO_MUX_LOW_8051 0x0000ffff
+#define GPIO_MUX_LOW_PWM (1 << 29)
+
+/* ----- GPIO[63:32] register --------------------------------- */
+#define GPIO_MUX_HIGH (0x00000C + VOYAGER_BASE)
+
+/* ----- DRAM control register -------------------------------- */
+#define DRAM_CTRL (0x000010 + VOYAGER_BASE)
+#define DRAM_CTRL_EMBEDDED (1 << 31)
+#define DRAM_CTRL_CPU_BURST_1 (0 << 28)
+#define DRAM_CTRL_CPU_BURST_2 (1 << 28)
+#define DRAM_CTRL_CPU_BURST_4 (2 << 28)
+#define DRAM_CTRL_CPU_BURST_8 (3 << 28)
+#define DRAM_CTRL_CPU_CAS_LATENCY (1 << 27)
+#define DRAM_CTRL_CPU_SIZE_2 (0 << 24)
+#define DRAM_CTRL_CPU_SIZE_4 (1 << 24)
+#define DRAM_CTRL_CPU_SIZE_64 (4 << 24)
+#define DRAM_CTRL_CPU_SIZE_32 (5 << 24)
+#define DRAM_CTRL_CPU_SIZE_16 (6 << 24)
+#define DRAM_CTRL_CPU_SIZE_8 (7 << 24)
+#define DRAM_CTRL_CPU_COLUMN_SIZE_1024 (0 << 22)
+#define DRAM_CTRL_CPU_COLUMN_SIZE_512 (2 << 22)
+#define DRAM_CTRL_CPU_COLUMN_SIZE_256 (3 << 22)
+#define DRAM_CTRL_CPU_ACTIVE_PRECHARGE (1 << 21)
+#define DRAM_CTRL_CPU_RESET (1 << 20)
+#define DRAM_CTRL_CPU_BANKS (1 << 19)
+#define DRAM_CTRL_CPU_WRITE_PRECHARGE (1 << 18)
+#define DRAM_CTRL_BLOCK_WRITE (1 << 17)
+#define DRAM_CTRL_REFRESH_COMMAND (1 << 16)
+#define DRAM_CTRL_SIZE_4 (0 << 13)
+#define DRAM_CTRL_SIZE_8 (1 << 13)
+#define DRAM_CTRL_SIZE_16 (2 << 13)
+#define DRAM_CTRL_SIZE_32 (3 << 13)
+#define DRAM_CTRL_SIZE_64 (4 << 13)
+#define DRAM_CTRL_SIZE_2 (5 << 13)
+#define DRAM_CTRL_COLUMN_SIZE_256 (0 << 11)
+#define DRAM_CTRL_COLUMN_SIZE_512 (2 << 11)
+#define DRAM_CTRL_COLUMN_SIZE_1024 (3 << 11)
+#define DRAM_CTRL_BLOCK_WRITE_TIME (1 << 10)
+#define DRAM_CTRL_BLOCK_WRITE_PRECHARGE (1 << 9)
+#define DRAM_CTRL_ACTIVE_PRECHARGE (1 << 8)
+#define DRAM_CTRL_RESET (1 << 7)
+#define DRAM_CTRL_REMAIN_ACTIVE (1 << 6)
+#define DRAM_CTRL_BANKS (1 << 1)
+#define DRAM_CTRL_WRITE_PRECHARGE (1 << 0)
+
+/* ----- Arbitration control register -------------------------- */
+#define ARBITRATION_CTRL (0x000014 + VOYAGER_BASE)
+#define ARBITRATION_CTRL_CPUMEM (1 << 29)
+#define ARBITRATION_CTRL_INTMEM (1 << 28)
+#define ARBITRATION_CTRL_USB_OFF (0 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_1 (1 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_2 (2 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_3 (3 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_4 (4 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_5 (5 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_6 (6 << 24)
+#define ARBITRATION_CTRL_USB_PRIORITY_7 (7 << 24)
+#define ARBITRATION_CTRL_PANEL_OFF (0 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_1 (1 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_2 (2 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_3 (3 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_4 (4 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_5 (5 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_6 (6 << 20)
+#define ARBITRATION_CTRL_PANEL_PRIORITY_7 (7 << 20)
+#define ARBITRATION_CTRL_ZVPORT_OFF (0 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_1 (1 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_2 (2 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_3 (3 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_4 (4 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_5 (5 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_6 (6 << 16)
+#define ARBITRATION_CTRL_ZVPORTL_PRIORITY_7 (7 << 16)
+#define ARBITRATION_CTRL_CMD_INTPR_OFF (0 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_1 (1 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_2 (2 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_3 (3 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_4 (4 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_5 (5 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_6 (6 << 12)
+#define ARBITRATION_CTRL_CMD_INTPR_PRIORITY_7 (7 << 12)
+#define ARBITRATION_CTRL_DMA_OFF (0 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_1 (1 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_2 (2 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_3 (3 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_4 (4 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_5 (5 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_6 (6 << 8)
+#define ARBITRATION_CTRL_DMA_PRIORITY_7 (7 << 8)
+#define ARBITRATION_CTRL_VIDEO_OFF (0 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_1 (1 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_2 (2 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_3 (3 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_4 (4 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_5 (5 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_6 (6 << 4)
+#define ARBITRATION_CTRL_VIDEO_PRIORITY_7 (7 << 4)
+#define ARBITRATION_CTRL_CRT_OFF (0 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_1 (1 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_2 (2 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_3 (3 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_4 (4 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_5 (5 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_6 (6 << 0)
+#define ARBITRATION_CTRL_CRT_PRIORITY_7 (7 << 0)
+
+/* ----- Command list status register -------------------------- */
+#define CMD_INTPR_STATUS (0x000024 + VOYAGER_BASE)
+
+/* ----- Interrupt status register ----------------------------- */
+#define INT_STATUS (0x00002c + VOYAGER_BASE)
+#define INT_STATUS_UH (1 << 6)
+#define INT_STATUS_MC (1 << 10)
+#define INT_STATUS_U0 (1 << 12)
+#define INT_STATUS_U1 (1 << 13)
+#define INT_STATUS_AC (1 << 17)
+
+/* ----- Interrupt mask register ------------------------------ */
+#define VOYAGER_INT_MASK (0x000030 + VOYAGER_BASE)
+#define VOYAGER_INT_MASK_AC (1 << 17)
+
+/* ----- Current Gate register ---------------------------------*/
+#define CURRENT_GATE (0x000038 + VOYAGER_BASE)
+
+/* ----- Power mode 0 gate register --------------------------- */
+#define POWER_MODE0_GATE (0x000040 + VOYAGER_BASE)
+#define POWER_MODE0_GATE_G (1 << 6)
+#define POWER_MODE0_GATE_U0 (1 << 7)
+#define POWER_MODE0_GATE_U1 (1 << 8)
+#define POWER_MODE0_GATE_UH (1 << 11)
+#define POWER_MODE0_GATE_AC (1 << 18)
+
+/* ----- Power mode 1 gate register --------------------------- */
+#define POWER_MODE1_GATE (0x000048 + VOYAGER_BASE)
+#define POWER_MODE1_GATE_G (1 << 6)
+#define POWER_MODE1_GATE_U0 (1 << 7)
+#define POWER_MODE1_GATE_U1 (1 << 8)
+#define POWER_MODE1_GATE_UH (1 << 11)
+#define POWER_MODE1_GATE_AC (1 << 18)
+
+/* ----- Power mode 0 clock register -------------------------- */
+#define POWER_MODE0_CLOCK (0x000044 + VOYAGER_BASE)
+
+/* ----- Power mode 1 clock register -------------------------- */
+#define POWER_MODE1_CLOCK (0x00004C + VOYAGER_BASE)
+
+/* ----- Power mode control register --------------------------- */
+#define POWER_MODE_CTRL (0x000054 + VOYAGER_BASE)
+
+/* ----- Miscellaneous Timing register ------------------------ */
+#define SYSTEM_DRAM_CTRL (0x000068 + VOYAGER_BASE)
+
+/* ----- PWM register ------------------------------------------*/
+#define PWM_0 (0x010020 + VOYAGER_BASE)
+#define PWM_0_HC(x) (((x)&0x0fff)<<20)
+#define PWM_0_LC(x) (((x)&0x0fff)<<8 )
+#define PWM_0_CLK_DEV(x) (((x)&0x000f)<<4 )
+#define PWM_0_EN (1<<0)
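+
+/*
+ * Example (sketch, assuming the SH ctrl_outl accessor): equal high and
+ * low counts, a clock divider of 0 and the enable bit set:
+ *
+ *	ctrl_outl(PWM_0_HC(0x100) | PWM_0_LC(0x100) |
+ *		  PWM_0_CLK_DEV(0) | PWM_0_EN, PWM_0);
+ */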
+
+/* ----- I2C register ----------------------------------------- */
+#define I2C_BYTECOUNT (0x010040 + VOYAGER_BASE)
+#define I2C_CONTROL (0x010041 + VOYAGER_BASE)
+#define I2C_STATUS (0x010042 + VOYAGER_BASE)
+#define I2C_RESET (0x010042 + VOYAGER_BASE)
+#define I2C_SADDRESS (0x010043 + VOYAGER_BASE)
+#define I2C_DATA (0x010044 + VOYAGER_BASE)
+
+/* ----- Control register bits ----------------------------------------- */
+#define I2C_CONTROL_E (1 << 0)
+#define I2C_CONTROL_MODE (1 << 1)
+#define I2C_CONTROL_STATUS (1 << 2)
+#define I2C_CONTROL_INT (1 << 4)
+#define I2C_CONTROL_INTACK (1 << 5)
+#define I2C_CONTROL_REPEAT (1 << 6)
+
+/* ----- Status register bits ----------------------------------------- */
+#define I2C_STATUS_BUSY (1 << 0)
+#define I2C_STATUS_ACK (1 << 1)
+#define I2C_STATUS_ERROR (1 << 2)
+#define I2C_STATUS_COMPLETE (1 << 3)
+
+/* ----- Reset register ---------------------------------------------- */
+#define I2C_RESET_ERROR (1 << 2)
+
+/* ----- transmission frequencies ------------------------------------- */
+#define I2C_SADDRESS_SELECT (1 << 0)
+
+/* ----- Display Control register ----------------------------------------- */
+#define PANEL_DISPLAY_CTRL (0x080000 + VOYAGER_BASE)
+#define PANEL_DISPLAY_CTRL_BIAS (1<<26)
+#define PANEL_PAN_CTRL (0x080004 + VOYAGER_BASE)
+#define PANEL_COLOR_KEY (0x080008 + VOYAGER_BASE)
+#define PANEL_FB_ADDRESS (0x08000C + VOYAGER_BASE)
+#define PANEL_FB_WIDTH (0x080010 + VOYAGER_BASE)
+#define PANEL_WINDOW_WIDTH (0x080014 + VOYAGER_BASE)
+#define PANEL_WINDOW_HEIGHT (0x080018 + VOYAGER_BASE)
+#define PANEL_PLANE_TL (0x08001C + VOYAGER_BASE)
+#define PANEL_PLANE_BR (0x080020 + VOYAGER_BASE)
+#define PANEL_HORIZONTAL_TOTAL (0x080024 + VOYAGER_BASE)
+#define PANEL_HORIZONTAL_SYNC (0x080028 + VOYAGER_BASE)
+#define PANEL_VERTICAL_TOTAL (0x08002C + VOYAGER_BASE)
+#define PANEL_VERTICAL_SYNC (0x080030 + VOYAGER_BASE)
+#define PANEL_CURRENT_LINE (0x080034 + VOYAGER_BASE)
+#define VIDEO_DISPLAY_CTRL (0x080040 + VOYAGER_BASE)
+#define VIDEO_FB_0_ADDRESS (0x080044 + VOYAGER_BASE)
+#define VIDEO_FB_WIDTH (0x080048 + VOYAGER_BASE)
+#define VIDEO_FB_0_LAST_ADDRESS (0x08004C + VOYAGER_BASE)
+#define VIDEO_PLANE_TL (0x080050 + VOYAGER_BASE)
+#define VIDEO_PLANE_BR (0x080054 + VOYAGER_BASE)
+#define VIDEO_SCALE (0x080058 + VOYAGER_BASE)
+#define VIDEO_INITIAL_SCALE (0x08005C + VOYAGER_BASE)
+#define VIDEO_YUV_CONSTANTS (0x080060 + VOYAGER_BASE)
+#define VIDEO_FB_1_ADDRESS (0x080064 + VOYAGER_BASE)
+#define VIDEO_FB_1_LAST_ADDRESS (0x080068 + VOYAGER_BASE)
+#define VIDEO_ALPHA_DISPLAY_CTRL (0x080080 + VOYAGER_BASE)
+#define VIDEO_ALPHA_FB_ADDRESS (0x080084 + VOYAGER_BASE)
+#define VIDEO_ALPHA_FB_WIDTH (0x080088 + VOYAGER_BASE)
+#define VIDEO_ALPHA_FB_LAST_ADDRESS (0x08008C + VOYAGER_BASE)
+#define VIDEO_ALPHA_PLANE_TL (0x080090 + VOYAGER_BASE)
+#define VIDEO_ALPHA_PLANE_BR (0x080094 + VOYAGER_BASE)
+#define VIDEO_ALPHA_SCALE (0x080098 + VOYAGER_BASE)
+#define VIDEO_ALPHA_INITIAL_SCALE (0x08009C + VOYAGER_BASE)
+#define VIDEO_ALPHA_CHROMA_KEY (0x0800A0 + VOYAGER_BASE)
+#define PANEL_HWC_ADDRESS (0x0800F0 + VOYAGER_BASE)
+#define PANEL_HWC_LOCATION (0x0800F4 + VOYAGER_BASE)
+#define PANEL_HWC_COLOR_12 (0x0800F8 + VOYAGER_BASE)
+#define PANEL_HWC_COLOR_3 (0x0800FC + VOYAGER_BASE)
+#define ALPHA_DISPLAY_CTRL (0x080100 + VOYAGER_BASE)
+#define ALPHA_FB_ADDRESS (0x080104 + VOYAGER_BASE)
+#define ALPHA_FB_WIDTH (0x080108 + VOYAGER_BASE)
+#define ALPHA_PLANE_TL (0x08010C + VOYAGER_BASE)
+#define ALPHA_PLANE_BR (0x080110 + VOYAGER_BASE)
+#define ALPHA_CHROMA_KEY (0x080114 + VOYAGER_BASE)
+#define CRT_DISPLAY_CTRL (0x080200 + VOYAGER_BASE)
+#define CRT_FB_ADDRESS (0x080204 + VOYAGER_BASE)
+#define CRT_FB_WIDTH (0x080208 + VOYAGER_BASE)
+#define CRT_HORIZONTAL_TOTAL (0x08020C + VOYAGER_BASE)
+#define CRT_HORIZONTAL_SYNC (0x080210 + VOYAGER_BASE)
+#define CRT_VERTICAL_TOTAL (0x080214 + VOYAGER_BASE)
+#define CRT_VERTICAL_SYNC (0x080218 + VOYAGER_BASE)
+#define CRT_SIGNATURE_ANALYZER (0x08021C + VOYAGER_BASE)
+#define CRT_CURRENT_LINE (0x080220 + VOYAGER_BASE)
+#define CRT_MONITOR_DETECT (0x080224 + VOYAGER_BASE)
+#define CRT_HWC_ADDRESS (0x080230 + VOYAGER_BASE)
+#define CRT_HWC_LOCATION (0x080234 + VOYAGER_BASE)
+#define CRT_HWC_COLOR_12 (0x080238 + VOYAGER_BASE)
+#define CRT_HWC_COLOR_3 (0x08023C + VOYAGER_BASE)
+#define CRT_PALETTE_RAM (0x080400 + VOYAGER_BASE)
+#define PANEL_PALETTE_RAM (0x080800 + VOYAGER_BASE)
+#define VIDEO_PALETTE_RAM (0x080C00 + VOYAGER_BASE)
+
+/* ----- 8051 Control register ------------------------------------------ */
+#define VOYAGER_8051_BASE (0x000c0000 + VOYAGER_BASE)
+#define VOYAGER_8051_RESET (0x000b0000 + VOYAGER_BASE)
+#define VOYAGER_8051_SELECT (0x000b0004 + VOYAGER_BASE)
+#define VOYAGER_8051_CPU_INT (0x000b000c + VOYAGER_BASE)
+
+/* ----- AC97 Control register ------------------------------------------ */
+#define AC97_TX_SLOT0 (0x00000000 + VOYAGER_AC97_BASE)
+#define AC97_CONTROL_STATUS (0x00000080 + VOYAGER_AC97_BASE)
+#define AC97C_READ (1 << 19)
+#define AC97C_WD_BIT (1 << 2)
+#define AC97C_INDEX_MASK 0x7f
+/* -------------------------------------------------------------------- */
+
+#endif /* _VOYAGER_GX_REG_H */
--- /dev/null
+/*
+ * include/asm-sh/se7300/io.h
+ *
+ * Copyright (C) 2003 Takashi Kusuda <kusuda-takashi@hitachi-ul.co.jp>
+ * IO functions for SH-Mobile (SH7300) SolutionEngine
+ */
+
+#ifndef _ASM_SH_IO_7300SE_H
+#define _ASM_SH_IO_7300SE_H
+
+extern unsigned char sh7300se_inb(unsigned long port);
+extern unsigned short sh7300se_inw(unsigned long port);
+extern unsigned int sh7300se_inl(unsigned long port);
+
+extern void sh7300se_outb(unsigned char value, unsigned long port);
+extern void sh7300se_outw(unsigned short value, unsigned long port);
+extern void sh7300se_outl(unsigned int value, unsigned long port);
+
+extern unsigned char sh7300se_inb_p(unsigned long port);
+extern void sh7300se_outb_p(unsigned char value, unsigned long port);
+
+extern void sh7300se_insb(unsigned long port, void *addr, unsigned long count);
+extern void sh7300se_insw(unsigned long port, void *addr, unsigned long count);
+extern void sh7300se_insl(unsigned long port, void *addr, unsigned long count);
+extern void sh7300se_outsb(unsigned long port, const void *addr, unsigned long count);
+extern void sh7300se_outsw(unsigned long port, const void *addr, unsigned long count);
+extern void sh7300se_outsl(unsigned long port, const void *addr, unsigned long count);
+
+#endif /* _ASM_SH_IO_7300SE_H */
--- /dev/null
+#ifndef __ASM_SH_HITACHI_SE7300_H
+#define __ASM_SH_HITACHI_SE7300_H
+
+/*
+ * linux/include/asm-sh/se/se7300.h
+ *
+ * Copyright (C) 2003 Takashi Kusuda <kusuda-takashi@hitachi-ul.co.jp>
+ *
+ * SH-Mobile SolutionEngine 7300 support
+ */
+
+/* Box specific addresses. */
+
+/* Area 0 */
+#define PA_ROM 0x00000000 /* EPROM */
+#define PA_ROM_SIZE 0x00400000 /* EPROM size 4M byte (actually 2MB) */
+#define PA_FROM 0x00400000 /* Flash ROM */
+#define PA_FROM_SIZE 0x00400000 /* Flash size 4M byte */
+#define PA_SRAM 0x00800000 /* SRAM */
+#define PA_SRAM_SIZE 0x00400000 /* SRAM size 4M byte */
+/* Area 1 */
+#define PA_EXT1 0x04000000
+#define PA_EXT1_SIZE 0x04000000
+/* Area 2 */
+#define PA_EXT2 0x08000000
+#define PA_EXT2_SIZE 0x04000000
+/* Area 3 */
+#define PA_SDRAM 0x0c000000
+#define PA_SDRAM_SIZE 0x04000000
+/* Area 4 */
+#define PA_PCIC 0x10000000 /* MR-SHPC-01 PCMCIA */
+#define PA_MRSHPC 0xb03fffe0 /* MR-SHPC-01 PCMCIA controller */
+#define PA_MRSHPC_MW1 0xb0400000 /* MR-SHPC-01 memory window base */
+#define PA_MRSHPC_MW2 0xb0500000 /* MR-SHPC-01 attribute window base */
+#define PA_MRSHPC_IO 0xb0600000 /* MR-SHPC-01 I/O window base */
+#define MRSHPC_OPTION (PA_MRSHPC + 6)
+#define MRSHPC_CSR (PA_MRSHPC + 8)
+#define MRSHPC_ISR (PA_MRSHPC + 10)
+#define MRSHPC_ICR (PA_MRSHPC + 12)
+#define MRSHPC_CPWCR (PA_MRSHPC + 14)
+#define MRSHPC_MW0CR1 (PA_MRSHPC + 16)
+#define MRSHPC_MW1CR1 (PA_MRSHPC + 18)
+#define MRSHPC_IOWCR1 (PA_MRSHPC + 20)
+#define MRSHPC_MW0CR2 (PA_MRSHPC + 22)
+#define MRSHPC_MW1CR2 (PA_MRSHPC + 24)
+#define MRSHPC_IOWCR2 (PA_MRSHPC + 26)
+#define MRSHPC_CDCR (PA_MRSHPC + 28)
+#define MRSHPC_PCIC_INFO (PA_MRSHPC + 30)
+#define PA_LED 0xb0800000 /* LED */
+#define PA_DIPSW 0xb0900000 /* Dip switch 31 */
+#define PA_EPLD_MODESET 0xb0a00000 /* FPGA Mode set register */
+#define PA_EPLD_ST1 0xb0a80000 /* FPGA Interrupt status register1 */
+#define PA_EPLD_ST2 0xb0ac0000 /* FPGA Interrupt status register2 */
+/* Area 5 */
+#define PA_EXT5 0x14000000
+#define PA_EXT5_SIZE 0x04000000
+/* Area 6 */
+#define PA_LCD1 0xb8000000
+#define PA_LCD2 0xb8800000
+
+#endif /* __ASM_SH_HITACHI_SE7300_H */
--- /dev/null
+#ifdef __KERNEL__
+#ifndef _SH_SETUP_H
+#define _SH_SETUP_H
+
+#define COMMAND_LINE_SIZE 256
+
+#endif /* _SH_SETUP_H */
+#endif /* __KERNEL__ */
--- /dev/null
+#ifndef __ASM_SH64_A_OUT_H
+#define __ASM_SH64_A_OUT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/a.out.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+struct exec
+{
+ unsigned long a_info; /* Use macros N_MAGIC, etc for access */
+ unsigned a_text; /* length of text, in bytes */
+ unsigned a_data; /* length of data, in bytes */
+ unsigned a_bss; /* length of uninitialized data area for file, in bytes */
+ unsigned a_syms; /* length of symbol table data in file, in bytes */
+ unsigned a_entry; /* start address */
+ unsigned a_trsize; /* length of relocation info for text, in bytes */
+ unsigned a_drsize; /* length of relocation info for data, in bytes */
+};
+
+#define N_TRSIZE(a) ((a).a_trsize)
+#define N_DRSIZE(a) ((a).a_drsize)
+#define N_SYMSIZE(a) ((a).a_syms)
+
+#ifdef __KERNEL__
+
+#define STACK_TOP TASK_SIZE
+
+#endif
+
+#endif /* __ASM_SH64_A_OUT_H */
--- /dev/null
+#ifndef __ASM_SH64_ATOMIC_H
+#define __ASM_SH64_ATOMIC_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/atomic.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * Atomic operations that C can't guarantee us. Useful for
+ * resource counting etc.
+ *
+ */
+
+typedef struct { volatile int counter; } atomic_t;
+
+#define ATOMIC_INIT(i) ( (atomic_t) { (i) } )
+
+#define atomic_read(v) ((v)->counter)
+#define atomic_set(v,i) ((v)->counter = (i))
+
+#include <asm/system.h>
+
+/*
+ * These operations are implemented by briefly disabling local
+ * interrupts around a plain read-modify-write, which is sufficient
+ * on the uniprocessor configurations this header targets.
+ */
+
+static __inline__ void atomic_add(int i, atomic_t * v)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *(long *)v += i;
+ local_irq_restore(flags);
+}
+
+static __inline__ void atomic_sub(int i, atomic_t *v)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *(long *)v -= i;
+ local_irq_restore(flags);
+}
+
+static __inline__ int atomic_add_return(int i, atomic_t * v)
+{
+ unsigned long temp, flags;
+
+ local_irq_save(flags);
+ temp = *(long *)v;
+ temp += i;
+ *(long *)v = temp;
+ local_irq_restore(flags);
+
+ return temp;
+}
+
+#define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
+
+static __inline__ int atomic_sub_return(int i, atomic_t * v)
+{
+ unsigned long temp, flags;
+
+ local_irq_save(flags);
+ temp = *(long *)v;
+ temp -= i;
+ *(long *)v = temp;
+ local_irq_restore(flags);
+
+ return temp;
+}
+
+#define atomic_dec_return(v) atomic_sub_return(1,(v))
+#define atomic_inc_return(v) atomic_add_return(1,(v))
+
+/*
+ * atomic_inc_and_test - increment and test
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increments @v by 1
+ * and returns true if the result is zero, or false for all
+ * other cases.
+ */
+#define atomic_inc_and_test(v) (atomic_inc_return(v) == 0)
+
+#define atomic_sub_and_test(i,v) (atomic_sub_return((i), (v)) == 0)
+#define atomic_dec_and_test(v) (atomic_sub_return(1, (v)) == 0)
+
+#define atomic_inc(v) atomic_add(1,(v))
+#define atomic_dec(v) atomic_sub(1,(v))
+
+static __inline__ void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *(long *)v &= ~mask;
+ local_irq_restore(flags);
+}
+
+static __inline__ void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ *(long *)v |= mask;
+ local_irq_restore(flags);
+}
+
+/* Atomic operations are already serializing on SH */
+#define smp_mb__before_atomic_dec() barrier()
+#define smp_mb__after_atomic_dec() barrier()
+#define smp_mb__before_atomic_inc() barrier()
+#define smp_mb__after_atomic_inc() barrier()
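+
+/*
+ * A minimal usage sketch (hypothetical, not part of this header):
+ * a counter touched from both process and interrupt context, where
+ * finish_idle() stands in for some hypothetical callback.
+ *
+ *	static atomic_t busy_count = ATOMIC_INIT(0);
+ *
+ *	void start_op(void) { atomic_inc(&busy_count); }
+ *
+ *	void end_op(void)
+ *	{
+ *		if (atomic_dec_and_test(&busy_count))
+ *			finish_idle();
+ *	}
+ */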
+
+#endif /* __ASM_SH64_ATOMIC_H */
--- /dev/null
+#ifndef __ASM_SH64_BITOPS_H
+#define __ASM_SH64_BITOPS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/bitops.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ */
+
+#ifdef __KERNEL__
+#include <linux/compiler.h>
+#include <asm/system.h>
+/* For __swab32 */
+#include <asm/byteorder.h>
+
+static __inline__ void set_bit(int nr, volatile void * addr)
+{
+ int mask;
+ volatile unsigned int *a = addr;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ *a |= mask;
+ local_irq_restore(flags);
+}
+
+static inline void __set_bit(int nr, void *addr)
+{
+ int mask;
+ unsigned int *a = addr;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ *a |= mask;
+}
+
+/*
+ * clear_bit() doesn't provide any barrier for the compiler.
+ */
+#define smp_mb__before_clear_bit() barrier()
+#define smp_mb__after_clear_bit() barrier()
+static inline void clear_bit(int nr, volatile unsigned long *a)
+{
+ int mask;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ *a &= ~mask;
+ local_irq_restore(flags);
+}
+
+static inline void __clear_bit(int nr, volatile unsigned long *a)
+{
+ int mask;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ *a &= ~mask;
+}
+
+static __inline__ void change_bit(int nr, volatile void * addr)
+{
+ int mask;
+ volatile unsigned int *a = addr;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ *a ^= mask;
+ local_irq_restore(flags);
+}
+
+static __inline__ void __change_bit(int nr, volatile void * addr)
+{
+ int mask;
+ volatile unsigned int *a = addr;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ *a ^= mask;
+}
+
+static __inline__ int test_and_set_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ retval = (mask & *a) != 0;
+ *a |= mask;
+ local_irq_restore(flags);
+
+ return retval;
+}
+
+static __inline__ int __test_and_set_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ retval = (mask & *a) != 0;
+ *a |= mask;
+
+ return retval;
+}
+
+static __inline__ int test_and_clear_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ retval = (mask & *a) != 0;
+ *a &= ~mask;
+ local_irq_restore(flags);
+
+ return retval;
+}
+
+static __inline__ int __test_and_clear_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ retval = (mask & *a) != 0;
+ *a &= ~mask;
+
+ return retval;
+}
+
+static __inline__ int test_and_change_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+ unsigned long flags;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ local_irq_save(flags);
+ retval = (mask & *a) != 0;
+ *a ^= mask;
+ local_irq_restore(flags);
+
+ return retval;
+}
+
+static __inline__ int __test_and_change_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ volatile unsigned int *a = addr;
+
+ a += nr >> 5;
+ mask = 1 << (nr & 0x1f);
+ retval = (mask & *a) != 0;
+ *a ^= mask;
+
+ return retval;
+}
+
+static __inline__ int test_bit(int nr, const volatile void *addr)
+{
+ return 1UL & (((const volatile unsigned int *) addr)[nr >> 5] >> (nr & 31));
+}
+
+static __inline__ unsigned long ffz(unsigned long word)
+{
+ unsigned long result, __d2, __d3;
+
+ __asm__("gettr tr0, %2\n\t"
+ "pta $+32, tr0\n\t"
+ "andi %1, 1, %3\n\t"
+ "beq %3, r63, tr0\n\t"
+ "pta $+4, tr0\n"
+ "0:\n\t"
+ "shlri.l %1, 1, %1\n\t"
+ "addi %0, 1, %0\n\t"
+ "andi %1, 1, %3\n\t"
+ "beqi %3, 1, tr0\n"
+ "1:\n\t"
+ "ptabs %2, tr0\n\t"
+ : "=r" (result), "=r" (word), "=r" (__d2), "=r" (__d3)
+ : "0" (0L), "1" (word));
+
+ return result;
+}
+
+/**
+ * __ffs - find first bit in word
+ * @word: The word to search
+ *
+ * This implementation returns 0 if no bit is set, but the result is
+ * traditionally undefined, so code should check against 0 first.
+ */
+static inline unsigned long __ffs(unsigned long word)
+{
+ int r = 0;
+
+ if (!word)
+ return 0;
+ if (!(word & 0xffff)) {
+ word >>= 16;
+ r += 16;
+ }
+ if (!(word & 0xff)) {
+ word >>= 8;
+ r += 8;
+ }
+ if (!(word & 0xf)) {
+ word >>= 4;
+ r += 4;
+ }
+ if (!(word & 3)) {
+ word >>= 2;
+ r += 2;
+ }
+ if (!(word & 1)) {
+ word >>= 1;
+ r += 1;
+ }
+ return r;
+}
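+
+/*
+ * For example (illustrative only): __ffs(0x40) == 6 and
+ * __ffs(0x01) == 0, since bit numbering starts at zero. Callers
+ * typically guard the call with a zero check, as in
+ * "if (word) bit = __ffs(word);".
+ */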
+
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ * @offset: The bit number to start searching at
+ */
+static inline unsigned long find_next_bit(const unsigned long *addr,
+ unsigned long size, unsigned long offset)
+{
+ unsigned int *p = ((unsigned int *) addr) + (offset >> 5);
+ unsigned int result = offset & ~31UL;
+ unsigned int tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 31UL;
+ if (offset) {
+ tmp = *p++;
+ tmp &= ~0UL << offset;
+ if (size < 32)
+ goto found_first;
+ if (tmp)
+ goto found_middle;
+ size -= 32;
+ result += 32;
+ }
+ while (size >= 32) {
+ if ((tmp = *p++) != 0)
+ goto found_middle;
+ result += 32;
+ size -= 32;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp &= ~0UL >> (32 - size);
+ if (tmp == 0UL) /* Are any bits set? */
+ return result + size; /* Nope. */
+found_middle:
+ return result + __ffs(tmp);
+}
+
+/**
+ * find_first_bit - find the first set bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+ * Returns the bit-number of the first set bit, not the number of the byte
+ * containing a bit.
+ */
+#define find_first_bit(addr, size) \
+ find_next_bit((addr), (size), 0)
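+
+/*
+ * A minimal iteration sketch (hypothetical, for illustration):
+ * visit every set bit in a bitmap of 'nbits' bits, where
+ * handle_bit() is a stand-in for the caller's per-bit work.
+ *
+ *	unsigned long bit;
+ *
+ *	for (bit = find_first_bit(map, nbits); bit < nbits;
+ *	     bit = find_next_bit(map, nbits, bit + 1))
+ *		handle_bit(bit);
+ */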
+
+
+static inline int find_next_zero_bit(void *addr, int size, int offset)
+{
+ unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
+ unsigned long result = offset & ~31UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 31UL;
+ if (offset) {
+ tmp = *(p++);
+ tmp |= ~0UL >> (32-offset);
+ if (size < 32)
+ goto found_first;
+ if (~tmp)
+ goto found_middle;
+ size -= 32;
+ result += 32;
+ }
+ while (size & ~31UL) {
+ if (~(tmp = *(p++)))
+ goto found_middle;
+ result += 32;
+ size -= 32;
+ }
+ if (!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ tmp |= ~0UL << size;
+found_middle:
+ return result + ffz(tmp);
+}
+
+#define find_first_zero_bit(addr, size) \
+ find_next_zero_bit((addr), (size), 0)
+
+/*
+ * hweightN: returns the hamming weight (i.e. the number
+ * of bits set) of a N-bit word
+ */
+
+#define hweight32(x) generic_hweight32(x)
+#define hweight16(x) generic_hweight16(x)
+#define hweight8(x) generic_hweight8(x)
+
+/*
+ * Every architecture must define this function. It's the fastest
+ * way of searching a 140-bit bitmap where the first 100 bits are
+ * unlikely to be set. It's guaranteed that at least one of the 140
+ * bits is cleared.
+ */
+
+static inline int sched_find_first_bit(unsigned long *b)
+{
+ if (unlikely(b[0]))
+ return __ffs(b[0]);
+ if (unlikely(b[1]))
+ return __ffs(b[1]) + 32;
+ if (unlikely(b[2]))
+ return __ffs(b[2]) + 64;
+ if (b[3])
+ return __ffs(b[3]) + 96;
+ return __ffs(b[4]) + 128;
+}
+
+/*
+ * ffs: find first bit set. This is defined the same way as
+ * the libc and compiler builtin ffs routines, therefore
+ * differs in spirit from the above ffz (man ffs).
+ */
+
+#define ffs(x) generic_ffs(x)
+
+#ifdef __LITTLE_ENDIAN__
+#define ext2_set_bit(nr, addr) test_and_set_bit((nr), (addr))
+#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr), (addr))
+#define ext2_test_bit(nr, addr) test_bit((nr), (addr))
+#define ext2_find_first_zero_bit(addr, size) find_first_zero_bit((addr), (size))
+#define ext2_find_next_zero_bit(addr, size, offset) \
+ find_next_zero_bit((addr), (size), (offset))
+#else
+static __inline__ int ext2_set_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ unsigned long flags;
+ volatile unsigned char *ADDR = (unsigned char *) addr;
+
+ ADDR += nr >> 3;
+ mask = 1 << (nr & 0x07);
+ local_irq_save(flags);
+ retval = (mask & *ADDR) != 0;
+ *ADDR |= mask;
+ local_irq_restore(flags);
+ return retval;
+}
+
+static __inline__ int ext2_clear_bit(int nr, volatile void * addr)
+{
+ int mask, retval;
+ unsigned long flags;
+ volatile unsigned char *ADDR = (unsigned char *) addr;
+
+ ADDR += nr >> 3;
+ mask = 1 << (nr & 0x07);
+ local_irq_save(flags);
+ retval = (mask & *ADDR) != 0;
+ *ADDR &= ~mask;
+ local_irq_restore(flags);
+ return retval;
+}
+
+static __inline__ int ext2_test_bit(int nr, const volatile void * addr)
+{
+ int mask;
+ const volatile unsigned char *ADDR = (const unsigned char *) addr;
+
+ ADDR += nr >> 3;
+ mask = 1 << (nr & 0x07);
+ return ((mask & *ADDR) != 0);
+}
+
+#define ext2_find_first_zero_bit(addr, size) \
+ ext2_find_next_zero_bit((addr), (size), 0)
+
+static __inline__ unsigned long ext2_find_next_zero_bit(void *addr, unsigned long size, unsigned long offset)
+{
+ unsigned long *p = ((unsigned long *) addr) + (offset >> 5);
+ unsigned long result = offset & ~31UL;
+ unsigned long tmp;
+
+ if (offset >= size)
+ return size;
+ size -= result;
+ offset &= 31UL;
+ if(offset) {
+ /* We hold the little endian value in tmp, but then the
+ * shift is illegal. So we could keep a big endian value
+ * in tmp, like this:
+ *
+ * tmp = __swab32(*(p++));
+ * tmp |= ~0UL >> (32-offset);
+ *
+ * but this would decrease performance, so we change the
+ * shift:
+ */
+ tmp = *(p++);
+ tmp |= __swab32(~0UL >> (32-offset));
+ if(size < 32)
+ goto found_first;
+ if(~tmp)
+ goto found_middle;
+ size -= 32;
+ result += 32;
+ }
+ while(size & ~31UL) {
+ if(~(tmp = *(p++)))
+ goto found_middle;
+ result += 32;
+ size -= 32;
+ }
+ if(!size)
+ return result;
+ tmp = *p;
+
+found_first:
+ /* tmp is little endian, so we would have to swab the shift,
+ * see above. But then we have to swab tmp below for ffz, so
+ * we might as well do this here.
+ */
+ return result + ffz(__swab32(tmp) | (~0UL << size));
+found_middle:
+ return result + ffz(__swab32(tmp));
+}
+#endif
+
+#define ext2_set_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_set_bit((nr), (addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+#define ext2_clear_bit_atomic(lock, nr, addr) \
+ ({ \
+ int ret; \
+ spin_lock(lock); \
+ ret = ext2_clear_bit((nr), (addr)); \
+ spin_unlock(lock); \
+ ret; \
+ })
+
+/* Bitmap functions for the minix filesystem. */
+#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
+#define minix_set_bit(nr,addr) set_bit(nr,addr)
+#define minix_test_and_clear_bit(nr,addr) test_and_clear_bit(nr,addr)
+#define minix_test_bit(nr,addr) test_bit(nr,addr)
+#define minix_find_first_zero_bit(addr,size) find_first_zero_bit(addr,size)
+
+#define fls(x) generic_fls(x)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_BITOPS_H */
--- /dev/null
+#ifndef __ASM_SH64_BUG_H
+#define __ASM_SH64_BUG_H
+
+#include <asm-sh/bug.h>
+
+#endif /* __ASM_SH64_BUG_H */
+
--- /dev/null
+#ifndef __ASM_SH64_BUGS_H
+#define __ASM_SH64_BUGS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/bugs.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * This is included by init/main.c to check for architecture-dependent bugs.
+ *
+ * Needs:
+ * void check_bugs(void);
+ */
+
+/*
+ * I don't know of any Super-H bugs yet.
+ */
+
+#include <asm/processor.h>
+
+static void __init check_bugs(void)
+{
+ extern char *get_cpu_subtype(void);
+ extern unsigned long loops_per_jiffy;
+
+ cpu_data->loops_per_jiffy = loops_per_jiffy;
+
+ printk("CPU: %s\n", get_cpu_subtype());
+}
+#endif /* __ASM_SH64_BUGS_H */
--- /dev/null
+#ifndef __ASM_SH64_BYTEORDER_H
+#define __ASM_SH64_BYTEORDER_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/byteorder.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <asm/types.h>
+
+static __inline__ __const__ __u32 ___arch__swab32(__u32 x)
+{
+ __asm__("byterev %0, %0\n\t"
+ "shari %0, 32, %0"
+ : "=r" (x)
+ : "0" (x));
+ return x;
+}
+
+static __inline__ __const__ __u16 ___arch__swab16(__u16 x)
+{
+ __asm__("byterev %0, %0\n\t"
+ "shari %0, 48, %0"
+ : "=r" (x)
+ : "0" (x));
+ return x;
+}
+
+#define __arch__swab32(x) ___arch__swab32(x)
+#define __arch__swab16(x) ___arch__swab16(x)
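+
+/*
+ * For example (illustrative only): on a little-endian kernel,
+ * converting a big-endian 32-bit on-disk field to host order is a
+ * plain byte reversal, so ___arch__swab32(0x12345678) yields
+ * 0x78563412.
+ */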
+
+#if !defined(__STRICT_ANSI__) || defined(__KERNEL__)
+# define __BYTEORDER_HAS_U64__
+# define __SWAB_64_THRU_32__
+#endif
+
+#ifdef __LITTLE_ENDIAN__
+#include <linux/byteorder/little_endian.h>
+#else
+#include <linux/byteorder/big_endian.h>
+#endif
+
+#endif /* __ASM_SH64_BYTEORDER_H */
--- /dev/null
+#ifndef __ASM_SH64_CACHE_H
+#define __ASM_SH64_CACHE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/cache.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ */
+#include <asm/cacheflush.h>
+
+#define L1_CACHE_SHIFT 5
+/* bytes per L1 cache line */
+#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+#define L1_CACHE_ALIGN_MASK (~(L1_CACHE_BYTES - 1))
+#define L1_CACHE_ALIGN(x) (((x)+(L1_CACHE_BYTES - 1)) & L1_CACHE_ALIGN_MASK)
+#define L1_CACHE_SIZE_BYTES (L1_CACHE_BYTES << 10)
+/* Largest L1 which this arch supports */
+#define L1_CACHE_SHIFT_MAX 5
+
+#ifdef MODULE
+#define __cacheline_aligned __attribute__((__aligned__(L1_CACHE_BYTES)))
+#else
+#define __cacheline_aligned \
+ __attribute__((__aligned__(L1_CACHE_BYTES), \
+ __section__(".data.cacheline_aligned")))
+#endif
+
+/*
+ * Control Registers.
+ */
+#define ICCR_BASE 0x01600000 /* Instruction Cache Control Register */
+#define ICCR_REG0 0 /* Register 0 offset */
+#define ICCR_REG1 1 /* Register 1 offset */
+#define ICCR0 (ICCR_BASE + ICCR_REG0)
+#define ICCR1 (ICCR_BASE + ICCR_REG1)
+
+#define ICCR0_OFF 0x0 /* Set ICACHE off */
+#define ICCR0_ON 0x1 /* Set ICACHE on */
+#define ICCR0_ICI 0x2 /* Invalidate all in IC */
+
+#define ICCR1_NOLOCK 0x0 /* Set No Locking */
+
+#define OCCR_BASE 0x01E00000 /* Operand Cache Control Register */
+#define OCCR_REG0 0 /* Register 0 offset */
+#define OCCR_REG1 1 /* Register 1 offset */
+#define OCCR0 (OCCR_BASE + OCCR_REG0)
+#define OCCR1 (OCCR_BASE + OCCR_REG1)
+
+#define OCCR0_OFF 0x0 /* Set OCACHE off */
+#define OCCR0_ON 0x1 /* Set OCACHE on */
+#define OCCR0_OCI 0x2 /* Invalidate all in OC */
+#define OCCR0_WT 0x4 /* Set OCACHE in WT Mode */
+#define OCCR0_WB 0x0 /* Set OCACHE in WB Mode */
+
+#define OCCR1_NOLOCK 0x0 /* Set No Locking */
+
+
+/*
+ * SH-5
+ * A bit of description here, for neff=32.
+ *
+ * |<--- tag (19 bits) --->|
+ * +-----------------------------+-----------------+------+----------+------+
+ * | | | ways |set index |offset|
+ * +-----------------------------+-----------------+------+----------+------+
+ * ^ 2 bits 8 bits 5 bits
+ * +- Bit 31
+ *
+ * Cacheline size is based on offset: 5 bits = 32 bytes per line
+ * A cache line is identified by a tag + set but OCACHETAG/ICACHETAG
+ * have a broader space for registers. These are outlined by
+ * CACHE_?C_*_STEP below.
+ *
+ */
+
+/* Valid and Dirty bits */
+#define SH_CACHE_VALID (1LL<<0)
+#define SH_CACHE_UPDATED (1LL<<57)
+
+/* Cache flags */
+#define SH_CACHE_MODE_WT (1LL<<0)
+#define SH_CACHE_MODE_WB (1LL<<1)
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Cache information structure.
+ *
+ * Defined for both I and D cache, per-processor.
+ */
+struct cache_info {
+ unsigned int ways;
+ unsigned int sets;
+ unsigned int linesz;
+
+ unsigned int way_shift;
+ unsigned int entry_shift;
+ unsigned int set_shift;
+ unsigned int way_step_shift;
+ unsigned int asid_shift;
+
+ unsigned int way_ofs;
+
+ unsigned int asid_mask;
+ unsigned int idx_mask;
+ unsigned int epn_mask;
+
+ unsigned long flags;
+};
+
+#endif /* __ASSEMBLY__ */
+
+/* Instruction cache */
+#define CACHE_IC_ADDRESS_ARRAY 0x01000000
+
+/* Operand Cache */
+#define CACHE_OC_ADDRESS_ARRAY 0x01800000
+
+/* These declarations relate to cache 'synonyms' in the operand cache. A
+ 'synonym' occurs where effective address bits overlap between those used for
+ indexing the cache sets and those passed to the MMU for translation. In the
+ case of SH5-101 & SH5-103, only bit 12 is affected for 4k pages. */
+
+#define CACHE_OC_N_SYNBITS 1 /* Number of synonym bits */
+#define CACHE_OC_SYN_SHIFT 12
+/* Mask to select synonym bit(s) */
+#define CACHE_OC_SYN_MASK (((1UL<<CACHE_OC_N_SYNBITS)-1)<<CACHE_OC_SYN_SHIFT)
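+
+/*
+ * With the values above this works out to
+ * CACHE_OC_SYN_MASK = ((1 << 1) - 1) << 12 = 0x1000, i.e. only
+ * effective address bit 12 participates in synonym handling for
+ * 4k pages.
+ */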
+
+
+/*
+ * Instruction cache can't be invalidated based on physical addresses.
+ * No instruction cache defines are required, then.
+ */
+
+#endif /* __ASM_SH64_CACHE_H */
--- /dev/null
+#ifndef __ASM_SH64_CACHEFLUSH_H
+#define __ASM_SH64_CACHEFLUSH_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/page.h>
+
+struct vm_area_struct;
+struct page;
+struct mm_struct;
+
+extern void flush_cache_all(void);
+extern void flush_cache_mm(struct mm_struct *mm);
+extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
+extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_dcache_page(struct page *pg);
+extern void flush_icache_range(unsigned long start, unsigned long end);
+extern void flush_icache_user_range(struct vm_area_struct *vma,
+ struct page *page, unsigned long addr,
+ int len);
+
+#define flush_dcache_mmap_lock(mapping) do { } while (0)
+#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+
+#define flush_icache_page(vma, page) do { } while (0)
+
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+ flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+ memcpy(dst, src, len)
+
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_SH64_CACHEFLUSH_H */
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/cayman.h
+ *
+ * Cayman definitions
+ *
+ * Global definitions for the SH5 Cayman board
+ *
+ * Copyright (C) 2002 Stuart Menefy
+ */
+
+
+/* Setup for the SMSC FDC37C935 / LAN91C100FD */
+#define SMSC_IRQ IRQ_IRL1
+
+/* Setup for PCI Bus 2, which transmits interrupts via the EPLD */
+#define PCI2_IRQ IRQ_IRL3
--- /dev/null
+#ifndef __ASM_SH64_CHECKSUM_H
+#define __ASM_SH64_CHECKSUM_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/checksum.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <asm/registers.h>
+
+/*
+ * computes the checksum of a memory block at buff, length len,
+ * and adds in "sum" (32-bit)
+ *
+ * returns a 32-bit number suitable for feeding into itself
+ * or csum_tcpudp_magic
+ *
+ * this function must be called with even lengths, except
+ * for the last fragment, which may be odd
+ *
+ * it's best to have buff aligned on a 32-bit boundary
+ */
+asmlinkage unsigned int csum_partial(const unsigned char *buff, int len,
+ unsigned int sum);
+
+/*
+ * Note: when you get a NULL pointer exception here this means someone
+ * passed in an incorrect kernel address to one of these functions.
+ *
+ * If you use these functions directly please don't forget the
+ * verify_area().
+ */
+
+
+unsigned int csum_partial_copy_nocheck(const char *src, char *dst, int len,
+ unsigned int sum);
+
+unsigned int csum_partial_copy_from_user(const char *src, char *dst,
+ int len, int sum, int *err_ptr);
+
+/*
+ * These are the old (and unsafe) way of doing checksums. A warning
+ * message will be printed if they are used and an exception occurs.
+ *
+ * These functions should go away after some time.
+ */
+
+#define csum_partial_copy_fromuser csum_partial_copy
+
+unsigned int csum_partial_copy(const char *src, char *dst, int len,
+ unsigned int sum);
+
+static inline unsigned short csum_fold(unsigned int sum)
+{
+ sum = (sum & 0xffff) + (sum >> 16);
+ sum = (sum & 0xffff) + (sum >> 16);
+ return ~(sum);
+}
+
+unsigned short ip_fast_csum(unsigned char * iph, unsigned int ihl);
+
+unsigned long csum_tcpudp_nofold(unsigned long saddr, unsigned long daddr,
+ unsigned short len, unsigned short proto,
+ unsigned int sum);
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented
+ */
+static inline unsigned short int csum_tcpudp_magic(unsigned long saddr,
+ unsigned long daddr,
+ unsigned short len,
+ unsigned short proto,
+ unsigned int sum)
+{
+ return csum_fold(csum_tcpudp_nofold(saddr,daddr,len,proto,sum));
+}
+
+/*
+ * this routine is used for miscellaneous IP-like checksums, mainly
+ * in icmp.c
+ */
+static inline unsigned short ip_compute_csum(unsigned char * buff, int len)
+{
+ return csum_fold(csum_partial(buff, len, 0));
+}
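+
+/*
+ * A worked example of the folding step (illustrative only): for a
+ * 32-bit partial sum of 0x2a1b3c4d, csum_fold() computes
+ * 0x2a1b + 0x3c4d = 0x6668 and returns ~0x6668 = 0x9997 as the
+ * final 16-bit checksum. The second fold handles the case where
+ * the first addition itself carries past 16 bits.
+ */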
+
+#endif /* __ASM_SH64_CHECKSUM_H */
+
--- /dev/null
+#ifndef __ASM_SH64_CPUMASK_H
+#define __ASM_SH64_CPUMASK_H
+
+#include <asm-generic/cpumask.h>
+
+#endif /* __ASM_SH64_CPUMASK_H */
--- /dev/null
+#ifndef __ASM_SH64_CURRENT_H
+#define __ASM_SH64_CURRENT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/current.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+#include <linux/thread_info.h>
+
+struct task_struct;
+
+static __inline__ struct task_struct * get_current(void)
+{
+ return current_thread_info()->task;
+}
+
+#define current get_current()
+
+#endif /* __ASM_SH64_CURRENT_H */
+
--- /dev/null
+#ifndef __ASM_SH64_DELAY_H
+#define __ASM_SH64_DELAY_H
+
+extern void __delay(int loops);
+extern void __udelay(unsigned long long usecs, unsigned long lpj);
+extern void __ndelay(unsigned long long nsecs, unsigned long lpj);
+extern void udelay(unsigned long usecs);
+extern void ndelay(unsigned long nsecs);
+
+#endif /* __ASM_SH64_DELAY_H */
+
--- /dev/null
+#ifndef __ASM_SH64_DIV64_H
+#define __ASM_SH64_DIV64_H
+
+#include <asm-generic/div64.h>
+
+#endif /* __ASM_SH64_DIV64_H */
--- /dev/null
+#ifndef __ASM_SH_DMA_MAPPING_H
+#define __ASM_SH_DMA_MAPPING_H
+
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <linux/device.h>
+#include <asm/scatterlist.h>
+#include <asm/io.h>
+
+struct pci_dev;
+extern void *consistent_alloc(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle);
+extern void consistent_free(struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle);
+
+#define dma_supported(dev, mask) (1)
+
+static inline int dma_set_mask(struct device *dev, u64 mask)
+{
+ if (!dev->dma_mask || !dma_supported(dev, mask))
+ return -EIO;
+
+ *dev->dma_mask = mask;
+
+ return 0;
+}
+
+static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, int flag)
+{
+ return consistent_alloc(NULL, size, dma_handle);
+}
+
+static inline void dma_free_coherent(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t dma_handle)
+{
+ consistent_free(NULL, size, vaddr, dma_handle);
+}
+
+static inline void dma_cache_sync(void *vaddr, size_t size,
+ enum dma_data_direction dir)
+{
+ dma_cache_wback_inv((unsigned long)vaddr, size);
+}
+
+static inline dma_addr_t dma_map_single(struct device *dev,
+ void *ptr, size_t size,
+ enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return virt_to_bus(ptr);
+#endif
+ dma_cache_sync(ptr, size, dir);
+
+ return virt_to_bus(ptr);
+}
+
+#define dma_unmap_single(dev, addr, size, dir) do { } while (0)
+
+static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir)
+{
+ int i;
+
+ for (i = 0; i < nents; i++) {
+#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ dma_cache_sync(page_address(sg[i].page) + sg[i].offset,
+ sg[i].length, dir);
+#endif
+ sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+ }
+
+ return nents;
+}
+
+#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
+
+static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir)
+{
+ return dma_map_single(dev, page_address(page) + offset, size, dir);
+}
+
+static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
+ size_t size, enum dma_data_direction dir)
+{
+ dma_unmap_single(dev, dma_address, size, dir);
+}
+
+static inline void dma_sync_single(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return;
+#endif
+ dma_cache_sync(bus_to_virt(dma_handle), size, dir);
+}
+
+static inline void dma_sync_single_range(struct device *dev,
+ dma_addr_t dma_handle,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return;
+#endif
+ dma_cache_sync(bus_to_virt(dma_handle) + offset, size, dir);
+}
+
+static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg,
+ int nelems, enum dma_data_direction dir)
+{
+ int i;
+
+ for (i = 0; i < nelems; i++) {
+#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ dma_cache_sync(page_address(sg[i].page) + sg[i].offset,
+ sg[i].length, dir);
+#endif
+ sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+ }
+}
+
+static inline void dma_sync_single_for_cpu(struct device *dev,
+ dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_single")));
+
+static inline void dma_sync_single_for_device(struct device *dev,
+ dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_single")));
+
+static inline void dma_sync_sg_for_cpu(struct device *dev,
+ struct scatterlist *sg, int nelems,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_sg")));
+
+static inline void dma_sync_sg_for_device(struct device *dev,
+ struct scatterlist *sg, int nelems,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_sg")));
+
+static inline int dma_get_cache_alignment(void)
+{
+ /*
+ * Each processor family will define its own L1_CACHE_SHIFT,
+ * L1_CACHE_BYTES wraps to this, so this is always safe.
+ */
+ return L1_CACHE_BYTES;
+}
+
+static inline int dma_mapping_error(dma_addr_t dma_addr)
+{
+ return dma_addr == 0;
+}
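+
+/*
+ * A minimal usage sketch (hypothetical device and buffer names):
+ *
+ *	dma_addr_t handle;
+ *
+ *	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
+ *	if (dma_mapping_error(handle))
+ *		return -ENOMEM;
+ *	... program the device with 'handle' ...
+ *	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
+ */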
+
+#endif /* __ASM_SH_DMA_MAPPING_H */
+
--- /dev/null
+#ifndef __ASM_SH64_DMA_H
+#define __ASM_SH64_DMA_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/dma.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+#include <linux/mm.h>
+#include <asm/io.h>
+#include <asm/pgtable.h>
+
+#define MAX_DMA_CHANNELS 4
+
+/*
+ * SH5 can DMA in any memory area.
+ *
+ * The static definition is dodgy because it should limit
+ * the highest DMA-able address based on the actual
+ * physical memory available. This is actually performed
+ * at run time when defining the memory allowed into DMA_ZONE.
+ */
+#define MAX_DMA_ADDRESS ~(NPHYS_MASK)
+
+#define DMA_MODE_READ 0
+#define DMA_MODE_WRITE 1
+
+#ifdef CONFIG_PCI
+extern int isa_dma_bridge_buggy;
+#else
+#define isa_dma_bridge_buggy (0)
+#endif
+
+#endif /* __ASM_SH64_DMA_H */
--- /dev/null
+#ifndef __ASM_SH64_ELF_H
+#define __ASM_SH64_ELF_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/elf.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * ELF register definitions..
+ */
+
+#include <asm/ptrace.h>
+#include <asm/user.h>
+#include <asm/byteorder.h>
+
+typedef unsigned long elf_greg_t;
+
+#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef struct user_fpu_struct elf_fpregset_t;
+
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch(x) ( (x)->e_machine == EM_SH )
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS32
+#ifdef __LITTLE_ENDIAN__
+#define ELF_DATA ELFDATA2LSB
+#else
+#define ELF_DATA ELFDATA2MSB
+#endif
+#define ELF_ARCH EM_SH
+
+#define USE_ELF_CORE_DUMP
+#define ELF_EXEC_PAGESIZE 4096
+
+/* This is the location that an ET_DYN program is loaded if exec'ed. Typical
+ use of this is to invoke "./ld.so someprog" to test out a new version of
+ the loader. We need to make sure that it is out of the way of the program
+ that it will "exec", and that there is sufficient room for the brk. */
+
+#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+
+
+#define ELF_CORE_COPY_REGS(_dest,_regs) \
+ memcpy((char *) &_dest, (char *) _regs, \
+ sizeof(struct pt_regs));
+
+/* This yields a mask that user programs can use to figure out what
+ instruction set this CPU supports. This could be done in user space,
+ but it's not easy, and we've already done it here. */
+
+#define ELF_HWCAP (0)
+
+/* This yields a string that ld.so will use to load implementation
+ specific libraries for optimization. This is more specific in
+ intent than poking at uname or /proc/cpuinfo.
+
+ For the moment nothing is provided here for sh64, hence the NULL
+ below, but that could change... */
+
+#define ELF_PLATFORM (NULL)
+
+#define ELF_PLAT_INIT(_r, load_addr) \
+ do { _r->regs[0]=0; _r->regs[1]=0; _r->regs[2]=0; _r->regs[3]=0; \
+ _r->regs[4]=0; _r->regs[5]=0; _r->regs[6]=0; _r->regs[7]=0; \
+ _r->regs[8]=0; _r->regs[9]=0; _r->regs[10]=0; _r->regs[11]=0; \
+ _r->regs[12]=0; _r->regs[13]=0; _r->regs[14]=0; _r->regs[15]=0; \
+ _r->regs[16]=0; _r->regs[17]=0; _r->regs[18]=0; _r->regs[19]=0; \
+ _r->regs[20]=0; _r->regs[21]=0; _r->regs[22]=0; _r->regs[23]=0; \
+ _r->regs[24]=0; _r->regs[25]=0; _r->regs[26]=0; _r->regs[27]=0; \
+ _r->regs[28]=0; _r->regs[29]=0; _r->regs[30]=0; _r->regs[31]=0; \
+ _r->regs[32]=0; _r->regs[33]=0; _r->regs[34]=0; _r->regs[35]=0; \
+ _r->regs[36]=0; _r->regs[37]=0; _r->regs[38]=0; _r->regs[39]=0; \
+ _r->regs[40]=0; _r->regs[41]=0; _r->regs[42]=0; _r->regs[43]=0; \
+ _r->regs[44]=0; _r->regs[45]=0; _r->regs[46]=0; _r->regs[47]=0; \
+ _r->regs[48]=0; _r->regs[49]=0; _r->regs[50]=0; _r->regs[51]=0; \
+ _r->regs[52]=0; _r->regs[53]=0; _r->regs[54]=0; _r->regs[55]=0; \
+ _r->regs[56]=0; _r->regs[57]=0; _r->regs[58]=0; _r->regs[59]=0; \
+ _r->regs[60]=0; _r->regs[61]=0; _r->regs[62]=0; \
+ _r->tregs[0]=0; _r->tregs[1]=0; _r->tregs[2]=0; _r->tregs[3]=0; \
+ _r->tregs[4]=0; _r->tregs[5]=0; _r->tregs[6]=0; _r->tregs[7]=0; \
+ _r->sr = SR_FD | SR_MMU; } while (0)
+
+#ifdef __KERNEL__
+#define SET_PERSONALITY(ex, ibcs2) set_personality(PER_LINUX_32BIT)
+#endif
+
+#endif /* __ASM_SH64_ELF_H */
--- /dev/null
+#ifndef __ASM_SH64_ERRNO_H
+#define __ASM_SH64_ERRNO_H
+
+#include <asm-generic/errno.h>
+
+#endif /* __ASM_SH64_ERRNO_H */
--- /dev/null
+#ifndef __ASM_SH64_FCNTL_H
+#define __ASM_SH64_FCNTL_H
+
+#include <asm-sh/fcntl.h>
+
+#endif /* __ASM_SH64_FCNTL_H */
+
--- /dev/null
+#ifndef __ASM_SH64_HARDIRQ_H
+#define __ASM_SH64_HARDIRQ_H
+
+#include <asm-sh/hardirq.h>
+
+#endif /* __ASM_SH64_HARDIRQ_H */
+
--- /dev/null
+#ifndef __ASM_SH64_HARDWARE_H
+#define __ASM_SH64_HARDWARE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/hardware.h
+ *
+ * Copyright (C) 2002 Stuart Menefy
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Definitions of the locations of registers in the physical address space.
+ */
+
+#define PHYS_PERIPHERAL_BLOCK 0x09000000
+#define PHYS_DMAC_BLOCK 0x0e000000
+#define PHYS_PCI_BLOCK 0x60000000
+
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+#include <asm/io.h>
+
+struct vcr_info {
+ u8 perr_flags; /* P-port Error flags */
+ u8 merr_flags; /* Module Error flags */
+ u16 mod_vers; /* Module Version */
+ u16 mod_id; /* Module ID */
+ u8 bot_mb; /* Bottom Memory block */
+ u8 top_mb; /* Top Memory block */
+};
+
+static inline struct vcr_info sh64_get_vcr_info(unsigned long base)
+{
+ unsigned long long tmp;
+
+ tmp = sh64_in64(base);
+
+ return *((struct vcr_info *)&tmp);
+}
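+
+/*
+ * Typical use (a sketch, with 'module_base' standing in for the
+ * remapped base address of an on-chip module's register block):
+ *
+ *	struct vcr_info vcr = sh64_get_vcr_info(module_base);
+ *
+ *	printk("module id %04x, version %04x\n",
+ *	       vcr.mod_id, vcr.mod_vers);
+ */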
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_SH64_HARDWARE_H */
--- /dev/null
+#ifndef __ASM_SH64_HDREG_H
+#define __ASM_SH64_HDREG_H
+
+#include <asm-generic/hdreg.h>
+
+#endif /* __ASM_SH64_HDREG_H */
--- /dev/null
+#ifndef __ASM_SH64_HW_IRQ_H
+#define __ASM_SH64_HW_IRQ_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/hw_irq.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+static __inline__ void hw_resend_irq(struct hw_interrupt_type *h, unsigned int i) { /* Nothing to do */ }
+
+#endif /* __ASM_SH64_HW_IRQ_H */
--- /dev/null
+/*
+ * linux/include/asm-sh64/ide.h
+ *
+ * Copyright (C) 1994-1996 Linus Torvalds & authors
+ *
+ * sh64 version by Richard Curnow & Paul Mundt
+ */
+
+/*
+ * This file contains the sh64 architecture specific IDE code.
+ */
+
+#ifndef __ASM_SH64_IDE_H
+#define __ASM_SH64_IDE_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+
+#ifndef MAX_HWIFS
+#define MAX_HWIFS CONFIG_IDE_MAX_HWIFS
+#endif
+
+#define ide_default_io_ctl(base) (0)
+
+#include <asm-generic/ide_iops.h>
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_IDE_H */
--- /dev/null
+#ifndef __ASM_SH64_IO_H
+#define __ASM_SH64_IO_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/io.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * Convention:
+ * read{b,w,l}/write{b,w,l} are for PCI,
+ * while in{b,w,l}/out{b,w,l} are for ISA
+ * These may (will) be platform specific functions.
+ *
+ * In addition, we have
+ * ctrl_in{b,w,l}/ctrl_out{b,w,l} for SuperH specific I/O.
+ * which are processor specific. Addresses should be the result of
+ * onchip_remap().
+ */
+
+#include <asm/cache.h>
+#include <asm/system.h>
+#include <asm/page.h>
+
+#define virt_to_bus virt_to_phys
+#define bus_to_virt phys_to_virt
+#define page_to_bus page_to_phys
+
+/*
+ * Nothing overly special here; instead of doing the same thing
+ * over and over again, we just define a set of sh64_in/out functions
+ * with an implicit size. The traditional read{b,w,l}/write{b,w,l}
+ * mess is wrapped to this, as are the SH-specific ctrl_in/out routines.
+ */
+static inline unsigned char sh64_in8(unsigned long addr)
+{
+ return *(volatile unsigned char *)addr;
+}
+
+static inline unsigned short sh64_in16(unsigned long addr)
+{
+ return *(volatile unsigned short *)addr;
+}
+
+static inline unsigned long sh64_in32(unsigned long addr)
+{
+ return *(volatile unsigned long *)addr;
+}
+
+static inline unsigned long long sh64_in64(unsigned long addr)
+{
+ return *(volatile unsigned long long *)addr;
+}
+
+static inline void sh64_out8(unsigned char b, unsigned long addr)
+{
+ *(volatile unsigned char *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out16(unsigned short b, unsigned long addr)
+{
+ *(volatile unsigned short *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out32(unsigned long b, unsigned long addr)
+{
+ *(volatile unsigned long *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out64(unsigned long long b, unsigned long addr)
+{
+ *(volatile unsigned long long *)addr = b;
+ wmb();
+}
+
+#define readb(addr) sh64_in8(addr)
+#define readw(addr) sh64_in16(addr)
+#define readl(addr) sh64_in32(addr)
+
+#define writeb(b, addr) sh64_out8(b, addr)
+#define writew(b, addr) sh64_out16(b, addr)
+#define writel(b, addr) sh64_out32(b, addr)
+
+#define ctrl_inb(addr) sh64_in8(addr)
+#define ctrl_inw(addr) sh64_in16(addr)
+#define ctrl_inl(addr) sh64_in32(addr)
+
+#define ctrl_outb(b, addr) sh64_out8(b, addr)
+#define ctrl_outw(b, addr) sh64_out16(b, addr)
+#define ctrl_outl(b, addr) sh64_out32(b, addr)
+
+unsigned long inb(unsigned long port);
+unsigned long inw(unsigned long port);
+unsigned long inl(unsigned long port);
+void outb(unsigned long value, unsigned long port);
+void outw(unsigned long value, unsigned long port);
+void outl(unsigned long value, unsigned long port);
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_SH_CAYMAN
+extern unsigned long smsc_superio_virt;
+#endif
+#ifdef CONFIG_PCI
+extern unsigned long pciio_virt;
+#endif
+
+#define IO_SPACE_LIMIT 0xffffffff
+
+/*
+ * Change virtual addresses to physical addresses and vv.
+ * These are trivial on the 1:1 Linux/SuperH mapping
+ */
+extern __inline__ unsigned long virt_to_phys(volatile void * address)
+{
+ return __pa(address);
+}
+
+extern __inline__ void * phys_to_virt(unsigned long address)
+{
+ return __va(address);
+}
+
+extern void * __ioremap(unsigned long phys_addr, unsigned long size,
+ unsigned long flags);
+
+extern __inline__ void * ioremap(unsigned long phys_addr, unsigned long size)
+{
+ return __ioremap(phys_addr, size, 1);
+}
+
+extern __inline__ void * ioremap_nocache (unsigned long phys_addr, unsigned long size)
+{
+ return __ioremap(phys_addr, size, 0);
+}
+
+extern void iounmap(void *addr);
+
+unsigned long onchip_remap(unsigned long addr, unsigned long size, const char* name);
+extern void onchip_unmap(unsigned long vaddr);
+
+static __inline__ int check_signature(unsigned long io_addr,
+ const unsigned char *signature, int length)
+{
+ int retval = 0;
+ do {
+ if (readb(io_addr) != *signature)
+ goto out;
+ io_addr++;
+ signature++;
+ length--;
+ } while (length);
+ retval = 1;
+out:
+ return retval;
+}
+
+/*
+ * The caches on some architectures aren't dma-coherent and have need to
+ * handle this in software. There are three types of operations that
+ * can be applied to dma buffers.
+ *
+ * - dma_cache_wback_inv(start, size) makes caches and RAM coherent by
+ * writing the content of the caches back to memory, if necessary.
+ * The function also invalidates the affected part of the caches as
+ * necessary before DMA transfers from outside to memory.
+ * - dma_cache_inv(start, size) invalidates the affected parts of the
+ * caches. Dirty lines of the caches may be written back or simply
+ * be discarded. This operation is necessary before dma operations
+ * to the memory.
+ * - dma_cache_wback(start, size) writes back any dirty lines but does
+ * not invalidate the cache. This can be used before DMA reads from
+ * memory.
+ */
+
+static __inline__ void dma_cache_wback_inv (unsigned long start, unsigned long size)
+{
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbp %0, 0" : : "r" (s));
+}
+
+static __inline__ void dma_cache_inv (unsigned long start, unsigned long size)
+{
+ /*
+ * Note that the caller has to be careful with overzealous
+ * invalidation should there be partial cache lines at the
+ * extremities of the specified range.
+ */
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbi %0, 0" : : "r" (s));
+}
+
+static __inline__ void dma_cache_wback (unsigned long start, unsigned long size)
+{
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbwb %0, 0" : : "r" (s));
+}
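+
+/*
+ * A minimal usage sketch (hypothetical buffers and kick-off routine,
+ * for illustration only): write back a transmit buffer before the
+ * device reads it, and invalidate a receive buffer before the device
+ * writes into it.
+ *
+ *	dma_cache_wback((unsigned long)tx_buf, tx_len);
+ *	dma_cache_inv((unsigned long)rx_buf, rx_len);
+ *	start_device_dma();
+ */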
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SH64_IO_H */
--- /dev/null
+#ifndef __ASM_SH64_IOCTL_H
+#define __ASM_SH64_IOCTL_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ioctl.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * linux/ioctl.h for Linux by H.H. Bergman.
+ *
+ */
+
+/* ioctl command encoding: 32 bits total, command in lower 16 bits,
+ * size of the parameter structure in the lower 14 bits of the
+ * upper 16 bits.
+ * Encoding the size of the parameter structure in the ioctl request
+ * is useful for catching programs compiled with old versions
+ * and to avoid overwriting user space outside the user buffer area.
+ * The highest 2 bits are reserved for indicating the ``access mode''.
+ * NOTE: This limits the max parameter size to 16kB - 1!
+ */
+
+/*
+ * The following is for compatibility across the various Linux
+ * platforms. The i386 ioctl numbering scheme doesn't really enforce
+ * a type field. De facto, however, the top 8 bits of the lower 16
+ * bits are indeed used as a type field, so we might just as well make
+ * this explicit here. Please be sure to use the decoding macros
+ * below from now on.
+ */
+#define _IOC_NRBITS 8
+#define _IOC_TYPEBITS 8
+#define _IOC_SIZEBITS 14
+#define _IOC_DIRBITS 2
+
+#define _IOC_NRMASK ((1 << _IOC_NRBITS)-1)
+#define _IOC_TYPEMASK ((1 << _IOC_TYPEBITS)-1)
+#define _IOC_SIZEMASK ((1 << _IOC_SIZEBITS)-1)
+#define _IOC_DIRMASK ((1 << _IOC_DIRBITS)-1)
+
+#define _IOC_NRSHIFT 0
+#define _IOC_TYPESHIFT (_IOC_NRSHIFT+_IOC_NRBITS)
+#define _IOC_SIZESHIFT (_IOC_TYPESHIFT+_IOC_TYPEBITS)
+#define _IOC_DIRSHIFT (_IOC_SIZESHIFT+_IOC_SIZEBITS)
+
+/*
+ * Direction bits.
+ */
+#define _IOC_NONE 0U
+#define _IOC_WRITE 1U
+#define _IOC_READ 2U
+
+#define _IOC(dir,type,nr,size) \
+ (((dir) << _IOC_DIRSHIFT) | \
+ ((type) << _IOC_TYPESHIFT) | \
+ ((nr) << _IOC_NRSHIFT) | \
+ ((size) << _IOC_SIZESHIFT))
+
+/* used to create numbers */
+#define _IO(type,nr) _IOC(_IOC_NONE,(type),(nr),0)
+#define _IOR(type,nr,size) _IOC(_IOC_READ,(type),(nr),sizeof(size))
+#define _IOW(type,nr,size) _IOC(_IOC_WRITE,(type),(nr),sizeof(size))
+#define _IOWR(type,nr,size) _IOC(_IOC_READ|_IOC_WRITE,(type),(nr),sizeof(size))
+
+/* used to decode ioctl numbers.. */
+#define _IOC_DIR(nr) (((nr) >> _IOC_DIRSHIFT) & _IOC_DIRMASK)
+#define _IOC_TYPE(nr) (((nr) >> _IOC_TYPESHIFT) & _IOC_TYPEMASK)
+#define _IOC_NR(nr) (((nr) >> _IOC_NRSHIFT) & _IOC_NRMASK)
+#define _IOC_SIZE(nr) (((nr) >> _IOC_SIZESHIFT) & _IOC_SIZEMASK)
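+
+/*
+ * A worked example (illustrative only): _IOW('t', 24, struct termio)
+ * places dir = _IOC_WRITE (1) in bits 30-31, sizeof(struct termio)
+ * in bits 16-29, 't' (0x74) in bits 8-15 and 24 (0x18) in bits 0-7;
+ * _IOC_TYPE() and _IOC_NR() recover 0x74 and 0x18 from the result.
+ */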
+
+/* ...and for the drivers/sound files... */
+
+#define IOC_IN (_IOC_WRITE << _IOC_DIRSHIFT)
+#define IOC_OUT (_IOC_READ << _IOC_DIRSHIFT)
+#define IOC_INOUT ((_IOC_WRITE|_IOC_READ) << _IOC_DIRSHIFT)
+#define IOCSIZE_MASK (_IOC_SIZEMASK << _IOC_SIZESHIFT)
+#define IOCSIZE_SHIFT (_IOC_SIZESHIFT)
+
+#endif /* __ASM_SH64_IOCTL_H */
--- /dev/null
+#ifndef __ASM_SH64_IOCTLS_H
+#define __ASM_SH64_IOCTLS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ioctls.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <asm/ioctl.h>
+
+#define FIOCLEX _IO('f', 1)
+#define FIONCLEX _IO('f', 2)
+#define FIOASYNC _IOW('f', 125, int)
+#define FIONBIO _IOW('f', 126, int)
+#define FIONREAD _IOR('f', 127, int)
+#define TIOCINQ FIONREAD
+#define FIOQSIZE _IOR('f', 128, loff_t)
+
+#define TCGETS 0x5401
+#define TCSETS 0x5402
+#define TCSETSW 0x5403
+#define TCSETSF 0x5404
+
+#define TCGETA _IOR('t', 23, struct termio)
+#define TCSETA _IOW('t', 24, struct termio)
+#define TCSETAW _IOW('t', 25, struct termio)
+#define TCSETAF _IOW('t', 28, struct termio)
+
+#define TCSBRK _IO('t', 29)
+#define TCXONC _IO('t', 30)
+#define TCFLSH _IO('t', 31)
+
+#define TIOCSWINSZ _IOW('t', 103, struct winsize)
+#define TIOCGWINSZ _IOR('t', 104, struct winsize)
+#define TIOCSTART _IO('t', 110) /* start output, like ^Q */
+#define TIOCSTOP _IO('t', 111) /* stop output, like ^S */
+#define TIOCOUTQ _IOR('t', 115, int) /* output queue size */
+
+#define TIOCSPGRP _IOW('t', 118, int)
+#define TIOCGPGRP _IOR('t', 119, int)
+
+#define TIOCEXCL _IO('T', 12) /* 0x540C */
+#define TIOCNXCL _IO('T', 13) /* 0x540D */
+#define TIOCSCTTY _IO('T', 14) /* 0x540E */
+
+#define TIOCSTI _IOW('T', 18, char) /* 0x5412 */
+#define TIOCMGET _IOR('T', 21, unsigned int) /* 0x5415 */
+#define TIOCMBIS _IOW('T', 22, unsigned int) /* 0x5416 */
+#define TIOCMBIC _IOW('T', 23, unsigned int) /* 0x5417 */
+#define TIOCMSET _IOW('T', 24, unsigned int) /* 0x5418 */
+# define TIOCM_LE 0x001
+# define TIOCM_DTR 0x002
+# define TIOCM_RTS 0x004
+# define TIOCM_ST 0x008
+# define TIOCM_SR 0x010
+# define TIOCM_CTS 0x020
+# define TIOCM_CAR 0x040
+# define TIOCM_RNG 0x080
+# define TIOCM_DSR 0x100
+# define TIOCM_CD TIOCM_CAR
+# define TIOCM_RI TIOCM_RNG
+
+#define TIOCGSOFTCAR _IOR('T', 25, unsigned int) /* 0x5419 */
+#define TIOCSSOFTCAR _IOW('T', 26, unsigned int) /* 0x541A */
+#define TIOCLINUX _IOW('T', 28, char) /* 0x541C */
+#define TIOCCONS _IO('T', 29) /* 0x541D */
+#define TIOCGSERIAL _IOR('T', 30, struct serial_struct) /* 0x541E */
+#define TIOCSSERIAL _IOW('T', 31, struct serial_struct) /* 0x541F */
+#define TIOCPKT _IOW('T', 32, int) /* 0x5420 */
+# define TIOCPKT_DATA 0
+# define TIOCPKT_FLUSHREAD 1
+# define TIOCPKT_FLUSHWRITE 2
+# define TIOCPKT_STOP 4
+# define TIOCPKT_START 8
+# define TIOCPKT_NOSTOP 16
+# define TIOCPKT_DOSTOP 32
+
+
+#define TIOCNOTTY _IO('T', 34) /* 0x5422 */
+#define TIOCSETD _IOW('T', 35, int) /* 0x5423 */
+#define TIOCGETD _IOR('T', 36, int) /* 0x5424 */
+#define TCSBRKP _IOW('T', 37, int) /* 0x5425 */ /* Needed for POSIX tcsendbreak() */
+#define TIOCTTYGSTRUCT _IOR('T', 38, struct tty_struct) /* 0x5426 */ /* For debugging only */
+#define TIOCSBRK _IO('T', 39) /* 0x5427 */ /* BSD compatibility */
+#define TIOCCBRK _IO('T', 40) /* 0x5428 */ /* BSD compatibility */
+#define TIOCGSID _IOR('T', 41, pid_t) /* 0x5429 */ /* Return the session ID of FD */
+#define TIOCGPTN _IOR('T',0x30, unsigned int) /* Get Pty Number (of pty-mux device) */
+#define TIOCSPTLCK _IOW('T',0x31, int) /* Lock/unlock Pty */
+
+#define TIOCSERCONFIG _IO('T', 83) /* 0x5453 */
+#define TIOCSERGWILD _IOR('T', 84, int) /* 0x5454 */
+#define TIOCSERSWILD _IOW('T', 85, int) /* 0x5455 */
+#define TIOCGLCKTRMIOS 0x5456
+#define TIOCSLCKTRMIOS 0x5457
+#define TIOCSERGSTRUCT _IOR('T', 88, struct async_struct) /* 0x5458 */ /* For debugging only */
+#define TIOCSERGETLSR _IOR('T', 89, unsigned int) /* 0x5459 */ /* Get line status register */
+ /* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+# define TIOCSER_TEMT 0x01 /* Transmitter physically empty */
+#define TIOCSERGETMULTI _IOR('T', 90, struct serial_multiport_struct) /* 0x545A */ /* Get multiport config */
+#define TIOCSERSETMULTI _IOW('T', 91, struct serial_multiport_struct) /* 0x545B */ /* Set multiport config */
+
+#define TIOCMIWAIT _IO('T', 92) /* 0x545C */ /* wait for a change on serial input line(s) */
+#define TIOCGICOUNT _IOR('T', 93, struct async_icount) /* 0x545D */ /* read serial port inline interrupt counts */
+
+#endif /* __ASM_SH64_IOCTLS_H */
--- /dev/null
+#ifndef __ASM_SH64_IPC_H
+#define __ASM_SH64_IPC_H
+
+#include <asm-sh/ipc.h>
+
+#endif /* __ASM_SH64_IPC_H */
--- /dev/null
+#ifndef __ASM_SH64_IPCBUF_H__
+#define __ASM_SH64_IPCBUF_H__
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ipcbuf.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * The ipc64_perm structure for the sh64 architecture (layout taken
+ * from the i386 version).
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 32-bit mode_t and seq
+ * - 2 miscellaneous 32-bit values
+ */
+
+struct ipc64_perm
+{
+ __kernel_key_t key;
+ __kernel_uid32_t uid;
+ __kernel_gid32_t gid;
+ __kernel_uid32_t cuid;
+ __kernel_gid32_t cgid;
+ __kernel_mode_t mode;
+ unsigned short __pad1;
+ unsigned short seq;
+ unsigned short __pad2;
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+#endif /* __ASM_SH64_IPCBUF_H__ */
--- /dev/null
+#ifndef __ASM_SH64_IRQ_H
+#define __ASM_SH64_IRQ_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/irq.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/config.h>
+
+/*
+ * Encoded IRQs are not considered worth supporting.
+ * The main reason is that there's no per-encoded-interrupt
+ * enable/disable mechanism (as there was in SH3/4).
+ * An all-enabled/all-disabled scheme is only worthwhile if
+ * there's a cascaded IC to disable/enable/ack on. Until such
+ * an IC is available there's no such support.
+ *
+ * Presumably encoded IRQs may use extra IRQs beyond 64,
+ * below. Some logic must be added to cope with IRQ_IRL?
+ * in an exclusive way.
+ *
+ * Priorities are set at platform level; when IRQ_IRL0-3
+ * are set to 0, encoding is allowed. Otherwise it's not
+ * allowed.
+ */
+
+/* Independent IRQs */
+#define IRQ_IRL0 0
+#define IRQ_IRL1 1
+#define IRQ_IRL2 2
+#define IRQ_IRL3 3
+
+#define IRQ_INTA 4
+#define IRQ_INTB 5
+#define IRQ_INTC 6
+#define IRQ_INTD 7
+
+#define IRQ_SERR 12
+#define IRQ_ERR 13
+#define IRQ_PWR3 14
+#define IRQ_PWR2 15
+#define IRQ_PWR1 16
+#define IRQ_PWR0 17
+
+#define IRQ_DMTE0 18
+#define IRQ_DMTE1 19
+#define IRQ_DMTE2 20
+#define IRQ_DMTE3 21
+#define IRQ_DAERR 22
+
+#define IRQ_TUNI0 32
+#define IRQ_TUNI1 33
+#define IRQ_TUNI2 34
+#define IRQ_TICPI2 35
+
+#define IRQ_ATI 36
+#define IRQ_PRI 37
+#define IRQ_CUI 38
+
+#define IRQ_ERI 39
+#define IRQ_RXI 40
+#define IRQ_BRI 41
+#define IRQ_TXI 42
+
+#define IRQ_ITI 63
+
+#define NR_INTC_IRQS 64
+
+#ifdef CONFIG_SH_CAYMAN
+#define NR_EXT_IRQS 32
+#define START_EXT_IRQS 64
+
+/* PCI bus 2 uses encoded external interrupts on the Cayman board */
+#define IRQ_P2INTA (START_EXT_IRQS + (3*8) + 0)
+#define IRQ_P2INTB (START_EXT_IRQS + (3*8) + 1)
+#define IRQ_P2INTC (START_EXT_IRQS + (3*8) + 2)
+#define IRQ_P2INTD (START_EXT_IRQS + (3*8) + 3)
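+/* e.g. IRQ_P2INTA = 64 + 3*8 + 0 = 88 (illustrative arithmetic, added
+   annotation, not in the original header) */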
+
+#define I8042_KBD_IRQ (START_EXT_IRQS + 2)
+#define I8042_AUX_IRQ (START_EXT_IRQS + 6)
+
+#else
+#define NR_EXT_IRQS 0
+#endif
+
+#define NR_IRQS (NR_INTC_IRQS+NR_EXT_IRQS)
+
+
+/* Default IRQs, fixed */
+#define TIMER_IRQ IRQ_TUNI0
+#define RTC_IRQ IRQ_CUI
+
+/* Default Priorities, Platform may choose differently */
+#define NO_PRIORITY 0 /* Disabled */
+#define TIMER_PRIORITY 2
+#define RTC_PRIORITY TIMER_PRIORITY
+#define SCIF_PRIORITY 3
+#define INTD_PRIORITY 3
+#define IRL3_PRIORITY 4
+#define INTC_PRIORITY 6
+#define IRL2_PRIORITY 7
+#define INTB_PRIORITY 9
+#define IRL1_PRIORITY 10
+#define INTA_PRIORITY 12
+#define IRL0_PRIORITY 13
+#define TOP_PRIORITY 15
+
+extern void disable_irq(unsigned int);
+extern void disable_irq_nosync(unsigned int);
+extern void enable_irq(unsigned int);
+
+extern int intc_evt_to_irq[(0xE20/0x20)+1];
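+/* Sizing note (added annotation): 0xE20/0x20 = 113, so the table holds
+   114 entries, apparently one per 0x20-spaced interrupt event code. */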
+int intc_irq_describe(char* p, int irq);
+
+#define irq_canonicalize(irq) (irq)
+
+#ifdef CONFIG_SH_CAYMAN
+int cayman_irq_demux(int evt);
+int cayman_irq_describe(char* p, int irq);
+#define irq_demux(x) cayman_irq_demux(x)
+#define irq_describe(p, x) cayman_irq_describe(p, x)
+#else
+#define irq_demux(x) (intc_evt_to_irq[x])
+#define irq_describe(p, x) intc_irq_describe(p, x)
+#endif
+
+/*
+ * Function for "on chip support modules".
+ */
+
+/*
+ * SH-5 supports Priority based interrupts only.
+ * Interrupt priorities are defined at platform level.
+ */
+#define set_ipr_data(a, b, c, d)
+#define make_ipr_irq(a)
+#define make_imask_irq(a)
+
+#endif /* __ASM_SH64_IRQ_H */
--- /dev/null
+/*
+ * linux/include/asm-shmedia/keyboard.h
+ *
+ * Copied from i386 version:
+ * Created 3 Nov 1996 by Geert Uytterhoeven
+ */
+
+/*
+ * This file contains the sh64 architecture specific keyboard definitions
+ */
+
+#ifndef __ASM_SH64_KEYBOARD_H
+#define __ASM_SH64_KEYBOARD_H
+
+#ifdef __KERNEL__
+
+#include <linux/kernel.h>
+#include <linux/ioport.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_SH_CAYMAN
+#define KEYBOARD_IRQ (START_EXT_IRQS + 2) /* SMSC SuperIO IRQ 1 */
+#endif
+#define DISABLE_KBD_DURING_INTERRUPTS 0
+
+extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
+extern int pckbd_getkeycode(unsigned int scancode);
+extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
+ char raw_mode);
+extern char pckbd_unexpected_up(unsigned char keycode);
+extern void pckbd_leds(unsigned char leds);
+extern void pckbd_init_hw(void);
+extern unsigned char pckbd_sysrq_xlate[128];
+
+#define kbd_setkeycode pckbd_setkeycode
+#define kbd_getkeycode pckbd_getkeycode
+#define kbd_translate pckbd_translate
+#define kbd_unexpected_up pckbd_unexpected_up
+#define kbd_leds pckbd_leds
+#define kbd_init_hw pckbd_init_hw
+#define kbd_sysrq_xlate pckbd_sysrq_xlate
+
+#define SYSRQ_KEY 0x54
+
+/* resource allocation */
+#define kbd_request_region()
+#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, \
+ "keyboard", NULL)
+
+/* How to access the keyboard macros on this platform. */
+#define kbd_read_input() inb(KBD_DATA_REG)
+#define kbd_read_status() inb(KBD_STATUS_REG)
+#define kbd_write_output(val) outb(val, KBD_DATA_REG)
+#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
+
+/* Some stoneage hardware needs delays after some operations. */
+#define kbd_pause() do { } while(0)
+
+/*
+ * Machine specific bits for the PS/2 driver
+ */
+
+#ifdef CONFIG_SH_CAYMAN
+#define AUX_IRQ (START_EXT_IRQS + 6) /* SMSC SuperIO IRQ12 */
+#endif
+
+#define aux_request_irq(hand, dev_id) \
+ request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS/2 Mouse", dev_id)
+
+#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SH64_KEYBOARD_H */
+
--- /dev/null
+#ifndef __ASM_SH64_KMAP_TYPES_H
+#define __ASM_SH64_KMAP_TYPES_H
+
+#include <asm-sh/kmap_types.h>
+
+#endif /* __ASM_SH64_KMAP_TYPES_H */
+
--- /dev/null
+#ifndef __ASM_SH64_LINKAGE_H
+#define __ASM_SH64_LINKAGE_H
+
+#include <asm-sh/linkage.h>
+
+#endif /* __ASM_SH64_LINKAGE_H */
+
--- /dev/null
+#ifndef __ASM_SH64_LOCAL_H
+#define __ASM_SH64_LOCAL_H
+
+#include <asm-generic/local.h>
+
+#endif /* __ASM_SH64_LOCAL_H */
+
--- /dev/null
+/*
+ * linux/include/asm-sh64/mc146818rtc.h
+ *
+*/
+
+/* For now, an empty place-holder to get IDE to compile. */
+
--- /dev/null
+#ifndef __ASM_SH64_MMAN_H
+#define __ASM_SH64_MMAN_H
+
+#include <asm-sh/mman.h>
+
+#endif /* __ASM_SH64_MMAN_H */
--- /dev/null
+#ifndef __MMU_H
+#define __MMU_H
+
+/* Default "unsigned long" context */
+typedef unsigned long mm_context_t;
+
+#endif
--- /dev/null
+#ifndef __ASM_SH64_MMU_CONTEXT_H
+#define __ASM_SH64_MMU_CONTEXT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/mmu_context.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * ASID handling idea taken from MIPS implementation.
+ *
+ */
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Cache of MMU context last used.
+ *
+ * The MMU "context" consists of two things:
+ * (a) TLB cache version (or cycle, top 24 bits of mmu_context_cache)
+ * (b) ASID (Address Space IDentifier, bottom 8 bits of mmu_context_cache)
+ */
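+/*
+ * Illustrative example (added annotation, not in the original header):
+ * with the masks defined below, a context value of 0x00000304 holds
+ * TLB version 0x300 (upper 24 bits, i.e. version 3) and ASID 0x04
+ * (lower 8 bits); when the ASID field wraps to zero, the version is
+ * bumped and all TLBs and caches are flushed.
+ */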
+extern unsigned long mmu_context_cache;
+
+#include <linux/config.h>
+#include <asm/page.h>
+
+
+/* Current mm's pgd */
+extern pgd_t *mmu_pdtp_cache;
+
+#define SR_ASID_MASK 0xffffffffff00ffffULL
+#define SR_ASID_SHIFT 16
+
+#define MMU_CONTEXT_ASID_MASK 0x000000ff
+#define MMU_CONTEXT_VERSION_MASK 0xffffff00
+#define MMU_CONTEXT_FIRST_VERSION 0x00000100
+#define NO_CONTEXT 0
+
+/* ASID is 8-bit value, so it can't be 0x100 */
+#define MMU_NO_ASID 0x100
+
+
+/*
+ * Virtual Page Number mask
+ */
+#define MMU_VPN_MASK 0xfffff000
+
+extern __inline__ void
+get_new_mmu_context(struct mm_struct *mm)
+{
+ extern void flush_tlb_all(void);
+ extern void flush_cache_all(void);
+
+ unsigned long mc = ++mmu_context_cache;
+
+ if (!(mc & MMU_CONTEXT_ASID_MASK)) {
+		/* We have exhausted the ASIDs of this version.
+		   Flush all TLBs and start a new cycle. */
+		flush_tlb_all();
+		/* We have to flush all caches as ASIDs are
+		   used in the caches. */
+		flush_cache_all();
+		/* Fix the version if needed.
+		   Note that we avoid version #0/asid #0 to distinguish NO_CONTEXT. */
+ if (!mc)
+ mmu_context_cache = mc = MMU_CONTEXT_FIRST_VERSION;
+ }
+ mm->context = mc;
+}
+
+/*
+ * Get MMU context if needed.
+ */
+static __inline__ void
+get_mmu_context(struct mm_struct *mm)
+{
+ if (mm) {
+ unsigned long mc = mmu_context_cache;
+		/* Check whether we have an old version of the context.
+		   If so, we need to get a new context with the new version. */
+ if ((mm->context ^ mc) & MMU_CONTEXT_VERSION_MASK)
+ get_new_mmu_context(mm);
+ }
+}
+
+/*
+ * Initialize the context related info for a new mm_struct
+ * instance.
+ */
+static inline int init_new_context(struct task_struct *tsk,
+ struct mm_struct *mm)
+{
+ mm->context = NO_CONTEXT;
+
+ return 0;
+}
+
+/*
+ * Destroy context related info for an mm_struct that is about
+ * to be put to rest.
+ */
+static inline void destroy_context(struct mm_struct *mm)
+{
+ extern void flush_tlb_mm(struct mm_struct *mm);
+
+ /* Well, at least free TLB entries */
+ flush_tlb_mm(mm);
+}
+
+#endif /* __ASSEMBLY__ */
+
+/* Common defines */
+#define TLB_STEP 0x00000010
+#define TLB_PTEH 0x00000000
+#define TLB_PTEL 0x00000008
+
+/* PTEH defines */
+#define PTEH_ASID_SHIFT 2
+#define PTEH_VALID 0x0000000000000001
+#define PTEH_SHARED 0x0000000000000002
+#define PTEH_MATCH_ASID 0x00000000000003ff
+
+#ifndef __ASSEMBLY__
+/* This has to be a common function because the "next location to
+ * fill" information is shared. */
+extern void __do_tlb_refill(unsigned long address, unsigned long long is_text_not_data, pte_t *pte);
+
+/* Profiling counter. */
+#ifdef CONFIG_SH64_PROC_TLB
+extern unsigned long long calls_to_do_fast_page_fault;
+#endif
+
+static inline unsigned long get_asid(void)
+{
+ unsigned long long sr;
+
+ asm volatile ("getcon " __SR ", %0\n\t"
+ : "=r" (sr));
+
+ sr = (sr >> SR_ASID_SHIFT) & MMU_CONTEXT_ASID_MASK;
+ return (unsigned long) sr;
+}
+
+/* Set ASID into SR */
+static inline void set_asid(unsigned long asid)
+{
+ unsigned long long sr, pc;
+
+ asm volatile ("getcon " __SR ", %0" : "=r" (sr));
+
+ sr = (sr & SR_ASID_MASK) | (asid << SR_ASID_SHIFT);
+
+ /*
+	 * It is possible that this function may be inlined, so to avoid
+	 * the assembler reporting duplicate symbols we make use of the gas
+	 * trick of generating symbols using numerics and forward references.
+ */
+ asm volatile ("movi 1, %1\n\t"
+ "shlli %1, 28, %1\n\t"
+ "or %0, %1, %1\n\t"
+ "putcon %1, " __SR "\n\t"
+ "putcon %0, " __SSR "\n\t"
+ "movi 1f, %1\n\t"
+ "ori %1, 1 , %1\n\t"
+ "putcon %1, " __SPC "\n\t"
+ "rte\n"
+ "1:\n\t"
+ : "=r" (sr), "=r" (pc) : "0" (sr));
+}
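+
+/* Explanatory note (added annotation): the new SR value is staged in
+   SSR and the continuation address in SPC, so the rte above installs
+   the new ASID and resumes at the local label 1: in one step. */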
+
+/*
+ * After we have set current->mm to a new value, this activates
+ * the context for the new mm so we see the new mappings.
+ */
+static __inline__ void activate_context(struct mm_struct *mm)
+{
+ get_mmu_context(mm);
+ set_asid(mm->context & MMU_CONTEXT_ASID_MASK);
+}
+
+
+static __inline__ void switch_mm(struct mm_struct *prev,
+ struct mm_struct *next,
+ struct task_struct *tsk)
+{
+ if (prev != next) {
+ mmu_pdtp_cache = next->pgd;
+ activate_context(next);
+ }
+}
+
+#define deactivate_mm(tsk,mm) do { } while (0)
+
+#define activate_mm(prev, next) \
+ switch_mm((prev),(next),NULL)
+
+static inline void
+enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
+{
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_SH64_MMU_CONTEXT_H */
--- /dev/null
+#ifndef __ASM_SH64_MODULE_H
+#define __ASM_SH64_MODULE_H
+/*
+ * This file contains the SH architecture specific module code.
+ */
+
+#define module_map(x) vmalloc(x)
+#define module_unmap(x) vfree(x)
+#define module_arch_init(x) (0)
+#define arch_init_modules(x) do { } while (0)
+
+#endif /* __ASM_SH64_MODULE_H */
--- /dev/null
+#ifndef __ASM_SH64_MSGBUF_H
+#define __ASM_SH64_MSGBUF_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/msgbuf.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * The msqid64_ds structure for the sh64 architecture (layout taken
+ * from the i386 version).
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 64-bit time_t to solve y2038 problem
+ * - 2 miscellaneous 32-bit values
+ */
+
+struct msqid64_ds {
+ struct ipc64_perm msg_perm;
+ __kernel_time_t msg_stime; /* last msgsnd time */
+ unsigned long __unused1;
+ __kernel_time_t msg_rtime; /* last msgrcv time */
+ unsigned long __unused2;
+ __kernel_time_t msg_ctime; /* last change time */
+ unsigned long __unused3;
+ unsigned long msg_cbytes; /* current number of bytes on queue */
+ unsigned long msg_qnum; /* number of messages in queue */
+ unsigned long msg_qbytes; /* max number of bytes on queue */
+ __kernel_pid_t msg_lspid; /* pid of last msgsnd */
+ __kernel_pid_t msg_lrpid; /* last receive pid */
+ unsigned long __unused4;
+ unsigned long __unused5;
+};
+
+#endif /* __ASM_SH64_MSGBUF_H */
--- /dev/null
+#ifndef __ASM_SH64_NAMEI_H
+#define __ASM_SH64_NAMEI_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/namei.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * Included from linux/fs/namei.c
+ *
+ */
+
+/* This dummy routine may be changed to something useful
+ * for /usr/gnemul/ emulation stuff.
+ * Look at asm-sparc/namei.h for details.
+ */
+
+#define __emul_prefix() NULL
+
+#endif /* __ASM_SH64_NAMEI_H */
--- /dev/null
+#ifndef __ASM_SH64_PAGE_H
+#define __ASM_SH64_PAGE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/page.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * benedict.gaster@superh.com 19th, 24th July 2002.
+ *
+ * Modified to take account of enabling D-CACHE support.
+ *
+ */
+
+#include <linux/config.h>
+
+/* PAGE_SHIFT determines the page size */
+#define PAGE_SHIFT 12
+#ifdef __ASSEMBLY__
+#define PAGE_SIZE 4096
+#else
+#define PAGE_SIZE (1UL << PAGE_SHIFT)
+#endif
+#define PAGE_MASK (~(PAGE_SIZE-1))
+#define PTE_MASK PAGE_MASK
+
+#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+#define HPAGE_SHIFT 16
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+#define HPAGE_SHIFT 20
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
+#define HPAGE_SHIFT 29
+#endif
+
+#ifdef CONFIG_HUGETLB_PAGE
+#define HPAGE_SIZE (1UL << HPAGE_SHIFT)
+#define HPAGE_MASK (~(HPAGE_SIZE-1))
+#define HUGETLB_PAGE_ORDER (HPAGE_SHIFT-PAGE_SHIFT)
+#endif
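+/* e.g. with CONFIG_HUGETLB_PAGE_SIZE_1MB (illustrative, added note):
+   HPAGE_SHIFT = 20, so HPAGE_SIZE = 1MB and
+   HUGETLB_PAGE_ORDER = 20 - 12 = 8. */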
+
+#ifdef __KERNEL__
+#ifndef __ASSEMBLY__
+
+extern struct page *mem_map;
+extern void sh64_page_clear(void *page);
+extern void sh64_page_copy(void *from, void *to);
+
+#define clear_page(page) sh64_page_clear(page)
+#define copy_page(to,from) sh64_page_copy(from, to)
+
+#if defined(CONFIG_DCACHE_DISABLED)
+
+#define clear_user_page(page, vaddr, pg) clear_page(page)
+#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
+
+#else
+
+extern void clear_user_page(void *to, unsigned long address, struct page *pg);
+extern void copy_user_page(void *to, void *from, unsigned long address, struct page *pg);
+
+#endif /* defined(CONFIG_DCACHE_DISABLED) */
+
+/*
+ * These are used to make use of C type-checking..
+ */
+typedef struct { unsigned long long pte; } pte_t;
+typedef struct { unsigned long pmd; } pmd_t;
+typedef struct { unsigned long pgd; } pgd_t;
+typedef struct { unsigned long pgprot; } pgprot_t;
+
+#define pte_val(x) ((x).pte)
+#define pmd_val(x) ((x).pmd)
+#define pgd_val(x) ((x).pgd)
+#define pgprot_val(x) ((x).pgprot)
+
+#define __pte(x) ((pte_t) { (x) } )
+#define __pmd(x) ((pmd_t) { (x) } )
+#define __pgd(x) ((pgd_t) { (x) } )
+#define __pgprot(x) ((pgprot_t) { (x) } )
+
+#endif /* !__ASSEMBLY__ */
+
+/* to align the pointer to the (next) page boundary */
+#define PAGE_ALIGN(addr) (((addr)+PAGE_SIZE-1)&PAGE_MASK)
+
+/*
+ * Kconfig defined.
+ */
+#define __MEMORY_START (CONFIG_MEMORY_START)
+#define PAGE_OFFSET (CONFIG_CACHED_MEMORY_OFFSET)
+
+#define __pa(x) ((unsigned long)(x)-PAGE_OFFSET)
+#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
+#define MAP_NR(addr) ((__pa(addr)-__MEMORY_START) >> PAGE_SHIFT)
+#define VALID_PAGE(page) ((page - mem_map) < max_mapnr)
+
+#define phys_to_page(phys) (mem_map + (((phys) - __MEMORY_START) >> PAGE_SHIFT))
+#define page_to_phys(page) (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
+
+/* PFN start number, because of __MEMORY_START */
+#define PFN_START (__MEMORY_START >> PAGE_SHIFT)
+
+#define pfn_to_page(pfn) (mem_map + (pfn) - PFN_START)
+#define page_to_pfn(page) ((unsigned long)((page) - mem_map) + PFN_START)
+#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
+#define pfn_valid(pfn) (((pfn) - PFN_START) < max_mapnr)
+#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+
+#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
+ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#ifndef __ASSEMBLY__
+
+/* Pure 2^n version of get_order */
+extern __inline__ int get_order(unsigned long size)
+{
+ int order;
+
+ size = (size-1) >> (PAGE_SHIFT-1);
+ order = -1;
+ do {
+ size >>= 1;
+ order++;
+ } while (size);
+ return order;
+}
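+
+/* Illustrative values (added annotation): get_order(PAGE_SIZE) == 0,
+   get_order(2*PAGE_SIZE) == 1, get_order(3*PAGE_SIZE) == 2
+   (sizes are rounded up to the next power of two). */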
+
+#endif
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_PAGE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/param.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+#ifndef __ASM_SH64_PARAM_H
+#define __ASM_SH64_PARAM_H
+
+#include <linux/config.h>
+
+#ifdef __KERNEL__
+# ifdef CONFIG_SH_WDT
+# define HZ 1000 /* Needed for high-res WOVF */
+# else
+# define HZ 100
+# endif
+# define USER_HZ 100 /* User interfaces are in "ticks" */
+# define CLOCKS_PER_SEC (USER_HZ) /* frequency at which times() counts */
+#endif
+
+#ifndef HZ
+#define HZ 100
+#endif
+
+#define EXEC_PAGESIZE 4096
+
+#ifndef NGROUPS
+#define NGROUPS 32
+#endif
+
+#ifndef NOGROUP
+#define NOGROUP (-1)
+#endif
+
+#define MAXHOSTNAMELEN 64 /* max length of hostname */
+
+#endif /* __ASM_SH64_PARAM_H */
--- /dev/null
+#ifndef __ASM_SH64_PCI_H
+#define __ASM_SH64_PCI_H
+
+#ifdef __KERNEL__
+
+#include <linux/dma-mapping.h>
+
+/* Can be used to override the logic in pci_scan_bus for skipping
+ already-configured bus numbers - to be used for buggy BIOSes
+ or architectures with incomplete PCI setup by the loader */
+
+#define pcibios_assign_all_busses() 1
+
+/*
+ * These are currently the correct values for the STM overdrive board.
+ * We need some way of setting this in a board-specific way, as it
+ * will likely not be the same on other boards.
+ */
+#if defined(CONFIG_CPU_SUBTYPE_SH5_101) || defined(CONFIG_CPU_SUBTYPE_SH5_103)
+#define PCIBIOS_MIN_IO 0x2000
+#define PCIBIOS_MIN_MEM 0x40000000
+#endif
+
+extern void pcibios_set_master(struct pci_dev *dev);
+
+/*
+ * Set penalize isa irq function
+ */
+static inline void pcibios_penalize_isa_irq(int irq)
+{
+ /* We don't do dynamic PCI IRQ allocation */
+}
+
+/* Dynamic DMA mapping stuff.
+ * SuperH has everything mapped statically like x86.
+ */
+
+/* The PCI address space does equal the physical memory
+ * address space. The networking and block device layers use
+ * this boolean for bounce buffer decisions.
+ */
+#define PCI_DMA_BUS_IS_PHYS (1)
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/scatterlist.h>
+#include <linux/string.h>
+#include <asm/io.h>
+
+/* pci_unmap_{single,page} being a nop depends upon the
+ * configuration.
+ */
+#ifdef CONFIG_SH_PCIDMA_NONCOHERENT
+#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME) \
+ dma_addr_t ADDR_NAME;
+#define DECLARE_PCI_UNMAP_LEN(LEN_NAME) \
+ __u32 LEN_NAME;
+#define pci_unmap_addr(PTR, ADDR_NAME) \
+ ((PTR)->ADDR_NAME)
+#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) \
+ (((PTR)->ADDR_NAME) = (VAL))
+#define pci_unmap_len(PTR, LEN_NAME) \
+ ((PTR)->LEN_NAME)
+#define pci_unmap_len_set(PTR, LEN_NAME, VAL) \
+ (((PTR)->LEN_NAME) = (VAL))
+#else
+#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)
+#define DECLARE_PCI_UNMAP_LEN(LEN_NAME)
+#define pci_unmap_addr(PTR, ADDR_NAME) (0)
+#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL) do { } while (0)
+#define pci_unmap_len(PTR, LEN_NAME) (0)
+#define pci_unmap_len_set(PTR, LEN_NAME, VAL) do { } while (0)
+#endif
+
+/* Not supporting more than 32-bit PCI bus addresses now, but
+ * must satisfy references to this function. Change if needed.
+ */
+#define pci_dac_dma_supported(pci_dev, mask) (0)
+
+/* These macros should be used after a pci_map_sg call has been done
+ * to get bus addresses of each of the SG entries and their lengths.
+ * You should only work with the number of sg entries pci_map_sg
+ * returns, or alternatively stop on the first sg_dma_len(sg) which
+ * is 0.
+ */
+#define sg_dma_address(sg) ((sg)->dma_address)
+#define sg_dma_len(sg) ((sg)->length)
+
+/* Board-specific fixup routines. */
+extern void pcibios_fixup(void);
+extern void pcibios_fixup_irqs(void);
+
+#ifdef CONFIG_PCI_AUTO
+extern int pciauto_assign_resources(int busno, struct pci_channel *hose);
+#endif
+
+static inline void pcibios_add_platform_entries(struct pci_dev *dev)
+{
+}
+
+#endif /* __KERNEL__ */
+
+/* generic pci stuff */
+#include <asm-generic/pci.h>
+
+/* generic DMA-mapping stuff */
+#include <asm-generic/pci-dma-compat.h>
+
+#endif /* __ASM_SH64_PCI_H */
+
--- /dev/null
+#ifndef __ASM_SH64_PERCPU
+#define __ASM_SH64_PERCPU
+
+#include <asm-generic/percpu.h>
+
+#endif /* __ASM_SH64_PERCPU */
--- /dev/null
+#ifndef __ASM_SH64_PGALLOC_H
+#define __ASM_SH64_PGALLOC_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/pgalloc.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ */
+
+#include <asm/processor.h>
+#include <linux/threads.h>
+#include <linux/mm.h>
+
+#define pgd_quicklist (current_cpu_data.pgd_quick)
+#define pmd_quicklist (current_cpu_data.pmd_quick)
+#define pte_quicklist (current_cpu_data.pte_quick)
+#define pgtable_cache_size (current_cpu_data.pgtable_cache_sz)
+
+static inline void pgd_init(unsigned long page)
+{
+ unsigned long *pgd = (unsigned long *)page;
+ extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
+ int i;
+
+ for (i = 0; i < USER_PTRS_PER_PGD; i++)
+ pgd[i] = (unsigned long)empty_bad_pte_table;
+}
+
+/*
+ * Allocate and free page tables. The xxx_kernel() versions are
+ * used to allocate a kernel page table - this turns on ASN bits
+ * if any.
+ */
+
+extern __inline__ pgd_t *get_pgd_slow(void)
+{
+ unsigned int pgd_size = (USER_PTRS_PER_PGD * sizeof(pgd_t));
+ pgd_t *ret = (pgd_t *)kmalloc(pgd_size, GFP_KERNEL);
+ return ret;
+}
+
+extern __inline__ pgd_t *get_pgd_fast(void)
+{
+ unsigned long *ret;
+
+ if ((ret = pgd_quicklist) != NULL) {
+ pgd_quicklist = (unsigned long *)(*ret);
+ ret[0] = 0;
+ pgtable_cache_size--;
+ } else
+ ret = (unsigned long *)get_pgd_slow();
+
+ if (ret) {
+ memset(ret, 0, USER_PTRS_PER_PGD * sizeof(pgd_t));
+ }
+ return (pgd_t *)ret;
+}
+
+extern __inline__ void free_pgd_fast(pgd_t *pgd)
+{
+ *(unsigned long *)pgd = (unsigned long) pgd_quicklist;
+ pgd_quicklist = (unsigned long *) pgd;
+ pgtable_cache_size++;
+}
+
+extern __inline__ void free_pgd_slow(pgd_t *pgd)
+{
+ kfree((void *)pgd);
+}
+
+extern pte_t *get_pte_slow(pmd_t *pmd, unsigned long address_preadjusted);
+extern pte_t *get_pte_kernel_slow(pmd_t *pmd, unsigned long address_preadjusted);
+
+extern __inline__ pte_t *get_pte_fast(void)
+{
+ unsigned long *ret;
+
+ if((ret = (unsigned long *)pte_quicklist) != NULL) {
+ pte_quicklist = (unsigned long *)(*ret);
+ ret[0] = ret[1];
+ pgtable_cache_size--;
+ }
+ return (pte_t *)ret;
+}
+
+extern __inline__ void free_pte_fast(pte_t *pte)
+{
+ *(unsigned long *)pte = (unsigned long) pte_quicklist;
+ pte_quicklist = (unsigned long *) pte;
+ pgtable_cache_size++;
+}
+
+static inline void pte_free_kernel(pte_t *pte)
+{
+ free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct page *pte)
+{
+ __free_page(pte);
+}
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+ unsigned long address)
+{
+ pte_t *pte;
+
+ pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT);
+ if (pte)
+ clear_page(pte);
+
+ return pte;
+}
+
+static inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address)
+{
+ struct page *pte;
+
+ pte = alloc_pages(GFP_KERNEL|__GFP_REPEAT, 0);
+ if (pte)
+ clear_page(page_address(pte));
+
+ return pte;
+}
+
+#define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte))
+
+/*
+ * allocating and freeing a pmd is trivial: the 1-entry pmd is
+ * inside the pgd, so has no extra memory associated with it.
+ */
+
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+
+#define pmd_alloc_one(mm, addr) ({ BUG(); ((pmd_t *)2); })
+#define pmd_free(x) do { } while (0)
+#define pgd_populate(mm, pmd, pte) BUG()
+#define __pmd_free_tlb(tlb,pmd) do { } while (0)
+
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+
+static __inline__ pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address)
+{
+ pmd_t *pmd;
+ pmd = (pmd_t *) __get_free_page(GFP_KERNEL|__GFP_REPEAT);
+ if (pmd)
+ clear_page(pmd);
+ return pmd;
+}
+
+static __inline__ void pmd_free(pmd_t *pmd)
+{
+ free_page((unsigned long) pmd);
+}
+
+#define pgd_populate(mm, pgd, pmd) pgd_set(pgd, pmd)
+#define __pmd_free_tlb(tlb,pmd) pmd_free(pmd)
+
+#else
+#error "No defined page table size"
+#endif
+
+#define check_pgt_cache() do { } while (0)
+#define pgd_free(pgd) free_pgd_slow(pgd)
+#define pgd_alloc(mm) get_pgd_fast()
+
+extern int do_check_pgt_cache(int, int);
+
+extern inline void set_pgdir(unsigned long address, pgd_t entry)
+{
+ struct task_struct * p;
+ pgd_t *pgd;
+
+ read_lock(&tasklist_lock);
+ for_each_process(p) {
+ if (!p->mm)
+ continue;
+ *pgd_offset(p->mm,address) = entry;
+ }
+ read_unlock(&tasklist_lock);
+ for (pgd = (pgd_t *)pgd_quicklist; pgd; pgd = (pgd_t *)*(unsigned long *)pgd)
+ pgd[address >> PGDIR_SHIFT] = entry;
+}
+
+#define pmd_populate_kernel(mm, pmd, pte) \
+ set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) (pte)))
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+ struct page *pte)
+{
+ set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) page_address (pte)));
+}
+
+#endif /* __ASM_SH64_PGALLOC_H */
--- /dev/null
+#ifndef __ASM_SH64_PGTABLE_H
+#define __ASM_SH64_PGTABLE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/pgtable.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ * This file contains the functions and defines necessary to modify and use
+ * the SuperH page table tree.
+ */
+
+#ifndef __ASSEMBLY__
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <linux/threads.h>
+#include <linux/config.h>
+
+extern void paging_init(void);
+
+/* We provide our own get_unmapped_area to avoid cache synonym issues */
+#define HAVE_ARCH_UNMAPPED_AREA
+
+/*
+ * Basically we have the same two-level (which is the logical three level
+ * Linux page table layout folded) page tables as the i386.
+ */
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern unsigned char empty_zero_page[PAGE_SIZE];
+#define ZERO_PAGE(vaddr) (mem_map + MAP_NR(empty_zero_page))
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * NEFF and NPHYS related defines.
+ * FIXME: These need to be model-dependent. For now this is OK: SH5-101
+ * and SH5-103 implement 32 effective bits and 32 physical bits, but
+ * future implementations may extend beyond this.
+ */
+#define NEFF 32
+#define NEFF_SIGN (1LL << (NEFF - 1))
+#define NEFF_MASK (-1LL << NEFF)
+
+#define NPHYS 32
+#define NPHYS_SIGN (1LL << (NPHYS - 1))
+#define NPHYS_MASK (-1LL << NPHYS)
+
+/* Typically 2-level is sufficient up to 32 bits of virtual address space;
+   beyond that, 3-level would be appropriate. */
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+/* For 4k pages, this contains 512 entries, i.e. 9 bits worth of address. */
+#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+#define PTE_MAGNITUDE 3 /* log2 of sizeof(unsigned long long) */
+#define PTE_SHIFT PAGE_SHIFT
+#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+
+/* top level: PGD. */
+#define PGDIR_SHIFT (PTE_SHIFT + PTE_BITS)
+#define PGD_BITS (NEFF - PGDIR_SHIFT)
+#define PTRS_PER_PGD (1<<PGD_BITS)
+
+/* middle level: PMD. This doesn't do anything for the 2-level case. */
+#define PTRS_PER_PMD (1)
+
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+#define PMD_SHIFT PGDIR_SHIFT
+#define PMD_SIZE PGDIR_SIZE
+#define PMD_MASK PGDIR_MASK
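+
+/* Worked numbers for 4kB pages (illustrative, added annotation):
+   PTRS_PER_PTE = 4096/8 = 512 (9 bits), so PGDIR_SHIFT = 21, each PGD
+   entry maps 2MB, and with NEFF = 32 the PGD holds 2^11 = 2048
+   entries. */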
+
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+/*
+ * three-level asymmetric paging structure: PGD is top level.
+ * The asymmetry comes from 32-bit pointers and 64-bit PTEs.
+ */
+/* bottom level: PTE. It's 9 bits = 512 pointers */
+#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+#define PTE_MAGNITUDE 3 /* log2 of sizeof(unsigned long long) */
+#define PTE_SHIFT PAGE_SHIFT
+#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+
+/* middle level: PMD. It's 10 bits = 1024 pointers */
+#define PTRS_PER_PMD ((1<<PAGE_SHIFT)/sizeof(unsigned long long *))
+#define PMD_MAGNITUDE 2 /* log2 of sizeof(unsigned long long *) */
+#define PMD_SHIFT (PTE_SHIFT + PTE_BITS)
+#define PMD_BITS (PAGE_SHIFT - PMD_MAGNITUDE)
+
+/* top level: PGD. It's 1 bit = 2 pointers */
+#define PGDIR_SHIFT (PMD_SHIFT + PMD_BITS)
+#define PGD_BITS (NEFF - PGDIR_SHIFT)
+#define PTRS_PER_PGD (1<<PGD_BITS)
+
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+#else
+#error "No defined number of page table levels"
+#endif
+
+/*
+ * Error outputs.
+ */
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/*
+ * Table setting routines. Used within arch/mm only.
+ */
+#define set_pgd(pgdptr, pgdval) (*(pgdptr) = pgdval)
+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+
+static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ unsigned long long x = ((unsigned long long) pteval.pte);
+ unsigned long long *xp = (unsigned long long *) pteptr;
+ /*
+ * Sign-extend based on NPHYS.
+ */
+ *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
+}
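+
+/* Illustrative (added annotation): with NPHYS = 32, a PTE value of
+   0x80000163 has NPHYS_SIGN set and is stored sign-extended as
+   0xffffffff80000163. */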
+
+static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
+{
+ pmd_val(*pmdp) = (unsigned long) ptep;
+}
+
+/*
+ * PGD defines. Top level.
+ */
+
+/* To find an entry in a generic PGD. */
+#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
+#define __pgd_offset(address) pgd_index(address)
+#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+
+/* To find an entry in a kernel PGD. */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/*
+ * PGD level access routines.
+ *
+ * Note 1:
+ * There's no need to use physical addresses since the tree walk is
+ * performed entirely in software, up until the PTE translation.
+ *
+ * Note 2:
+ * A PGD entry can be uninitialized (_PGD_UNUSED), generically bad,
+ * clear (_PGD_EMPTY), or present. When present, the lower 3 nibbles
+ * contain _KERNPG_TABLE, and since the entry is a kernel virtual
+ * pointer, bit 31 must also be 1. Taking an arbitrary clear value to
+ * be bit 31 set to 0 with the lower 3 nibbles set to 0xFFF (_PGD_EMPTY),
+ * any other value is a bad pgd that must be reported via printk().
+ *
+ */
+#define _PGD_EMPTY 0x0
+
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+#define pgd_present(pgd) ((pgd_val(pgd) & _PAGE_PRESENT) ? 1 : 0)
+#define pgd_clear(xx) do { } while(0)
+
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+#define pgd_present(pgd_entry) (1)
+#define pgd_none(pgd_entry) (pgd_val((pgd_entry)) == _PGD_EMPTY)
+/* TODO: Think later about what a useful definition of 'bad' would be now. */
+#define pgd_bad(pgd_entry) (0)
+#define pgd_clear(pgd_entry_p) (set_pgd((pgd_entry_p), __pgd(_PGD_EMPTY)))
+
+#endif
+
+
+#define pgd_page(pgd_entry) ((unsigned long) (pgd_val(pgd_entry) & PAGE_MASK))
+
+/*
+ * PMD defines. Middle level.
+ */
+
+/* PGD to PMD dereferencing */
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
+{
+ return (pmd_t *) dir;
+}
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+#define __pmd_offset(address) \
+ (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+#define pmd_offset(dir, addr) \
+ ((pmd_t *) ((pgd_val(*(dir))) & PAGE_MASK) + __pmd_offset((addr)))
+#endif
+
+/*
+ * PMD level access routines. Same notes as above.
+ */
+#define _PMD_EMPTY 0x0
+/* Either the PMD is empty or present, it's not paged out */
+#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
+#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
+#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
+#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+
+#define pmd_page_kernel(pmd_entry) \
+ ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
+
+#define pmd_page(pmd) \
+ (virt_to_page(pmd_val(pmd)))
+
+/* PMD to PTE dereferencing */
+#define pte_index(address) \
+ ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+#define pte_offset_kernel(dir, addr) \
+ ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
+
+#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
+#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
+#define pte_unmap(pte) do { } while (0)
+#define pte_unmap_nested(pte) do { } while (0)
+
+/* Round it up ! */
+#define USER_PTRS_PER_PGD ((TASK_SIZE+PGDIR_SIZE-1)/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
+
+#ifndef __ASSEMBLY__
+#define VMALLOC_END 0xff000000
+#define VMALLOC_START 0xf0000000
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+
+#define IOBASE_VADDR 0xff000000
+#define IOBASE_END 0xffffffff
+
+/*
+ * PTEL coherent flags.
+ * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
+ */
+/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
+ positions, to avoid expensive bit shuffling on every refill. The remaining
+ bits are used for s/w purposes and masked out on each refill.
+
+ Note, the PTE slots are used to hold data of type swp_entry_t when a page is
+ swapped out. Only the _PAGE_PRESENT flag is significant when the page is
+ swapped out, and it must be placed so that it doesn't overlap either the
+ type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
+ at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
+ scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
+ [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
+ into 2 pieces. That is handled by SWP_ENTRY and SWP_TYPE below. */
+#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
+#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
+#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
+#define _PAGE_PRESENT 0x004 /* software: page referenced */
+#define _PAGE_FILE 0x004 /* software: only when !present */
+#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
+#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
+#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
+#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
+#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
+#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
+#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
+#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
+#define _PAGE_ACCESSED 0x800 /* software: page referenced */
+
+/* Mask which drops software flags */
+#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
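+/* Sanity check (added annotation): the mask above clears exactly the
+   four software bits _PAGE_PRESENT (0x004), _PAGE_SHARED (0x020),
+   _PAGE_DIRTY (0x400) and _PAGE_ACCESSED (0x800): their sum is 0xc24,
+   and ~0xc24 over the low 16 bits is 0xf3db. */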
+/* Flags default: 4KB, Read, Not write, Not execute, Not user */
+#define _PAGE_FLAGS_HARDWARE_DEFAULT 0x0000000000000040LL
+
+/*
+ * HugeTLB support
+ */
+#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+#define _PAGE_SZHUGE (_PAGE_SIZE0)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+#define _PAGE_SZHUGE (_PAGE_SIZE1)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
+#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
+#endif
+
+/*
+ * Default flags for a Kernel page.
+ * This is fundamentally also SHARED because the main use of this define
+ * (other than for PGD/PMD entries) is for the VMALLOC pool which is
+ * contextless.
+ *
+ * _PAGE_EXECUTE is required for modules
+ *
+ */
+#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ _PAGE_EXECUTE | \
+ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
+ _PAGE_SHARED)
+
+/* Default flags for a User page */
+#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
+
+#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
+#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_USER | \
+ _PAGE_SHARED)
+/* We need to include _PAGE_EXECUTE in PAGE_COPY because it is the default
+ * protection mode for the stack. */
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+ _PAGE_ACCESSED | _PAGE_USER | _PAGE_EXECUTE)
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+ _PAGE_ACCESSED | _PAGE_USER)
+#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
+
+
+/*
+ * In ST50 we have full permissions (Read/Write/Execute/Shared).
+ * Just match'em all. These are for mmap(), therefore all at least
+ * User/Cachable/Present/Accessed. No point in making Fault on Write.
+ */
+#define __MMAP_COMMON (_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
+ /* sxwr */
+#define __P000 __pgprot(__MMAP_COMMON)
+#define __P001 __pgprot(__MMAP_COMMON | _PAGE_READ)
+#define __P010 __pgprot(__MMAP_COMMON)
+#define __P011 __pgprot(__MMAP_COMMON | _PAGE_READ)
+#define __P100 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+#define __P101 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+#define __P110 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+#define __P111 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+
+#define __S000 __pgprot(__MMAP_COMMON | _PAGE_SHARED)
+#define __S001 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ)
+#define __S010 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_WRITE)
+#define __S011 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ | _PAGE_WRITE)
+#define __S100 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE)
+#define __S101 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ)
+#define __S110 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_WRITE)
+#define __S111 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ | _PAGE_WRITE)
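+
+/* Illustrative note (added annotation): a PROT_READ|PROT_WRITE private
+   mapping uses __P011, which carries _PAGE_READ but not _PAGE_WRITE,
+   so the first store faults and copy-on-write can take place; the
+   shared variant __S011 does carry _PAGE_WRITE. */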
+
+/* Make it a device mapping for maximum safety (e.g. for mapping device
+ registers into user-space via /dev/map). */
+#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
+#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+
+/*
+ * Handling allocation failures during page table setup.
+ */
+extern void __handle_bad_pmd_kernel(pmd_t * pmd);
+#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
+
+/*
+ * PTE level access routines.
+ *
+ * Note1:
+ * It's the tree walk leaf. This is physical address to be stored.
+ *
+ * Note 2:
+ * Regarding the choice of _PTE_EMPTY:
+
+ We must choose a bit pattern that cannot be valid, whether or not the page
+ is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
+ out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
+ left for us to select. If we force bit[7]==0 when swapped out, we could use
+ the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
+ we force bit[7]==1 when swapped out, we can use all zeroes to indicate
+ empty. This is convenient, because the page tables get cleared to zero
+ when they are allocated.
+
+ */
+#define _PTE_EMPTY 0x0
+#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
+#define pte_clear(xp) (set_pte(xp, __pte(_PTE_EMPTY)))
+#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
+
+/*
+ * Some definitions to translate between mem_map, PTEs, and page
+ * addresses:
+ */
+
+/*
+ * Given a PTE, return the index of the mem_map[] entry corresponding
+ * to the page frame the PTE refers to. Take the absolute physical
+ * address, make it a relative physical address, and translate it to
+ * an index.
+ */
+#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
+ __MEMORY_START) >> PAGE_SHIFT)
+
+/*
+ * Given a PTE, return the "struct page *".
+ */
+#define pte_page(x) (mem_map + pte_pagenr(x))
+
+/*
+ * Return number of (down rounded) MB corresponding to x pages.
+ */
+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
+
+/*
+ * The following only have defined behavior if pte_present() is true.
+ */
+static inline int pte_read(pte_t pte) { return pte_val(pte) & _PAGE_READ; }
+static inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXECUTE; }
+static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
+static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
+static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+
+extern inline pte_t pte_rdprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_READ)); return pte; }
+extern inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
+extern inline pte_t pte_exprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_EXECUTE)); return pte; }
+extern inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
+extern inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
+
+extern inline pte_t pte_mkread(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_READ)); return pte; }
+extern inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
+extern inline pte_t pte_mkexec(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_EXECUTE)); return pte; }
+extern inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
+extern inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
+
+/*
+ * Conversion functions: convert a page and protection to a page entry.
+ *
+ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
+ */
+#define mk_pte(page,pgprot) \
+({ \
+ pte_t __pte; \
+ \
+ set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
+ __MEMORY_START | pgprot_val((pgprot)))); \
+ __pte; \
+})
+
+/*
+ * This takes an (absolute) physical page address that is used
+ * by the remapping functions
+ */
+#define mk_pte_phys(physpage, pgprot) \
+({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
+
+extern inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
+
+#define page_pte_prot(page, prot) mk_pte(page, prot)
+#define page_pte(page) page_pte_prot(page, __pgprot(0))
+
+typedef pte_t *pte_addr_t;
+
+extern void update_mmu_cache(struct vm_area_struct * vma,
+ unsigned long address, pte_t pte);
+
+/* Encode and decode a swap entry */
+#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
+#define __swp_offset(x) ((x).val >> 8)
+#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
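+
+/* Worked example (illustrative, added annotation): type 0x15 with
+   offset 0x100 encodes as
+   (0x100 << 8) + ((0x15 & 0x3c) << 1) + (0x15 & 3) = 0x10029; bit 2
+   (_PAGE_PRESENT) stays clear, and __swp_type()/__swp_offset() recover
+   0x15 and 0x100 from that value. */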
+
+/* Encode and decode a nonlinear file mapping entry */
+#define PTE_FILE_MAX_BITS 29
+#define pte_to_pgoff(pte) (pte_val(pte))
+#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+#define PageSkip(page) (0)
+#define kern_addr_valid(addr) (1)
+
+#define io_remap_page_range remap_page_range
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * No page table caches to initialise
+ */
+#define pgtable_cache_init() do { } while (0)
+
+#define pte_pfn(x) (((unsigned long)((x).pte)) >> PAGE_SHIFT)
+#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+#include <asm-generic/pgtable.h>
+
+#endif /* __ASM_SH64_PGTABLE_H */
--- /dev/null
+#ifndef __ASM_SH64_PLATFORM_H
+#define __ASM_SH64_PLATFORM_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/platform.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ */
+
+#include <linux/ioport.h>
+#include <asm/irq.h>
+
+
+/*
+ * Platform definition structure.
+ */
+struct sh64_platform {
+ unsigned int readonly_rootfs;
+ unsigned int ramdisk_flags;
+ unsigned int initial_root_dev;
+ unsigned int loader_type;
+ unsigned int initrd_start;
+ unsigned int initrd_size;
+ unsigned int fpu_flags;
+ unsigned int io_res_count;
+ unsigned int kram_res_count;
+ unsigned int xram_res_count;
+ unsigned int rom_res_count;
+ struct resource *io_res_p;
+ struct resource *kram_res_p;
+ struct resource *xram_res_p;
+ struct resource *rom_res_p;
+};
+
+extern struct sh64_platform platform_parms;
+
+extern unsigned long long memory_start, memory_end;
+
+extern unsigned long long fpu_in_use;
+
+extern int platform_int_priority[NR_INTC_IRQS];
+
+#define FPU_FLAGS (platform_parms.fpu_flags)
+#define STANDARD_IO_RESOURCES (platform_parms.io_res_count)
+#define STANDARD_KRAM_RESOURCES (platform_parms.kram_res_count)
+#define STANDARD_XRAM_RESOURCES (platform_parms.xram_res_count)
+#define STANDARD_ROM_RESOURCES (platform_parms.rom_res_count)
+
+/*
+ * Kernel Memory description, Respectively:
+ * code = last but one memory descriptor
+ * data = last memory descriptor
+ */
+#define code_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 2])
+#define data_resource (platform_parms.kram_res_p[STANDARD_KRAM_RESOURCES - 1])
+
+/* Be prepared for 64-bit sign extensions */
+#define PFN_UP(x) ((((x) + PAGE_SIZE-1) >> PAGE_SHIFT) & 0x000fffff)
+#define PFN_DOWN(x) (((x) >> PAGE_SHIFT) & 0x000fffff)
+#define PFN_PHYS(x) ((x) << PAGE_SHIFT)
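+
+/* Illustrative (added annotation): with 32-bit physical addresses and
+   4kB pages a PFN fits in 20 bits, so the 0x000fffff mask simply strips
+   any sign-extension bits picked up during 64-bit arithmetic. */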
+
+#endif /* __ASM_SH64_PLATFORM_H */
--- /dev/null
+#ifndef __ASM_SH64_POLL_H
+#define __ASM_SH64_POLL_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/poll.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/* These are specified by iBCS2 */
+#define POLLIN 0x0001
+#define POLLPRI 0x0002
+#define POLLOUT 0x0004
+#define POLLERR 0x0008
+#define POLLHUP 0x0010
+#define POLLNVAL 0x0020
+
+/* The rest seem to be more-or-less nonstandard. Check them! */
+#define POLLRDNORM 0x0040
+#define POLLRDBAND 0x0080
+#define POLLWRNORM 0x0100
+#define POLLWRBAND 0x0200
+#define POLLMSG 0x0400
+
+struct pollfd {
+ int fd;
+ short events;
+ short revents;
+};
+
+#endif /* __ASM_SH64_POLL_H */
--- /dev/null
+#ifndef __ASM_SH64_POSIX_TYPES_H
+#define __ASM_SH64_POSIX_TYPES_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/posix_types.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * This file is generally used by user-level software, so you need to
+ * be a little careful about namespace pollution etc. Also, we cannot
+ * assume GCC is being used.
+ */
+
+typedef unsigned long __kernel_ino_t;
+typedef unsigned short __kernel_mode_t;
+typedef unsigned short __kernel_nlink_t;
+typedef long __kernel_off_t;
+typedef int __kernel_pid_t;
+typedef unsigned short __kernel_ipc_pid_t;
+typedef unsigned short __kernel_uid_t;
+typedef unsigned short __kernel_gid_t;
+typedef long unsigned int __kernel_size_t;
+typedef int __kernel_ssize_t;
+typedef int __kernel_ptrdiff_t;
+typedef long __kernel_time_t;
+typedef long __kernel_suseconds_t;
+typedef long __kernel_clock_t;
+typedef int __kernel_timer_t;
+typedef int __kernel_clockid_t;
+typedef int __kernel_daddr_t;
+typedef char * __kernel_caddr_t;
+typedef unsigned short __kernel_uid16_t;
+typedef unsigned short __kernel_gid16_t;
+typedef unsigned int __kernel_uid32_t;
+typedef unsigned int __kernel_gid32_t;
+
+typedef unsigned short __kernel_old_uid_t;
+typedef unsigned short __kernel_old_gid_t;
+typedef unsigned short __kernel_old_dev_t;
+
+#ifdef __GNUC__
+typedef long long __kernel_loff_t;
+#endif
+
+typedef struct {
+#if defined(__KERNEL__) || defined(__USE_ALL)
+ int val[2];
+#else /* !defined(__KERNEL__) && !defined(__USE_ALL) */
+ int __val[2];
+#endif /* !defined(__KERNEL__) && !defined(__USE_ALL) */
+} __kernel_fsid_t;
+
+#if defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2)
+
+#undef __FD_SET
+static __inline__ void __FD_SET(unsigned long __fd, __kernel_fd_set *__fdsetp)
+{
+ unsigned long __tmp = __fd / __NFDBITS;
+ unsigned long __rem = __fd % __NFDBITS;
+ __fdsetp->fds_bits[__tmp] |= (1UL<<__rem);
+}
+
+#undef __FD_CLR
+static __inline__ void __FD_CLR(unsigned long __fd, __kernel_fd_set *__fdsetp)
+{
+ unsigned long __tmp = __fd / __NFDBITS;
+ unsigned long __rem = __fd % __NFDBITS;
+ __fdsetp->fds_bits[__tmp] &= ~(1UL<<__rem);
+}
+
+
+#undef __FD_ISSET
+static __inline__ int __FD_ISSET(unsigned long __fd, const __kernel_fd_set *__p)
+{
+ unsigned long __tmp = __fd / __NFDBITS;
+ unsigned long __rem = __fd % __NFDBITS;
+ return (__p->fds_bits[__tmp] & (1UL<<__rem)) != 0;
+}
+
+/*
+ * This will unroll the loop for the normal constant case (8 ints,
+ * for a 256-bit fd_set)
+ */
+#undef __FD_ZERO
+static __inline__ void __FD_ZERO(__kernel_fd_set *__p)
+{
+ unsigned long *__tmp = __p->fds_bits;
+ int __i;
+
+ if (__builtin_constant_p(__FDSET_LONGS)) {
+ switch (__FDSET_LONGS) {
+ case 16:
+ __tmp[ 0] = 0; __tmp[ 1] = 0;
+ __tmp[ 2] = 0; __tmp[ 3] = 0;
+ __tmp[ 4] = 0; __tmp[ 5] = 0;
+ __tmp[ 6] = 0; __tmp[ 7] = 0;
+ __tmp[ 8] = 0; __tmp[ 9] = 0;
+ __tmp[10] = 0; __tmp[11] = 0;
+ __tmp[12] = 0; __tmp[13] = 0;
+ __tmp[14] = 0; __tmp[15] = 0;
+ return;
+
+ case 8:
+ __tmp[ 0] = 0; __tmp[ 1] = 0;
+ __tmp[ 2] = 0; __tmp[ 3] = 0;
+ __tmp[ 4] = 0; __tmp[ 5] = 0;
+ __tmp[ 6] = 0; __tmp[ 7] = 0;
+ return;
+
+ case 4:
+ __tmp[ 0] = 0; __tmp[ 1] = 0;
+ __tmp[ 2] = 0; __tmp[ 3] = 0;
+ return;
+ }
+ }
+ __i = __FDSET_LONGS;
+ while (__i) {
+ __i--;
+ *__tmp = 0;
+ __tmp++;
+ }
+}
+
+#endif /* defined(__KERNEL__) || !defined(__GLIBC__) || (__GLIBC__ < 2) */
+
+#endif /* __ASM_SH64_POSIX_TYPES_H */
--- /dev/null
+#ifndef __ASM_SH64_PROCESSOR_H
+#define __ASM_SH64_PROCESSOR_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/processor.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ */
+
+#include <asm/page.h>
+
+#ifndef __ASSEMBLY__
+
+#include <asm/types.h>
+#include <asm/cache.h>
+#include <asm/registers.h>
+#include <linux/threads.h>
+
+/*
+ * Default implementation of macro that returns current
+ * instruction pointer ("program counter").
+ */
+#define current_text_addr() ({ \
+void *pc; \
+unsigned long long __dummy = 0; \
+__asm__("gettr tr0, %1\n\t" \
+ "pta 4, tr0\n\t" \
+ "gettr tr0, %0\n\t" \
+ "ptabs %1, tr0\n\t" \
+ :"=r" (pc), "=r" (__dummy) \
+ : "1" (__dummy)); \
+pc; })
+
+/*
+ * CPU type and hardware bug flags. Kept separately for each CPU.
+ */
+enum cpu_type {
+ CPU_SH5_101,
+ CPU_SH5_103,
+ CPU_SH_NONE
+};
+
+/*
+ * TLB information structure
+ *
+ * Defined for both I and D tlb, per-processor.
+ */
+struct tlb_info {
+ unsigned long long next;
+ unsigned long long first;
+ unsigned long long last;
+
+ unsigned int entries;
+ unsigned int step;
+
+ unsigned long flags;
+};
+
+struct sh_cpuinfo {
+ enum cpu_type type;
+ unsigned long loops_per_jiffy;
+
+ char hard_math;
+
+ unsigned long *pgd_quick;
+ unsigned long *pmd_quick;
+ unsigned long *pte_quick;
+ unsigned long pgtable_cache_sz;
+ unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+
+ /* Cache info */
+ struct cache_info icache;
+ struct cache_info dcache;
+
+ /* TLB info */
+ struct tlb_info itlb;
+ struct tlb_info dtlb;
+};
+
+extern struct sh_cpuinfo boot_cpu_data;
+
+#define cpu_data (&boot_cpu_data)
+#define current_cpu_data boot_cpu_data
+
+#endif
+
+/*
+ * User space process size: 2GB - 4k.
+ */
+#define TASK_SIZE 0x7ffff000UL
+
+/* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+#define TASK_UNMAPPED_BASE (TASK_SIZE / 3)
+
+/*
+ * Bit of SR register
+ *
+ * FD-bit:
+ * When set, the processor doesn't have the right to use the FPU,
+ * and an exception is raised when a floating-point operation is
+ * executed.
+ *
+ * IMASK-bit:
+ * Interrupt level mask
+ *
+ * STEP-bit:
+ * Single step bit
+ *
+ */
+#define SR_FD 0x00008000
+
+#if defined(CONFIG_SH64_SR_WATCH)
+#define SR_MMU 0x84000000
+#else
+#define SR_MMU 0x80000000
+#endif
+
+#define SR_IMASK 0x000000f0
+#define SR_SSTEP 0x08000000
+
+#ifndef __ASSEMBLY__
+
+/*
+ * FPU structure and data: require 8-byte alignment as we need to
+ * access it with fld.p, fst.p
+ */
+
+struct sh_fpu_hard_struct {
+ unsigned long fp_regs[64];
+ unsigned int fpscr;
+ /* long status; * software status information */
+};
+
+#if 0
+/* Dummy fpu emulator */
+struct sh_fpu_soft_struct {
+ unsigned long long fp_regs[32];
+ unsigned int fpscr;
+ unsigned char lookahead;
+ unsigned long entry_pc;
+};
+#endif
+
+union sh_fpu_union {
+ struct sh_fpu_hard_struct hard;
+ /* 'hard' itself only produces 32 bit alignment, yet we need
+ to access it using 64 bit load/store as well. */
+ unsigned long long alignment_dummy;
+};
+
+struct thread_struct {
+ unsigned long sp;
+ unsigned long pc;
+ /* This stores the address of the pt_regs built during a context
+ switch, or of the register save area built for a kernel mode
+ exception. It is used for backtracing the stack of a sleeping task
+ or one that traps in kernel mode. */
+ struct pt_regs *kregs;
+ /* This stores the address of the pt_regs constructed on entry from
+ user mode. It is a fixed value over the lifetime of a process, or
+ NULL for a kernel thread. */
+ struct pt_regs *uregs;
+
+ unsigned long trap_no, error_code;
+ unsigned long address;
+ /* Hardware debugging registers may come here */
+
+ /* floating point info */
+ union sh_fpu_union fpu;
+};
+
+#define INIT_MMAP \
+{ &init_mm, 0, 0, NULL, PAGE_SHARED, VM_READ | VM_WRITE | VM_EXEC, 1, NULL, NULL }
+
+extern struct pt_regs fake_swapper_regs;
+
+#define INIT_THREAD { \
+ .sp = sizeof(init_stack) + \
+ (long) &init_stack, \
+ .pc = 0, \
+ .kregs = &fake_swapper_regs, \
+ .uregs = NULL, \
+ .trap_no = 0, \
+ .error_code = 0, \
+ .address = 0, \
+ .fpu = { { { 0, } }, } \
+}
+
+/*
+ * Do necessary setup to start up a newly executed thread.
+ */
+#define SR_USER (SR_MMU | SR_FD)
+
+#define start_thread(regs, new_pc, new_sp) \
+ set_fs(USER_DS); \
+ regs->sr = SR_USER; /* User mode. */ \
+ regs->pc = new_pc - 4; /* Compensate syscall exit */ \
+ regs->pc |= 1; /* Set SHmedia ! */ \
+ regs->regs[18] = 0; \
+ regs->regs[15] = new_sp
+
+/* Forward declaration, a strange C thing */
+struct task_struct;
+struct mm_struct;
+
+/* Free all resources held by a thread. */
+extern void release_thread(struct task_struct *);
+/*
+ * create a kernel thread without removing it from tasklists
+ */
+extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
+
+/*
+ * Bus types
+ */
+#define MCA_bus 0
+#define MCA_bus__is_a_macro /* for versions in ksyms.c */
+
+
+/* Copy and release all segment info associated with a VM */
+#define copy_segments(p, mm) do { } while (0)
+#define release_segments(mm) do { } while (0)
+#define forget_segments() do { } while (0)
+#define prepare_to_copy(tsk) do { } while (0)
+/*
+ * FPU lazy state save handling.
+ */
+
+extern __inline__ void release_fpu(void)
+{
+ unsigned long long __dummy;
+
+ /* Set FD flag in SR */
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "or %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy)
+ : "r" (SR_FD));
+}
+
+extern __inline__ void grab_fpu(void)
+{
+ unsigned long long __dummy;
+
+ /* Clear out FD flag in SR */
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "and %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy)
+ : "r" (~SR_FD));
+}
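+
+/*
+ * Illustrative only (not part of this patch): a lazy FPU context switch
+ * could save the outgoing task's state and give the FPU up again with:
+ *
+ *	grab_fpu();
+ *	fpsave(&tsk->thread.fpu.hard);
+ *	release_fpu();
+ */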
+
+/* Round to nearest, no exceptions on inexact, overflow, underflow,
+ zero-divide, invalid. Configure option for whether to flush denorms to
+ zero, or except if a denorm is encountered. */
+#if defined(CONFIG_SH64_FPU_DENORM_FLUSH)
+#define FPSCR_INIT 0x00040000
+#else
+#define FPSCR_INIT 0x00000000
+#endif
+
+/* Save the current FP regs */
+void fpsave(struct sh_fpu_hard_struct *fpregs);
+
+/* Initialise the FP state of a task */
+void fpinit(struct sh_fpu_hard_struct *fpregs);
+
+extern struct task_struct *last_task_used_math;
+
+/*
+ * Return saved PC of a blocked thread.
+ */
+#define thread_saved_pc(tsk) (tsk->thread.pc)
+
+extern unsigned long get_wchan(struct task_struct *p);
+
+#define KSTK_EIP(tsk) ((tsk)->thread.pc)
+#define KSTK_ESP(tsk) ((tsk)->thread.sp)
+
+#define cpu_relax() do { } while (0)
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_SH64_PROCESSOR_H */
+
--- /dev/null
+#ifndef __ASM_SH64_PTRACE_H
+#define __ASM_SH64_PTRACE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ptrace.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * This struct defines the way the registers are stored on the
+ * kernel stack during a system call or other kernel entry.
+ */
+struct pt_regs {
+ unsigned long long pc;
+ unsigned long long sr;
+ unsigned long long syscall_nr;
+ unsigned long long regs[63];
+ unsigned long long tregs[8];
+ unsigned long long pad[2];
+};
+
+#ifdef __KERNEL__
+#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
+#define instruction_pointer(regs) ((regs)->pc)
+extern void show_regs(struct pt_regs *);
+#endif
+
+#define PTRACE_O_TRACESYSGOOD 0x00000001
+
+#endif /* __ASM_SH64_PTRACE_H */
--- /dev/null
+#ifndef __ASM_SH64_REGISTERS_H
+#define __ASM_SH64_REGISTERS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/registers.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2004 Richard Curnow
+ */
+
+#ifdef __ASSEMBLY__
+/* =====================================================================
+**
+** Section 1: acts on assembly sources pre-processed by GPP (<source.S>).
+** Assigns symbolic names to control & target registers.
+*/
+
+/*
+ * Define some useful aliases for control registers.
+ */
+#define SR cr0
+#define SSR cr1
+#define PSSR cr2
+ /* cr3 UNDEFINED */
+#define INTEVT cr4
+#define EXPEVT cr5
+#define PEXPEVT cr6
+#define TRA cr7
+#define SPC cr8
+#define PSPC cr9
+#define RESVEC cr10
+#define VBR cr11
+ /* cr12 UNDEFINED */
+#define TEA cr13
+ /* cr14-cr15 UNDEFINED */
+#define DCR cr16
+#define KCR0 cr17
+#define KCR1 cr18
+ /* cr19-cr31 UNDEFINED */
+ /* cr32-cr61 RESERVED */
+#define CTC cr62
+#define USR cr63
+
+/*
+ * ABI dependent registers (general purpose set)
+ */
+#define RET r2
+#define ARG1 r2
+#define ARG2 r3
+#define ARG3 r4
+#define ARG4 r5
+#define ARG5 r6
+#define ARG6 r7
+#define SP r15
+#define LINK r18
+#define ZERO r63
+
+/*
+ * Status register defines: used only by assembly sources (and
+ * syntax independent)
+ */
+#define SR_RESET_VAL 0x0000000050008000
+#define SR_HARMLESS 0x00000000500080f0 /* Write ignores for most */
+#define SR_ENABLE_FPU 0xffffffffffff7fff /* AND with this */
+
+#if defined (CONFIG_SH64_SR_WATCH)
+#define SR_ENABLE_MMU 0x0000000084000000 /* OR with this */
+#else
+#define SR_ENABLE_MMU 0x0000000080000000 /* OR with this */
+#endif
+
+#define SR_UNBLOCK_EXC 0xffffffffefffffff /* AND with this */
+#define SR_BLOCK_EXC 0x0000000010000000 /* OR with this */
+
+#else /* Not __ASSEMBLY__ syntax */
+
+/*
+** Stringify reg. name
+*/
+#define __str(x) #x
+
+/* Stringify control register names for use in inline assembly */
+#define __SR __str(SR)
+#define __SSR __str(SSR)
+#define __PSSR __str(PSSR)
+#define __INTEVT __str(INTEVT)
+#define __EXPEVT __str(EXPEVT)
+#define __PEXPEVT __str(PEXPEVT)
+#define __TRA __str(TRA)
+#define __SPC __str(SPC)
+#define __PSPC __str(PSPC)
+#define __RESVEC __str(RESVEC)
+#define __VBR __str(VBR)
+#define __TEA __str(TEA)
+#define __DCR __str(DCR)
+#define __KCR0 __str(KCR0)
+#define __KCR1 __str(KCR1)
+#define __CTC __str(CTC)
+#define __USR __str(USR)
+
+#endif /* __ASSEMBLY__ */
+#endif /* __ASM_SH64_REGISTERS_H */
--- /dev/null
+#ifndef __ASM_SH64_RESOURCE_H
+#define __ASM_SH64_RESOURCE_H
+
+#include <asm-sh/resource.h>
+
+#endif /* __ASM_SH64_RESOURCE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/scatterlist.h
+ *
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+#ifndef __ASM_SH64_SCATTERLIST_H
+#define __ASM_SH64_SCATTERLIST_H
+
+struct scatterlist {
+ struct page * page; /* Location for highmem page, if any */
+ unsigned int offset;/* for highmem, page offset */
+ dma_addr_t dma_address;
+ unsigned int length;
+};
+
+#define ISA_DMA_THRESHOLD (0xffffffff)
+
+#endif /* !__ASM_SH64_SCATTERLIST_H */
--- /dev/null
+#ifndef __ASM_SH64_SECTIONS_H
+#define __ASM_SH64_SECTIONS_H
+
+#include <asm-sh/sections.h>
+
+#endif /* __ASM_SH64_SECTIONS_H */
+
--- /dev/null
+#ifndef _ASM_SEGMENT_H
+#define _ASM_SEGMENT_H
+
+/* Only here because we have some old header files that expect it. */
+
+#endif /* _ASM_SEGMENT_H */
--- /dev/null
+#ifndef __ASM_SH64_SEMAPHORE_HELPER_H
+#define __ASM_SH64_SEMAPHORE_HELPER_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/semaphore-helper.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+#include <asm/errno.h>
+
+/*
+ * SMP- and interrupt-safe semaphores helper functions.
+ *
+ * (C) Copyright 1996 Linus Torvalds
+ * (C) Copyright 1999 Andrea Arcangeli
+ */
+
+/*
+ * These two _must_ execute atomically wrt each other.
+ *
+ * This is trivially done with load_locked/store_cond,
+ * which we have. Let the rest of the losers suck eggs.
+ */
+static __inline__ void wake_one_more(struct semaphore * sem)
+{
+ atomic_inc((atomic_t *)&sem->sleepers);
+}
+
+static __inline__ int waking_non_zero(struct semaphore *sem)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&semaphore_wake_lock, flags);
+ if (sem->sleepers > 0) {
+ sem->sleepers--;
+ ret = 1;
+ }
+ spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+ return ret;
+}
+
+/*
+ * waking_non_zero_interruptible:
+ * 1 got the lock
+ * 0 go to sleep
+ * -EINTR interrupted
+ *
+ * We must undo the sem->count down_interruptible() increment while we are
+ * protected by the spinlock in order to make atomic this atomic_inc() with the
+ * atomic_read() in wake_one_more(), otherwise we can race. -arca
+ */
+static __inline__ int waking_non_zero_interruptible(struct semaphore *sem,
+ struct task_struct *tsk)
+{
+ unsigned long flags;
+ int ret = 0;
+
+ spin_lock_irqsave(&semaphore_wake_lock, flags);
+ if (sem->sleepers > 0) {
+ sem->sleepers--;
+ ret = 1;
+ } else if (signal_pending(tsk)) {
+ atomic_inc(&sem->count);
+ ret = -EINTR;
+ }
+ spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+ return ret;
+}
+
+/*
+ * waking_non_zero_trylock:
+ * 1 failed to lock
+ * 0 got the lock
+ *
+ * We must undo the sem->count down_trylock() increment while we are
+ * protected by the spinlock in order to make atomic this atomic_inc() with the
+ * atomic_read() in wake_one_more(), otherwise we can race. -arca
+ */
+static __inline__ int waking_non_zero_trylock(struct semaphore *sem)
+{
+ unsigned long flags;
+ int ret = 1;
+
+ spin_lock_irqsave(&semaphore_wake_lock, flags);
+ if (sem->sleepers <= 0)
+ atomic_inc(&sem->count);
+ else {
+ sem->sleepers--;
+ ret = 0;
+ }
+ spin_unlock_irqrestore(&semaphore_wake_lock, flags);
+ return ret;
+}
+
+#endif /* __ASM_SH64_SEMAPHORE_HELPER_H */
--- /dev/null
+#ifndef __ASM_SH64_SEMAPHORE_H
+#define __ASM_SH64_SEMAPHORE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/semaphore.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * SMP- and interrupt-safe semaphores.
+ *
+ * (C) Copyright 1996 Linus Torvalds
+ *
+ * SuperH version by Niibe Yutaka
+ * (Currently no asm implementation but generic C code...)
+ *
+ */
+
+#include <linux/linkage.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/rwsem.h>
+
+#include <asm/system.h>
+#include <asm/atomic.h>
+
+struct semaphore {
+ atomic_t count;
+ int sleepers;
+ wait_queue_head_t wait;
+#ifdef WAITQUEUE_DEBUG
+ long __magic;
+#endif
+};
+
+#ifdef WAITQUEUE_DEBUG
+# define __SEM_DEBUG_INIT(name) \
+ , (int)&(name).__magic
+#else
+# define __SEM_DEBUG_INIT(name)
+#endif
+
+#define __SEMAPHORE_INITIALIZER(name,count) \
+{ ATOMIC_INIT(count), 0, __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
+ __SEM_DEBUG_INIT(name) }
+
+#define __MUTEX_INITIALIZER(name) \
+ __SEMAPHORE_INITIALIZER(name,1)
+
+#define __DECLARE_SEMAPHORE_GENERIC(name,count) \
+ struct semaphore name = __SEMAPHORE_INITIALIZER(name,count)
+
+#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name,1)
+#define DECLARE_MUTEX_LOCKED(name) __DECLARE_SEMAPHORE_GENERIC(name,0)
+
+static inline void sema_init (struct semaphore *sem, int val)
+{
+/*
+ * *sem = (struct semaphore)__SEMAPHORE_INITIALIZER((*sem),val);
+ *
+ * I'd rather use the more flexible initialization above, but sadly
+ * GCC 2.7.2.3 emits a bogus warning. EGCS doesn't. Oh well.
+ */
+ atomic_set(&sem->count, val);
+ sem->sleepers = 0;
+ init_waitqueue_head(&sem->wait);
+#ifdef WAITQUEUE_DEBUG
+ sem->__magic = (int)&sem->__magic;
+#endif
+}
+
+static inline void init_MUTEX (struct semaphore *sem)
+{
+ sema_init(sem, 1);
+}
+
+static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+{
+ sema_init(sem, 0);
+}
+
+#if 0
+asmlinkage void __down_failed(void /* special register calling convention */);
+asmlinkage int __down_failed_interruptible(void /* params in registers */);
+asmlinkage int __down_failed_trylock(void /* params in registers */);
+asmlinkage void __up_wakeup(void /* special register calling convention */);
+#endif
+
+asmlinkage void __down(struct semaphore * sem);
+asmlinkage int __down_interruptible(struct semaphore * sem);
+asmlinkage int __down_trylock(struct semaphore * sem);
+asmlinkage void __up(struct semaphore * sem);
+
+extern spinlock_t semaphore_wake_lock;
+
+static inline void down(struct semaphore * sem)
+{
+#ifdef WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+
+ if (atomic_dec_return(&sem->count) < 0)
+ __down(sem);
+}
+
+static inline int down_interruptible(struct semaphore * sem)
+{
+ int ret = 0;
+#ifdef WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_interruptible(sem);
+ return ret;
+}
+
+static inline int down_trylock(struct semaphore * sem)
+{
+ int ret = 0;
+#ifdef WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_trylock(sem);
+ return ret;
+}
+
+/*
+ * Note! This is subtle. We jump to wake people up only if
+ * the semaphore was negative (== somebody was waiting on it).
+ */
+static inline void up(struct semaphore * sem)
+{
+#ifdef WAITQUEUE_DEBUG
+ CHECK_MAGIC(sem->__magic);
+#endif
+ if (atomic_inc_return(&sem->count) <= 0)
+ __up(sem);
+}
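+
+/*
+ * Typical usage (illustrative only):
+ *
+ *	static DECLARE_MUTEX(my_sem);
+ *
+ *	down(&my_sem);
+ *	... critical section ...
+ *	up(&my_sem);
+ */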
+
+#endif /* __ASM_SH64_SEMAPHORE_H */
--- /dev/null
+#ifndef __ASM_SH64_SEMBUF_H
+#define __ASM_SH64_SEMBUF_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/sembuf.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * The semid64_ds structure for the sh64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 64-bit time_t to solve y2038 problem
+ * - 2 miscellaneous 32-bit values
+ */
+
+struct semid64_ds {
+ struct ipc64_perm sem_perm; /* permissions .. see ipc.h */
+ __kernel_time_t sem_otime; /* last semop time */
+ unsigned long __unused1;
+ __kernel_time_t sem_ctime; /* last change time */
+ unsigned long __unused2;
+ unsigned long sem_nsems; /* no. of semaphores in array */
+ unsigned long __unused3;
+ unsigned long __unused4;
+};
+
+#endif /* __ASM_SH64_SEMBUF_H */
--- /dev/null
+/*
+ * include/asm-sh64/serial.h
+ *
+ * Configuration details for 8250, 16450, 16550, etc. serial ports
+ */
+
+#ifndef _ASM_SERIAL_H
+#define _ASM_SERIAL_H
+
+/*
+ * This assumes you have a 1.8432 MHz clock for your UART.
+ *
+ * It'd be nice if someone built a serial card with a 24.576 MHz
+ * clock, since the 16550A is capable of handling a top speed of 1.5
+ * megabits/second; but this requires the faster clock.
+ */
+#define BASE_BAUD ( 1843200 / 16 )
+
+#define RS_TABLE_SIZE 2
+
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST)
+
+#define STD_SERIAL_PORT_DEFNS \
+ /* UART CLK PORT IRQ FLAGS */ \
+ { 0, BASE_BAUD, 0x3F8, 4, STD_COM_FLAGS }, /* ttyS0 */ \
+ { 0, BASE_BAUD, 0x2F8, 3, STD_COM_FLAGS } /* ttyS1 */
+
+#define SERIAL_PORT_DFNS STD_SERIAL_PORT_DEFNS
+
+/* XXX: This should be moved into irq.h */
+#define irq_cannonicalize(x) (x)
+
+#endif /* _ASM_SERIAL_H */
--- /dev/null
+#ifndef __ASM_SH64_SETUP_H
+#define __ASM_SH64_SETUP_H
+
+#define PARAM ((unsigned char *)empty_zero_page)
+#define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))
+#define RAMDISK_FLAGS (*(unsigned long *) (PARAM+0x004))
+#define ORIG_ROOT_DEV (*(unsigned long *) (PARAM+0x008))
+#define LOADER_TYPE (*(unsigned long *) (PARAM+0x00c))
+#define INITRD_START (*(unsigned long *) (PARAM+0x010))
+#define INITRD_SIZE (*(unsigned long *) (PARAM+0x014))
+
+#define COMMAND_LINE ((char *) (PARAM+256))
+#define COMMAND_LINE_SIZE 256
+
+#endif /* __ASM_SH64_SETUP_H */
+
--- /dev/null
+#ifndef __ASM_SH64_SHMBUF_H
+#define __ASM_SH64_SHMBUF_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/shmbuf.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * The shmid64_ds structure for the sh64 architecture.
+ * Note extra padding because this structure is passed back and forth
+ * between kernel and user space.
+ *
+ * Pad space is left for:
+ * - 64-bit time_t to solve y2038 problem
+ * - 2 miscellaneous 32-bit values
+ */
+
+struct shmid64_ds {
+ struct ipc64_perm shm_perm; /* operation perms */
+ size_t shm_segsz; /* size of segment (bytes) */
+ __kernel_time_t shm_atime; /* last attach time */
+ unsigned long __unused1;
+ __kernel_time_t shm_dtime; /* last detach time */
+ unsigned long __unused2;
+ __kernel_time_t shm_ctime; /* last change time */
+ unsigned long __unused3;
+ __kernel_pid_t shm_cpid; /* pid of creator */
+ __kernel_pid_t shm_lpid; /* pid of last operator */
+ unsigned long shm_nattch; /* no. of current attaches */
+ unsigned long __unused4;
+ unsigned long __unused5;
+};
+
+struct shminfo64 {
+ unsigned long shmmax;
+ unsigned long shmmin;
+ unsigned long shmmni;
+ unsigned long shmseg;
+ unsigned long shmall;
+ unsigned long __unused1;
+ unsigned long __unused2;
+ unsigned long __unused3;
+ unsigned long __unused4;
+};
+
+#endif /* __ASM_SH64_SHMBUF_H */
--- /dev/null
+#ifndef __ASM_SH64_SHMPARAM_H
+#define __ASM_SH64_SHMPARAM_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/shmparam.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <asm/cache.h>
+
+/* attach addr a multiple of this */
+#define SHMLBA (cpu_data->dcache.sets * L1_CACHE_BYTES)
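+
+/*
+ * For example, a D-cache with 256 sets of 32-byte lines would give
+ * SHMLBA = 8192, keeping shared attach addresses aligned so they index
+ * the same cache sets and avoid aliasing in a virtually indexed cache.
+ */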
+
+#endif /* __ASM_SH64_SHMPARAM_H */
--- /dev/null
+#ifndef __ASM_SH64_SIGCONTEXT_H
+#define __ASM_SH64_SIGCONTEXT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/sigcontext.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+struct sigcontext {
+ unsigned long oldmask;
+
+ /* CPU registers */
+ unsigned long long sc_regs[63];
+ unsigned long long sc_tregs[8];
+ unsigned long long sc_pc;
+ unsigned long long sc_sr;
+
+ /* FPU registers */
+ unsigned long long sc_fpregs[32];
+ unsigned int sc_fpscr;
+ unsigned int sc_fpvalid;
+};
+
+#endif /* __ASM_SH64_SIGCONTEXT_H */
--- /dev/null
+#ifndef __ASM_SH64_SIGINFO_H
+#define __ASM_SH64_SIGINFO_H
+
+#include <asm-generic/siginfo.h>
+
+#endif /* __ASM_SH64_SIGINFO_H */
--- /dev/null
+#ifndef __ASM_SH64_SIGNAL_H
+#define __ASM_SH64_SIGNAL_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/signal.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/types.h>
+#include <asm/processor.h>
+
+/* Avoid too many header ordering problems. */
+struct siginfo;
+
+#define _NSIG 64
+#define _NSIG_BPW 32
+#define _NSIG_WORDS (_NSIG / _NSIG_BPW)
+
+typedef unsigned long old_sigset_t; /* at least 32 bits */
+
+typedef struct {
+ unsigned long sig[_NSIG_WORDS];
+} sigset_t;
+
+#define SIGHUP 1
+#define SIGINT 2
+#define SIGQUIT 3
+#define SIGILL 4
+#define SIGTRAP 5
+#define SIGABRT 6
+#define SIGIOT 6
+#define SIGBUS 7
+#define SIGFPE 8
+#define SIGKILL 9
+#define SIGUSR1 10
+#define SIGSEGV 11
+#define SIGUSR2 12
+#define SIGPIPE 13
+#define SIGALRM 14
+#define SIGTERM 15
+#define SIGSTKFLT 16
+#define SIGCHLD 17
+#define SIGCONT 18
+#define SIGSTOP 19
+#define SIGTSTP 20
+#define SIGTTIN 21
+#define SIGTTOU 22
+#define SIGURG 23
+#define SIGXCPU 24
+#define SIGXFSZ 25
+#define SIGVTALRM 26
+#define SIGPROF 27
+#define SIGWINCH 28
+#define SIGIO 29
+#define SIGPOLL SIGIO
+/*
+#define SIGLOST 29
+*/
+#define SIGPWR 30
+#define SIGSYS 31
+#define SIGUNUSED 31
+
+/* These should not be considered constants from userland. */
+#define SIGRTMIN 32
+#define SIGRTMAX (_NSIG-1)
+
+/*
+ * SA_FLAGS values:
+ *
+ * SA_ONSTACK indicates that a registered stack_t will be used.
+ * SA_INTERRUPT is a no-op, but left due to historical reasons. Use the
+ * SA_RESTART flag to get restarting signals (which were the default long ago)
+ * SA_NOCLDSTOP flag to turn off SIGCHLD when children stop.
+ * SA_RESETHAND clears the handler when the signal is delivered.
+ * SA_NOCLDWAIT flag on SIGCHLD to inhibit zombies.
+ * SA_NODEFER prevents the current signal from being masked in the handler.
+ *
+ * SA_ONESHOT and SA_NOMASK are the historical Linux names for the Single
+ * Unix names RESETHAND and NODEFER respectively.
+ */
+#define SA_NOCLDSTOP 0x00000001
+#define SA_NOCLDWAIT 0x00000002 /* not supported yet */
+#define SA_SIGINFO 0x00000004
+#define SA_ONSTACK 0x08000000
+#define SA_RESTART 0x10000000
+#define SA_NODEFER 0x40000000
+#define SA_RESETHAND 0x80000000
+
+#define SA_NOMASK SA_NODEFER
+#define SA_ONESHOT SA_RESETHAND
+#define SA_INTERRUPT 0x20000000 /* dummy -- ignored */
+
+#define SA_RESTORER 0x04000000
+
+/*
+ * sigaltstack controls
+ */
+#define SS_ONSTACK 1
+#define SS_DISABLE 2
+
+#define MINSIGSTKSZ 2048
+#define SIGSTKSZ THREAD_SIZE
+
+#ifdef __KERNEL__
+
+/*
+ * These values of sa_flags are used only by the kernel as part of the
+ * irq handling routines.
+ *
+ * SA_INTERRUPT is also used by the irq handling routines.
+ * SA_SHIRQ is for shared interrupt support on PCI and EISA.
+ */
+#define SA_PROBE SA_ONESHOT
+#define SA_SAMPLE_RANDOM SA_RESTART
+#define SA_SHIRQ 0x04000000
+#endif
+
+#define SIG_BLOCK 0 /* for blocking signals */
+#define SIG_UNBLOCK 1 /* for unblocking signals */
+#define SIG_SETMASK 2 /* for setting the signal mask */
+
+/* Type of a signal handler. */
+typedef void (*__sighandler_t)(int);
+
+#define SIG_DFL ((__sighandler_t)0) /* default signal handling */
+#define SIG_IGN ((__sighandler_t)1) /* ignore signal */
+#define SIG_ERR ((__sighandler_t)-1) /* error return from signal */
+
+#ifdef __KERNEL__
+struct old_sigaction {
+ __sighandler_t sa_handler;
+ old_sigset_t sa_mask;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+};
+
+struct sigaction {
+ __sighandler_t sa_handler;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+ sigset_t sa_mask; /* mask last for extensibility */
+};
+
+struct k_sigaction {
+ struct sigaction sa;
+};
+#else
+/* Here we must cater to libcs that poke about in kernel headers. */
+
+struct sigaction {
+ union {
+ __sighandler_t _sa_handler;
+ void (*_sa_sigaction)(int, struct siginfo *, void *);
+ } _u;
+ sigset_t sa_mask;
+ unsigned long sa_flags;
+ void (*sa_restorer)(void);
+};
+
+#define sa_handler _u._sa_handler
+#define sa_sigaction _u._sa_sigaction
+
+#endif /* __KERNEL__ */
+
+typedef struct sigaltstack {
+ void *ss_sp;
+ int ss_flags;
+ size_t ss_size;
+} stack_t;
+
+#ifdef __KERNEL__
+#include <asm/sigcontext.h>
+
+#define sigmask(sig) (1UL << ((sig) - 1))
+#define ptrace_signal_deliver(regs, cookie) do { } while (0)
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_SIGNAL_H */
--- /dev/null
+#ifndef __ASM_SH64_SMP_H
+#define __ASM_SH64_SMP_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/smp.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#endif /* __ASM_SH64_SMP_H */
--- /dev/null
+#ifndef __ASM_SH64_SMPLOCK_H
+#define __ASM_SH64_SMPLOCK_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/smplock.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/config.h>
+
+#ifndef CONFIG_SMP
+
+#define lock_kernel() do { } while(0)
+#define unlock_kernel() do { } while(0)
+#define release_kernel_lock(task, cpu, depth) ((depth) = 1)
+#define reacquire_kernel_lock(task, cpu, depth) do { } while(0)
+
+#else
+
+#error "We do not support SMP on SH64 yet"
+/*
+ * Default SMP lock implementation
+ */
+
+#include <linux/interrupt.h>
+#include <asm/spinlock.h>
+
+extern spinlock_t kernel_flag;
+
+/*
+ * Getting the big kernel lock.
+ *
+ * This cannot happen asynchronously,
+ * so we only need to worry about other
+ * CPU's.
+ */
+extern __inline__ void lock_kernel(void)
+{
+ if (!++current->lock_depth)
+ spin_lock(&kernel_flag);
+}
+
+extern __inline__ void unlock_kernel(void)
+{
+ if (--current->lock_depth < 0)
+ spin_unlock(&kernel_flag);
+}
+
+/*
+ * Release global kernel lock and global interrupt lock
+ */
+#define release_kernel_lock(task, cpu) \
+do { \
+ if (task->lock_depth >= 0) \
+ spin_unlock(&kernel_flag); \
+ release_irqlock(cpu); \
+ __sti(); \
+} while (0)
+
+/*
+ * Re-acquire the kernel lock
+ */
+#define reacquire_kernel_lock(task) \
+do { \
+ if (task->lock_depth >= 0) \
+ spin_lock(&kernel_flag); \
+} while (0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* __ASM_SH64_SMPLOCK_H */
--- /dev/null
+#ifndef __ASM_SH64_SOCKET_H
+#define __ASM_SH64_SOCKET_H
+
+#include <asm-sh/socket.h>
+
+#endif /* __ASM_SH64_SOCKET_H */
--- /dev/null
+#ifndef __ASM_SH64_SOCKIOS_H
+#define __ASM_SH64_SOCKIOS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/sockios.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/* Socket-level I/O control calls. */
+#define FIOGETOWN _IOR('f', 123, int)
+#define FIOSETOWN _IOW('f', 124, int)
+
+#define SIOCATMARK _IOR('s', 7, int)
+#define SIOCSPGRP _IOW('s', 8, pid_t)
+#define SIOCGPGRP _IOR('s', 9, pid_t)
+
+#define SIOCGSTAMP _IOR('s', 100, struct timeval) /* Get stamp - linux-specific */
+#endif /* __ASM_SH64_SOCKIOS_H */
--- /dev/null
+#ifndef __ASM_SH64_SOFTIRQ_H
+#define __ASM_SH64_SOFTIRQ_H
+
+#include <asm/atomic.h>
+#include <asm/hardirq.h>
+
+#define local_bh_disable() \
+do { \
+ local_bh_count(smp_processor_id())++; \
+ barrier(); \
+} while (0)
+
+#define __local_bh_enable() \
+do { \
+ barrier(); \
+ local_bh_count(smp_processor_id())--; \
+} while (0)
+
+#define local_bh_enable() \
+do { \
+ barrier(); \
+ if (!--local_bh_count(smp_processor_id()) \
+ && softirq_pending(smp_processor_id())) { \
+ do_softirq(); \
+ } \
+} while (0)
+
+#define in_softirq() (local_bh_count(smp_processor_id()) != 0)
+
+#endif /* __ASM_SH64_SOFTIRQ_H */
--- /dev/null
+#ifndef __ASM_SH64_SPINLOCK_H
+#define __ASM_SH64_SPINLOCK_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/spinlock.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#error "No SMP on SH64"
+
+#endif /* __ASM_SH64_SPINLOCK_H */
--- /dev/null
+#ifndef __ASM_SH64_STAT_H
+#define __ASM_SH64_STAT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/stat.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+struct __old_kernel_stat {
+ unsigned short st_dev;
+ unsigned short st_ino;
+ unsigned short st_mode;
+ unsigned short st_nlink;
+ unsigned short st_uid;
+ unsigned short st_gid;
+ unsigned short st_rdev;
+ unsigned long st_size;
+ unsigned long st_atime;
+ unsigned long st_mtime;
+ unsigned long st_ctime;
+};
+
+struct stat {
+ unsigned short st_dev;
+ unsigned short __pad1;
+ unsigned long st_ino;
+ unsigned short st_mode;
+ unsigned short st_nlink;
+ unsigned short st_uid;
+ unsigned short st_gid;
+ unsigned short st_rdev;
+ unsigned short __pad2;
+ unsigned long st_size;
+ unsigned long st_blksize;
+ unsigned long st_blocks;
+ unsigned long st_atime;
+ unsigned long st_atime_nsec;
+ unsigned long st_mtime;
+ unsigned long st_mtime_nsec;
+ unsigned long st_ctime;
+ unsigned long st_ctime_nsec;
+ unsigned long __unused4;
+ unsigned long __unused5;
+};
+
+/* This matches struct stat64 in glibc2.1, hence the absolutely
+ * insane amounts of padding around dev_t's.
+ */
+struct stat64 {
+ unsigned short st_dev;
+ unsigned char __pad0[10];
+
+ unsigned long st_ino;
+ unsigned int st_mode;
+ unsigned int st_nlink;
+
+ unsigned long st_uid;
+ unsigned long st_gid;
+
+ unsigned short st_rdev;
+ unsigned char __pad3[10];
+
+ long long st_size;
+ unsigned long st_blksize;
+
+ unsigned long st_blocks; /* Number 512-byte blocks allocated. */
+ unsigned long __pad4; /* future possible st_blocks high bits */
+
+ unsigned long st_atime;
+ unsigned long st_atime_nsec;
+
+ unsigned long st_mtime;
+ unsigned long st_mtime_nsec;
+
+ unsigned long st_ctime;
+ unsigned long st_ctime_nsec; /* will be high 32 bits of ctime someday */
+
+ unsigned long __unused1;
+ unsigned long __unused2;
+};
+
+#endif /* __ASM_SH64_STAT_H */
--- /dev/null
+#ifndef __ASM_SH64_STATFS_H
+#define __ASM_SH64_STATFS_H
+
+#include <asm-generic/statfs.h>
+
+#endif /* __ASM_SH64_STATFS_H */
--- /dev/null
+#ifndef __ASM_SH64_STRING_H
+#define __ASM_SH64_STRING_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/string.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * Mostly empty on purpose: SH64-specific asm string libs are out of the
+ * current project scope, so only memcpy is provided below.
+ *
+ */
+
+#define __HAVE_ARCH_MEMCPY
+
+extern void *memcpy(void *dest, const void *src, size_t count);
+
+#endif
--- /dev/null
+#ifndef __ASM_SH64_SYSTEM_H
+#define __ASM_SH64_SYSTEM_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/system.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <asm/registers.h>
+#include <asm/processor.h>
+
+/*
+ * switch_to() switches execution from task prev to task next.
+ */
+
+typedef struct {
+ unsigned long seg;
+} mm_segment_t;
+
+extern struct task_struct *sh64_switch_to(struct task_struct *prev,
+ struct thread_struct *prev_thread,
+ struct task_struct *next,
+ struct thread_struct *next_thread);
+
+#define switch_to(prev,next,last) \
+ do {\
+ if (last_task_used_math != next) {\
+ struct pt_regs *regs = next->thread.uregs;\
+ if (regs) regs->sr |= SR_FD;\
+ }\
+ last = sh64_switch_to(prev, &prev->thread, next, &next->thread);\
+ } while(0)
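+
+/*
+ * Note: if @next was not the last task to use the FPU, its saved SR gets
+ * SR_FD set above, so its first floating-point instruction will trap and
+ * the FPU state can be restored lazily at that point.
+ */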
+
+#define nop() __asm__ __volatile__ ("nop")
+
+#define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+
+#define tas(ptr) (xchg((ptr), 1))
+
+extern void __xchg_called_with_bad_pointer(void);
+
+#define mb() __asm__ __volatile__ ("synco": : :"memory")
+#define rmb() mb()
+#define wmb() __asm__ __volatile__ ("synco": : :"memory")
+#define read_barrier_depends() do { } while (0)
+
+#ifdef CONFIG_SMP
+#define smp_mb() mb()
+#define smp_rmb() rmb()
+#define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
+#else
+#define smp_mb() barrier()
+#define smp_rmb() barrier()
+#define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while (0)
+#endif /* CONFIG_SMP */
+
+#define set_rmb(var, value) do { xchg(&var, value); } while (0)
+#define set_mb(var, value) set_rmb(var, value)
+#define set_wmb(var, value) do { var = value; wmb(); } while (0)
+
+/* Interrupt Control */
+#ifndef HARD_CLI
+#define SR_MASK_L 0x000000f0L
+#define SR_MASK_LL 0x00000000000000f0LL
+#else
+#define SR_MASK_L 0x10000000L
+#define SR_MASK_LL 0x0000000010000000LL
+#endif
+
+static __inline__ void local_irq_enable(void)
+{
+ /* cli/sti based on SR.BL */
+ unsigned long long __dummy0, __dummy1=~SR_MASK_LL;
+
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "and %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy0)
+ : "r" (__dummy1));
+}
+
+static __inline__ void local_irq_disable(void)
+{
+ /* cli/sti based on SR.BL */
+ unsigned long long __dummy0, __dummy1=SR_MASK_LL;
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "or %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy0)
+ : "r" (__dummy1));
+}
+
+#define local_save_flags(x) \
+(__extension__ ({ unsigned long long __dummy=SR_MASK_LL; \
+ __asm__ __volatile__( \
+ "getcon " __SR ", %0\n\t" \
+ "and %0, %1, %0" \
+ : "=&r" (x) \
+ : "r" (__dummy));}))
+
+#define local_irq_save(x) \
+(__extension__ ({ unsigned long long __d2=SR_MASK_LL, __d1; \
+ __asm__ __volatile__( \
+ "getcon " __SR ", %1\n\t" \
+ "or %1, r63, %0\n\t" \
+ "or %1, %2, %1\n\t" \
+ "putcon %1, " __SR "\n\t" \
+ "and %0, %2, %0" \
+ : "=&r" (x), "=&r" (__d1) \
+ : "r" (__d2));}));
+
+#define local_irq_restore(x) do { \
+ if ( ((x) & SR_MASK_L) == 0 ) /* dropping to 0 ? */ \
+ local_irq_enable(); /* yes...re-enable */ \
+} while (0)
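+
+/*
+ * Typical usage (illustrative only):
+ *
+ *	unsigned long flags;
+ *
+ *	local_irq_save(flags);
+ *	... critical section ...
+ *	local_irq_restore(flags);
+ */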
+
+#define irqs_disabled() \
+({ \
+ unsigned long flags; \
+ local_save_flags(flags); \
+ (flags != 0); \
+})
+
+extern __inline__ unsigned long xchg_u32(volatile int * m, unsigned long val)
+{
+ unsigned long flags, retval;
+
+ local_irq_save(flags);
+ retval = *m;
+ *m = val;
+ local_irq_restore(flags);
+ return retval;
+}
+
+extern __inline__ unsigned long xchg_u8(volatile unsigned char * m, unsigned long val)
+{
+ unsigned long flags, retval;
+
+ local_irq_save(flags);
+ retval = *m;
+ *m = val & 0xff;
+ local_irq_restore(flags);
+ return retval;
+}
+
+static __inline__ unsigned long __xchg(unsigned long x, volatile void * ptr, int size)
+{
+ switch (size) {
+ case 4:
+ return xchg_u32(ptr, x);
+ break;
+ case 1:
+ return xchg_u8(ptr, x);
+ break;
+ }
+ __xchg_called_with_bad_pointer();
+ return x;
+}
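+
+/*
+ * Illustrative only: xchg() can be used as an atomic test-and-set, which
+ * is exactly what tas() expands to:
+ *
+ *	if (xchg(&lock_var, 1) == 0)
+ *		... we got the lock ...
+ */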
+
+/* XXX
+ * disable hlt during certain critical i/o operations
+ */
+#define HAVE_DISABLE_HLT
+void disable_hlt(void);
+void enable_hlt(void);
+
+#ifdef CONFIG_SH_ALPHANUMERIC
+/* This is only used for debugging. */
+extern void print_seg(char *file,int line);
+#define PLS() print_seg(__FILE__,__LINE__)
+#else /* CONFIG_SH_ALPHANUMERIC */
+#define PLS()
+#endif /* CONFIG_SH_ALPHANUMERIC */
+
+#define PL() printk("@ <%s,%s:%d>\n",__FILE__,__FUNCTION__,__LINE__)
+
+#endif /* __ASM_SH64_SYSTEM_H */
--- /dev/null
+#ifndef __ASM_SH64_TERMBITS_H
+#define __ASM_SH64_TERMBITS_H
+
+#include <asm-sh/termbits.h>
+
+#endif /* __ASM_SH64_TERMBITS_H */
--- /dev/null
+#ifndef __ASM_SH64_TERMIOS_H
+#define __ASM_SH64_TERMIOS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/termios.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <asm/termbits.h>
+#include <asm/ioctls.h>
+
+struct winsize {
+ unsigned short ws_row;
+ unsigned short ws_col;
+ unsigned short ws_xpixel;
+ unsigned short ws_ypixel;
+};
+
+#define NCC 8
+struct termio {
+ unsigned short c_iflag; /* input mode flags */
+ unsigned short c_oflag; /* output mode flags */
+ unsigned short c_cflag; /* control mode flags */
+ unsigned short c_lflag; /* local mode flags */
+ unsigned char c_line; /* line discipline */
+ unsigned char c_cc[NCC]; /* control characters */
+};
+
+/* modem lines */
+#define TIOCM_LE 0x001
+#define TIOCM_DTR 0x002
+#define TIOCM_RTS 0x004
+#define TIOCM_ST 0x008
+#define TIOCM_SR 0x010
+#define TIOCM_CTS 0x020
+#define TIOCM_CAR 0x040
+#define TIOCM_RNG 0x080
+#define TIOCM_DSR 0x100
+#define TIOCM_CD TIOCM_CAR
+#define TIOCM_RI TIOCM_RNG
+#define TIOCM_OUT1 0x2000
+#define TIOCM_OUT2 0x4000
+#define TIOCM_LOOP 0x8000
+
+/* ioctl (fd, TIOCSERGETLSR, &result) where result may be as below */
+
+/* line disciplines */
+#define N_TTY 0
+#define N_SLIP 1
+#define N_MOUSE 2
+#define N_PPP 3
+#define N_STRIP 4
+#define N_AX25 5
+#define N_X25 6 /* X.25 async */
+#define N_6PACK 7
+#define N_MASC 8 /* Reserved for Mobitex module <kaz@cafe.net> */
+#define N_R3964 9 /* Reserved for Simatic R3964 module */
+#define N_PROFIBUS_FDL 10 /* Reserved for Profibus <Dave@mvhi.com> */
+#define N_IRDA 11 /* Linux IR - http://www.cs.uit.no/~dagb/irda/irda.html */
+#define N_SMSBLOCK 12 /* SMS block mode - for talking to GSM data cards about SMS messages */
+#define N_HDLC 13 /* synchronous HDLC */
+#define N_SYNC_PPP 14
+#define N_HCI 15 /* Bluetooth HCI UART */
+
+#ifdef __KERNEL__
+
+/* intr=^C quit=^\ erase=del kill=^U
+ eof=^D vtime=\0 vmin=\1 sxtc=\0
+ start=^Q stop=^S susp=^Z eol=\0
+ reprint=^R discard=^U werase=^W lnext=^V
+ eol2=\0
+*/
+#define INIT_C_CC "\003\034\177\025\004\0\1\0\021\023\032\0\022\017\027\026\0"
+
+/*
+ * Translate a "termio" structure into a "termios". Ugh.
+ */
+#define SET_LOW_TERMIOS_BITS(termios, termio, x) { \
+ unsigned short __tmp; \
+ get_user(__tmp,&(termio)->x); \
+ *(unsigned short *) &(termios)->x = __tmp; \
+}
+
+#define user_termio_to_kernel_termios(termios, termio) \
+({ \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_iflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_oflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_cflag); \
+ SET_LOW_TERMIOS_BITS(termios, termio, c_lflag); \
+ copy_from_user((termios)->c_cc, (termio)->c_cc, NCC); \
+})
+
+/*
+ * Translate a "termios" structure into a "termio". Ugh.
+ */
+#define kernel_termios_to_user_termio(termio, termios) \
+({ \
+ put_user((termios)->c_iflag, &(termio)->c_iflag); \
+ put_user((termios)->c_oflag, &(termio)->c_oflag); \
+ put_user((termios)->c_cflag, &(termio)->c_cflag); \
+ put_user((termios)->c_lflag, &(termio)->c_lflag); \
+ put_user((termios)->c_line, &(termio)->c_line); \
+ copy_to_user((termio)->c_cc, (termios)->c_cc, NCC); \
+})
+
+#define user_termios_to_kernel_termios(k, u) copy_from_user(k, u, sizeof(struct termios))
+#define kernel_termios_to_user_termios(u, k) copy_to_user(u, k, sizeof(struct termios))
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_TERMIOS_H */
--- /dev/null
+#ifndef __ASM_SH64_THREAD_INFO_H
+#define __ASM_SH64_THREAD_INFO_H
+
+/*
+ * SuperH 5 version
+ * Copyright (C) 2003 Paul Mundt
+ */
+
+#ifdef __KERNEL__
+
+#ifndef __ASSEMBLY__
+#include <asm/registers.h>
+
+/*
+ * low level task data that entry.S needs immediate access to
+ * - this struct should fit entirely inside of one cache line
+ * - this struct shares the supervisor stack pages
+ * - if the contents of this structure are changed, the assembly constants must also be changed
+ */
+struct thread_info {
+ struct task_struct *task; /* main task structure */
+ struct exec_domain *exec_domain; /* execution domain */
+ unsigned long flags; /* low level flags */
+ /* Put the 4 32-bit fields together to make asm offsetting easier. */
+ __s32 preempt_count; /* 0 => preemptable, <0 => BUG */
+ __u16 cpu;
+
+ mm_segment_t addr_limit;
+ struct restart_block restart_block;
+
+ __u8 supervisor_stack[0];
+};
+
+/*
+ * macros/functions for gaining access to the thread information structure
+ */
+#define INIT_THREAD_INFO(tsk) \
+{ \
+ .task = &tsk, \
+ .exec_domain = &default_exec_domain, \
+ .flags = 0, \
+ .cpu = 0, \
+ .preempt_count = 1, \
+ .addr_limit = KERNEL_DS, \
+ .restart_block = { \
+ .fn = do_no_restart_syscall, \
+ }, \
+}
+
+#define init_thread_info (init_thread_union.thread_info)
+#define init_stack (init_thread_union.stack)
+
+/* how to get the thread information struct from C */
+static inline struct thread_info *current_thread_info(void)
+{
+ struct thread_info *ti;
+
+ __asm__ __volatile__ ("getcon " __KCR0 ", %0\n\t" : "=r" (ti));
+
+ return ti;
+}
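+
+/*
+ * Note: KCR0 (cr17, see registers.h) is used to hold the current task's
+ * thread_info pointer, so current_thread_info() only has to read it back.
+ */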
+
+/* thread information allocation */
+#define alloc_thread_info(ti) ((struct thread_info *) __get_free_pages(GFP_KERNEL,2))
+#define free_thread_info(ti) free_pages((unsigned long) (ti), 2) /* match the order-2 allocation above */
+#define get_thread_info(ti) get_task_struct((ti)->task)
+#define put_thread_info(ti) put_task_struct((ti)->task)
+
+#endif /* __ASSEMBLY__ */
+
+#define PREEMPT_ACTIVE 0x4000000
+
+/* thread information flags */
+#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
+#define TIF_SIGPENDING 2 /* signal pending */
+#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
+
+#define THREAD_SIZE 16384
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_THREAD_INFO_H */
--- /dev/null
+#ifndef __ASM_SH64_TIMEX_H
+#define __ASM_SH64_TIMEX_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/timex.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * sh-5 architecture timex specifications
+ *
+ */
+
+#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+#define CLOCK_TICK_FACTOR 20 /* Factor of both 1000000 and CLOCK_TICK_RATE */
+#define FINETUNE ((((((long)LATCH * HZ - CLOCK_TICK_RATE) << SHIFT_HZ) * \
+ (1000000/CLOCK_TICK_FACTOR) / (CLOCK_TICK_RATE/CLOCK_TICK_FACTOR)) \
+ << (SHIFT_SCALE-SHIFT_HZ)) / HZ)
+
+typedef unsigned long cycles_t;
+
+extern cycles_t cacheflush_time;
+
+static __inline__ cycles_t get_cycles (void)
+{
+ return 0;
+}
+
+#define vxtime_lock() do {} while (0)
+#define vxtime_unlock() do {} while (0)
+
+#endif /* __ASM_SH64_TIMEX_H */
--- /dev/null
+/*
+ * include/asm-sh64/tlb.h
+ *
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ */
+#ifndef __ASM_SH64_TLB_H
+#define __ASM_SH64_TLB_H
+
+/*
+ * Note! These are mostly unused, we just need the xTLB_LAST_VAR_UNRESTRICTED
+ * for head.S! Once this limitation is gone, we can clean the rest of this up.
+ */
+
+/* ITLB defines */
+#define ITLB_FIXED 0x00000000 /* First fixed ITLB, see head.S */
+#define ITLB_LAST_VAR_UNRESTRICTED 0x000003F0 /* Last ITLB */
+
+/* DTLB defines */
+#define DTLB_FIXED 0x00800000 /* First fixed DTLB, see head.S */
+#define DTLB_LAST_VAR_UNRESTRICTED 0x008003F0 /* Last DTLB */
+
+#ifndef __ASSEMBLY__
+
+/**
+ * for_each_dtlb_entry
+ *
+ * @tlb: TLB entry
+ *
+ * Iterate over free (non-wired) DTLB entries
+ */
+#define for_each_dtlb_entry(tlb) \
+ for (tlb = cpu_data->dtlb.first; \
+ tlb <= cpu_data->dtlb.last; \
+ tlb += cpu_data->dtlb.step)
+
+/**
+ * for_each_itlb_entry
+ *
+ * @tlb: TLB entry
+ *
+ * Iterate over free (non-wired) ITLB entries
+ */
+#define for_each_itlb_entry(tlb) \
+ for (tlb = cpu_data->itlb.first; \
+ tlb <= cpu_data->itlb.last; \
+ tlb += cpu_data->itlb.step)
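+
+/*
+ * Illustrative only: flushing every non-wired DTLB slot could be written
+ * as
+ *
+ *	unsigned long long tlb;
+ *
+ *	for_each_dtlb_entry(tlb)
+ *		__flush_tlb_slot(tlb);
+ */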
+
+/**
+ * __flush_tlb_slot
+ *
+ * @slot: Address of TLB slot.
+ *
+ * Flushes TLB slot @slot.
+ */
+static inline void __flush_tlb_slot(unsigned long long slot)
+{
+ __asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
+}
+
+/* arch/sh64/mm/tlb.c */
+extern int sh64_tlb_init(void);
+extern unsigned long long sh64_next_free_dtlb_entry(void);
+extern unsigned long long sh64_get_wired_dtlb_entry(void);
+extern int sh64_put_wired_dtlb_entry(unsigned long long entry);
+
+extern void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr, unsigned long asid, unsigned long paddr);
+extern void sh64_teardown_tlb_slot(unsigned long long config_addr);
+
+#define tlb_start_vma(tlb, vma) \
+ flush_cache_range(vma, vma->vm_start, vma->vm_end)
+
+#define tlb_end_vma(tlb, vma) \
+ flush_tlb_range(vma, vma->vm_start, vma->vm_end)
+
+#define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
+
+/*
+ * Flush whole TLBs for MM
+ */
+#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+
+#include <asm-generic/tlb.h>
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_SH64_TLB_H */
+
--- /dev/null
+#ifndef __ASM_SH64_TLBFLUSH_H
+#define __ASM_SH64_TLBFLUSH_H
+
+#include <asm/pgalloc.h>
+
+/*
+ * TLB flushing:
+ *
+ * - flush_tlb() flushes the current mm struct TLBs
+ * - flush_tlb_all() flushes all processes' TLBs
+ * - flush_tlb_mm(mm) flushes the specified mm context TLBs
+ * - flush_tlb_page(vma, vmaddr) flushes one page
+ * - flush_tlb_range(vma, start, end) flushes a range of pages
+ *
+ */
+
+extern void flush_tlb(void);
+extern void flush_tlb_all(void);
+extern void flush_tlb_mm(struct mm_struct *mm);
+extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end);
+extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern inline void flush_tlb_pgtables(struct mm_struct *mm,
+ unsigned long start, unsigned long end)
+{
+}
+
+extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+
+#endif /* __ASM_SH64_TLBFLUSH_H */
+
--- /dev/null
+#ifndef __ASM_SH64_TOPOLOGY_H
+#define __ASM_SH64_TOPOLOGY_H
+
+#include <asm-generic/topology.h>
+
+#endif /* __ASM_SH64_TOPOLOGY_H */
--- /dev/null
+#ifndef __ASM_SH64_TYPES_H
+#define __ASM_SH64_TYPES_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/types.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#ifndef __ASSEMBLY__
+
+typedef unsigned short umode_t;
+
+/*
+ * __xx is ok: it doesn't pollute the POSIX namespace. Use these in the
+ * header files exported to user space
+ */
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+typedef __signed__ long long __s64;
+typedef unsigned long long __u64;
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+/*
+ * These aren't exported outside the kernel to avoid name space clashes
+ */
+#ifdef __KERNEL__
+
+#ifndef __ASSEMBLY__
+
+typedef __signed__ char s8;
+typedef unsigned char u8;
+
+typedef __signed__ short s16;
+typedef unsigned short u16;
+
+typedef __signed__ int s32;
+typedef unsigned int u32;
+
+typedef __signed__ long long s64;
+typedef unsigned long long u64;
+
+/* DMA addresses come in generic and 64-bit flavours. */
+
+#ifdef CONFIG_HIGHMEM64G
+typedef u64 dma_addr_t;
+#else
+typedef u32 dma_addr_t;
+#endif
+typedef u64 dma64_addr_t;
+
+typedef unsigned int kmem_bufctl_t;
+
+#endif /* __ASSEMBLY__ */
+
+#define BITS_PER_LONG 32
+
+#endif /* __KERNEL__ */
+
+#endif /* __ASM_SH64_TYPES_H */
--- /dev/null
+#ifndef __ASM_SH64_UACCESS_H
+#define __ASM_SH64_UACCESS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/uaccess.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ * User space memory access functions
+ *
+ * Copyright (C) 1999 Niibe Yutaka
+ *
+ * Based on:
+ * MIPS implementation version 1.15 by
+ * Copyright (C) 1996, 1997, 1998 by Ralf Baechle
+ * and i386 version.
+ *
+ */
+
+#include <linux/errno.h>
+#include <linux/sched.h>
+
+#define VERIFY_READ 0
+#define VERIFY_WRITE 1
+
+/*
+ * The fs value determines whether argument validity checking should be
+ * performed or not. If get_fs() == USER_DS, checking is performed, with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons (Data Segment Register?), these macros are misnamed.
+ */
+
+#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
+
+#define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF)
+#define USER_DS MAKE_MM_SEG(0x80000000)
+
+#define get_ds() (KERNEL_DS)
+#define get_fs() (current_thread_info()->addr_limit)
+#define set_fs(x) (current_thread_info()->addr_limit=(x))
+
+#define segment_eq(a,b) ((a).seg == (b).seg)
+
+#define __addr_ok(addr) ((unsigned long)(addr) < (current_thread_info()->addr_limit.seg))
+
+/*
+ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
+ *
+ * sum := addr + size; carry? --> flag = true;
+ * if (sum >= addr_limit) flag = true;
+ */
+#define __range_ok(addr,size) (((unsigned long) (addr) + (size) < (current_thread_info()->addr_limit.seg)) ? 0 : 1)
+
+#define access_ok(type,addr,size) (__range_ok(addr,size) == 0)
+#define __access_ok(addr,size) (__range_ok(addr,size) == 0)
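+
+/*
+ * Worked example (with USER_DS, i.e. a limit of 0x80000000): for
+ * addr == 0x7ffff000 and size == 0x2000 the sum is 0x80001000, which is
+ * not below the limit, so __range_ok() returns 1 and access_ok() fails.
+ */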
+
+extern inline int verify_area(int type, const void __user * addr, unsigned long size)
+{
+ return access_ok(type,addr,size) ? 0 : -EFAULT;
+}
+
+/*
+ * Uh, these should become the main single-value transfer routines ...
+ * They automatically use the right size if we just have the right
+ * pointer type ...
+ *
+ * As MIPS uses the same address space for kernel and user data, we
+ * can just do these as direct assignments.
+ *
+ * Careful to not
+ * (a) re-use the arguments for side effects (sizeof is ok)
+ * (b) require any knowledge of processes at this stage
+ */
+#define put_user(x,ptr) __put_user_check((x),(ptr),sizeof(*(ptr)))
+#define get_user(x,ptr) __get_user_check((x),(ptr),sizeof(*(ptr)))
+
+/*
+ * The "__xxx" versions do not do address space checking, useful when
+ * doing multiple accesses to the same area (the user has to do the
+ * checks by hand with "access_ok()")
+ */
+#define __put_user(x,ptr) __put_user_nocheck((x),(ptr),sizeof(*(ptr)))
+#define __get_user(x,ptr) __get_user_nocheck((x),(ptr),sizeof(*(ptr)))
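+
+/*
+ * Typical usage (illustrative only), e.g. from an ioctl handler:
+ *
+ *	int val;
+ *
+ *	if (get_user(val, (int __user *)arg))
+ *		return -EFAULT;
+ */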
+
+/*
+ * The "xxx_ret" versions return constant specified in third argument, if
+ * something bad happens. These macros can be optimized for the
+ * case of just returning from the function xxx_ret is used.
+ */
+
+#define put_user_ret(x,ptr,ret) ({ \
+if (put_user(x,ptr)) return ret; })
+
+#define get_user_ret(x,ptr,ret) ({ \
+if (get_user(x,ptr)) return ret; })
+
+#define __put_user_ret(x,ptr,ret) ({ \
+if (__put_user(x,ptr)) return ret; })
+
+#define __get_user_ret(x,ptr,ret) ({ \
+if (__get_user(x,ptr)) return ret; })
+
+struct __large_struct { unsigned long buf[100]; };
+#define __m(x) (*(struct __large_struct *)(x))
+
+#define __get_user_size(x,ptr,size,retval) \
+do { \
+ retval = 0; \
+ switch (size) { \
+ case 1: \
+ retval = __get_user_asm_b(x, ptr); \
+ break; \
+ case 2: \
+ retval = __get_user_asm_w(x, ptr); \
+ break; \
+ case 4: \
+ retval = __get_user_asm_l(x, ptr); \
+ break; \
+ case 8: \
+ retval = __get_user_asm_q(x, ptr); \
+ break; \
+ default: \
+ __get_user_unknown(); \
+ break; \
+ } \
+} while (0)
+
+#define __get_user_nocheck(x,ptr,size) \
+({ \
+ long __gu_addr = (long)(ptr); \
+ long __gu_err; \
+ __typeof(*(ptr)) __gu_val; \
+ __asm__ ("":"=r" (__gu_val)); \
+ __asm__ ("":"=r" (__gu_err)); \
+ __get_user_size((void *)&__gu_val, __gu_addr, (size), __gu_err); \
+ (x) = (__typeof__(*(ptr))) __gu_val; \
+ __gu_err; \
+})
+
+#define __get_user_check(x,ptr,size) \
+({ \
+ long __gu_addr = (long)(ptr); \
+ long __gu_err = -EFAULT; \
+ __typeof(*(ptr)) __gu_val; \
+ __asm__ ("":"=r" (__gu_val)); \
+ __asm__ ("":"=r" (__gu_err)); \
+ if (__access_ok(__gu_addr, (size))) \
+ __get_user_size((void *)&__gu_val, __gu_addr, (size), __gu_err); \
+ (x) = (__typeof__(*(ptr))) __gu_val; \
+ __gu_err; \
+})
+
+extern long __get_user_asm_b(void *, long);
+extern long __get_user_asm_w(void *, long);
+extern long __get_user_asm_l(void *, long);
+extern long __get_user_asm_q(void *, long);
+extern void __get_user_unknown(void);
+
+#define __put_user_size(x,ptr,size,retval) \
+do { \
+ retval = 0; \
+ switch (size) { \
+ case 1: \
+ retval = __put_user_asm_b(x, ptr); \
+ break; \
+ case 2: \
+ retval = __put_user_asm_w(x, ptr); \
+ break; \
+ case 4: \
+ retval = __put_user_asm_l(x, ptr); \
+ break; \
+ case 8: \
+ retval = __put_user_asm_q(x, ptr); \
+ break; \
+ default: \
+ __put_user_unknown(); \
+ } \
+} while (0)
+
+#define __put_user_nocheck(x,ptr,size) \
+({ \
+ long __pu_err; \
+ __typeof__(*(ptr)) __pu_val = (x); \
+ __put_user_size((void *)&__pu_val, (long)(ptr), (size), __pu_err); \
+ __pu_err; \
+})
+
+#define __put_user_check(x,ptr,size) \
+({ \
+ long __pu_err = -EFAULT; \
+ long __pu_addr = (long)(ptr); \
+ __typeof__(*(ptr)) __pu_val = (x); \
+ \
+ if (__access_ok(__pu_addr, (size))) \
+ __put_user_size((void *)&__pu_val, __pu_addr, (size), __pu_err);\
+ __pu_err; \
+})
+
+extern long __put_user_asm_b(void *, long);
+extern long __put_user_asm_w(void *, long);
+extern long __put_user_asm_l(void *, long);
+extern long __put_user_asm_q(void *, long);
+extern void __put_user_unknown(void);
+
+/* Generic arbitrary sized copy. */
+/* Return the number of bytes NOT copied */
+/* XXX: should special-case aligned 4-byte chunks and then copy the rest. */
+extern __kernel_size_t __copy_user(void *__to, const void *__from, __kernel_size_t __n);
+
+#define copy_to_user(to,from,n) ({ \
+void *__copy_to = (void *) (to); \
+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
+__kernel_size_t __copy_res; \
+if(__copy_size && __access_ok((unsigned long)__copy_to, __copy_size)) { \
+__copy_res = __copy_user(__copy_to, (void *) (from), __copy_size); \
+} else __copy_res = __copy_size; \
+__copy_res; })
+
+#define copy_to_user_ret(to,from,n,retval) ({ \
+if (copy_to_user(to,from,n)) \
+ return retval; \
+})
+
+#define __copy_to_user(to,from,n) \
+ __copy_user((void *)(to), \
+ (void *)(from), n)
+
+#define __copy_to_user_ret(to,from,n,retval) ({ \
+if (__copy_to_user(to,from,n)) \
+ return retval; \
+})
+
+#define copy_from_user(to,from,n) ({ \
+void *__copy_to = (void *) (to); \
+void *__copy_from = (void *) (from); \
+__kernel_size_t __copy_size = (__kernel_size_t) (n); \
+__kernel_size_t __copy_res; \
+if(__copy_size && __access_ok((unsigned long)__copy_from, __copy_size)) { \
+__copy_res = __copy_user(__copy_to, __copy_from, __copy_size); \
+} else __copy_res = __copy_size; \
+__copy_res; })
+
+#define copy_from_user_ret(to,from,n,retval) ({ \
+if (copy_from_user(to,from,n)) \
+ return retval; \
+})
+
+#define __copy_from_user(to,from,n) \
+ __copy_user((void *)(to), \
+ (void *)(from), n)
+
+#define __copy_from_user_ret(to,from,n,retval) ({ \
+if (__copy_from_user(to,from,n)) \
+ return retval; \
+})
+
+#define __copy_to_user_inatomic __copy_to_user
+#define __copy_from_user_inatomic __copy_from_user
+
+/* XXX: Not sure this works well; it should special-case aligned 4-byte
+ clears and then handle the rest. */
+extern __kernel_size_t __clear_user(void *addr, __kernel_size_t size);
+
+#define clear_user(addr,n) ({ \
+void * __cl_addr = (addr); \
+unsigned long __cl_size = (n); \
+if (__cl_size && __access_ok(((unsigned long)(__cl_addr)), __cl_size)) \
+__cl_size = __clear_user(__cl_addr, __cl_size); \
+__cl_size; })
+
+extern int __strncpy_from_user(unsigned long __dest, unsigned long __src, int __count);
+
+#define strncpy_from_user(dest,src,count) ({ \
+unsigned long __sfu_src = (unsigned long) (src); \
+int __sfu_count = (int) (count); \
+long __sfu_res = -EFAULT; \
+if(__access_ok(__sfu_src, __sfu_count)) { \
+__sfu_res = __strncpy_from_user((unsigned long) (dest), __sfu_src, __sfu_count); \
+} __sfu_res; })
+
+#define strlen_user(str) strnlen_user(str, ~0UL >> 1)
+
+/*
+ * Return the size of a string (including the ending 0!)
+ */
+extern long __strnlen_user(const char *__s, long __n);
+
+extern __inline__ long strnlen_user(const char *s, long n)
+{
+ if (!__addr_ok(s))
+ return 0;
+ else
+ return __strnlen_user(s, n);
+}
+
+struct exception_table_entry
+{
+ unsigned long insn, fixup;
+};
+
+#define ARCH_HAS_SEARCH_EXTABLE
+
+/* If gcc inlines memset, it will use st.q instructions. Therefore, we need
+ kmalloc allocations to be 8-byte aligned. Without this, the alignment
+ becomes BYTES_PER_WORD, i.e. only 4 (since sizeof(long)==sizeof(void*)==4 on
+ sh64 at the moment). */
+#define ARCH_KMALLOC_MINALIGN 8
+
+/* Returns 0 if exception not found and fixup.unit otherwise. */
+extern unsigned long search_exception_table(unsigned long addr);
+extern const struct exception_table_entry *search_exception_tables (unsigned long addr);
+
+#endif /* __ASM_SH64_UACCESS_H */
--- /dev/null
+#ifndef __ASM_SH64_UCONTEXT_H
+#define __ASM_SH64_UCONTEXT_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ucontext.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+struct ucontext {
+ unsigned long uc_flags;
+ struct ucontext *uc_link;
+ stack_t uc_stack;
+ struct sigcontext uc_mcontext;
+ sigset_t uc_sigmask; /* mask last for extensibility */
+};
+
+#endif /* __ASM_SH64_UCONTEXT_H */
--- /dev/null
+#ifndef __ASM_SH64_UNALIGNED_H
+#define __ASM_SH64_UNALIGNED_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/unaligned.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/string.h>
+
+
+/* Use memmove here, so gcc does not insert a __builtin_memcpy. */
+
+#define get_unaligned(ptr) \
+ ({ __typeof__(*(ptr)) __tmp; memmove(&__tmp, (ptr), sizeof(*(ptr))); __tmp; })
+
+#define put_unaligned(val, ptr) \
+ ({ __typeof__(*(ptr)) __tmp = (val); \
+ memmove((ptr), &__tmp, sizeof(*(ptr))); \
+ (void)0; })
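+
+/* Illustrative sketch only: accessing a 32-bit value at a pointer that
+ * is not guaranteed to be 4-byte aligned, e.g. inside a packed network
+ * header ("p" below is a made-up example pointer).
+ *
+ *	u32 val = get_unaligned((u32 *)p);
+ *	put_unaligned(val + 1, (u32 *)p);
+ */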
+
+#endif /* __ASM_SH64_UNALIGNED_H */
--- /dev/null
+#ifndef __ASM_SH64_UNISTD_H
+#define __ASM_SH64_UNISTD_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/unistd.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2004 Sean McGoogan
+ *
+ * This file contains the system call numbers.
+ *
+ */
+
+#define __NR_setup 0 /* used only by init, to get system going */
+#define __NR_exit 1
+#define __NR_fork 2
+#define __NR_read 3
+#define __NR_write 4
+#define __NR_open 5
+#define __NR_close 6
+#define __NR_waitpid 7
+#define __NR_creat 8
+#define __NR_link 9
+#define __NR_unlink 10
+#define __NR_execve 11
+#define __NR_chdir 12
+#define __NR_time 13
+#define __NR_mknod 14
+#define __NR_chmod 15
+#define __NR_lchown 16
+#define __NR_break 17
+#define __NR_oldstat 18
+#define __NR_lseek 19
+#define __NR_getpid 20
+#define __NR_mount 21
+#define __NR_umount 22
+#define __NR_setuid 23
+#define __NR_getuid 24
+#define __NR_stime 25
+#define __NR_ptrace 26
+#define __NR_alarm 27
+#define __NR_oldfstat 28
+#define __NR_pause 29
+#define __NR_utime 30
+#define __NR_stty 31
+#define __NR_gtty 32
+#define __NR_access 33
+#define __NR_nice 34
+#define __NR_ftime 35
+#define __NR_sync 36
+#define __NR_kill 37
+#define __NR_rename 38
+#define __NR_mkdir 39
+#define __NR_rmdir 40
+#define __NR_dup 41
+#define __NR_pipe 42
+#define __NR_times 43
+#define __NR_prof 44
+#define __NR_brk 45
+#define __NR_setgid 46
+#define __NR_getgid 47
+#define __NR_signal 48
+#define __NR_geteuid 49
+#define __NR_getegid 50
+#define __NR_acct 51
+#define __NR_umount2 52
+#define __NR_lock 53
+#define __NR_ioctl 54
+#define __NR_fcntl 55
+#define __NR_mpx 56
+#define __NR_setpgid 57
+#define __NR_ulimit 58
+#define __NR_oldolduname 59
+#define __NR_umask 60
+#define __NR_chroot 61
+#define __NR_ustat 62
+#define __NR_dup2 63
+#define __NR_getppid 64
+#define __NR_getpgrp 65
+#define __NR_setsid 66
+#define __NR_sigaction 67
+#define __NR_sgetmask 68
+#define __NR_ssetmask 69
+#define __NR_setreuid 70
+#define __NR_setregid 71
+#define __NR_sigsuspend 72
+#define __NR_sigpending 73
+#define __NR_sethostname 74
+#define __NR_setrlimit 75
+#define __NR_getrlimit 76 /* Back compatible 2Gig limited rlimit */
+#define __NR_getrusage 77
+#define __NR_gettimeofday 78
+#define __NR_settimeofday 79
+#define __NR_getgroups 80
+#define __NR_setgroups 81
+#define __NR_select 82
+#define __NR_symlink 83
+#define __NR_oldlstat 84
+#define __NR_readlink 85
+#define __NR_uselib 86
+#define __NR_swapon 87
+#define __NR_reboot 88
+#define __NR_readdir 89
+#define __NR_mmap 90
+#define __NR_munmap 91
+#define __NR_truncate 92
+#define __NR_ftruncate 93
+#define __NR_fchmod 94
+#define __NR_fchown 95
+#define __NR_getpriority 96
+#define __NR_setpriority 97
+#define __NR_profil 98
+#define __NR_statfs 99
+#define __NR_fstatfs 100
+#define __NR_ioperm 101
+#define __NR_socketcall 102 /* old implementation of socket system call */
+#define __NR_syslog 103
+#define __NR_setitimer 104
+#define __NR_getitimer 105
+#define __NR_stat 106
+#define __NR_lstat 107
+#define __NR_fstat 108
+#define __NR_olduname 109
+#define __NR_iopl 110
+#define __NR_vhangup 111
+#define __NR_idle 112
+#define __NR_vm86old 113
+#define __NR_wait4 114
+#define __NR_swapoff 115
+#define __NR_sysinfo 116
+#define __NR_ipc 117
+#define __NR_fsync 118
+#define __NR_sigreturn 119
+#define __NR_clone 120
+#define __NR_setdomainname 121
+#define __NR_uname 122
+#define __NR_modify_ldt 123
+#define __NR_adjtimex 124
+#define __NR_mprotect 125
+#define __NR_sigprocmask 126
+#define __NR_create_module 127
+#define __NR_init_module 128
+#define __NR_delete_module 129
+#define __NR_get_kernel_syms 130
+#define __NR_quotactl 131
+#define __NR_getpgid 132
+#define __NR_fchdir 133
+#define __NR_bdflush 134
+#define __NR_sysfs 135
+#define __NR_personality 136
+#define __NR_afs_syscall 137 /* Syscall for Andrew File System */
+#define __NR_setfsuid 138
+#define __NR_setfsgid 139
+#define __NR__llseek 140
+#define __NR_getdents 141
+#define __NR__newselect 142
+#define __NR_flock 143
+#define __NR_msync 144
+#define __NR_readv 145
+#define __NR_writev 146
+#define __NR_getsid 147
+#define __NR_fdatasync 148
+#define __NR__sysctl 149
+#define __NR_mlock 150
+#define __NR_munlock 151
+#define __NR_mlockall 152
+#define __NR_munlockall 153
+#define __NR_sched_setparam 154
+#define __NR_sched_getparam 155
+#define __NR_sched_setscheduler 156
+#define __NR_sched_getscheduler 157
+#define __NR_sched_yield 158
+#define __NR_sched_get_priority_max 159
+#define __NR_sched_get_priority_min 160
+#define __NR_sched_rr_get_interval 161
+#define __NR_nanosleep 162
+#define __NR_mremap 163
+#define __NR_setresuid 164
+#define __NR_getresuid 165
+#define __NR_vm86 166
+#define __NR_query_module 167
+#define __NR_poll 168
+#define __NR_nfsservctl 169
+#define __NR_setresgid 170
+#define __NR_getresgid 171
+#define __NR_prctl 172
+#define __NR_rt_sigreturn 173
+#define __NR_rt_sigaction 174
+#define __NR_rt_sigprocmask 175
+#define __NR_rt_sigpending 176
+#define __NR_rt_sigtimedwait 177
+#define __NR_rt_sigqueueinfo 178
+#define __NR_rt_sigsuspend 179
+#define __NR_pread 180
+#define __NR_pwrite 181
+#define __NR_chown 182
+#define __NR_getcwd 183
+#define __NR_capget 184
+#define __NR_capset 185
+#define __NR_sigaltstack 186
+#define __NR_sendfile 187
+#define __NR_streams1 188 /* some people actually want it */
+#define __NR_streams2 189 /* some people actually want it */
+#define __NR_vfork 190
+#define __NR_ugetrlimit 191 /* SuS compliant getrlimit */
+#define __NR_mmap2 192
+#define __NR_truncate64 193
+#define __NR_ftruncate64 194
+#define __NR_stat64 195
+#define __NR_lstat64 196
+#define __NR_fstat64 197
+#define __NR_lchown32 198
+#define __NR_getuid32 199
+#define __NR_getgid32 200
+#define __NR_geteuid32 201
+#define __NR_getegid32 202
+#define __NR_setreuid32 203
+#define __NR_setregid32 204
+#define __NR_getgroups32 205
+#define __NR_setgroups32 206
+#define __NR_fchown32 207
+#define __NR_setresuid32 208
+#define __NR_getresuid32 209
+#define __NR_setresgid32 210
+#define __NR_getresgid32 211
+#define __NR_chown32 212
+#define __NR_setuid32 213
+#define __NR_setgid32 214
+#define __NR_setfsuid32 215
+#define __NR_setfsgid32 216
+#define __NR_pivot_root 217
+#define __NR_mincore 218
+#define __NR_madvise 219
+
+/* Non-multiplexed socket family */
+#define __NR_socket 220
+#define __NR_bind 221
+#define __NR_connect 222
+#define __NR_listen 223
+#define __NR_accept 224
+#define __NR_getsockname 225
+#define __NR_getpeername 226
+#define __NR_socketpair 227
+#define __NR_send 228
+#define __NR_sendto 229
+#define __NR_recv 230
+#define __NR_recvfrom 231
+#define __NR_shutdown 232
+#define __NR_setsockopt 233
+#define __NR_getsockopt 234
+#define __NR_sendmsg 235
+#define __NR_recvmsg 236
+
+/* Non-multiplexed IPC family */
+#define __NR_semop 237
+#define __NR_semget 238
+#define __NR_semctl 239
+#define __NR_msgsnd 240
+#define __NR_msgrcv 241
+#define __NR_msgget 242
+#define __NR_msgctl 243
+#if 0
+#define __NR_shmatcall 244
+#endif
+#define __NR_shmdt 245
+#define __NR_shmget 246
+#define __NR_shmctl 247
+
+#define __NR_getdents64 248
+#define __NR_fcntl64 249
+/* 250 and 251 are unused */
+#define __NR_gettid 252
+#define __NR_readahead 253
+#define __NR_setxattr 254
+#define __NR_lsetxattr 255
+#define __NR_fsetxattr 256
+#define __NR_getxattr 257
+#define __NR_lgetxattr 258
+#define __NR_fgetxattr 259
+#define __NR_listxattr 260
+#define __NR_llistxattr 261
+#define __NR_flistxattr 262
+#define __NR_removexattr 263
+#define __NR_lremovexattr 264
+#define __NR_fremovexattr 265
+#define __NR_tkill 266
+#define __NR_sendfile64 267
+#define __NR_futex 268
+#define __NR_sched_setaffinity 269
+#define __NR_sched_getaffinity 270
+#define __NR_set_thread_area 271
+#define __NR_get_thread_area 272
+#define __NR_io_setup 273
+#define __NR_io_destroy 274
+#define __NR_io_getevents 275
+#define __NR_io_submit 276
+#define __NR_io_cancel 277
+#define __NR_fadvise64 278
+#define __NR_exit_group 280
+
+#define __NR_lookup_dcookie 281
+#define __NR_epoll_create 282
+#define __NR_epoll_ctl 283
+#define __NR_epoll_wait 284
+#define __NR_remap_file_pages 285
+#define __NR_set_tid_address 286
+#define __NR_timer_create 287
+#define __NR_timer_settime (__NR_timer_create+1)
+#define __NR_timer_gettime (__NR_timer_create+2)
+#define __NR_timer_getoverrun (__NR_timer_create+3)
+#define __NR_timer_delete (__NR_timer_create+4)
+#define __NR_clock_settime (__NR_timer_create+5)
+#define __NR_clock_gettime (__NR_timer_create+6)
+#define __NR_clock_getres (__NR_timer_create+7)
+#define __NR_clock_nanosleep (__NR_timer_create+8)
+#define __NR_statfs64 296
+#define __NR_fstatfs64 297
+#define __NR_tgkill 298
+#define __NR_utimes 299
+#define __NR_fadvise64_64 300
+#define __NR_vserver 301
+#define __NR_mbind 302
+#define __NR_get_mempolicy 303
+#define __NR_set_mempolicy 304
+#define __NR_mq_open 305
+#define __NR_mq_unlink (__NR_mq_open+1)
+#define __NR_mq_timedsend (__NR_mq_open+2)
+#define __NR_mq_timedreceive (__NR_mq_open+3)
+#define __NR_mq_notify (__NR_mq_open+4)
+#define __NR_mq_getsetattr (__NR_mq_open+5)
+
+#define NR_syscalls 311
+
+/* user-visible error numbers are in the range -1 to -125: see <asm-sh64/errno.h> */
+
+#define __syscall_return(type, res) \
+do { \
+ /* Note: when returning from kernel the return value is in r9 \
+ ** This prevents conflicts between return value and arg1 \
+ ** when dispatching signal handler, in other words makes \
+ ** life easier in the system call epilogue (see entry.S) \
+ */ \
+ register unsigned long __sr2 __asm__ ("r2") = res; \
+ if ((unsigned long)(res) >= (unsigned long)(-125)) { \
+ errno = -(res); \
+ __sr2 = -1; \
+ } \
+ return (type) (__sr2); \
+} while (0)
+
+/* XXX - _foo needs to be __foo, while __NR_bar could be _NR_bar. */
+
+#define _syscall0(type,name) \
+type name(void) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x10 << 16) | __NR_##name); \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "()" \
+ : "=r" (__sc0) \
+ : "r" (__sc0) ); \
+__syscall_return(type,__sc0); \
+}
+
+ /*
+ * The apparently spurious "dummy" assembler comment is *needed*,
+ * as without it, the compiler treats the arg<n> variables
+ * as no longer live just before the asm. The compiler can
+ * then optimize the storage into any registers it wishes.
+ * The additional dummy statement forces the compiler to put
+ * the arguments into the correct registers before the TRAPA.
+ */
+#define _syscall1(type,name,type1,arg1) \
+type name(type1 arg1) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x11 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2)); \
+__asm__ __volatile__ ("!dummy %0 %1" \
+ : \
+ : "r" (__sc0), "r" (__sc2)); \
+__syscall_return(type,__sc0); \
+}
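+
+/* Illustrative sketch only: expanding the macro above as
+ *
+ *	static inline _syscall1(int, close, int, fd)
+ *
+ * produces "int close(int fd)": r9 is loaded with the trap code and
+ * syscall number, fd goes into r2, the trapa is issued, and the result
+ * is post-processed by __syscall_return().
+ */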
+
+#define _syscall2(type,name,type1,arg1,type2,arg2) \
+type name(type1 arg1,type2 arg2) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x12 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+register unsigned long __sc3 __asm__ ("r3") = (unsigned long) arg2; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2,%3)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3) ); \
+__asm__ __volatile__ ("!dummy %0 %1 %2" \
+ : \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3) ); \
+__syscall_return(type,__sc0); \
+}
+
+#define _syscall3(type,name,type1,arg1,type2,arg2,type3,arg3) \
+type name(type1 arg1,type2 arg2,type3 arg3) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x13 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+register unsigned long __sc3 __asm__ ("r3") = (unsigned long) arg2; \
+register unsigned long __sc4 __asm__ ("r4") = (unsigned long) arg3; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2,%3,%4)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4) ); \
+__asm__ __volatile__ ("!dummy %0 %1 %2 %3" \
+ : \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4) ); \
+__syscall_return(type,__sc0); \
+}
+
+#define _syscall4(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x14 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+register unsigned long __sc3 __asm__ ("r3") = (unsigned long) arg2; \
+register unsigned long __sc4 __asm__ ("r4") = (unsigned long) arg3; \
+register unsigned long __sc5 __asm__ ("r5") = (unsigned long) arg4; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2,%3,%4,%5)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5) );\
+__asm__ __volatile__ ("!dummy %0 %1 %2 %3 %4" \
+ : \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5) );\
+__syscall_return(type,__sc0); \
+}
+
+#define _syscall5(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x15 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+register unsigned long __sc3 __asm__ ("r3") = (unsigned long) arg2; \
+register unsigned long __sc4 __asm__ ("r4") = (unsigned long) arg3; \
+register unsigned long __sc5 __asm__ ("r5") = (unsigned long) arg4; \
+register unsigned long __sc6 __asm__ ("r6") = (unsigned long) arg5; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2,%3,%4,%5,%6)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5), \
+ "r" (__sc6)); \
+__asm__ __volatile__ ("!dummy %0 %1 %2 %3 %4 %5" \
+ : \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5), \
+ "r" (__sc6)); \
+__syscall_return(type,__sc0); \
+}
+
+#define _syscall6(type,name,type1,arg1,type2,arg2,type3,arg3,type4,arg4,type5,arg5, type6, arg6) \
+type name (type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5, type6 arg6) \
+{ \
+register unsigned long __sc0 __asm__ ("r9") = ((0x16 << 16) | __NR_##name); \
+register unsigned long __sc2 __asm__ ("r2") = (unsigned long) arg1; \
+register unsigned long __sc3 __asm__ ("r3") = (unsigned long) arg2; \
+register unsigned long __sc4 __asm__ ("r4") = (unsigned long) arg3; \
+register unsigned long __sc5 __asm__ ("r5") = (unsigned long) arg4; \
+register unsigned long __sc6 __asm__ ("r6") = (unsigned long) arg5; \
+register unsigned long __sc7 __asm__ ("r7") = (unsigned long) arg6; \
+__asm__ __volatile__ ("trapa %1 !\t\t\t" #name "(%2,%3,%4,%5,%6,%7)" \
+ : "=r" (__sc0) \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5), \
+ "r" (__sc6), "r" (__sc7)); \
+__asm__ __volatile__ ("!dummy %0 %1 %2 %3 %4 %5 %6" \
+ : \
+ : "r" (__sc0), "r" (__sc2), "r" (__sc3), "r" (__sc4), "r" (__sc5), \
+ "r" (__sc6), "r" (__sc7)); \
+__syscall_return(type,__sc0); \
+}
+
+#ifdef __KERNEL__
+#define __ARCH_WANT_IPC_PARSE_VERSION
+#define __ARCH_WANT_OLD_READDIR
+#define __ARCH_WANT_OLD_STAT
+#define __ARCH_WANT_STAT64
+#define __ARCH_WANT_SYS_ALARM
+#define __ARCH_WANT_SYS_GETHOSTNAME
+#define __ARCH_WANT_SYS_PAUSE
+#define __ARCH_WANT_SYS_SGETMASK
+#define __ARCH_WANT_SYS_SIGNAL
+#define __ARCH_WANT_SYS_TIME
+#define __ARCH_WANT_SYS_UTIME
+#define __ARCH_WANT_SYS_WAITPID
+#define __ARCH_WANT_SYS_SOCKETCALL
+#define __ARCH_WANT_SYS_FADVISE64
+#define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
+#define __ARCH_WANT_SYS_NICE
+#define __ARCH_WANT_SYS_OLD_GETRLIMIT
+#define __ARCH_WANT_SYS_OLDUMOUNT
+#define __ARCH_WANT_SYS_SIGPENDING
+#define __ARCH_WANT_SYS_SIGPROCMASK
+#define __ARCH_WANT_SYS_RT_SIGACTION
+#endif
+
+#ifdef __KERNEL_SYSCALLS__
+
+/* Copy from sh */
+#include <linux/compiler.h>
+#include <linux/types.h>
+#include <asm/ptrace.h>
+
+/*
+ * we need this inline - forking from kernel space will result
+ * in NO COPY ON WRITE (!!!), until an execve is executed. This
+ * is no problem, but for the stack. This is handled by not letting
+ * main() use the stack at all after fork(). Thus, no function
+ * calls - which means inline code for fork too, as otherwise we
+ * would use the stack upon exit from 'fork()'.
+ *
+ * Actually only pause and fork are needed inline, so that there
+ * won't be any messing with the stack from main(), but we define
+ * some others too.
+ */
+#define __NR__exit __NR_exit
+static inline _syscall0(int,pause)
+static inline _syscall1(int,setup,int,magic)
+static inline _syscall0(int,sync)
+static inline _syscall0(pid_t,setsid)
+static inline _syscall3(int,write,int,fd,const char *,buf,off_t,count)
+static inline _syscall3(int,read,int,fd,char *,buf,off_t,count)
+static inline _syscall3(off_t,lseek,int,fd,off_t,offset,int,count)
+static inline _syscall1(int,dup,int,fd)
+static inline _syscall3(int,execve,const char *,file,char **,argv,char **,envp)
+static inline _syscall3(int,open,const char *,file,int,flag,int,mode)
+static inline _syscall1(int,close,int,fd)
+static inline _syscall1(int,_exit,int,exitcode)
+static inline _syscall3(pid_t,waitpid,pid_t,pid,int *,wait_stat,int,options)
+static inline _syscall1(int,delete_module,const char *,name)
+
+static inline pid_t wait(int * wait_stat)
+{
+ return waitpid(-1,wait_stat,0);
+}
+#endif
+
+/*
+ * "Conditional" syscalls
+ *
+ * What we want is __attribute__((weak,alias("sys_ni_syscall"))),
+ * but it doesn't work on all toolchains, so we just do it by hand
+ */
+#ifndef cond_syscall
+#define cond_syscall(x) asm(".weak\t" #x "\n\t.set\t" #x ",sys_ni_syscall");
+#endif
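+
+/* Illustrative sketch only: a typical entry aliases an optional syscall
+ * to sys_ni_syscall when its implementation is configured out (the macro
+ * already supplies the trailing semicolon):
+ *
+ *	cond_syscall(sys_nfsservctl)
+ */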
+
+#endif /* __ASM_SH64_UNISTD_H */
--- /dev/null
+#ifndef __ASM_SH64_USER_H
+#define __ASM_SH64_USER_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/user.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/types.h>
+#include <asm/processor.h>
+#include <asm/ptrace.h>
+#include <asm/page.h>
+
+/*
+ * Core file format: The core file is written in such a way that gdb
+ * can understand it and provide useful information to the user (under
+ * linux we use the `trad-core' bfd). The file contents are as follows:
+ *
+ * upage: 1 page consisting of a user struct that tells gdb
+ * what is present in the file. Directly after this is a
+ * copy of the task_struct, which is currently not used by gdb,
+ * but it may come in handy at some point. All of the registers
+ * are stored as part of the upage. The upage should always be
+ * only one page long.
+ * data: The data segment follows next. We use current->end_text to
+ * current->brk to pick up all of the user variables, plus any memory
+ * that may have been sbrk'ed. No attempt is made to determine if a
+ * page is demand-zero or if a page is totally unused, we just cover
+ * the entire range. All of the addresses are rounded in such a way
+ * that an integral number of pages is written.
+ * stack: We need the stack information in order to get a meaningful
+ * backtrace. We need to write the data from usp to
+ * current->start_stack, so we round each of these in order to be able
+ * to write an integer number of pages.
+ */
+
+struct user_fpu_struct {
+ unsigned long long fp_regs[32];
+ unsigned int fpscr;
+};
+
+struct user {
+ struct pt_regs regs; /* entire machine state */
+ struct user_fpu_struct fpu; /* Math Co-processor registers */
+ int u_fpvalid; /* True if math co-processor being used */
+ size_t u_tsize; /* text size (pages) */
+ size_t u_dsize; /* data size (pages) */
+ size_t u_ssize; /* stack size (pages) */
+ unsigned long start_code; /* text starting address */
+ unsigned long start_data; /* data starting address */
+ unsigned long start_stack; /* stack starting address */
+ long int signal; /* signal causing core dump */
+ struct regs * u_ar0; /* help gdb find registers */
+ struct user_fpu_struct* u_fpstate; /* Math Co-processor pointer */
+ unsigned long magic; /* identifies a core file */
+ char u_comm[32]; /* user command name */
+};
+
+#define NBPG PAGE_SIZE
+#define UPAGES 1
+#define HOST_TEXT_START_ADDR (u.start_code)
+#define HOST_DATA_START_ADDR (u.start_data)
+#define HOST_STACK_END_ADDR (u.start_stack + u.u_ssize * NBPG)
+
+#endif /* __ASM_SH64_USER_H */
--- /dev/null
+#ifndef _SPARC64_CMT_H
+#define _SPARC64_CMT_H
+
+/* cmt.h: Chip Multi-Threading register definitions
+ *
+ * Copyright (C) 2004 David S. Miller (davem@redhat.com)
+ */
+
+/* ASI_CORE_ID - private */
+#define LP_ID 0x0000000000000010UL
+#define LP_ID_MAX 0x00000000003f0000UL
+#define LP_ID_ID 0x000000000000003fUL
+
+/* ASI_INTR_ID - private */
+#define LP_INTR_ID 0x0000000000000000UL
+#define LP_INTR_ID_ID 0x00000000000003ffUL
+
+/* ASI_CESR_ID - private */
+#define CESR_ID 0x0000000000000040UL
+#define CESR_ID_ID 0x00000000000000ffUL
+
+/* ASI_CORE_AVAILABLE - shared */
+#define LP_AVAIL 0x0000000000000000UL
+#define LP_AVAIL_1 0x0000000000000002UL
+#define LP_AVAIL_0 0x0000000000000001UL
+
+/* ASI_CORE_ENABLE_STATUS - shared */
+#define LP_ENAB_STAT 0x0000000000000010UL
+#define LP_ENAB_STAT_1 0x0000000000000002UL
+#define LP_ENAB_STAT_0 0x0000000000000001UL
+
+/* ASI_CORE_ENABLE - shared */
+#define LP_ENAB 0x0000000000000020UL
+#define LP_ENAB_1 0x0000000000000002UL
+#define LP_ENAB_0 0x0000000000000001UL
+
+/* ASI_CORE_RUNNING - shared */
+#define LP_RUNNING_RW 0x0000000000000050UL
+#define LP_RUNNING_W1S 0x0000000000000060UL
+#define LP_RUNNING_W1C 0x0000000000000068UL
+#define LP_RUNNING_1 0x0000000000000002UL
+#define LP_RUNNING_0 0x0000000000000001UL
+
+/* ASI_CORE_RUNNING_STAT - shared */
+#define LP_RUN_STAT 0x0000000000000058UL
+#define LP_RUN_STAT_1 0x0000000000000002UL
+#define LP_RUN_STAT_0 0x0000000000000001UL
+
+/* ASI_XIR_STEERING - shared */
+#define LP_XIR_STEER 0x0000000000000030UL
+#define LP_XIR_STEER_1 0x0000000000000002UL
+#define LP_XIR_STEER_0 0x0000000000000001UL
+
+/* ASI_CMT_ERROR_STEERING - shared */
+#define CMT_ER_STEER 0x0000000000000040UL
+#define CMT_ER_STEER_1 0x0000000000000002UL
+#define CMT_ER_STEER_0 0x0000000000000001UL
+
+#endif /* _SPARC64_CMT_H */
--- /dev/null
+#ifndef __UM_MODULE_I386_H
+#define __UM_MODULE_I386_H
+
+/* UML is simple */
+struct mod_arch_specific
+{
+};
+
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define Elf_Ehdr Elf32_Ehdr
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2000, 2001, 2002 Jeff Dike (jdike@karaya.com)
+ * Derived from include/asm-i386/pgtable.h
+ * Licensed under the GPL
+ */
+
+#ifndef __UM_PGTABLE_H
+#define __UM_PGTABLE_H
+
+#include "linux/sched.h"
+#include "asm/processor.h"
+#include "asm/page.h"
+#include "asm/fixmap.h"
+
+extern pgd_t swapper_pg_dir[1024];
+
+extern void *um_virt_to_phys(struct task_struct *task, unsigned long virt,
+ pte_t *pte_out);
+
+/* zero page used for uninitialized stuff */
+extern unsigned long *empty_zero_page;
+
+#define pgtable_cache_init() do ; while (0)
+
+/* PMD_SHIFT determines the size of the area a second-level page table can map */
+#define PMD_SHIFT 22
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+#define PGDIR_SHIFT 22
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/*
+ * entries per page directory level: the i386 is two-level, so
+ * we don't really have any PMD directory physically.
+ */
+#define PTRS_PER_PTE 1024
+#define PTRS_PER_PMD 1
+#define PTRS_PER_PGD 1024
+#define USER_PTRS_PER_PGD (TASK_SIZE/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
+
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/*
+ * pgd entries used up by user/kernel:
+ */
+
+#define USER_PGD_PTRS (TASK_SIZE >> PGDIR_SHIFT)
+#define KERNEL_PGD_PTRS (PTRS_PER_PGD-USER_PGD_PTRS)
+
+#ifndef __ASSEMBLY__
+/* Just any arbitrary offset to the start of the vmalloc VM area: the
+ * current 8MB value just means that there will be an 8MB "hole" after the
+ * physical memory until the kernel virtual memory starts. That means that
+ * any out-of-bounds memory accesses will hopefully be caught.
+ * The vmalloc() routines leave a hole of 4kB between each vmalloced
+ * area for the same reason. ;)
+ */
+
+extern unsigned long high_physmem;
+
+#define VMALLOC_OFFSET (__va_space)
+#define VMALLOC_START (((unsigned long) high_physmem + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
+
+#ifdef CONFIG_HIGHMEM
+# define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
+#else
+# define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
+#endif
+
+#define _PAGE_PRESENT 0x001
+#define _PAGE_NEWPAGE 0x002
+#define _PAGE_PROTNONE 0x004 /* If not present */
+#define _PAGE_RW 0x008
+#define _PAGE_USER 0x010
+#define _PAGE_ACCESSED 0x020
+#define _PAGE_DIRTY 0x040
+#define _PAGE_NEWPROT 0x080
+
+#define REGION_MASK 0xf0000000
+#define REGION_SHIFT 28
+
+#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
+#define _PAGE_CHG_MASK (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
+#define PAGE_NONE __pgprot(_PAGE_PROTNONE | _PAGE_ACCESSED)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_USER | _PAGE_ACCESSED)
+#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
+#define PAGE_KERNEL_RO __pgprot(_PAGE_PRESENT | _PAGE_DIRTY | _PAGE_ACCESSED)
+
+/*
+ * The i386 can't do page protection for execute, and considers that
+ * the same as read. Also, write permissions imply read permissions.
+ * This is the closest we can get..
+ */
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY
+#define __P101 PAGE_READONLY
+#define __P110 PAGE_COPY
+#define __P111 PAGE_COPY
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY
+#define __S101 PAGE_READONLY
+#define __S110 PAGE_SHARED
+#define __S111 PAGE_SHARED
+
+/*
+ * Define this if things work differently on an i386 and an i486:
+ * it will (on an i486) warn about kernel memory accesses that are
+ * done without a 'verify_area(VERIFY_WRITE,..)'
+ */
+#undef TEST_VERIFY_AREA
+
+/* page table for 0-4MB for everybody */
+extern unsigned long pg0[1024];
+
+/*
+ * BAD_PAGETABLE is used when we need a bogus page-table, while
+ * BAD_PAGE is used for a bogus page.
+ *
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern pte_t __bad_page(void);
+extern pte_t * __bad_pagetable(void);
+
+#define BAD_PAGETABLE __bad_pagetable()
+#define BAD_PAGE __bad_page()
+#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page))
+
+/* number of bits that fit into a memory pointer */
+#define BITS_PER_PTR (8*sizeof(unsigned long))
+
+/* to align the pointer to a pointer address */
+#define PTR_MASK (~(sizeof(void*)-1))
+
+/* sizeof(void*)==1<<SIZEOF_PTR_LOG2 */
+/* 64-bit machines, beware! SRB. */
+#define SIZEOF_PTR_LOG2 2
+
+/* to find an entry in a page-table */
+#define PAGE_PTR(address) \
+((unsigned long)(address)>>(PAGE_SHIFT-SIZEOF_PTR_LOG2)&PTR_MASK&~PAGE_MASK)
+
+#define pte_none(x) !(pte_val(x) & ~_PAGE_NEWPAGE)
+#define pte_present(x) (pte_val(x) & (_PAGE_PRESENT | _PAGE_PROTNONE))
+
+#define pte_clear(xp) do { pte_val(*(xp)) = _PAGE_NEWPAGE; } while (0)
+
+#define phys_region_index(x) (((x) & REGION_MASK) >> REGION_SHIFT)
+#define pte_region_index(x) phys_region_index(pte_val(x))
+
+#define pmd_none(x) (!(pmd_val(x) & ~_PAGE_NEWPAGE))
+#define pmd_bad(x) ((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define pmd_present(x) (pmd_val(x) & _PAGE_PRESENT)
+#define pmd_clear(xp) do { pmd_val(*(xp)) = _PAGE_NEWPAGE; } while (0)
+
+#define pmd_newpage(x) (pmd_val(x) & _PAGE_NEWPAGE)
+#define pmd_mkuptodate(x) (pmd_val(x) &= ~_PAGE_NEWPAGE)
+
+/*
+ * The "pgd_xxx()" functions here are trivial for a folded two-level
+ * setup: the pgd is never bad, and a pmd always exists (as it's folded
+ * into the pgd entry)
+ */
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+static inline int pgd_present(pgd_t pgd) { return 1; }
+static inline void pgd_clear(pgd_t * pgdp) { }
+
+
+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
+extern struct page *pte_mem_map(pte_t pte);
+extern struct page *phys_mem_map(unsigned long phys);
+extern unsigned long phys_to_pfn(unsigned long p);
+extern unsigned long pfn_to_phys(unsigned long pfn);
+
+#define pte_page(x) pfn_to_page(pte_pfn(x))
+#define pte_address(x) (__va(pte_val(x) & PAGE_MASK))
+#define mk_phys(a, r) ((a) + (r << REGION_SHIFT))
+#define phys_addr(p) ((p) & ~REGION_MASK)
+#define phys_page(p) (phys_mem_map(p) + ((phys_addr(p)) >> PAGE_SHIFT))
+#define pte_pfn(x) phys_to_pfn(pte_val(x))
+#define pfn_pte(pfn, prot) __pte(pfn_to_phys(pfn) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(pfn_to_phys(pfn) | pgprot_val(prot))
+
+static inline pte_t pte_mknewprot(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_NEWPROT;
+ return(pte);
+}
+
+static inline pte_t pte_mknewpage(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_NEWPAGE;
+ return(pte);
+}
+
+static inline void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ /* If it's a swap entry, it needs to be marked _PAGE_NEWPAGE so
+ * fix_range knows to unmap it. _PAGE_NEWPROT is specific to
+ * mapped pages.
+ */
+ *pteptr = pte_mknewpage(pteval);
+ if(pte_present(*pteptr)) *pteptr = pte_mknewprot(*pteptr);
+}
+
+/*
+ * (pmds are folded into pgds so this doesn't get actually called,
+ * but the define is needed for a generic inline function.)
+ */
+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+#define set_pgd(pgdptr, pgdval) (*(pgdptr) = pgdval)
+
+/*
+ * The following only work if pte_present() is true.
+ * Undefined behaviour if not..
+ */
+static inline int pte_read(pte_t pte)
+{
+ return((pte_val(pte) & _PAGE_USER) &&
+ !(pte_val(pte) & _PAGE_PROTNONE));
+}
+
+static inline int pte_exec(pte_t pte){
+ return((pte_val(pte) & _PAGE_USER) &&
+ !(pte_val(pte) & _PAGE_PROTNONE));
+}
+
+static inline int pte_write(pte_t pte)
+{
+ return((pte_val(pte) & _PAGE_RW) &&
+ !(pte_val(pte) & _PAGE_PROTNONE));
+}
+
+static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
+static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
+static inline int pte_newpage(pte_t pte) { return pte_val(pte) & _PAGE_NEWPAGE; }
+static inline int pte_newprot(pte_t pte)
+{
+ return(pte_present(pte) && (pte_val(pte) & _PAGE_NEWPROT));
+}
+
+static inline pte_t pte_rdprotect(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_USER;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_exprotect(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_USER;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_DIRTY;
+ return(pte);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_ACCESSED;
+ return(pte);
+}
+
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_RW;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_mkread(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_USER;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_mkexec(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_USER;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_DIRTY;
+ return(pte);
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_ACCESSED;
+ return(pte);
+}
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+ pte_val(pte) |= _PAGE_RW;
+ return(pte_mknewprot(pte));
+}
+
+static inline pte_t pte_mkuptodate(pte_t pte)
+{
+ pte_val(pte) &= ~_PAGE_NEWPAGE;
+ if(pte_present(pte)) pte_val(pte) &= ~_PAGE_NEWPROT;
+ return(pte);
+}
+
+extern unsigned long page_to_phys(struct page *page);
+
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ */
+
+#define mk_pte(page, pgprot) \
+({ \
+ pte_t __pte; \
+ \
+ pte_val(__pte) = page_to_phys(page) + pgprot_val(pgprot);\
+	if(pte_present(__pte)) __pte = pte_mknewprot(pte_mknewpage(__pte)); \
+ __pte; \
+})
+
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
+ if(pte_present(pte)) pte = pte_mknewpage(pte_mknewprot(pte));
+ return pte;
+}
+
+#define pmd_page_kernel(pmd) ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK))
+#define pmd_page(pmd) (phys_mem_map(pmd_val(pmd) & PAGE_MASK) + \
+ ((phys_addr(pmd_val(pmd)) >> PAGE_SHIFT)))
+
+/* to find an entry in a page-table-directory. */
+#define pgd_index(address) ((address >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
+
+/* to find an entry in a page-table-directory */
+#define pgd_offset(mm, address) \
+((mm)->pgd + ((address) >> PGDIR_SHIFT))
+
+/* to find an entry in a kernel page-table-directory */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+#define pmd_index(address) \
+ (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+
+/* Find an entry in the second-level page table.. */
+static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
+{
+ return (pmd_t *) dir;
+}
+
+/* Find an entry in the third-level page table.. */
+#define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+#define pte_offset_kernel(dir, address) \
+ ((pte_t *) pmd_page_kernel(*(dir)) + pte_index(address))
+#define pte_offset_map(dir, address) \
+ ((pte_t *)kmap_atomic(pmd_page(*(dir)),KM_PTE0) + pte_index(address))
+#define pte_offset_map_nested(dir, address) \
+ ((pte_t *)kmap_atomic(pmd_page(*(dir)),KM_PTE1) + pte_index(address))
+#define pte_unmap(pte) kunmap_atomic((pte), KM_PTE0)
+#define pte_unmap_nested(pte) kunmap_atomic((pte), KM_PTE1)
+
+#define update_mmu_cache(vma,address,pte) do ; while (0)
+
+/* Encode and de-code a swap entry */
+#define __swp_type(x) (((x).val >> 3) & 0x7f)
+#define __swp_offset(x) ((x).val >> 10)
+
+#define __swp_entry(type, offset) \
+ ((swp_entry_t) { ((type) << 3) | ((offset) << 10) })
+#define __pte_to_swp_entry(pte) \
+ ((swp_entry_t) { pte_val(pte_mkuptodate(pte)) })
+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
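+
+/* Illustrative sketch only: the encode/decode macros above round-trip,
+ * since the type occupies bits 3..9 and the offset starts at bit 10:
+ *
+ *	swp_entry_t entry = __swp_entry(type, offset);
+ *	pte_t pte = __swp_entry_to_pte(entry);
+ *
+ *	__swp_type(__pte_to_swp_entry(pte)) == type
+ *	__swp_offset(__pte_to_swp_entry(pte)) == offset
+ */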
+
+#define kern_addr_valid(addr) (1)
+
+#include <asm-generic/pgtable.h>
+
+#endif
+
+#endif
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#ifndef SETUP_H_INCLUDED
+#define SETUP_H_INCLUDED
+
+#define COMMAND_LINE_SIZE 512
+
+#endif /* SETUP_H_INCLUDED */
+++ /dev/null
-#ifndef __UM_SMPLOCK_H
-#define __UM_SMPLOCK_H
-
-#include "asm/arch/smplock.h"
-
-#endif
+++ /dev/null
-#ifndef __UM_SPINLOCK_H
-#define __UM_SPINLOCK_H
-
-#include "linux/config.h"
-
-#ifdef CONFIG_SMP
-#include "asm/arch/spinlock.h"
-#endif
-
-#endif
--- /dev/null
+/*
+ * Copyright (C) 2000, 2001 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef _UM_UNISTD_H_
+#define _UM_UNISTD_H_
+
+#include <linux/syscalls.h>
+#include "linux/resource.h"
+#include "asm/uaccess.h"
+
+extern int um_execve(const char *file, char *const argv[], char *const env[]);
+
+#ifdef __KERNEL__
+#define __ARCH_WANT_IPC_PARSE_VERSION
+#define __ARCH_WANT_OLD_READDIR
+#define __ARCH_WANT_OLD_STAT
+#define __ARCH_WANT_STAT64
+#define __ARCH_WANT_SYS_ALARM
+#define __ARCH_WANT_SYS_GETHOSTNAME
+#define __ARCH_WANT_SYS_PAUSE
+#define __ARCH_WANT_SYS_SGETMASK
+#define __ARCH_WANT_SYS_SIGNAL
+#define __ARCH_WANT_SYS_TIME
+#define __ARCH_WANT_SYS_UTIME
+#define __ARCH_WANT_SYS_WAITPID
+#define __ARCH_WANT_SYS_SOCKETCALL
+#define __ARCH_WANT_SYS_FADVISE64
+#define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
+#define __ARCH_WANT_SYS_NICE
+#define __ARCH_WANT_SYS_OLD_GETRLIMIT
+#define __ARCH_WANT_SYS_OLDUMOUNT
+#define __ARCH_WANT_SYS_SIGPENDING
+#define __ARCH_WANT_SYS_SIGPROCMASK
+#define __ARCH_WANT_SYS_RT_SIGACTION
+#endif
+
+#ifdef __KERNEL_SYSCALLS__
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+#define KERNEL_CALL(ret_t, sys, args...) \
+ mm_segment_t fs = get_fs(); \
+ ret_t ret; \
+ set_fs(KERNEL_DS); \
+ ret = sys(args); \
+ set_fs(fs); \
+ return ret;
+
+static inline long open(const char *pathname, int flags, int mode)
+{
+ KERNEL_CALL(int, sys_open, pathname, flags, mode)
+}
+
+static inline long dup(unsigned int fd)
+{
+ KERNEL_CALL(int, sys_dup, fd);
+}
+
+static inline long close(unsigned int fd)
+{
+ KERNEL_CALL(int, sys_close, fd);
+}
+
+static inline int execve(const char *filename, char *const argv[],
+ char *const envp[])
+{
+ KERNEL_CALL(int, um_execve, filename, argv, envp);
+}
+
+static inline long waitpid(pid_t pid, unsigned int *status, int options)
+{
+ KERNEL_CALL(pid_t, sys_wait4, pid, status, options, NULL)
+}
+
+static inline pid_t setsid(void)
+{
+ KERNEL_CALL(pid_t, sys_setsid)
+}
+
+static inline long lseek(unsigned int fd, off_t offset, unsigned int whence)
+{
+ KERNEL_CALL(long, sys_lseek, fd, offset, whence)
+}
+
+static inline int read(unsigned int fd, char * buf, int len)
+{
+ KERNEL_CALL(int, sys_read, fd, buf, len)
+}
+
+static inline int write(unsigned int fd, char * buf, int len)
+{
+ KERNEL_CALL(int, sys_write, fd, buf, len)
+}
+
+long sys_mmap2(unsigned long addr, unsigned long len,
+ unsigned long prot, unsigned long flags,
+ unsigned long fd, unsigned long pgoff);
+int sys_execve(char *file, char **argv, char **env);
+long sys_clone(unsigned long clone_flags, unsigned long newsp,
+ int *parent_tid, int *child_tid);
+long sys_fork(void);
+long sys_vfork(void);
+int sys_pipe(unsigned long *fildes);
+int sys_ptrace(long request, long pid, long addr, long data);
+struct sigaction;
+asmlinkage long sys_rt_sigaction(int sig,
+ const struct sigaction __user *act,
+ struct sigaction __user *oact,
+ size_t sigsetsize);
+
+#endif
+
+/* Save the value of __KERNEL_SYSCALLS__, undefine it, include the underlying
+ * arch's unistd.h for the system call numbers, and restore the old
+ * __KERNEL_SYSCALLS__.
+ */
+
+#ifdef __KERNEL_SYSCALLS__
+#define __SAVE_KERNEL_SYSCALLS__ __KERNEL_SYSCALLS__
+#endif
+
+#undef __KERNEL_SYSCALLS__
+#include "asm/arch/unistd.h"
+
+#ifdef __KERNEL_SYSCALLS__
+#define __KERNEL_SYSCALLS__ __SAVE_KERNEL_SYSCALLS__
+#endif
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#ifndef _V850_SETUP_H
+#define _V850_SETUP_H
+
+#define COMMAND_LINE_SIZE 512
+
+#endif /* _V850_SETUP_H */
--- /dev/null
+#ifndef _LINUX_CRC_CCITT_H
+#define _LINUX_CRC_CCITT_H
+
+#include <linux/types.h>
+
+extern u16 const crc_ccitt_table[256];
+
+extern u16 crc_ccitt(u16 crc, const u8 *buffer, size_t len);
+
+static inline u16 crc_ccitt_byte(u16 crc, const u8 c)
+{
+ return (crc >> 8) ^ crc_ccitt_table[(crc ^ c) & 0xff];
+}
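+
+/* Illustrative sketch only: feeding a buffer through the byte-at-a-time
+ * helper is equivalent to one crc_ccitt() call over the same bytes.
+ * 0xffff is just a commonly used seed, not mandated by this header.
+ *
+ *	u16 crc = 0xffff;
+ *	size_t i;
+ *
+ *	for (i = 0; i < len; i++)
+ *		crc = crc_ccitt_byte(crc, buf[i]);
+ */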
+
+#endif /* _LINUX_CRC_CCITT_H */
--- /dev/null
+#ifndef __DMI_H__
+#define __DMI_H__
+
+enum dmi_field {
+ DMI_NONE,
+ DMI_BIOS_VENDOR,
+ DMI_BIOS_VERSION,
+ DMI_BIOS_DATE,
+ DMI_SYS_VENDOR,
+ DMI_PRODUCT_NAME,
+ DMI_PRODUCT_VERSION,
+ DMI_BOARD_VENDOR,
+ DMI_BOARD_NAME,
+ DMI_BOARD_VERSION,
+ DMI_STRING_MAX,
+};
+
+/*
+ * DMI callbacks for problem boards
+ */
+struct dmi_strmatch {
+ u8 slot;
+ char *substr;
+};
+
+struct dmi_system_id {
+ int (*callback)(struct dmi_system_id *);
+ char *ident;
+ struct dmi_strmatch matches[4];
+ void *driver_data;
+};
+
+#define DMI_MATCH(a,b) { a, b }
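+
+/* Illustrative sketch only (vendor/board strings and the callback are
+ * made up): a driver matching a problem board by its DMI strings.
+ *
+ *	static int __init disable_foo(struct dmi_system_id *d)
+ *	{
+ *		printk(KERN_INFO "%s: disabling foo\n", d->ident);
+ *		return 0;
+ *	}
+ *
+ *	static struct dmi_system_id foo_dmi_table[] = {
+ *		{
+ *			.callback = disable_foo,
+ *			.ident = "Example Board 1000",
+ *			.matches = {
+ *				DMI_MATCH(DMI_SYS_VENDOR, "Example Inc."),
+ *				DMI_MATCH(DMI_BOARD_NAME, "EX1000"),
+ *			},
+ *		},
+ *		{ }	(an empty entry terminates the list)
+ *	};
+ *
+ *	dmi_check_system(foo_dmi_table);
+ */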
+
+#if defined(CONFIG_X86) && !defined(CONFIG_X86_64)
+
+extern int dmi_check_system(struct dmi_system_id *list);
+extern char * dmi_get_system_info(int field);
+
+#else
+
+static inline int dmi_check_system(struct dmi_system_id *list) { return 0; }
+static inline char * dmi_get_system_info(int field) { return NULL; }
+
+#endif
+
+#endif /* __DMI_H__ */
--- /dev/null
+/*
+ * Copyright (C) 1998, 1999, 2003 Ralf Baechle
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#ifndef __LINUX_DS1286_H
+#define __LINUX_DS1286_H
+
+#include <asm/ds1286.h>
+
+/**********************************************************************
+ * register summary
+ **********************************************************************/
+#define RTC_HUNDREDTH_SECOND 0
+#define RTC_SECONDS 1
+#define RTC_MINUTES 2
+#define RTC_MINUTES_ALARM 3
+#define RTC_HOURS 4
+#define RTC_HOURS_ALARM 5
+#define RTC_DAY 6
+#define RTC_DAY_ALARM 7
+#define RTC_DATE 8
+#define RTC_MONTH 9
+#define RTC_YEAR 10
+#define RTC_CMD 11
+#define RTC_WHSEC 12
+#define RTC_WSEC 13
+#define RTC_UNUSED 14
+
+/* RTC_*_alarm is always true if 2 MSBs are set */
+# define RTC_ALARM_DONT_CARE 0xC0
+
+
+/*
+ * Bits in the month register
+ */
+#define RTC_EOSC 0x80
+#define RTC_ESQW 0x40
+
+/*
+ * Bits in the Command register
+ */
+#define RTC_TDF 0x01
+#define RTC_WAF 0x02
+#define RTC_TDM 0x04
+#define RTC_WAM 0x08
+#define RTC_PU_LVL 0x10
+#define RTC_IBH_LO 0x20
+#define RTC_IPSW 0x40
+#define RTC_TE 0x80
+
+#endif /* __LINUX_DS1286_H */
--- /dev/null
+#ifndef __LINUX_GFP_H
+#define __LINUX_GFP_H
+
+#include <linux/mmzone.h>
+#include <linux/stddef.h>
+#include <linux/linkage.h>
+#include <linux/config.h>
+
+struct vm_area_struct;
+
+/*
+ * GFP bitmasks..
+ */
+/* Zone modifiers in GFP_ZONEMASK (see linux/mmzone.h - low two bits) */
+#define __GFP_DMA 0x01
+#define __GFP_HIGHMEM 0x02
+
+/*
+ * Action modifiers - don't change the zoning
+ *
+ * __GFP_REPEAT: Try hard to allocate the memory, but the allocation attempt
+ * _might_ fail. This depends upon the particular VM implementation.
+ *
+ * __GFP_NOFAIL: The VM implementation _must_ retry infinitely: the caller
+ * cannot handle allocation failures.
+ *
+ * __GFP_NORETRY: The VM implementation must not retry indefinitely.
+ */
+#define __GFP_WAIT 0x10 /* Can wait and reschedule? */
+#define __GFP_HIGH 0x20 /* Should access emergency pools? */
+#define __GFP_IO 0x40 /* Can start physical IO? */
+#define __GFP_FS 0x80 /* Can call down to low-level FS? */
+#define __GFP_COLD 0x100 /* Cache-cold page required */
+#define __GFP_NOWARN 0x200 /* Suppress page allocation failure warning */
+#define __GFP_REPEAT 0x400 /* Retry the allocation. Might fail */
+#define __GFP_NOFAIL 0x800 /* Retry for ever. Cannot fail */
+#define __GFP_NORETRY 0x1000 /* Do not retry. Might fail */
+#define __GFP_NO_GROW 0x2000 /* Slab internal usage */
+#define __GFP_COMP 0x4000 /* Add compound page metadata */
+
+#define __GFP_BITS_SHIFT 16 /* Room for 16 __GFP_FOO bits */
+#define __GFP_BITS_MASK ((1 << __GFP_BITS_SHIFT) - 1)
+
+/* if you forget to add the bitmask here the kernel will crash, period */
+#define GFP_LEVEL_MASK (__GFP_WAIT|__GFP_HIGH|__GFP_IO|__GFP_FS| \
+ __GFP_COLD|__GFP_NOWARN|__GFP_REPEAT| \
+ __GFP_NOFAIL|__GFP_NORETRY|__GFP_NO_GROW|__GFP_COMP)
+
+#define GFP_ATOMIC (__GFP_HIGH)
+#define GFP_NOIO (__GFP_WAIT)
+#define GFP_NOFS (__GFP_WAIT | __GFP_IO)
+#define GFP_KERNEL (__GFP_WAIT | __GFP_IO | __GFP_FS)
+#define GFP_USER (__GFP_WAIT | __GFP_IO | __GFP_FS)
+#define GFP_HIGHUSER (__GFP_WAIT | __GFP_IO | __GFP_FS | __GFP_HIGHMEM)
+
+/* Flag - indicates that the buffer will be suitable for DMA. Ignored on some
+ platforms, used as appropriate on others */
+
+#define GFP_DMA __GFP_DMA
+
+
+/*
+ * There is only one page-allocator function, and two main namespaces to
+ * it. The alloc_page*() variants return 'struct page *' and as such
+ * can allocate highmem pages, while the *get*page*() variants return
+ * virtual kernel addresses to the allocated page(s).
+ */
+
+/*
+ * We get the zone list from the current node and the gfp_mask.
+ * This zone list contains a maximum of MAX_NUMNODES*MAX_NR_ZONES zones.
+ *
+ * For the normal case of non-DISCONTIGMEM systems the NODE_DATA() gets
+ * optimized to &contig_page_data at compile-time.
+ */
+extern struct page *
+FASTCALL(__alloc_pages(unsigned int, unsigned int, struct zonelist *));
+
+static inline struct page *alloc_pages_node(int nid, unsigned int gfp_mask,
+ unsigned int order)
+{
+ if (unlikely(order >= MAX_ORDER))
+ return NULL;
+
+ return __alloc_pages(gfp_mask, order,
+ NODE_DATA(nid)->node_zonelists + (gfp_mask & GFP_ZONEMASK));
+}
+
+#ifdef CONFIG_NUMA
+extern struct page *alloc_pages_current(unsigned gfp_mask, unsigned order);
+
+static inline struct page *
+alloc_pages(unsigned int gfp_mask, unsigned int order)
+{
+ if (unlikely(order >= MAX_ORDER))
+ return NULL;
+
+ return alloc_pages_current(gfp_mask, order);
+}
+extern struct page *alloc_page_vma(unsigned gfp_mask,
+ struct vm_area_struct *vma, unsigned long addr);
+#else
+#define alloc_pages(gfp_mask, order) \
+ alloc_pages_node(numa_node_id(), gfp_mask, order)
+#define alloc_page_vma(gfp_mask, vma, addr) alloc_pages(gfp_mask, 0)
+#endif
+#define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
+
+extern unsigned long FASTCALL(__get_free_pages(unsigned int gfp_mask, unsigned int order));
+extern unsigned long FASTCALL(get_zeroed_page(unsigned int gfp_mask));
+
+#define __get_free_page(gfp_mask) \
+ __get_free_pages((gfp_mask),0)
+
+#define __get_dma_pages(gfp_mask, order) \
+ __get_free_pages((gfp_mask) | GFP_DMA,(order))
+
+extern void FASTCALL(__free_pages(struct page *page, unsigned int order));
+extern void FASTCALL(free_pages(unsigned long addr, unsigned int order));
+extern void FASTCALL(free_hot_page(struct page *page));
+extern void FASTCALL(free_cold_page(struct page *page));
+
+#define __free_page(page) __free_pages((page), 0)
+#define free_page(addr) free_pages((addr),0)
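+
+/* Illustrative sketch only: the two namespaces side by side.  The
+ * struct page variant can also serve highmem (with __GFP_HIGHMEM),
+ * while __get_free_page() must return a kernel virtual address.
+ *
+ *	struct page *page = alloc_page(GFP_KERNEL);
+ *	...
+ *	__free_page(page);
+ *
+ *	unsigned long addr = __get_free_page(GFP_KERNEL);
+ *	...
+ *	free_page(addr);
+ */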
+
+void page_alloc_init(void);
+
+#endif /* __LINUX_GFP_H */
--- /dev/null
+/*
+ * include/linux/ghash.h -- generic hashing with fuzzy retrieval
+ *
+ * (C) 1997 Thomas Schoebel-Theuer
+ *
+ * The algorithms implemented here seem to be a completely new invention,
+ * and I'll publish the fundamentals in a paper.
+ */
+
+#ifndef _GHASH_H
+#define _GHASH_H
+/* HASHSIZE _must_ be a power of two!!! */
+
+
+#define DEF_HASH_FUZZY_STRUCTS(NAME,HASHSIZE,TYPE) \
+\
+struct NAME##_table {\
+ TYPE * hashtable[HASHSIZE];\
+ TYPE * sorted_list;\
+ int nr_entries;\
+};\
+\
+struct NAME##_ptrs {\
+ TYPE * next_hash;\
+ TYPE * prev_hash;\
+ TYPE * next_sorted;\
+ TYPE * prev_sorted;\
+};
+
+#define DEF_HASH_FUZZY(LINKAGE,NAME,HASHSIZE,TYPE,PTRS,KEYTYPE,KEY,KEYCMP,KEYEQ,HASHFN)\
+\
+LINKAGE void insert_##NAME##_hash(struct NAME##_table * tbl, TYPE * elem)\
+{\
+ int ix = HASHFN(elem->KEY);\
+ TYPE ** base = &tbl->hashtable[ix];\
+ TYPE * ptr = *base;\
+ TYPE * prev = NULL;\
+\
+ tbl->nr_entries++;\
+ while(ptr && KEYCMP(ptr->KEY, elem->KEY)) {\
+ base = &ptr->PTRS.next_hash;\
+ prev = ptr;\
+ ptr = *base;\
+ }\
+ elem->PTRS.next_hash = ptr;\
+ elem->PTRS.prev_hash = prev;\
+ if(ptr) {\
+ ptr->PTRS.prev_hash = elem;\
+ }\
+ *base = elem;\
+\
+ ptr = prev;\
+ if(!ptr) {\
+ ptr = tbl->sorted_list;\
+ prev = NULL;\
+ } else {\
+ prev = ptr->PTRS.prev_sorted;\
+ }\
+ while(ptr) {\
+ TYPE * next = ptr->PTRS.next_hash;\
+ if(next && KEYCMP(next->KEY, elem->KEY)) {\
+ prev = ptr;\
+ ptr = next;\
+ } else if(KEYCMP(ptr->KEY, elem->KEY)) {\
+ prev = ptr;\
+ ptr = ptr->PTRS.next_sorted;\
+ } else\
+ break;\
+ }\
+ elem->PTRS.next_sorted = ptr;\
+ elem->PTRS.prev_sorted = prev;\
+ if(ptr) {\
+ ptr->PTRS.prev_sorted = elem;\
+ }\
+ if(prev) {\
+ prev->PTRS.next_sorted = elem;\
+ } else {\
+ tbl->sorted_list = elem;\
+ }\
+}\
+\
+LINKAGE void remove_##NAME##_hash(struct NAME##_table * tbl, TYPE * elem)\
+{\
+ TYPE * next = elem->PTRS.next_hash;\
+ TYPE * prev = elem->PTRS.prev_hash;\
+\
+ tbl->nr_entries--;\
+ if(next)\
+ next->PTRS.prev_hash = prev;\
+ if(prev)\
+ prev->PTRS.next_hash = next;\
+ else {\
+ int ix = HASHFN(elem->KEY);\
+ tbl->hashtable[ix] = next;\
+ }\
+\
+ next = elem->PTRS.next_sorted;\
+ prev = elem->PTRS.prev_sorted;\
+ if(next)\
+ next->PTRS.prev_sorted = prev;\
+ if(prev)\
+ prev->PTRS.next_sorted = next;\
+ else\
+ tbl->sorted_list = next;\
+}\
+\
+LINKAGE TYPE * find_##NAME##_hash(struct NAME##_table * tbl, KEYTYPE pos)\
+{\
+	int ix = HASHFN(pos);\
+ TYPE * ptr = tbl->hashtable[ix];\
+ while(ptr && KEYCMP(ptr->KEY, pos))\
+ ptr = ptr->PTRS.next_hash;\
+ if(ptr && !KEYEQ(ptr->KEY, pos))\
+ ptr = NULL;\
+ return ptr;\
+}\
+\
+LINKAGE TYPE * find_##NAME##_hash_fuzzy(struct NAME##_table * tbl, KEYTYPE pos)\
+{\
+ int ix;\
+ int offset;\
+ TYPE * ptr;\
+ TYPE * next;\
+\
+ ptr = tbl->sorted_list;\
+ if(!ptr || KEYCMP(pos, ptr->KEY))\
+ return NULL;\
+ ix = HASHFN(pos);\
+ offset = HASHSIZE;\
+ do {\
+ offset >>= 1;\
+ next = tbl->hashtable[(ix+offset) & ((HASHSIZE)-1)];\
+ if(next && (KEYCMP(next->KEY, pos) || KEYEQ(next->KEY, pos))\
+ && KEYCMP(ptr->KEY, next->KEY))\
+ ptr = next;\
+ } while(offset);\
+\
+ for(;;) {\
+ next = ptr->PTRS.next_hash;\
+ if(next) {\
+ if(KEYCMP(next->KEY, pos)) {\
+ ptr = next;\
+ continue;\
+ }\
+ }\
+ next = ptr->PTRS.next_sorted;\
+ if(next && KEYCMP(next->KEY, pos)) {\
+ ptr = next;\
+ continue;\
+ }\
+ return ptr;\
+ }\
+ return NULL;\
+}
+
+/* LINKAGE - empty or "static", depending on whether you want the definitions to
+ * be public or not
+ * NAME - a string to stick in names to make this hash table type distinct from
+ * any others
+ * HASHSIZE - number of buckets
+ * TYPE - type of data contained in the buckets - must be a structure, one
+ * field is of type NAME_ptrs, another is the hash key
+ * PTRS - TYPE must contain a field of type NAME_ptrs, PTRS is the name of that
+ * field
+ * KEYTYPE - type of the key field within TYPE
+ * KEY - name of the key field within TYPE
+ * KEYCMP - pointer to function that compares KEYTYPEs to each other - the
+ * prototype is int KEYCMP(KEYTYPE, KEYTYPE), it returns zero for equal,
+ * non-zero for not equal
+ * HASHFN - the hash function - the prototype is int HASHFN(KEYTYPE),
+ * it returns a number in the range 0 ... HASHSIZE - 1
+ * Call DEF_HASH_STRUCTS, define your hash table as a NAME_table, then call
+ * DEF_HASH.
+ */
+
+#define DEF_HASH_STRUCTS(NAME,HASHSIZE,TYPE) \
+\
+struct NAME##_table {\
+ TYPE * hashtable[HASHSIZE];\
+ int nr_entries;\
+};\
+\
+struct NAME##_ptrs {\
+ TYPE * next_hash;\
+ TYPE * prev_hash;\
+};
+
+#define DEF_HASH(LINKAGE,NAME,TYPE,PTRS,KEYTYPE,KEY,KEYCMP,HASHFN)\
+\
+LINKAGE void insert_##NAME##_hash(struct NAME##_table * tbl, TYPE * elem)\
+{\
+ int ix = HASHFN(elem->KEY);\
+ TYPE ** base = &tbl->hashtable[ix];\
+ TYPE * ptr = *base;\
+ TYPE * prev = NULL;\
+\
+ tbl->nr_entries++;\
+ while(ptr && KEYCMP(ptr->KEY, elem->KEY)) {\
+ base = &ptr->PTRS.next_hash;\
+ prev = ptr;\
+ ptr = *base;\
+ }\
+ elem->PTRS.next_hash = ptr;\
+ elem->PTRS.prev_hash = prev;\
+ if(ptr) {\
+ ptr->PTRS.prev_hash = elem;\
+ }\
+ *base = elem;\
+}\
+\
+LINKAGE void remove_##NAME##_hash(struct NAME##_table * tbl, TYPE * elem)\
+{\
+ TYPE * next = elem->PTRS.next_hash;\
+ TYPE * prev = elem->PTRS.prev_hash;\
+\
+ tbl->nr_entries--;\
+ if(next)\
+ next->PTRS.prev_hash = prev;\
+ if(prev)\
+ prev->PTRS.next_hash = next;\
+ else {\
+ int ix = HASHFN(elem->KEY);\
+ tbl->hashtable[ix] = next;\
+ }\
+}\
+\
+LINKAGE TYPE * find_##NAME##_hash(struct NAME##_table * tbl, KEYTYPE pos)\
+{\
+ int ix = HASHFN(pos);\
+ TYPE * ptr = tbl->hashtable[ix];\
+ while(ptr && KEYCMP(ptr->KEY, pos))\
+ ptr = ptr->PTRS.next_hash;\
+ return ptr;\
+}
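+
+/* Illustrative sketch only (every name below is made up): a 16-bucket
+ * table of "struct demo" keyed by an int.  KEYCMP returns non-zero for
+ * "not equal", per the description above.
+ *
+ *	struct demo;
+ *	DEF_HASH_STRUCTS(demo, 16, struct demo)
+ *
+ *	struct demo {
+ *		struct demo_ptrs ptrs;
+ *		int key;
+ *	};
+ *
+ *	static int demo_cmp(int a, int b) { return a != b; }
+ *	static int demo_hashfn(int key) { return key & 15; }
+ *
+ *	DEF_HASH(static, demo, struct demo, ptrs, int, key, demo_cmp, demo_hashfn)
+ */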
+
+#endif
--- /dev/null
+#ifndef __HPET__
+#define __HPET__ 1
+
+/*
+ * Offsets into HPET Registers
+ */
+
+struct hpet {
+ u64 hpet_cap; /* capabilities */
+ u64 res0; /* reserved */
+ u64 hpet_config; /* configuration */
+ u64 res1; /* reserved */
+ u64 hpet_isr; /* interrupt status reg */
+ u64 res2[25]; /* reserved */
+ union { /* main counter */
+ u64 _hpet_mc64;
+ u32 _hpet_mc32;
+ unsigned long _hpet_mc;
+ } _u0;
+ u64 res3; /* reserved */
+ struct hpet_timer {
+ u64 hpet_config; /* configuration/cap */
+ union { /* timer compare register */
+ u64 _hpet_hc64;
+ u32 _hpet_hc32;
+ unsigned long _hpet_compare;
+ } _u1;
+ u64 hpet_fsb[2]; /* FSB route */
+ } hpet_timers[1];
+};
+
+#define hpet_mc _u0._hpet_mc
+#define hpet_compare _u1._hpet_compare
+
+#define HPET_MAX_TIMERS (32)
+
+/*
+ * HPET general capabilities register
+ */
+
+#define HPET_COUNTER_CLK_PERIOD_MASK (0xffffffff00000000ULL)
+#define HPET_COUNTER_CLK_PERIOD_SHIFT (32UL)
+#define HPET_VENDOR_ID_MASK (0x00000000ffff0000ULL)
+#define HPET_VENDOR_ID_SHIFT (16ULL)
+#define HPET_LEG_RT_CAP_MASK (0x8000)
+#define HPET_COUNTER_SIZE_MASK (0x2000)
+#define HPET_NUM_TIM_CAP_MASK (0x1f00)
+#define HPET_NUM_TIM_CAP_SHIFT (8ULL)
+
+/*
+ * HPET general configuration register
+ */
+
+#define HPET_LEG_RT_CNF_MASK (2UL)
+#define HPET_ENABLE_CNF_MASK (1UL)
+
+/*
+ * HPET interrupt status register
+ */
+
+#define HPET_ISR_CLEAR(HPET, TIMER) \
+ (HPET)->hpet_isr |= (1UL << TIMER)
+
+/*
+ * Timer configuration register
+ */
+
+#define Tn_INT_ROUTE_CAP_MASK (0xffffffff00000000ULL)
+#define Tn_INT_ROUTE_CAP_SHIFT (32UL)
+#define Tn_FSB_INT_DELCAP_MASK (0x8000UL)
+#define Tn_FSB_INT_DELCAP_SHIFT (15)
+#define Tn_FSB_EN_CNF_MASK (0x4000UL)
+#define Tn_FSB_EN_CNF_SHIFT (14)
+#define Tn_INT_ROUTE_CNF_MASK (0x3e00UL)
+#define Tn_INT_ROUTE_CNF_SHIFT (9)
+#define Tn_32MODE_CNF_MASK (0x0100UL)
+#define Tn_VAL_SET_CNF_MASK (0x0040UL)
+#define Tn_SIZE_CAP_MASK (0x0020UL)
+#define Tn_PER_INT_CAP_MASK (0x0010UL)
+#define Tn_TYPE_CNF_MASK (0x0008UL)
+#define Tn_INT_ENB_CNF_MASK (0x0004UL)
+#define Tn_INT_TYPE_CNF_MASK (0x0002UL)
+
+/*
+ * Timer FSB Interrupt Route Register
+ */
+
+#define Tn_FSB_INT_ADDR_MASK (0xffffffff00000000ULL)
+#define Tn_FSB_INT_ADDR_SHIFT (32UL)
+#define Tn_FSB_INT_VAL_MASK (0x00000000ffffffffULL)
+
+struct hpet_info {
+ unsigned long hi_ireqfreq; /* Hz */
+ unsigned long hi_flags; /* information */
+ unsigned short hi_hpet;
+ unsigned short hi_timer;
+};
+
+#define HPET_INFO_PERIODIC 0x0001 /* timer is periodic */
+
+#define HPET_IE_ON _IO('h', 0x01) /* interrupt on */
+#define HPET_IE_OFF _IO('h', 0x02) /* interrupt off */
+#define HPET_INFO _IOR('h', 0x03, struct hpet_info)
+#define HPET_EPI _IO('h', 0x04) /* enable periodic */
+#define HPET_DPI _IO('h', 0x05) /* disable periodic */
+#define HPET_IRQFREQ _IOW('h', 0x6, unsigned long) /* IRQFREQ usec */
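+
+/* Illustrative sketch only: user-space use of the ioctls above.  The
+ * device path and the 100/sec rate are assumptions, not part of this
+ * interface definition.
+ *
+ *	int fd = open("/dev/hpet", O_RDONLY);
+ *
+ *	ioctl(fd, HPET_IRQFREQ, 100);	(request 100 interrupts/sec)
+ *	ioctl(fd, HPET_EPI, 0);		(enable periodic mode)
+ *	ioctl(fd, HPET_IE_ON, 0);	(start delivering interrupts)
+ */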
+
+/*
+ * exported interfaces
+ */
+
+struct hpet_task {
+ void (*ht_func) (void *);
+ void *ht_data;
+ void *ht_opaque;
+};
+
+struct hpet_data {
+ unsigned long hd_address;
+ unsigned short hd_nirqs;
+ unsigned short hd_flags;
+ unsigned int hd_state; /* timer allocated */
+ unsigned int hd_irq[HPET_MAX_TIMERS];
+};
+
+#define HPET_DATA_PLATFORM 0x0001 /* platform call to hpet_alloc */
+
+int hpet_alloc(struct hpet_data *);
+int hpet_register(struct hpet_task *, int);
+int hpet_unregister(struct hpet_task *);
+int hpet_control(struct hpet_task *, unsigned int, unsigned long);
+
+#endif /* !__HPET__ */
--- /dev/null
+#ifndef _LINUX_MEMPOLICY_H
+#define _LINUX_MEMPOLICY_H 1
+
+#include <linux/errno.h>
+
+/*
+ * NUMA memory policies for Linux.
+ * Copyright 2003,2004 Andi Kleen SuSE Labs
+ */
+
+/* Policies */
+#define MPOL_DEFAULT 0
+#define MPOL_PREFERRED 1
+#define MPOL_BIND 2
+#define MPOL_INTERLEAVE 3
+
+#define MPOL_MAX MPOL_INTERLEAVE
+
+/* Flags for get_mem_policy */
+#define MPOL_F_NODE (1<<0) /* return next IL mode instead of node mask */
+#define MPOL_F_ADDR (1<<1) /* look up vma using address */
+
+/* Flags for mbind */
+#define MPOL_MF_STRICT (1<<0) /* Verify existing pages in the mapping */
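+
+/*
+ * Hedged userspace sketch (mbind is the syscall that consumes these
+ * constants; the node mask is illustrative): interleave a mapping
+ * across nodes 0 and 1, failing if existing pages violate the policy:
+ *
+ *   unsigned long nodes = (1UL << 0) | (1UL << 1);
+ *   mbind(addr, len, MPOL_INTERLEAVE, &nodes,
+ *         sizeof(nodes) * 8, MPOL_MF_STRICT);
+ */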
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <linux/mmzone.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/rbtree.h>
+#include <asm/semaphore.h>
+
+struct vm_area_struct;
+
+#ifdef CONFIG_NUMA
+
+/*
+ * Describe a memory policy.
+ *
+ * A mempolicy can be either associated with a process or with a VMA.
+ * For VMA related allocations the VMA policy is preferred, otherwise
+ * the process policy is used. Interrupts ignore the memory policy
+ * of the current process.
+ *
+ * Locking policy for interleave:
+ * In process context there is no locking because only the process accesses
+ * its own state. All vma manipulation is somewhat protected by a down_read on
+ * mmap_sem. For allocating in the interleave policy the page_table_lock
+ * must also be acquired to protect il_next.
+ *
+ * Freeing policy:
+ * When the policy is MPOL_BIND, v.zonelist is kmalloc'ed and must be kfree'd.
+ * All other policies don't have any external state. mpol_free() handles this.
+ *
+ * Copying policy objects:
+ * For MPOL_BIND the zonelist must always be duplicated. mpol_copy() does this.
+ */
+struct mempolicy {
+ atomic_t refcnt;
+ short policy; /* See MPOL_* above */
+ union {
+ struct zonelist *zonelist; /* bind */
+ short preferred_node; /* preferred */
+ DECLARE_BITMAP(nodes, MAX_NUMNODES); /* interleave */
+ /* undefined for default */
+ } v;
+};
+
+/* A NULL mempolicy pointer is a synonym for &default_policy. */
+extern struct mempolicy default_policy;
+
+/*
+ * Support for managing mempolicy data objects (clone, copy, destroy)
+ * The default fast path of a NULL MPOL_DEFAULT policy is always inlined.
+ */
+
+extern void __mpol_free(struct mempolicy *pol);
+static inline void mpol_free(struct mempolicy *pol)
+{
+ if (pol)
+ __mpol_free(pol);
+}
+
+extern struct mempolicy *__mpol_copy(struct mempolicy *pol);
+static inline struct mempolicy *mpol_copy(struct mempolicy *pol)
+{
+ if (pol)
+ pol = __mpol_copy(pol);
+ return pol;
+}
+
+#define vma_policy(vma) ((vma)->vm_policy)
+#define vma_set_policy(vma, pol) ((vma)->vm_policy = (pol))
+
+static inline void mpol_get(struct mempolicy *pol)
+{
+ if (pol)
+ atomic_inc(&pol->refcnt);
+}
+
+extern int __mpol_equal(struct mempolicy *a, struct mempolicy *b);
+static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b)
+{
+ if (a == b)
+ return 1;
+ return __mpol_equal(a, b);
+}
+#define vma_mpol_equal(a,b) mpol_equal(vma_policy(a), vma_policy(b))
+
+/* Could later add inheritance of the process policy here. */
+
+#define mpol_set_vma_default(vma) ((vma)->vm_policy = NULL)
+
+/*
+ * Hugetlb policy. i386 hugetlb so far works with node numbers
+ * instead of zone lists, so give it special interfaces for now.
+ */
+extern int mpol_first_node(struct vm_area_struct *vma, unsigned long addr);
+extern int mpol_node_valid(int nid, struct vm_area_struct *vma,
+ unsigned long addr);
+
+/*
+ * Tree of shared policies for a shared memory region.
+ * Maintain the policies in a pseudo mm that contains vmas. The vmas
+ * carry the policy. As a special twist the pseudo mm is indexed in pages, not
+ * bytes, so that we can work with shared memory segments bigger than
+ * an unsigned long can address in bytes.
+ */
+
+struct sp_node {
+ struct rb_node nd;
+ unsigned long start, end;
+ struct mempolicy *policy;
+};
+
+struct shared_policy {
+ struct rb_root root;
+ struct semaphore sem;
+};
+
+static inline void mpol_shared_policy_init(struct shared_policy *info)
+{
+ info->root = RB_ROOT;
+ init_MUTEX(&info->sem);
+}
+
+int mpol_set_shared_policy(struct shared_policy *info,
+ struct vm_area_struct *vma,
+ struct mempolicy *new);
+void mpol_free_shared_policy(struct shared_policy *p);
+struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
+ unsigned long idx);
+
+#else
+
+struct mempolicy {};
+
+static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b)
+{
+ return 1;
+}
+#define vma_mpol_equal(a,b) 1
+
+#define mpol_set_vma_default(vma) do {} while(0)
+
+static inline void mpol_free(struct mempolicy *p)
+{
+}
+
+static inline void mpol_get(struct mempolicy *pol)
+{
+}
+
+static inline struct mempolicy *mpol_copy(struct mempolicy *old)
+{
+ return NULL;
+}
+
+static inline int mpol_first_node(struct vm_area_struct *vma, unsigned long a)
+{
+ return numa_node_id();
+}
+
+static inline int
+mpol_node_valid(int nid, struct vm_area_struct *vma, unsigned long a)
+{
+ return 1;
+}
+
+struct shared_policy {};
+
+static inline int mpol_set_shared_policy(struct shared_policy *info,
+ struct vm_area_struct *vma,
+ struct mempolicy *new)
+{
+ return -EINVAL;
+}
+
+static inline void mpol_shared_policy_init(struct shared_policy *info)
+{
+}
+
+static inline void mpol_free_shared_policy(struct shared_policy *p)
+{
+}
+
+static inline struct mempolicy *
+mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+{
+ return NULL;
+}
+
+#define vma_policy(vma) NULL
+#define vma_set_policy(vma, pol) do {} while(0)
+
+#endif /* CONFIG_NUMA */
+#endif /* __KERNEL__ */
+
+#endif
--- /dev/null
+#ifndef _LINUX_MM_H
+#define _LINUX_MM_H
+
+#include <linux/sched.h>
+#include <linux/errno.h>
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <linux/gfp.h>
+#include <linux/list.h>
+#include <linux/mmzone.h>
+#include <linux/rbtree.h>
+#include <linux/prio_tree.h>
+#include <linux/fs.h>
+
+struct mempolicy;
+struct anon_vma;
+
+#ifndef CONFIG_DISCONTIGMEM /* Don't use mapnrs, do it properly */
+extern unsigned long max_mapnr;
+#endif
+
+extern unsigned long num_physpages;
+extern void * high_memory;
+extern int page_cluster;
+
+#include <asm/page.h>
+#include <asm/pgtable.h>
+#include <asm/processor.h>
+#include <asm/atomic.h>
+
+#ifndef MM_VM_SIZE
+#define MM_VM_SIZE(mm) TASK_SIZE
+#endif
+
+/*
+ * Linux kernel virtual memory manager primitives.
+ * The idea being to have a "virtual" mm in the same way
+ * we have a virtual fs - giving a cleaner interface to the
+ * mm details, and allowing different kinds of memory mappings
+ * (from shared memory to executable loading to arbitrary
+ * mmap() functions).
+ */
+
+/*
+ * This struct defines a VMM memory area. There is one of these
+ * per VM-area/task. A VM area is any part of the process virtual memory
+ * space that has a special rule for the page-fault handlers (i.e. a shared
+ * library, the executable area, etc).
+ */
+struct vm_area_struct {
+ struct mm_struct * vm_mm; /* The address space we belong to. */
+ unsigned long vm_start; /* Our start address within vm_mm. */
+ unsigned long vm_end; /* The first byte after our end address
+ within vm_mm. */
+
+ /* linked list of VM areas per task, sorted by address */
+ struct vm_area_struct *vm_next;
+
+ pgprot_t vm_page_prot; /* Access permissions of this VMA. */
+ unsigned long vm_flags; /* Flags, listed below. */
+
+ struct rb_node vm_rb;
+
+ /*
+ * For areas with an address space and backing store,
+ * linkage into the address_space->i_mmap prio tree, or
+ * linkage to the list of like vmas hanging off its node, or
+ * linkage of vma in the address_space->i_mmap_nonlinear list.
+ */
+ union {
+ struct {
+ struct list_head list;
+ void *parent; /* aligns with prio_tree_node parent */
+ struct vm_area_struct *head;
+ } vm_set;
+
+ struct prio_tree_node prio_tree_node;
+ } shared;
+
+ /*
+ * A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
+ * list, after a COW of one of the file pages. A MAP_SHARED vma
+ * can only be in the i_mmap tree. An anonymous MAP_PRIVATE, stack
+ * or brk vma (with NULL file) can only be in an anon_vma list.
+ */
+ struct list_head anon_vma_node; /* Serialized by anon_vma->lock */
+ struct anon_vma *anon_vma; /* Serialized by page_table_lock */
+
+ /* Function pointers to deal with this struct. */
+ struct vm_operations_struct * vm_ops;
+
+ /* Information about our backing store: */
+ unsigned long vm_pgoff; /* Offset (within vm_file) in PAGE_SIZE
+ units, *not* PAGE_CACHE_SIZE */
+ struct file * vm_file; /* File we map to (can be NULL). */
+ void * vm_private_data; /* was vm_pte (shared mem) */
+
+#ifdef CONFIG_NUMA
+ struct mempolicy *vm_policy; /* NUMA policy for the VMA */
+#endif
+};
+
+/*
+ * vm_flags..
+ */
+#define VM_READ 0x00000001 /* currently active flags */
+#define VM_WRITE 0x00000002
+#define VM_EXEC 0x00000004
+#define VM_SHARED 0x00000008
+
+#define VM_MAYREAD 0x00000010 /* limits for mprotect() etc */
+#define VM_MAYWRITE 0x00000020
+#define VM_MAYEXEC 0x00000040
+#define VM_MAYSHARE 0x00000080
+
+#define VM_GROWSDOWN 0x00000100 /* general info on the segment */
+#define VM_GROWSUP 0x00000200
+#define VM_SHM 0x00000400 /* shared memory area, don't swap out */
+#define VM_DENYWRITE 0x00000800 /* ETXTBSY on write attempts.. */
+
+#define VM_EXECUTABLE 0x00001000
+#define VM_LOCKED 0x00002000
+#define VM_IO 0x00004000 /* Memory mapped I/O or similar */
+
+ /* Used by sys_madvise() */
+#define VM_SEQ_READ 0x00008000 /* App will access data sequentially */
+#define VM_RAND_READ 0x00010000 /* App will not benefit from clustered reads */
+
+#define VM_DONTCOPY 0x00020000 /* Do not copy this vma on fork */
+#define VM_DONTEXPAND 0x00040000 /* Cannot expand with mremap() */
+#define VM_RESERVED 0x00080000 /* Don't unmap it from swap_out */
+#define VM_ACCOUNT 0x00100000 /* Is a VM accounted object */
+#define VM_HUGETLB 0x00400000 /* Huge TLB Page VM */
+#define VM_NONLINEAR 0x00800000 /* Is non-linear (remap_file_pages) */
+
+#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
+#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
+#endif
+
+#ifdef CONFIG_STACK_GROWSUP
+#define VM_STACK_FLAGS (VM_GROWSUP | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+#else
+#define VM_STACK_FLAGS (VM_GROWSDOWN | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+#endif
+
+#define VM_READHINTMASK (VM_SEQ_READ | VM_RAND_READ)
+#define VM_ClearReadHint(v) (v)->vm_flags &= ~VM_READHINTMASK
+#define VM_NormalReadHint(v) (!((v)->vm_flags & VM_READHINTMASK))
+#define VM_SequentialReadHint(v) ((v)->vm_flags & VM_SEQ_READ)
+#define VM_RandomReadHint(v) ((v)->vm_flags & VM_RAND_READ)
+
+/*
+ * mapping from the currently active vm_flags protection bits (the
+ * low four bits) to a page protection mask..
+ */
+extern pgprot_t protection_map[16];
+
+
+/*
+ * These are the virtual MM functions - opening of an area, closing and
+ * unmapping it (needed to keep files on disk up-to-date etc), pointer
+ * to the functions called when a no-page or a wp-page exception occurs.
+ */
+struct vm_operations_struct {
+ void (*open)(struct vm_area_struct * area);
+ void (*close)(struct vm_area_struct * area);
+ struct page * (*nopage)(struct vm_area_struct * area, unsigned long address, int *type);
+ int (*populate)(struct vm_area_struct * area, unsigned long address, unsigned long len, pgprot_t prot, unsigned long pgoff, int nonblock);
+#ifdef CONFIG_NUMA
+ int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
+ struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
+ unsigned long addr);
+#endif
+};
+
+struct mmu_gather;
+struct inode;
+
+#ifdef ARCH_HAS_ATOMIC_UNSIGNED
+typedef unsigned page_flags_t;
+#else
+typedef unsigned long page_flags_t;
+#endif
+
+/*
+ * Each physical page in the system has a struct page associated with
+ * it to keep track of whatever it is we are using the page for at the
+ * moment. Note that we have no way to track which tasks are using
+ * a page.
+ */
+struct page {
+ page_flags_t flags; /* Atomic flags, some possibly
+ * updated asynchronously */
+ atomic_t _count; /* Usage count, see below. */
+ unsigned int mapcount; /* Count of ptes mapped in mms,
+ * to show when page is mapped
+ * & limit reverse map searches,
+ * protected by PG_maplock.
+ */
+ unsigned long private; /* Mapping-private opaque data:
+ * usually used for buffer_heads
+ * if PagePrivate set; used for
+ * swp_entry_t if PageSwapCache
+ */
+ struct address_space *mapping; /* If PG_anon clear, points to
+ * inode address_space, or NULL.
+ * If page mapped as anonymous
+ * memory, PG_anon is set, and
+ * it points to anon_vma object.
+ */
+ pgoff_t index; /* Our offset within mapping. */
+ struct list_head lru; /* Pageout list, eg. active_list
+ * protected by zone->lru_lock !
+ */
+ /*
+ * On machines where all RAM is mapped into kernel address space,
+ * we can simply calculate the virtual address. On machines with
+ * highmem some memory is mapped into kernel virtual memory
+ * dynamically, so we need a place to store that address.
+ * Note that this field could be 16 bits on x86 ... ;)
+ *
+ * Architectures with slow multiplication can define
+ * WANT_PAGE_VIRTUAL in asm/page.h
+ */
+#if defined(WANT_PAGE_VIRTUAL)
+ void *virtual; /* Kernel virtual address (NULL if
+ not kmapped, ie. highmem) */
+#endif /* WANT_PAGE_VIRTUAL */
+};
+
+/*
+ * FIXME: take this include out, include page-flags.h in
+ * files which need it (119 of them)
+ */
+#include <linux/page-flags.h>
+
+/*
+ * Methods to modify the page usage count.
+ *
+ * What counts for a page usage:
+ * - cache mapping (page->mapping)
+ * - private data (page->private)
+ * - page mapped in a task's page tables, each mapping
+ * is counted separately
+ *
+ * Also, many kernel routines increase the page count before a critical
+ * routine so they can be sure the page doesn't go away from under them.
+ *
+ * Since 2.6.6 (approx), a free page has ->_count = -1. This is so that we
+ * can use atomic_add_negative(-1, page->_count) to detect when the page
+ * becomes free and so that we can also use atomic_inc_and_test to atomically
+ * detect when we just tried to grab a ref on a page which some other CPU has
+ * already deemed to be freeable.
+ *
+ * NO code should make assumptions about this internal detail! Use the provided
+ * macros which retain the old rules: page_count(page) == 0 is a free page.
+ */
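+
+/*
+ * Worked example of the bias: a page with exactly one user has
+ * ->_count == 0, so page_count() reports 1.  put_page_testzero()
+ * then takes ->_count to -1; atomic_add_negative() returns true and
+ * the caller knows the page just became free.
+ */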
+
+/*
+ * Drop a ref, return true if the logical refcount fell to zero (the page has
+ * no users)
+ */
+#define put_page_testzero(p) \
+ ({ \
+ BUG_ON(page_count(p) == 0); \
+ atomic_add_negative(-1, &(p)->_count); \
+ })
+
+/*
+ * Grab a ref, return true if the page previously had a logical refcount of
+ * zero. ie: returns true if we just grabbed an already-deemed-to-be-free page
+ */
+#define get_page_testone(p) atomic_inc_and_test(&(p)->_count)
+
+#define set_page_count(p,v) atomic_set(&(p)->_count, (v) - 1)
+#define __put_page(p) atomic_dec(&(p)->_count)
+
+extern void FASTCALL(__page_cache_release(struct page *));
+
+#ifdef CONFIG_HUGETLB_PAGE
+
+static inline int page_count(struct page *p)
+{
+ if (PageCompound(p))
+ p = (struct page *)p->private;
+ return atomic_read(&(p)->_count) + 1;
+}
+
+static inline void get_page(struct page *page)
+{
+ if (unlikely(PageCompound(page)))
+ page = (struct page *)page->private;
+ atomic_inc(&page->_count);
+}
+
+void put_page(struct page *page);
+
+#else /* CONFIG_HUGETLB_PAGE */
+
+#define page_count(p) (atomic_read(&(p)->_count) + 1)
+
+static inline void get_page(struct page *page)
+{
+ atomic_inc(&page->_count);
+}
+
+static inline void put_page(struct page *page)
+{
+ if (!PageReserved(page) && put_page_testzero(page))
+ __page_cache_release(page);
+}
+
+#endif /* CONFIG_HUGETLB_PAGE */
+
+/*
+ * Multiple processes may "see" the same page. E.g. for untouched
+ * mappings of /dev/null, all processes see the same page full of
+ * zeroes, and text pages of executables and shared libraries have
+ * only one copy in memory, at most, normally.
+ *
+ * For the non-reserved pages, page_count(page) denotes a reference count.
+ * page_count() == 0 means the page is free.
+ * page_count() == 1 means the page is used for exactly one purpose
+ * (e.g. a private data page of one process).
+ *
+ * A page may be used for kmalloc() or anyone else who does a
+ * __get_free_page(). In this case the page_count() is at least 1, and
+ * all other fields are unused but should be 0 or NULL. The
+ * management of this page is the responsibility of the one who uses
+ * it.
+ *
+ * The other pages (we may call them "process pages") are completely
+ * managed by the Linux memory manager: I/O, buffers, swapping etc.
+ * The following discussion applies only to them.
+ *
+ * A page may belong to an inode's memory mapping. In this case,
+ * page->mapping is the pointer to the inode, and page->index is the
+ * file offset of the page, in units of PAGE_CACHE_SIZE.
+ *
+ * A page contains an opaque `private' member, which belongs to the
+ * page's address_space. Usually, this is the address of a circular
+ * list of the page's disk buffers.
+ *
+ * For pages belonging to inodes, the page_count() is the number of
+ * attaches, plus 1 if `private' contains something, plus one for
+ * the page cache itself.
+ *
+ * All pages belonging to an inode are in these doubly linked lists:
+ * mapping->clean_pages, mapping->dirty_pages and mapping->locked_pages;
+ * using the page->list list_head. These fields are also used for
+ * freelist management (when page_count()==0).
+ *
+ * There is also a per-mapping radix tree mapping index to the page
+ * in memory if present. The tree is rooted at mapping->root.
+ *
+ * All process pages can do I/O:
+ * - inode pages may need to be read from disk,
+ * - inode pages which have been modified and are MAP_SHARED may need
+ * to be written to disk,
+ * - private pages which have been modified may need to be swapped out
+ * to swap space and (later) to be read back into memory.
+ */
+
+/*
+ * The zone field is never updated after free_area_init_core()
+ * sets it, so none of the operations on it need to be atomic.
+ * We'll have up to (MAX_NUMNODES * MAX_NR_ZONES) zones total,
+ * so we use (MAX_NODES_SHIFT + MAX_ZONES_SHIFT) here to get enough bits.
+ */
+#define NODEZONE_SHIFT (sizeof(page_flags_t)*8 - MAX_NODES_SHIFT - MAX_ZONES_SHIFT)
+#define NODEZONE(node, zone) (((node) << ZONES_SHIFT) | (zone))
+
+static inline unsigned long page_zonenum(struct page *page)
+{
+ return (page->flags >> NODEZONE_SHIFT) & (~(~0UL << ZONES_SHIFT));
+}
+static inline unsigned long page_to_nid(struct page *page)
+{
+ return (page->flags >> (NODEZONE_SHIFT + ZONES_SHIFT));
+}
+
+struct zone;
+extern struct zone *zone_table[];
+
+static inline struct zone *page_zone(struct page *page)
+{
+ return zone_table[page->flags >> NODEZONE_SHIFT];
+}
+
+static inline void set_page_zone(struct page *page, unsigned long nodezone_num)
+{
+ page->flags &= ~(~0UL << NODEZONE_SHIFT);
+ page->flags |= nodezone_num << NODEZONE_SHIFT;
+}
+
+#ifndef CONFIG_DISCONTIGMEM
+/* The array of struct pages - for discontigmem use pgdat->lmem_map */
+extern struct page *mem_map;
+#endif
+
+static inline void *lowmem_page_address(struct page *page)
+{
+ return __va(page_to_pfn(page) << PAGE_SHIFT);
+}
+
+#if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
+#define HASHED_PAGE_VIRTUAL
+#endif
+
+#if defined(WANT_PAGE_VIRTUAL)
+#define page_address(page) ((page)->virtual)
+#define set_page_address(page, address) \
+ do { \
+ (page)->virtual = (address); \
+ } while(0)
+#define page_address_init() do { } while(0)
+#endif
+
+#if defined(HASHED_PAGE_VIRTUAL)
+void *page_address(struct page *page);
+void set_page_address(struct page *page, void *virtual);
+void page_address_init(void);
+#endif
+
+#if !defined(HASHED_PAGE_VIRTUAL) && !defined(WANT_PAGE_VIRTUAL)
+#define page_address(page) lowmem_page_address(page)
+#define set_page_address(page, address) do { } while(0)
+#define page_address_init() do { } while(0)
+#endif
+
+/*
+ * On an anonymous page mapped into a user virtual memory area,
+ * page->mapping points to its anon_vma, not to a struct address_space.
+ *
+ * Please note that, confusingly, "page_mapping" refers to the inode
+ * address_space which maps the page from disk; whereas "page_mapped"
+ * refers to user virtual address space into which the page is mapped.
+ */
+extern struct address_space swapper_space;
+static inline struct address_space *page_mapping(struct page *page)
+{
+ struct address_space *mapping = NULL;
+
+ if (unlikely(PageSwapCache(page)))
+ mapping = &swapper_space;
+ else if (likely(!PageAnon(page)))
+ mapping = page->mapping;
+ return mapping;
+}
+
+/*
+ * Return the pagecache index of the passed page. Regular pagecache pages
+ * use ->index whereas swapcache pages use ->private
+ */
+static inline pgoff_t page_index(struct page *page)
+{
+ if (unlikely(PageSwapCache(page)))
+ return page->private;
+ return page->index;
+}
+
+/*
+ * Return true if this page is mapped into pagetables.
+ */
+static inline int page_mapped(struct page *page)
+{
+ return page->mapcount != 0;
+}
+
+/*
+ * Error return values for the *_nopage functions
+ */
+#define NOPAGE_SIGBUS (NULL)
+#define NOPAGE_OOM ((struct page *) (-1))
+
+/*
+ * Different kinds of faults, as returned by handle_mm_fault().
+ * Used to decide whether a process gets delivered SIGBUS or
+ * just gets major/minor fault counters bumped up.
+ */
+#define VM_FAULT_OOM (-1)
+#define VM_FAULT_SIGBUS 0
+#define VM_FAULT_MINOR 1
+#define VM_FAULT_MAJOR 2
+
+#define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK)
+
+extern void show_free_areas(void);
+
+struct page *shmem_nopage(struct vm_area_struct * vma,
+ unsigned long address, int *type);
+int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *new);
+struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
+ unsigned long addr);
+struct file *shmem_file_setup(char * name, loff_t size, unsigned long flags);
+void shmem_lock(struct file * file, int lock);
+int shmem_zero_setup(struct vm_area_struct *);
+
+/*
+ * Parameter block passed down to zap_pte_range in exceptional cases.
+ */
+struct zap_details {
+ struct vm_area_struct *nonlinear_vma; /* Check page->index if set */
+ struct address_space *check_mapping; /* Check page->mapping if set */
+ pgoff_t first_index; /* Lowest page->index to unmap */
+ pgoff_t last_index; /* Highest page->index to unmap */
+ int atomic; /* May not schedule() */
+};
+
+void zap_page_range(struct vm_area_struct *vma, unsigned long address,
+ unsigned long size, struct zap_details *);
+int unmap_vmas(struct mmu_gather **tlbp, struct mm_struct *mm,
+ struct vm_area_struct *start_vma, unsigned long start_addr,
+ unsigned long end_addr, unsigned long *nr_accounted,
+ struct zap_details *);
+void clear_page_tables(struct mmu_gather *tlb, unsigned long first, int nr);
+int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
+ struct vm_area_struct *vma);
+int zeromap_page_range(struct vm_area_struct *vma, unsigned long from,
+ unsigned long size, pgprot_t prot);
+void unmap_mapping_range(struct address_space *mapping,
+ loff_t const holebegin, loff_t const holelen, int even_cows);
+
+static inline void unmap_shared_mapping_range(struct address_space *mapping,
+ loff_t const holebegin, loff_t const holelen)
+{
+ unmap_mapping_range(mapping, holebegin, holelen, 0);
+}
+
+extern int vmtruncate(struct inode * inode, loff_t offset);
+extern pmd_t *FASTCALL(__pmd_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address));
+extern pte_t *FASTCALL(pte_alloc_kernel(struct mm_struct *mm, pmd_t *pmd, unsigned long address));
+extern pte_t *FASTCALL(pte_alloc_map(struct mm_struct *mm, pmd_t *pmd, unsigned long address));
+extern int install_page(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, struct page *page, pgprot_t prot);
+extern int install_file_pte(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long addr, unsigned long pgoff, pgprot_t prot);
+extern int handle_mm_fault(struct mm_struct *mm, struct vm_area_struct *vma, unsigned long address, int write_access);
+extern int make_pages_present(unsigned long addr, unsigned long end);
+extern int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, int len, int write);
+void install_arg_page(struct vm_area_struct *, struct page *, unsigned long);
+
+int get_user_pages(struct task_struct *tsk, struct mm_struct *mm, unsigned long start,
+ int len, int write, int force, struct page **pages, struct vm_area_struct **vmas);
+
+int __set_page_dirty_buffers(struct page *page);
+int __set_page_dirty_nobuffers(struct page *page);
+int redirty_page_for_writepage(struct writeback_control *wbc,
+ struct page *page);
+int FASTCALL(set_page_dirty(struct page *page));
+int set_page_dirty_lock(struct page *page);
+int clear_page_dirty_for_io(struct page *page);
+
+/*
+ * Prototype to add a shrinker callback for ageable caches.
+ *
+ * These functions are passed a count `nr_to_scan' and a gfpmask. They should
+ * scan `nr_to_scan' objects, attempting to free them.
+ *
+ * The callback must return the number of objects which remain in the cache.
+ *
+ * The callback will be passed nr_to_scan == 0 when the VM is querying the
+ * cache size, so a fastpath for that case is appropriate.
+ */
+typedef int (*shrinker_t)(int nr_to_scan, unsigned int gfp_mask);
+
+/*
+ * Add an aging callback.  The int argument is the number of 'seeks' it
+ * takes to recreate one of the objects that these functions age.
+ */
+
+#define DEFAULT_SEEKS 2
+struct shrinker;
+extern struct shrinker *set_shrinker(int, shrinker_t);
+extern void remove_shrinker(struct shrinker *shrinker);
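+
+/*
+ * Hedged sketch of a shrinker registration (the my_cache_* helpers
+ * are illustrative names, not kernel interfaces):
+ *
+ *   static int my_cache_shrink(int nr_to_scan, unsigned int gfp_mask)
+ *   {
+ *           if (nr_to_scan)
+ *                   my_cache_free_objects(nr_to_scan);
+ *           return my_cache_object_count();
+ *   }
+ *
+ *   struct shrinker *s = set_shrinker(DEFAULT_SEEKS, my_cache_shrink);
+ *   ...
+ *   remove_shrinker(s);
+ */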
+
+/*
+ * On a two-level page table, this ends up being trivial. Thus the
+ * inlining and the symmetry break with pte_alloc_map() that does all
+ * of this out-of-line.
+ */
+static inline pmd_t *pmd_alloc(struct mm_struct *mm, pgd_t *pgd, unsigned long address)
+{
+ if (pgd_none(*pgd))
+ return __pmd_alloc(mm, pgd, address);
+ return pmd_offset(pgd, address);
+}
+
+extern void free_area_init(unsigned long * zones_size);
+extern void free_area_init_node(int nid, pg_data_t *pgdat, struct page *pmap,
+ unsigned long * zones_size, unsigned long zone_start_pfn,
+ unsigned long *zholes_size);
+extern void memmap_init_zone(struct page *, unsigned long, int,
+ unsigned long, unsigned long);
+extern void mem_init(void);
+extern void show_mem(void);
+extern void si_meminfo(struct sysinfo * val);
+extern void si_meminfo_node(struct sysinfo *val, int nid);
+
+static inline void vma_prio_tree_init(struct vm_area_struct *vma)
+{
+ vma->shared.vm_set.list.next = NULL;
+ vma->shared.vm_set.list.prev = NULL;
+ vma->shared.vm_set.parent = NULL;
+ vma->shared.vm_set.head = NULL;
+}
+
+/* prio_tree.c */
+void vma_prio_tree_add(struct vm_area_struct *, struct vm_area_struct *old);
+void vma_prio_tree_insert(struct vm_area_struct *, struct prio_tree_root *);
+void vma_prio_tree_remove(struct vm_area_struct *, struct prio_tree_root *);
+struct vm_area_struct *vma_prio_tree_next(
+ struct vm_area_struct *, struct prio_tree_root *,
+ struct prio_tree_iter *, pgoff_t begin, pgoff_t end);
+
+/* mmap.c */
+extern void vma_adjust(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert);
+extern struct vm_area_struct *vma_merge(struct mm_struct *,
+ struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+ unsigned long vm_flags, struct anon_vma *, struct file *, pgoff_t,
+ struct mempolicy *);
+extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
+extern int split_vma(struct mm_struct *,
+ struct vm_area_struct *, unsigned long addr, int new_below);
+extern void insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
+extern void __vma_link_rb(struct mm_struct *, struct vm_area_struct *,
+ struct rb_node **, struct rb_node *);
+extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
+ unsigned long addr, unsigned long len, pgoff_t pgoff);
+extern void exit_mmap(struct mm_struct *);
+
+extern unsigned long get_unmapped_area(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+
+extern unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flag, unsigned long pgoff);
+
+static inline unsigned long do_mmap(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flag, unsigned long offset)
+{
+ unsigned long ret = -EINVAL;
+ if ((offset + PAGE_ALIGN(len)) < offset)
+ goto out;
+ if (!(offset & ~PAGE_MASK))
+ ret = do_mmap_pgoff(file, addr, len, prot, flag, offset >> PAGE_SHIFT);
+out:
+ return ret;
+}
+
+extern int do_munmap(struct mm_struct *, unsigned long, size_t);
+
+extern unsigned long do_brk(unsigned long, unsigned long);
+
+/* filemap.c */
+extern unsigned long page_unuse(struct page *);
+extern void truncate_inode_pages(struct address_space *, loff_t);
+
+/* generic vm_area_ops exported for stackable file systems */
+struct page *filemap_nopage(struct vm_area_struct *, unsigned long, int *);
+
+/* mm/page-writeback.c */
+int write_one_page(struct page *page, int wait);
+
+/* readahead.c */
+#define VM_MAX_READAHEAD 128 /* kbytes */
+#define VM_MIN_READAHEAD 16 /* kbytes (includes current page) */
+
+int do_page_cache_readahead(struct address_space *mapping, struct file *filp,
+ unsigned long offset, unsigned long nr_to_read);
+int force_page_cache_readahead(struct address_space *mapping, struct file *filp,
+ unsigned long offset, unsigned long nr_to_read);
+void page_cache_readahead(struct address_space *mapping,
+ struct file_ra_state *ra,
+ struct file *filp,
+ unsigned long offset);
+void handle_ra_miss(struct address_space *mapping,
+ struct file_ra_state *ra, pgoff_t offset);
+unsigned long max_sane_readahead(unsigned long nr);
+
+/* Do stack extension */
+extern int expand_stack(struct vm_area_struct * vma, unsigned long address);
+
+/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
+extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
+extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr,
+ struct vm_area_struct **pprev);
+
+/* Look up the first VMA which intersects the interval start_addr..end_addr-1,
+ NULL if none. Assume start_addr < end_addr. */
+static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
+{
+ struct vm_area_struct * vma = find_vma(mm,start_addr);
+
+ if (vma && end_addr <= vma->vm_start)
+ vma = NULL;
+ return vma;
+}
+
+static inline unsigned long vma_pages(struct vm_area_struct *vma)
+{
+ return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+}
+
+extern struct vm_area_struct *find_extend_vma(struct mm_struct *mm, unsigned long addr);
+
+extern unsigned int nr_used_zone_pages(void);
+
+extern struct page * vmalloc_to_page(void *addr);
+extern struct page * follow_page(struct mm_struct *mm, unsigned long address,
+ int write);
+extern int remap_page_range(struct vm_area_struct *vma, unsigned long from,
+ unsigned long to, unsigned long size, pgprot_t prot);
+
+#ifndef CONFIG_DEBUG_PAGEALLOC
+static inline void
+kernel_map_pages(struct page *page, int numpages, int enable)
+{
+}
+#endif
+
+#ifndef CONFIG_ARCH_GATE_AREA
+extern struct vm_area_struct *get_gate_vma(struct task_struct *tsk);
+int in_gate_area(struct task_struct *task, unsigned long addr);
+#endif
+
+#endif /* __KERNEL__ */
+#endif /* _LINUX_MM_H */
--- /dev/null
+/*
+ * For boards with physically mapped flash and using
+ * drivers/mtd/maps/physmap.c mapping driver.
+ *
+ * $Id: physmap.h,v 1.2 2004/07/14 17:48:46 dwmw2 Exp $
+ *
+ * Copyright (C) 2003 MontaVista Software Inc.
+ * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+
+#ifndef __LINUX_MTD_PHYSMAP__
+#define __LINUX_MTD_PHYSMAP__
+
+#include <linux/config.h>
+
+#if defined(CONFIG_MTD_PHYSMAP)
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+/*
+ * The map_info for physmap.  Boards can override size, buswidth, phys,
+ * (*set_vpp)(), etc. in their initial setup routine.
+ */
+extern struct map_info physmap_map;
+
+/*
+ * Boards need to specify the exact mapping during their setup.
+ */
+static inline void physmap_configure(unsigned long addr, unsigned long size, int buswidth, void (*set_vpp)(struct map_info *, int) )
+{
+ physmap_map.phys = addr;
+ physmap_map.size = size;
+ physmap_map.buswidth = buswidth;
+ physmap_map.set_vpp = set_vpp;
+}
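+
+/*
+ * Hedged example of a board setup call (base address, size and bus
+ * width are illustrative): 32MB of NOR flash on a 16-bit bus with no
+ * VPP control:
+ *
+ *   physmap_configure(0x50000000, 0x02000000, 2, NULL);
+ */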
+
+#if defined(CONFIG_MTD_PARTITIONS)
+
+/*
+ * Machines that wish to partition their flash may want to call this function in
+ * their setup routine.
+ *
+ * physmap_set_partitions(mypartitions, num_parts);
+ *
+ * Note that one can always override these hard-coded partitions with
+ * command-line partitions (you need to enable CONFIG_MTD_CMDLINE_PARTS).
+ */
+void physmap_set_partitions(struct mtd_partition *parts, int num_parts);
+
+#endif /* defined(CONFIG_MTD_PARTITIONS) */
+#endif /* defined(CONFIG_MTD_PHYSMAP) */
+
+#endif /* __LINUX_MTD_PHYSMAP__ */
+
--- /dev/null
+#ifndef _IPT_ADDRTYPE_H
+#define _IPT_ADDRTYPE_H
+
+struct ipt_addrtype_info {
+ u_int16_t source; /* source-type mask */
+ u_int16_t dest; /* dest-type mask */
+ u_int32_t invert_source;
+ u_int32_t invert_dest;
+};
+
+#endif
--- /dev/null
+#ifndef _IPT_REALM_H
+#define _IPT_REALM_H
+
+struct ipt_realm_info {
+ u_int32_t id;
+ u_int32_t mask;
+ u_int8_t invert;
+};
+
+#endif /* _IPT_REALM_H */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __PROC_MM_H
+#define __PROC_MM_H
+
+#include "linux/sched.h"
+
+#define MM_MMAP 54
+#define MM_MUNMAP 55
+#define MM_MPROTECT 56
+#define MM_COPY_SEGMENTS 57
+
+struct mm_mmap {
+ unsigned long addr;
+ unsigned long len;
+ unsigned long prot;
+ unsigned long flags;
+ unsigned long fd;
+ unsigned long offset;
+};
+
+struct mm_munmap {
+ unsigned long addr;
+ unsigned long len;
+};
+
+struct mm_mprotect {
+ unsigned long addr;
+ unsigned long len;
+ unsigned int prot;
+};
+
+struct proc_mm_op {
+ int op;
+ union {
+ struct mm_mmap mmap;
+ struct mm_munmap munmap;
+ struct mm_mprotect mprotect;
+ int copy_segments;
+ } u;
+};
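+
+/*
+ * Hedged usage sketch (the /proc/mm node follows the UML skas
+ * convention; all values are illustrative): map an anonymous page
+ * into the address space behind fd by writing one op to it.
+ *
+ *   struct proc_mm_op op;
+ *
+ *   op.op = MM_MMAP;
+ *   op.u.mmap.addr = 0x40000000;
+ *   op.u.mmap.len = 4096;
+ *   op.u.mmap.prot = PROT_READ | PROT_WRITE;
+ *   op.u.mmap.flags = MAP_PRIVATE | MAP_ANONYMOUS;
+ *   op.u.mmap.fd = -1;
+ *   op.u.mmap.offset = 0;
+ *   write(fd, &op, sizeof(op));
+ */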
+
+extern struct mm_struct *proc_mm_get_mm(int fd);
+
+#endif
--- /dev/null
+#include <linux/init.h>
+#include <linux/posix_acl.h>
+#include <linux/xattr_acl.h>
+
+#define REISERFS_ACL_VERSION 0x0001
+
+typedef struct {
+ __u16 e_tag;
+ __u16 e_perm;
+ __u32 e_id;
+} reiserfs_acl_entry;
+
+typedef struct {
+ __u16 e_tag;
+ __u16 e_perm;
+} reiserfs_acl_entry_short;
+
+typedef struct {
+ __u32 a_version;
+} reiserfs_acl_header;
+
+static inline size_t reiserfs_acl_size(int count)
+{
+ if (count <= 4) {
+ return sizeof(reiserfs_acl_header) +
+ count * sizeof(reiserfs_acl_entry_short);
+ } else {
+ return sizeof(reiserfs_acl_header) +
+ 4 * sizeof(reiserfs_acl_entry_short) +
+ (count - 4) * sizeof(reiserfs_acl_entry);
+ }
+}
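+
+/*
+ * Worked example: with a 4-byte header, 4-byte short entries and
+ * 8-byte full entries, an ACL of 6 entries occupies
+ * 4 + 4*4 + 2*8 = 36 bytes, and reiserfs_acl_count(36) recovers 6.
+ */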
+
+static inline int reiserfs_acl_count(size_t size)
+{
+ ssize_t s;
+ size -= sizeof(reiserfs_acl_header);
+ s = size - 4 * sizeof(reiserfs_acl_entry_short);
+ if (s < 0) {
+ if (size % sizeof(reiserfs_acl_entry_short))
+ return -1;
+ return size / sizeof(reiserfs_acl_entry_short);
+ } else {
+ if (s % sizeof(reiserfs_acl_entry))
+ return -1;
+ return s / sizeof(reiserfs_acl_entry) + 4;
+ }
+}
+
+
+#ifdef CONFIG_REISERFS_FS_POSIX_ACL
+struct posix_acl * reiserfs_get_acl(struct inode *inode, int type);
+int reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl);
+int reiserfs_acl_chmod (struct inode *inode);
+int reiserfs_inherit_default_acl (struct inode *dir, struct dentry *dentry, struct inode *inode);
+int reiserfs_cache_default_acl (struct inode *dir);
+extern int reiserfs_xattr_posix_acl_init (void) __init;
+extern int reiserfs_xattr_posix_acl_exit (void);
+extern struct reiserfs_xattr_handler posix_acl_default_handler;
+extern struct reiserfs_xattr_handler posix_acl_access_handler;
+#else
+
+#define reiserfs_set_acl NULL
+#define reiserfs_get_acl NULL
+#define reiserfs_cache_default_acl(inode) 0
+
+static inline int
+reiserfs_xattr_posix_acl_init (void)
+{
+ return 0;
+}
+
+static inline int
+reiserfs_xattr_posix_acl_exit (void)
+{
+ return 0;
+}
+
+static inline int
+reiserfs_acl_chmod (struct inode *inode)
+{
+ return 0;
+}
+
+static inline int
+reiserfs_inherit_default_acl (struct inode *dir, struct dentry *dentry, struct inode *inode)
+{
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+ File: linux/reiserfs_xattr.h
+*/
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/xattr.h>
+
+/* Magic value in header */
+#define REISERFS_XATTR_MAGIC 0x52465841 /* "RFXA" */
+
+struct reiserfs_xattr_header {
+ __u32 h_magic; /* magic number for identification */
+ __u32 h_hash; /* hash of the value */
+};
+
+#ifdef __KERNEL__
+
+struct reiserfs_xattr_handler {
+ char *prefix;
+ int (*init)(void);
+ void (*exit)(void);
+ int (*get)(struct inode *inode, const char *name, void *buffer,
+ size_t size);
+ int (*set)(struct inode *inode, const char *name, const void *buffer,
+ size_t size, int flags);
+ int (*del)(struct inode *inode, const char *name);
+ int (*list)(struct inode *inode, const char *name, int namelen, char *out);
+ struct list_head handlers;
+};
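+
+/*
+ * Hedged sketch of a handler definition (the prefix and functions
+ * are illustrative; the real instances are declared further down):
+ *
+ *   struct reiserfs_xattr_handler example_handler = {
+ *           .prefix = "example.",
+ *           .get    = example_get,
+ *           .set    = example_set,
+ *           .del    = example_del,
+ *           .list   = example_list,
+ *   };
+ */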
+
+
+#ifdef CONFIG_REISERFS_FS_XATTR
+#define is_reiserfs_priv_object(inode) (REISERFS_I(inode)->i_flags & i_priv_object)
+#define has_xattr_dir(inode) (REISERFS_I(inode)->i_flags & i_has_xattr_dir)
+ssize_t reiserfs_getxattr (struct dentry *dentry, const char *name,
+ void *buffer, size_t size);
+int reiserfs_setxattr (struct dentry *dentry, const char *name,
+ const void *value, size_t size, int flags);
+ssize_t reiserfs_listxattr (struct dentry *dentry, char *buffer, size_t size);
+int reiserfs_removexattr (struct dentry *dentry, const char *name);
+int reiserfs_delete_xattrs (struct inode *inode);
+int reiserfs_chown_xattrs (struct inode *inode, struct iattr *attrs);
+int reiserfs_xattr_init (struct super_block *sb, int mount_flags);
+int reiserfs_permission (struct inode *inode, int mask, struct nameidata *nd);
+int reiserfs_permission_locked (struct inode *inode, int mask, struct nameidata *nd);
+
+int reiserfs_xattr_del (struct inode *, const char *);
+int reiserfs_xattr_get (const struct inode *, const char *, void *, size_t);
+int reiserfs_xattr_set (struct inode *, const char *, const void *,
+ size_t, int);
+
+extern struct reiserfs_xattr_handler user_handler;
+extern struct reiserfs_xattr_handler trusted_handler;
+#ifdef CONFIG_REISERFS_FS_SECURITY
+extern struct reiserfs_xattr_handler security_handler;
+#endif
+
+int reiserfs_xattr_register_handlers (void) __init;
+void reiserfs_xattr_unregister_handlers (void);
+
+static inline void
+reiserfs_write_lock_xattrs(struct super_block *sb)
+{
+ down_write (&REISERFS_XATTR_DIR_SEM(sb));
+}
+static inline void
+reiserfs_write_unlock_xattrs(struct super_block *sb)
+{
+ up_write (&REISERFS_XATTR_DIR_SEM(sb));
+}
+static inline void
+reiserfs_read_lock_xattrs(struct super_block *sb)
+{
+ down_read (&REISERFS_XATTR_DIR_SEM(sb));
+}
+
+static inline void
+reiserfs_read_unlock_xattrs(struct super_block *sb)
+{
+ up_read (&REISERFS_XATTR_DIR_SEM(sb));
+}
+
+static inline void
+reiserfs_write_lock_xattr_i(struct inode *inode)
+{
+ down_write (&REISERFS_I(inode)->xattr_sem);
+}
+static inline void
+reiserfs_write_unlock_xattr_i(struct inode *inode)
+{
+ up_write (&REISERFS_I(inode)->xattr_sem);
+}
+static inline void
+reiserfs_read_lock_xattr_i(struct inode *inode)
+{
+ down_read (&REISERFS_I(inode)->xattr_sem);
+}
+
+static inline void
+reiserfs_read_unlock_xattr_i(struct inode *inode)
+{
+ up_read (&REISERFS_I(inode)->xattr_sem);
+}
+
+#else
+
+#define is_reiserfs_priv_object(inode) 0
+#define reiserfs_getxattr NULL
+#define reiserfs_setxattr NULL
+#define reiserfs_listxattr NULL
+#define reiserfs_removexattr NULL
+#define reiserfs_write_lock_xattrs(sb)
+#define reiserfs_write_unlock_xattrs(sb)
+#define reiserfs_read_lock_xattrs(sb)
+#define reiserfs_read_unlock_xattrs(sb)
+
+#define reiserfs_permission NULL
+
+#define reiserfs_xattr_register_handlers() 0
+#define reiserfs_xattr_unregister_handlers()
+
+static inline int reiserfs_delete_xattrs (struct inode *inode) { return 0; }
+static inline int reiserfs_chown_xattrs (struct inode *inode, struct iattr *attrs) { return 0; }
+static inline int reiserfs_xattr_init (struct super_block *sb, int mount_flags)
+{
+ sb->s_flags = (sb->s_flags & ~MS_POSIXACL); /* to be sure */
+ return 0;
+}
+#endif
+
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * Definitions for MIBs
+ *
+ * Author: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
+ */
+
+#ifndef _LINUX_SNMP_H
+#define _LINUX_SNMP_H
+
+/* ipstats mib definitions */
+/*
+ * RFC 1213: MIB-II
+ * RFC 2011 (updates 1213): SNMPv2-MIB-IP
+ * RFC 2863: Interfaces Group MIB
+ * RFC 2465: IPv6 MIB: General Group
+ * draft-ietf-ipv6-rfc2011-update-10.txt: MIB for IP: IP Statistics Tables
+ */
+enum
+{
+ IPSTATS_MIB_NUM = 0,
+ IPSTATS_MIB_INRECEIVES, /* InReceives */
+ IPSTATS_MIB_INHDRERRORS, /* InHdrErrors */
+ IPSTATS_MIB_INTOOBIGERRORS, /* InTooBigErrors */
+ IPSTATS_MIB_INNOROUTES, /* InNoRoutes */
+ IPSTATS_MIB_INADDRERRORS, /* InAddrErrors */
+ IPSTATS_MIB_INUNKNOWNPROTOS, /* InUnknownProtos */
+ IPSTATS_MIB_INTRUNCATEDPKTS, /* InTruncatedPkts */
+ IPSTATS_MIB_INDISCARDS, /* InDiscards */
+ IPSTATS_MIB_INDELIVERS, /* InDelivers */
+ IPSTATS_MIB_OUTFORWDATAGRAMS, /* OutForwDatagrams */
+ IPSTATS_MIB_OUTREQUESTS, /* OutRequests */
+ IPSTATS_MIB_OUTDISCARDS, /* OutDiscards */
+ IPSTATS_MIB_OUTNOROUTES, /* OutNoRoutes */
+ IPSTATS_MIB_REASMTIMEOUT, /* ReasmTimeout */
+ IPSTATS_MIB_REASMREQDS, /* ReasmReqds */
+ IPSTATS_MIB_REASMOKS, /* ReasmOKs */
+ IPSTATS_MIB_REASMFAILS, /* ReasmFails */
+ IPSTATS_MIB_FRAGOKS, /* FragOKs */
+ IPSTATS_MIB_FRAGFAILS, /* FragFails */
+ IPSTATS_MIB_FRAGCREATES, /* FragCreates */
+ IPSTATS_MIB_INMCASTPKTS, /* InMcastPkts */
+ IPSTATS_MIB_OUTMCASTPKTS, /* OutMcastPkts */
+ __IPSTATS_MIB_MAX
+};
+
+/* icmp mib definitions */
+/*
+ * RFC 1213: MIB-II ICMP Group
+ * RFC 2011 (updates 1213): SNMPv2 MIB for IP: ICMP group
+ */
+enum
+{
+ ICMP_MIB_NUM = 0,
+ ICMP_MIB_INMSGS, /* InMsgs */
+ ICMP_MIB_INERRORS, /* InErrors */
+ ICMP_MIB_INDESTUNREACHS, /* InDestUnreachs */
+ ICMP_MIB_INTIMEEXCDS, /* InTimeExcds */
+ ICMP_MIB_INPARMPROBS, /* InParmProbs */
+ ICMP_MIB_INSRCQUENCHS, /* InSrcQuenchs */
+ ICMP_MIB_INREDIRECTS, /* InRedirects */
+ ICMP_MIB_INECHOS, /* InEchos */
+ ICMP_MIB_INECHOREPS, /* InEchoReps */
+ ICMP_MIB_INTIMESTAMPS, /* InTimestamps */
+ ICMP_MIB_INTIMESTAMPREPS, /* InTimestampReps */
+ ICMP_MIB_INADDRMASKS, /* InAddrMasks */
+ ICMP_MIB_INADDRMASKREPS, /* InAddrMaskReps */
+ ICMP_MIB_OUTMSGS, /* OutMsgs */
+ ICMP_MIB_OUTERRORS, /* OutErrors */
+ ICMP_MIB_OUTDESTUNREACHS, /* OutDestUnreachs */
+ ICMP_MIB_OUTTIMEEXCDS, /* OutTimeExcds */
+ ICMP_MIB_OUTPARMPROBS, /* OutParmProbs */
+ ICMP_MIB_OUTSRCQUENCHS, /* OutSrcQuenchs */
+ ICMP_MIB_OUTREDIRECTS, /* OutRedirects */
+ ICMP_MIB_OUTECHOS, /* OutEchos */
+ ICMP_MIB_OUTECHOREPS, /* OutEchoReps */
+ ICMP_MIB_OUTTIMESTAMPS, /* OutTimestamps */
+ ICMP_MIB_OUTTIMESTAMPREPS, /* OutTimestampReps */
+ ICMP_MIB_OUTADDRMASKS, /* OutAddrMasks */
+ ICMP_MIB_OUTADDRMASKREPS, /* OutAddrMaskReps */
+ __ICMP_MIB_MAX
+};
+
+/* icmp6 mib definitions */
+/*
+ * RFC 2466: ICMPv6-MIB
+ */
+enum
+{
+ ICMP6_MIB_NUM = 0,
+ ICMP6_MIB_INMSGS, /* InMsgs */
+ ICMP6_MIB_INERRORS, /* InErrors */
+ ICMP6_MIB_INDESTUNREACHS, /* InDestUnreachs */
+ ICMP6_MIB_INPKTTOOBIGS, /* InPktTooBigs */
+ ICMP6_MIB_INTIMEEXCDS, /* InTimeExcds */
+ ICMP6_MIB_INPARMPROBLEMS, /* InParmProblems */
+ ICMP6_MIB_INECHOS, /* InEchos */
+ ICMP6_MIB_INECHOREPLIES, /* InEchoReplies */
+ ICMP6_MIB_INGROUPMEMBQUERIES, /* InGroupMembQueries */
+ ICMP6_MIB_INGROUPMEMBRESPONSES, /* InGroupMembResponses */
+ ICMP6_MIB_INGROUPMEMBREDUCTIONS, /* InGroupMembReductions */
+ ICMP6_MIB_INROUTERSOLICITS, /* InRouterSolicits */
+ ICMP6_MIB_INROUTERADVERTISEMENTS, /* InRouterAdvertisements */
+ ICMP6_MIB_INNEIGHBORSOLICITS, /* InNeighborSolicits */
+ ICMP6_MIB_INNEIGHBORADVERTISEMENTS, /* InNeighborAdvertisements */
+ ICMP6_MIB_INREDIRECTS, /* InRedirects */
+ ICMP6_MIB_OUTMSGS, /* OutMsgs */
+ ICMP6_MIB_OUTDESTUNREACHS, /* OutDestUnreachs */
+ ICMP6_MIB_OUTPKTTOOBIGS, /* OutPktTooBigs */
+ ICMP6_MIB_OUTTIMEEXCDS, /* OutTimeExcds */
+ ICMP6_MIB_OUTPARMPROBLEMS, /* OutParmProblems */
+ ICMP6_MIB_OUTECHOREPLIES, /* OutEchoReplies */
+ ICMP6_MIB_OUTROUTERSOLICITS, /* OutRouterSolicits */
+ ICMP6_MIB_OUTNEIGHBORSOLICITS, /* OutNeighborSolicits */
+ ICMP6_MIB_OUTNEIGHBORADVERTISEMENTS, /* OutNeighborAdvertisements */
+ ICMP6_MIB_OUTREDIRECTS, /* OutRedirects */
+ ICMP6_MIB_OUTGROUPMEMBRESPONSES, /* OutGroupMembResponses */
+ ICMP6_MIB_OUTGROUPMEMBREDUCTIONS, /* OutGroupMembReductions */
+ __ICMP6_MIB_MAX
+};
+
+/* tcp mib definitions */
+/*
+ * RFC 1213: MIB-II TCP group
+ * RFC 2012 (updates 1213): SNMPv2-MIB-TCP
+ */
+enum
+{
+ TCP_MIB_NUM = 0,
+ TCP_MIB_RTOALGORITHM, /* RtoAlgorithm */
+ TCP_MIB_RTOMIN, /* RtoMin */
+ TCP_MIB_RTOMAX, /* RtoMax */
+ TCP_MIB_MAXCONN, /* MaxConn */
+ TCP_MIB_ACTIVEOPENS, /* ActiveOpens */
+ TCP_MIB_PASSIVEOPENS, /* PassiveOpens */
+ TCP_MIB_ATTEMPTFAILS, /* AttemptFails */
+ TCP_MIB_ESTABRESETS, /* EstabResets */
+ TCP_MIB_CURRESTAB, /* CurrEstab */
+ TCP_MIB_INSEGS, /* InSegs */
+ TCP_MIB_OUTSEGS, /* OutSegs */
+ TCP_MIB_RETRANSSEGS, /* RetransSegs */
+ TCP_MIB_INERRS, /* InErrs */
+ TCP_MIB_OUTRSTS, /* OutRsts */
+ __TCP_MIB_MAX
+};
+
+/* udp mib definitions */
+/*
+ * RFC 1213: MIB-II UDP group
+ * RFC 2013 (updates 1213): SNMPv2-MIB-UDP
+ */
+enum
+{
+ UDP_MIB_NUM = 0,
+ UDP_MIB_INDATAGRAMS, /* InDatagrams */
+ UDP_MIB_NOPORTS, /* NoPorts */
+ UDP_MIB_INERRORS, /* InErrors */
+ UDP_MIB_OUTDATAGRAMS, /* OutDatagrams */
+ __UDP_MIB_MAX
+};
+
+/* sctp mib definitions */
+/*
+ * draft-ietf-sigtran-sctp-mib-07.txt
+ */
+enum
+{
+ SCTP_MIB_NUM = 0,
+ SCTP_MIB_CURRESTAB, /* CurrEstab */
+ SCTP_MIB_ACTIVEESTABS, /* ActiveEstabs */
+ SCTP_MIB_PASSIVEESTABS, /* PassiveEstabs */
+ SCTP_MIB_ABORTEDS, /* Aborteds */
+ SCTP_MIB_SHUTDOWNS, /* Shutdowns */
+ SCTP_MIB_OUTOFBLUES, /* OutOfBlues */
+ SCTP_MIB_CHECKSUMERRORS, /* ChecksumErrors */
+ SCTP_MIB_OUTCTRLCHUNKS, /* OutCtrlChunks */
+ SCTP_MIB_OUTORDERCHUNKS, /* OutOrderChunks */
+ SCTP_MIB_OUTUNORDERCHUNKS, /* OutUnorderChunks */
+ SCTP_MIB_INCTRLCHUNKS, /* InCtrlChunks */
+ SCTP_MIB_INORDERCHUNKS, /* InOrderChunks */
+ SCTP_MIB_INUNORDERCHUNKS, /* InUnorderChunks */
+ SCTP_MIB_FRAGUSRMSGS, /* FragUsrMsgs */
+ SCTP_MIB_REASMUSRMSGS, /* ReasmUsrMsgs */
+ SCTP_MIB_OUTSCTPPACKS, /* OutSCTPPacks */
+ SCTP_MIB_INSCTPPACKS, /* InSCTPPacks */
+ SCTP_MIB_RTOALGORITHM, /* RtoAlgorithm */
+ SCTP_MIB_RTOMIN, /* RtoMin */
+ SCTP_MIB_RTOMAX, /* RtoMax */
+ SCTP_MIB_RTOINITIAL, /* RtoInitial */
+ SCTP_MIB_VALCOOKIELIFE, /* ValCookieLife */
+ SCTP_MIB_MAXINITRETR, /* MaxInitRetr */
+ __SCTP_MIB_MAX
+};
+
+/* linux mib definitions */
+enum
+{
+ LINUX_MIB_NUM = 0,
+ LINUX_MIB_SYNCOOKIESSENT, /* SyncookiesSent */
+ LINUX_MIB_SYNCOOKIESRECV, /* SyncookiesRecv */
+ LINUX_MIB_SYNCOOKIESFAILED, /* SyncookiesFailed */
+ LINUX_MIB_EMBRYONICRSTS, /* EmbryonicRsts */
+ LINUX_MIB_PRUNECALLED, /* PruneCalled */
+ LINUX_MIB_RCVPRUNED, /* RcvPruned */
+ LINUX_MIB_OFOPRUNED, /* OfoPruned */
+ LINUX_MIB_OUTOFWINDOWICMPS, /* OutOfWindowIcmps */
+ LINUX_MIB_LOCKDROPPEDICMPS, /* LockDroppedIcmps */
+ LINUX_MIB_ARPFILTER, /* ArpFilter */
+ LINUX_MIB_TIMEWAITED, /* TimeWaited */
+ LINUX_MIB_TIMEWAITRECYCLED, /* TimeWaitRecycled */
+ LINUX_MIB_TIMEWAITKILLED, /* TimeWaitKilled */
+ LINUX_MIB_PAWSPASSIVEREJECTED, /* PAWSPassiveRejected */
+ LINUX_MIB_PAWSACTIVEREJECTED, /* PAWSActiveRejected */
+ LINUX_MIB_PAWSESTABREJECTED, /* PAWSEstabRejected */
+ LINUX_MIB_DELAYEDACKS, /* DelayedACKs */
+ LINUX_MIB_DELAYEDACKLOCKED, /* DelayedACKLocked */
+ LINUX_MIB_DELAYEDACKLOST, /* DelayedACKLost */
+ LINUX_MIB_LISTENOVERFLOWS, /* ListenOverflows */
+ LINUX_MIB_LISTENDROPS, /* ListenDrops */
+ LINUX_MIB_TCPPREQUEUED, /* TCPPrequeued */
+ LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, /* TCPDirectCopyFromBacklog */
+ LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, /* TCPDirectCopyFromPrequeue */
+ LINUX_MIB_TCPPREQUEUEDROPPED, /* TCPPrequeueDropped */
+ LINUX_MIB_TCPHPHITS, /* TCPHPHits */
+ LINUX_MIB_TCPHPHITSTOUSER, /* TCPHPHitsToUser */
+ LINUX_MIB_TCPPUREACKS, /* TCPPureAcks */
+ LINUX_MIB_TCPHPACKS, /* TCPHPAcks */
+ LINUX_MIB_TCPRENORECOVERY, /* TCPRenoRecovery */
+ LINUX_MIB_TCPSACKRECOVERY, /* TCPSackRecovery */
+ LINUX_MIB_TCPSACKRENEGING, /* TCPSACKReneging */
+ LINUX_MIB_TCPFACKREORDER, /* TCPFACKReorder */
+ LINUX_MIB_TCPSACKREORDER, /* TCPSACKReorder */
+ LINUX_MIB_TCPRENOREORDER, /* TCPRenoReorder */
+ LINUX_MIB_TCPTSREORDER, /* TCPTSReorder */
+ LINUX_MIB_TCPFULLUNDO, /* TCPFullUndo */
+ LINUX_MIB_TCPPARTIALUNDO, /* TCPPartialUndo */
+ LINUX_MIB_TCPDSACKUNDO, /* TCPDSACKUndo */
+ LINUX_MIB_TCPLOSSUNDO, /* TCPLossUndo */
+ LINUX_MIB_TCPLOSS, /* TCPLoss */
+ LINUX_MIB_TCPLOSTRETRANSMIT, /* TCPLostRetransmit */
+ LINUX_MIB_TCPRENOFAILURES, /* TCPRenoFailures */
+ LINUX_MIB_TCPSACKFAILURES, /* TCPSackFailures */
+ LINUX_MIB_TCPLOSSFAILURES, /* TCPLossFailures */
+ LINUX_MIB_TCPFASTRETRANS, /* TCPFastRetrans */
+ LINUX_MIB_TCPFORWARDRETRANS, /* TCPForwardRetrans */
+ LINUX_MIB_TCPSLOWSTARTRETRANS, /* TCPSlowStartRetrans */
+ LINUX_MIB_TCPTIMEOUTS, /* TCPTimeouts */
+ LINUX_MIB_TCPRENORECOVERYFAIL, /* TCPRenoRecoveryFail */
+ LINUX_MIB_TCPSACKRECOVERYFAIL, /* TCPSackRecoveryFail */
+ LINUX_MIB_TCPSCHEDULERFAILED, /* TCPSchedulerFailed */
+ LINUX_MIB_TCPRCVCOLLAPSED, /* TCPRcvCollapsed */
+ LINUX_MIB_TCPDSACKOLDSENT, /* TCPDSACKOldSent */
+ LINUX_MIB_TCPDSACKOFOSENT, /* TCPDSACKOfoSent */
+ LINUX_MIB_TCPDSACKRECV, /* TCPDSACKRecv */
+ LINUX_MIB_TCPDSACKOFORECV, /* TCPDSACKOfoRecv */
+ LINUX_MIB_TCPABORTONSYN, /* TCPAbortOnSyn */
+ LINUX_MIB_TCPABORTONDATA, /* TCPAbortOnData */
+ LINUX_MIB_TCPABORTONCLOSE, /* TCPAbortOnClose */
+ LINUX_MIB_TCPABORTONMEMORY, /* TCPAbortOnMemory */
+ LINUX_MIB_TCPABORTONTIMEOUT, /* TCPAbortOnTimeout */
+ LINUX_MIB_TCPABORTONLINGER, /* TCPAbortOnLinger */
+ LINUX_MIB_TCPABORTFAILED, /* TCPAbortFailed */
+ LINUX_MIB_TCPMEMORYPRESSURES, /* TCPMemoryPressures */
+ __LINUX_MIB_MAX
+};
+
+#endif /* _LINUX_SNMP_H */
--- /dev/null
+/* OmniVision* camera chip driver API
+ *
+ * Copyright (c) 1999-2004 Mark McClelland
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ *
+ * * OmniVision is a trademark of OmniVision Technologies, Inc. This driver
+ * is not sponsored or developed by them.
+ */
+
+#ifndef __LINUX_OVCAMCHIP_H
+#define __LINUX_OVCAMCHIP_H
+
+#include <linux/videodev.h>
+#include <linux/i2c.h>
+
+/* Remove these once they are officially defined */
+#ifndef I2C_DRIVERID_OVCAMCHIP
+ #define I2C_DRIVERID_OVCAMCHIP 0xf00f
+#endif
+#ifndef I2C_HW_SMBUS_OV511
+ #define I2C_HW_SMBUS_OV511 0xfe
+#endif
+#ifndef I2C_HW_SMBUS_OV518
+ #define I2C_HW_SMBUS_OV518 0xff
+#endif
+#ifndef I2C_HW_SMBUS_OVFX2
+ #define I2C_HW_SMBUS_OVFX2 0xfd
+#endif
+
+/* --------------------------------- */
+/* ENUMERATIONS */
+/* --------------------------------- */
+
+/* Controls */
+enum {
+ OVCAMCHIP_CID_CONT, /* Contrast */
+ OVCAMCHIP_CID_BRIGHT, /* Brightness */
+ OVCAMCHIP_CID_SAT, /* Saturation */
+ OVCAMCHIP_CID_HUE, /* Hue */
+ OVCAMCHIP_CID_EXP, /* Exposure */
+ OVCAMCHIP_CID_FREQ, /* Light frequency */
+ OVCAMCHIP_CID_BANDFILT, /* Banding filter */
+ OVCAMCHIP_CID_AUTOBRIGHT, /* Auto brightness */
+ OVCAMCHIP_CID_AUTOEXP, /* Auto exposure */
+ OVCAMCHIP_CID_BACKLIGHT, /* Back light compensation */
+ OVCAMCHIP_CID_MIRROR, /* Mirror horizontally */
+};
+
+/* Chip types */
+#define NUM_CC_TYPES 9
+enum {
+ CC_UNKNOWN,
+ CC_OV76BE,
+ CC_OV7610,
+ CC_OV7620,
+ CC_OV7620AE,
+ CC_OV6620,
+ CC_OV6630,
+ CC_OV6630AE,
+ CC_OV6630AF,
+};
+
+/* --------------------------------- */
+/* I2C ADDRESSES */
+/* --------------------------------- */
+
+#define OV7xx0_SID (0x42 >> 1)
+#define OV6xx0_SID (0xC0 >> 1)
+
+/* --------------------------------- */
+/* API */
+/* --------------------------------- */
+
+struct ovcamchip_control {
+ __u32 id;
+ __s32 value;
+};
+
+struct ovcamchip_window {
+ int x;
+ int y;
+ int width;
+ int height;
+ int format;
+ int quarter; /* Scale width and height down 2x */
+
+ /* This stuff will be removed eventually */
+ int clockdiv; /* Clock divisor setting */
+};
+
+/* Commands */
+#define OVCAMCHIP_CMD_Q_SUBTYPE _IOR (0x88, 0x00, int)
+#define OVCAMCHIP_CMD_INITIALIZE _IOW (0x88, 0x01, int)
+/* You must call OVCAMCHIP_CMD_INITIALIZE before any of the commands below! */
+#define OVCAMCHIP_CMD_S_CTRL _IOW (0x88, 0x02, struct ovcamchip_control)
+#define OVCAMCHIP_CMD_G_CTRL _IOWR (0x88, 0x03, struct ovcamchip_control)
+#define OVCAMCHIP_CMD_S_MODE _IOW (0x88, 0x04, struct ovcamchip_window)
+#define OVCAMCHIP_MAX_CMD _IO (0x88, 0x3f)
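+
+/*
+ * Hedged sketch of the ordering rule above (driving the chip through
+ * the i2c client's driver->command callback is illustrative):
+ *
+ *   struct ovcamchip_control ctl = { OVCAMCHIP_CID_BRIGHT, 0x80 };
+ *   int mode = 0;
+ *
+ *   client->driver->command(client, OVCAMCHIP_CMD_INITIALIZE, &mode);
+ *   client->driver->command(client, OVCAMCHIP_CMD_S_CTRL, &ctl);
+ */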
+
+#endif
--- /dev/null
+/*
+ * $Id: inftl-user.h,v 1.1 2004/05/05 15:17:00 dwmw2 Exp $
+ *
+ * Parts of INFTL headers shared with userspace
+ *
+ */
+
+#ifndef __MTD_INFTL_USER_H__
+#define __MTD_INFTL_USER_H__
+
+#define OSAK_VERSION 0x5120
+#define PERCENTUSED 98
+
+#define SECTORSIZE 512
+
+/* Block Control Information */
+
+struct inftl_bci {
+ uint8_t ECCsig[6];
+ uint8_t Status;
+ uint8_t Status1;
+} __attribute__((packed));
+
+struct inftl_unithead1 {
+ uint16_t virtualUnitNo;
+ uint16_t prevUnitNo;
+ uint8_t ANAC;
+ uint8_t NACs;
+ uint8_t parityPerField;
+ uint8_t discarded;
+} __attribute__((packed));
+
+struct inftl_unithead2 {
+ uint8_t parityPerField;
+ uint8_t ANAC;
+ uint16_t prevUnitNo;
+ uint16_t virtualUnitNo;
+ uint8_t NACs;
+ uint8_t discarded;
+} __attribute__((packed));
+
+struct inftl_unittail {
+ uint8_t Reserved[4];
+ uint16_t EraseMark;
+ uint16_t EraseMark1;
+} __attribute__((packed));
+
+union inftl_uci {
+ struct inftl_unithead1 a;
+ struct inftl_unithead2 b;
+ struct inftl_unittail c;
+};
+
+struct inftl_oob {
+ struct inftl_bci b;
+ union inftl_uci u;
+};
+
+
+/* INFTL Media Header */
+
+struct INFTLPartition {
+ __u32 virtualUnits;
+ __u32 firstUnit;
+ __u32 lastUnit;
+ __u32 flags;
+ __u32 spareUnits;
+ __u32 Reserved0;
+ __u32 Reserved1;
+} __attribute__((packed));
+
+struct INFTLMediaHeader {
+ char bootRecordID[8];
+ __u32 NoOfBootImageBlocks;
+ __u32 NoOfBinaryPartitions;
+ __u32 NoOfBDTLPartitions;
+ __u32 BlockMultiplierBits;
+ __u32 FormatFlags;
+ __u32 OsakVersion;
+ __u32 PercentUsed;
+ struct INFTLPartition Partitions[4];
+} __attribute__((packed));
+
+/* Partition flag types */
+#define INFTL_BINARY 0x20000000
+#define INFTL_BDTL 0x40000000
+#define INFTL_LAST 0x80000000
+
+#endif /* __MTD_INFTL_USER_H__ */
+
--- /dev/null
+/*
+ * $Id: jffs2-user.h,v 1.1 2004/05/05 11:57:54 dwmw2 Exp $
+ *
+ * JFFS2 definitions for use in user space only
+ */
+
+#ifndef __JFFS2_USER_H__
+#define __JFFS2_USER_H__
+
+/* This file is blessed for inclusion by userspace */
+#include <linux/jffs2.h>
+#include <endian.h>
+#include <byteswap.h>
+
+#undef cpu_to_je16
+#undef cpu_to_je32
+#undef cpu_to_jemode
+#undef je16_to_cpu
+#undef je32_to_cpu
+#undef jemode_to_cpu
+
+extern int target_endian;
+
+#define t16(x) ({ uint16_t __b = (x); (target_endian==__BYTE_ORDER)?__b:bswap_16(__b); })
+#define t32(x) ({ uint32_t __b = (x); (target_endian==__BYTE_ORDER)?__b:bswap_32(__b); })
+
+#define cpu_to_je16(x) ((jint16_t){t16(x)})
+#define cpu_to_je32(x) ((jint32_t){t32(x)})
+#define cpu_to_jemode(x) ((jmode_t){t32(x)})
+
+#define je16_to_cpu(x) (t16((x).v16))
+#define je32_to_cpu(x) (t32((x).v32))
+#define jemode_to_cpu(x) (t32((x).m))
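+
+/*
+ * Hedged usage sketch: a userspace image tool sets target_endian to
+ * the byte order of the target system and then converts fields with
+ * the macros above (node is an illustrative struct jffs2_unknown_node):
+ *
+ *   target_endian = __LITTLE_ENDIAN;
+ *   node.magic = cpu_to_je16(JFFS2_MAGIC_BITMASK);
+ *   assert(je16_to_cpu(node.magic) == JFFS2_MAGIC_BITMASK);
+ */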
+
+#endif /* __JFFS2_USER_H__ */
--- /dev/null
+/*
+ * $Id: mtd-abi.h,v 1.5 2004/06/22 09:29:35 gleixner Exp $
+ *
+ * Portions of MTD ABI definition which are shared by kernel and user space
+ */
+
+#ifndef __MTD_ABI_H__
+#define __MTD_ABI_H__
+
+struct erase_info_user {
+ uint32_t start;
+ uint32_t length;
+};
+
+struct mtd_oob_buf {
+ uint32_t start;
+ uint32_t length;
+ unsigned char *ptr;
+};
+
+#define MTD_ABSENT 0
+#define MTD_RAM 1
+#define MTD_ROM 2
+#define MTD_NORFLASH 3
+#define MTD_NANDFLASH 4
+#define MTD_PEROM 5
+#define MTD_OTHER 14
+#define MTD_UNKNOWN 15
+
+#define MTD_CLEAR_BITS 1 // Bits can be cleared (flash)
+#define MTD_SET_BITS 2 // Bits can be set
+#define MTD_ERASEABLE 4 // Has an erase function
+#define MTD_WRITEB_WRITEABLE 8 // Direct IO is possible
+#define MTD_VOLATILE 16 // Set for RAMs
+#define MTD_XIP 32 // eXecute-In-Place possible
+#define MTD_OOB 64 // Out-of-band data (NAND flash)
+#define MTD_ECC 128 // Device capable of automatic ECC
+
+// Some common devices / combinations of capabilities
+#define MTD_CAP_ROM 0
+#define MTD_CAP_RAM (MTD_CLEAR_BITS|MTD_SET_BITS|MTD_WRITEB_WRITEABLE)
+#define MTD_CAP_NORFLASH (MTD_CLEAR_BITS|MTD_ERASEABLE)
+#define MTD_CAP_NANDFLASH (MTD_CLEAR_BITS|MTD_ERASEABLE|MTD_OOB)
+#define MTD_WRITEABLE (MTD_CLEAR_BITS|MTD_SET_BITS)
+
+
+// Types of automatic ECC/Checksum available
+#define MTD_ECC_NONE 0 // No automatic ECC available
+#define MTD_ECC_RS_DiskOnChip 1 // Automatic ECC on DiskOnChip
+#define MTD_ECC_SW 2 // SW ECC for Toshiba & Samsung devices
+
+/* ECC byte placement */
+#define MTD_NANDECC_OFF 0 // Switch off ECC (Not recommended)
+#define MTD_NANDECC_PLACE 1 // Use the given placement in the structure (YAFFS1 legacy mode)
+#define MTD_NANDECC_AUTOPLACE 2 // Use the default placement scheme
+#define MTD_NANDECC_PLACEONLY 3 // Use the given placement in the structure (Do not store ecc result on read)
+
+struct mtd_info_user {
+ uint8_t type;
+ uint32_t flags;
+ uint32_t size; // Total size of the MTD
+ uint32_t erasesize;
+ uint32_t oobblock; // Size of OOB blocks (e.g. 512)
+ uint32_t oobsize; // Amount of OOB data per block (e.g. 16)
+ uint32_t ecctype;
+ uint32_t eccsize;
+};
+
+struct region_info_user {
+ uint32_t offset; /* At which this region starts,
+ * from the beginning of the MTD */
+ uint32_t erasesize; /* For this region */
+ uint32_t numblocks; /* Number of blocks in this region */
+ uint32_t regionindex;
+};
+
+#define MEMGETINFO _IOR('M', 1, struct mtd_info_user)
+#define MEMERASE _IOW('M', 2, struct erase_info_user)
+#define MEMWRITEOOB _IOWR('M', 3, struct mtd_oob_buf)
+#define MEMREADOOB _IOWR('M', 4, struct mtd_oob_buf)
+#define MEMLOCK _IOW('M', 5, struct erase_info_user)
+#define MEMUNLOCK _IOW('M', 6, struct erase_info_user)
+#define MEMGETREGIONCOUNT _IOR('M', 7, int)
+#define MEMGETREGIONINFO _IOWR('M', 8, struct region_info_user)
+#define MEMSETOOBSEL _IOW('M', 9, struct nand_oobinfo)
+#define MEMGETOOBSEL _IOR('M', 10, struct nand_oobinfo)
+#define MEMGETBADBLOCK _IOW('M', 11, loff_t)
+#define MEMSETBADBLOCK _IOW('M', 12, loff_t)
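+
+/*
+ * A minimal userspace sketch (not part of the ABI): query the device
+ * geometry and erase its first erase block.  The device node name is
+ * the conventional one and may differ on a given system:
+ *
+ *	struct mtd_info_user info;
+ *	struct erase_info_user erase;
+ *	int fd = open("/dev/mtd0", O_RDWR);
+ *
+ *	ioctl(fd, MEMGETINFO, &info);
+ *	erase.start = 0;
+ *	erase.length = info.erasesize;
+ *	ioctl(fd, MEMERASE, &erase);
+ */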
+
+struct nand_oobinfo {
+ uint32_t useecc;
+ uint32_t eccbytes;
+ uint32_t oobfree[8][2];
+ uint32_t eccpos[32];
+};
+
+#endif /* __MTD_ABI_H__ */
--- /dev/null
+/*
+ * $Id: mtd-user.h,v 1.2 2004/05/05 14:44:57 dwmw2 Exp $
+ *
+ * MTD ABI header for use by user space only.
+ */
+
+#ifndef __MTD_USER_H__
+#define __MTD_USER_H__
+
+#include <stdint.h>
+
+/* This file is blessed for inclusion by userspace */
+#include <mtd/mtd-abi.h>
+
+typedef struct mtd_info_user mtd_info_t;
+typedef struct erase_info_user erase_info_t;
+typedef struct region_info_user region_info_t;
+typedef struct nand_oobinfo nand_oobinfo_t;
+
+#endif /* __MTD_USER_H__ */
--- /dev/null
+/*
+ * $Id: nftl-user.h,v 1.1 2004/05/05 14:44:57 dwmw2 Exp $
+ *
+ * Parts of NFTL headers shared with userspace
+ *
+ */
+
+#ifndef __MTD_NFTL_USER_H__
+#define __MTD_NFTL_USER_H__
+
+/* Block Control Information */
+
+struct nftl_bci {
+ unsigned char ECCSig[6];
+ uint8_t Status;
+ uint8_t Status1;
+} __attribute__((packed));
+
+/* Unit Control Information */
+
+struct nftl_uci0 {
+ uint16_t VirtUnitNum;
+ uint16_t ReplUnitNum;
+ uint16_t SpareVirtUnitNum;
+ uint16_t SpareReplUnitNum;
+} __attribute__((packed));
+
+struct nftl_uci1 {
+ uint32_t WearInfo;
+ uint16_t EraseMark;
+ uint16_t EraseMark1;
+} __attribute__((packed));
+
+struct nftl_uci2 {
+ uint16_t FoldMark;
+ uint16_t FoldMark1;
+ uint32_t unused;
+} __attribute__((packed));
+
+union nftl_uci {
+ struct nftl_uci0 a;
+ struct nftl_uci1 b;
+ struct nftl_uci2 c;
+};
+
+struct nftl_oob {
+ struct nftl_bci b;
+ union nftl_uci u;
+};
+
+/* NFTL Media Header */
+
+struct NFTLMediaHeader {
+ char DataOrgID[6];
+ uint16_t NumEraseUnits;
+ uint16_t FirstPhysicalEUN;
+ uint32_t FormattedSize;
+ unsigned char UnitSizeFactor;
+} __attribute__((packed));
+
+#define MAX_ERASE_ZONES (8192 - 512)
+
+#define ERASE_MARK 0x3c69
+#define SECTOR_FREE 0xff
+#define SECTOR_USED 0x55
+#define SECTOR_IGNORE 0x11
+#define SECTOR_DELETED 0x00
+
+#define FOLD_MARK_IN_PROGRESS 0x5555
+
+#define ZONE_GOOD 0xff
+#define ZONE_BAD_ORIGINAL 0
+#define ZONE_BAD_MARKED 7
+
+
+#endif /* __MTD_NFTL_USER_H__ */
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * Checksumming functions for IPv6
+ *
+ * Authors: Jorge Cwik, <jorge@laser.satlink.net>
+ * Arnt Gulbrandsen, <agulbra@nvg.unit.no>
+ * Borrows very liberally from tcp.c and ip.c, see those
+ * files for more names.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/*
+ * Fixes:
+ *
+ * Ralf Baechle : generic ipv6 checksum
+ * <ralf@waldorf-gmbh.de>
+ */
+
+#ifndef _CHECKSUM_IPV6_H
+#define _CHECKSUM_IPV6_H
+
+#include <asm/types.h>
+#include <asm/byteorder.h>
+#include <net/ip.h>
+#include <asm/checksum.h>
+#include <linux/in6.h>
+
+#ifndef _HAVE_ARCH_IPV6_CSUM
+
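+/*
+ * Generic version: fold the IPv6 pseudo-header (source and destination
+ * addresses, upper-layer length and next-header protocol) into the
+ * running checksum with 32-bit one's-complement adds, feeding each
+ * carry out of bit 31 back in; csum_fold() then reduces it to 16 bits.
+ */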
+static __inline__ unsigned short int csum_ipv6_magic(struct in6_addr *saddr,
+ struct in6_addr *daddr,
+ __u16 len,
+ unsigned short proto,
+ unsigned int csum)
+{
+
+ int carry;
+ __u32 ulen;
+ __u32 uproto;
+
+ csum += saddr->s6_addr32[0];
+ carry = (csum < saddr->s6_addr32[0]);
+ csum += carry;
+
+ csum += saddr->s6_addr32[1];
+ carry = (csum < saddr->s6_addr32[1]);
+ csum += carry;
+
+ csum += saddr->s6_addr32[2];
+ carry = (csum < saddr->s6_addr32[2]);
+ csum += carry;
+
+ csum += saddr->s6_addr32[3];
+ carry = (csum < saddr->s6_addr32[3]);
+ csum += carry;
+
+ csum += daddr->s6_addr32[0];
+ carry = (csum < daddr->s6_addr32[0]);
+ csum += carry;
+
+ csum += daddr->s6_addr32[1];
+ carry = (csum < daddr->s6_addr32[1]);
+ csum += carry;
+
+ csum += daddr->s6_addr32[2];
+ carry = (csum < daddr->s6_addr32[2]);
+ csum += carry;
+
+ csum += daddr->s6_addr32[3];
+ carry = (csum < daddr->s6_addr32[3]);
+ csum += carry;
+
+ ulen = htonl((__u32) len);
+ csum += ulen;
+ carry = (csum < ulen);
+ csum += carry;
+
+ uproto = htonl(proto);
+ csum += uproto;
+ carry = (csum < uproto);
+ csum += carry;
+
+ return csum_fold(csum);
+}
+
+#endif
+#endif
--- /dev/null
+#ifndef __NET_PKT_ACT_H
+#define __NET_PKT_ACT_H
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/socket.h>
+#include <linux/sockios.h>
+#include <linux/in.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+#include <net/sock.h>
+#include <net/pkt_sched.h>
+
+#define tca_st(val) (struct tcf_##val *)
+#define PRIV(a,name) ( tca_st(name) (a)->priv)
+
+#if 0 /* control */
+#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define DPRINTK(format,args...)
+#endif
+
+#if 0 /* data */
+#define D2PRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define D2PRINTK(format,args...)
+#endif
+
+static __inline__ unsigned
+tcf_hash(u32 index)
+{
+ return index & MY_TAB_MASK;
+}
+
+/* probably move this from being inline
+ * and put into act_generic
+*/
+static inline void
+tcf_hash_destroy(struct tcf_st *p)
+{
+ unsigned h = tcf_hash(p->index);
+ struct tcf_st **p1p;
+
+ for (p1p = &tcf_ht[h]; *p1p; p1p = &(*p1p)->next) {
+ if (*p1p == p) {
+ write_lock_bh(&tcf_t_lock);
+ *p1p = p->next;
+ write_unlock_bh(&tcf_t_lock);
+#ifdef CONFIG_NET_ESTIMATOR
+ qdisc_kill_estimator(&p->stats);
+#endif
+ kfree(p);
+ return;
+ }
+ }
+ BUG_TRAP(0);
+}
+
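+/*
+ * Drop one reference to @p (and one bind reference when @bind is set);
+ * once both counts reach zero the entry is freed via tcf_hash_destroy().
+ * Returns 1 if the entry was destroyed, 0 otherwise.
+ */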
+static inline int
+tcf_hash_release(struct tcf_st *p, int bind)
+{
+ int ret = 0;
+ if (p) {
+ if (bind) {
+ p->bindcnt--;
+ }
+ p->refcnt--;
+		if (p->bindcnt <= 0 && p->refcnt <= 0) {
+ tcf_hash_destroy(p);
+ ret = 1;
+ }
+ }
+ return ret;
+}
+
+static __inline__ int
+tcf_dump_walker(struct sk_buff *skb, struct netlink_callback *cb,
+ struct tc_action *a)
+{
+ struct tcf_st *p;
+	int err = 0, index = -1, i = 0, s_i = 0, n_i = 0;
+	struct rtattr *r;
+
+ read_lock(&tcf_t_lock);
+
+ s_i = cb->args[0];
+
+ for (i = 0; i < MY_TAB_SIZE; i++) {
+ p = tcf_ht[tcf_hash(i)];
+
+ for (; p; p = p->next) {
+ index++;
+ if (index < s_i)
+ continue;
+ a->priv = p;
+ a->order = n_i;
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, a->order, 0, NULL);
+ err = tcf_action_dump_1(skb, a, 0, 0);
+			if (err < 0) {
+ index--;
+ skb_trim(skb, (u8*)r - skb->data);
+ goto done;
+ }
+ r->rta_len = skb->tail - (u8*)r;
+ n_i++;
+ if (n_i >= TCA_ACT_MAX_PRIO) {
+ goto done;
+ }
+ }
+ }
+done:
+ read_unlock(&tcf_t_lock);
+ if (n_i)
+ cb->args[0] += n_i;
+ return n_i;
+
+rtattr_failure:
+ skb_trim(skb, (u8*)r - skb->data);
+ goto done;
+}
+
+static __inline__ int
+tcf_del_walker(struct sk_buff *skb, struct tc_action *a)
+{
+ struct tcf_st *p, *s_p;
+	struct rtattr *r;
+	int i = 0, n_i = 0;
+
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, a->order, 0, NULL);
+ RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind);
+ for (i = 0; i < MY_TAB_SIZE; i++) {
+ p = tcf_ht[tcf_hash(i)];
+
+ while (p != NULL) {
+ s_p = p->next;
+ if (ACT_P_DELETED == tcf_hash_release(p, 0)) {
+ module_put(a->ops->owner);
+ }
+ n_i++;
+ p = s_p;
+ }
+ }
+ RTA_PUT(skb, TCA_FCNT, 4, &n_i);
+ r->rta_len = skb->tail - (u8*)r;
+
+ return n_i;
+rtattr_failure:
+ skb_trim(skb, (u8*)r - skb->data);
+ return -EINVAL;
+}
+
+static __inline__ int
+tcf_generic_walker(struct sk_buff *skb, struct netlink_callback *cb, int type,
+ struct tc_action *a)
+{
+ if (type == RTM_DELACTION) {
+		return tcf_del_walker(skb, a);
+	} else if (type == RTM_GETACTION) {
+		return tcf_dump_walker(skb, cb, a);
+	} else {
+		printk("tcf_generic_walker: unknown action %d\n", type);
+ return -EINVAL;
+ }
+}
+
+static __inline__ struct tcf_st *
+tcf_hash_lookup(u32 index)
+{
+ struct tcf_st *p;
+
+ read_lock(&tcf_t_lock);
+ for (p = tcf_ht[tcf_hash(index)]; p; p = p->next) {
+ if (p->index == index)
+ break;
+ }
+ read_unlock(&tcf_t_lock);
+ return p;
+}
+
+static __inline__ u32
+tcf_hash_new_index(void)
+{
+ do {
+ if (++idx_gen == 0)
+ idx_gen = 1;
+ } while (tcf_hash_lookup(idx_gen));
+
+ return idx_gen;
+}
+
+
+static inline int
+tcf_hash_search(struct tc_action *a, u32 index)
+{
+ struct tcf_st *p = tcf_hash_lookup(index);
+
+ if (p != NULL) {
+ a->priv = p;
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+#ifdef CONFIG_NET_ACT_INIT
+static inline struct tcf_st *
+tcf_hash_check(struct tc_st *parm, struct tc_action *a, int ovr, int bind)
+{
+ struct tcf_st *p = NULL;
+ if (parm->index && (p = tcf_hash_lookup(parm->index)) != NULL) {
+ spin_lock(&p->lock);
+ if (bind) {
+ p->bindcnt++;
+ p->refcnt++;
+ }
+ spin_unlock(&p->lock);
+ a->priv = (void *) p;
+ }
+ return p;
+}
+
+static inline struct tcf_st *
+tcf_hash_create(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+{
+ unsigned h;
+ struct tcf_st *p = NULL;
+
+ p = kmalloc(size, GFP_KERNEL);
+ if (p == NULL)
+ return p;
+
+ memset(p, 0, size);
+ p->refcnt = 1;
+
+ if (bind) {
+ p->bindcnt = 1;
+ }
+
+ spin_lock_init(&p->lock);
+ p->stats_lock = &p->lock;
+ p->index = parm->index ? : tcf_hash_new_index();
+ p->tm.install = jiffies;
+ p->tm.lastuse = jiffies;
+#ifdef CONFIG_NET_ESTIMATOR
+ if (est) {
+ qdisc_new_estimator(&p->stats, p->stats_lock, est);
+ }
+#endif
+ h = tcf_hash(p->index);
+ write_lock_bh(&tcf_t_lock);
+ p->next = tcf_ht[h];
+ tcf_ht[h] = p;
+ write_unlock_bh(&tcf_t_lock);
+
+ a->priv = (void *) p;
+ return p;
+}
+
+static inline struct tcf_st *
+tcf_hash_init(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+{
+	struct tcf_st *p;
+
+	p = tcf_hash_check(parm, a, ovr, bind);
+	if (p == NULL)
+		return tcf_hash_create(parm, est, a, size, ovr, bind);
+	return p;
+}
+
+#endif
+
+#endif
--- /dev/null
+#ifndef _SCSI_SCSI_DBG_H
+#define _SCSI_SCSI_DBG_H
+
+struct scsi_cmnd;
+struct scsi_request;
+
+extern void scsi_print_command(struct scsi_cmnd *);
+extern void __scsi_print_command(unsigned char *);
+extern void scsi_print_sense(const char *, struct scsi_cmnd *);
+extern void scsi_print_req_sense(const char *, struct scsi_request *);
+extern void scsi_print_driverbyte(int);
+extern void scsi_print_hostbyte(int);
+extern void scsi_print_status(unsigned char);
+extern int scsi_print_msg(const unsigned char *);
+extern const char *scsi_sense_key_string(unsigned char);
+extern const char *scsi_extd_sense_format(unsigned char, unsigned char);
+
+#endif /* _SCSI_SCSI_DBG_H */
--- /dev/null
+/*
+ * drivers/power/smp.c - Functions for stopping other CPUs.
+ *
+ * Copyright 2004 Pavel Machek <pavel@suse.cz>
+ * Copyright (C) 2002-2003 Nigel Cunningham <ncunningham@clear.net.nz>
+ *
+ * This file is released under the GPLv2.
+ */
+
+#undef DEBUG
+
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/suspend.h>
+#include <linux/module.h>
+#include <asm/atomic.h>
+#include <asm/tlbflush.h>
+
+static atomic_t cpu_counter, freeze;
+
+
+static void smp_pause(void * data)
+{
+ struct saved_context ctxt;
+ __save_processor_state(&ctxt);
+ printk("Sleeping in:\n");
+ dump_stack();
+ atomic_inc(&cpu_counter);
+ while (atomic_read(&freeze)) {
+		/* FIXME: restore takes place at a random point inside
+		   this loop.  This should probably be written in assembly,
+		   and preserve general-purpose registers, too.
+
+		   What about the stack?  We may need to move to a new
+		   stack here.
+
+		   This would be better run with interrupts disabled.
+		*/
+ cpu_relax();
+ barrier();
+ }
+ atomic_dec(&cpu_counter);
+ __restore_processor_state(&ctxt);
+}
+
+cpumask_t oldmask;
+
+void disable_nonboot_cpus(void)
+{
+ printk("Freezing CPUs (at %d)", smp_processor_id());
+ oldmask = current->cpus_allowed;
+ set_cpus_allowed(current, cpumask_of_cpu(0));
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(HZ);
+ printk("...");
+ BUG_ON(smp_processor_id() != 0);
+
+ /* FIXME: for this to work, all the CPUs must be running
+ * "idle" thread (or we deadlock). Is that guaranteed? */
+
+ atomic_set(&cpu_counter, 0);
+ atomic_set(&freeze, 1);
+ smp_call_function(smp_pause, NULL, 0, 0);
+ while (atomic_read(&cpu_counter) < (num_online_cpus() - 1)) {
+ cpu_relax();
+ barrier();
+ }
+ printk("ok\n");
+}
+
+void enable_nonboot_cpus(void)
+{
+ printk("Restarting CPUs");
+ atomic_set(&freeze, 0);
+ while (atomic_read(&cpu_counter)) {
+ cpu_relax();
+ barrier();
+ }
+ printk("...");
+ set_cpus_allowed(current, oldmask);
+ schedule();
+ printk("ok\n");
+}
--- /dev/null
+/*
+ * linux/lib/crc-ccitt.c
+ *
+ * This source code is licensed under the GNU General Public License,
+ * Version 2. See the file COPYING for more details.
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/crc-ccitt.h>
+
+/*
+ * This mysterious table is just the CRC of each possible byte. It can be
+ * computed using the standard bit-at-a-time methods. The polynomial can
+ * be seen in entry 128, 0x8408. This corresponds to x^0 + x^5 + x^12.
+ * Add the implicit x^16, and you have the standard CRC-CCITT.
+ */
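+/*
+ * For reference, each entry can be regenerated bit-at-a-time from the
+ * reflected polynomial (a sketch, not part of this file):
+ *
+ *	u16 crc_ccitt_byte_bitwise(u8 byte)
+ *	{
+ *		u16 crc = byte;
+ *		int i;
+ *
+ *		for (i = 0; i < 8; i++)
+ *			crc = (crc & 1) ? (crc >> 1) ^ 0x8408 : crc >> 1;
+ *		return crc;
+ *	}
+ */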
+u16 const crc_ccitt_table[256] = {
+ 0x0000, 0x1189, 0x2312, 0x329b, 0x4624, 0x57ad, 0x6536, 0x74bf,
+ 0x8c48, 0x9dc1, 0xaf5a, 0xbed3, 0xca6c, 0xdbe5, 0xe97e, 0xf8f7,
+ 0x1081, 0x0108, 0x3393, 0x221a, 0x56a5, 0x472c, 0x75b7, 0x643e,
+ 0x9cc9, 0x8d40, 0xbfdb, 0xae52, 0xdaed, 0xcb64, 0xf9ff, 0xe876,
+ 0x2102, 0x308b, 0x0210, 0x1399, 0x6726, 0x76af, 0x4434, 0x55bd,
+ 0xad4a, 0xbcc3, 0x8e58, 0x9fd1, 0xeb6e, 0xfae7, 0xc87c, 0xd9f5,
+ 0x3183, 0x200a, 0x1291, 0x0318, 0x77a7, 0x662e, 0x54b5, 0x453c,
+ 0xbdcb, 0xac42, 0x9ed9, 0x8f50, 0xfbef, 0xea66, 0xd8fd, 0xc974,
+ 0x4204, 0x538d, 0x6116, 0x709f, 0x0420, 0x15a9, 0x2732, 0x36bb,
+ 0xce4c, 0xdfc5, 0xed5e, 0xfcd7, 0x8868, 0x99e1, 0xab7a, 0xbaf3,
+ 0x5285, 0x430c, 0x7197, 0x601e, 0x14a1, 0x0528, 0x37b3, 0x263a,
+ 0xdecd, 0xcf44, 0xfddf, 0xec56, 0x98e9, 0x8960, 0xbbfb, 0xaa72,
+ 0x6306, 0x728f, 0x4014, 0x519d, 0x2522, 0x34ab, 0x0630, 0x17b9,
+ 0xef4e, 0xfec7, 0xcc5c, 0xddd5, 0xa96a, 0xb8e3, 0x8a78, 0x9bf1,
+ 0x7387, 0x620e, 0x5095, 0x411c, 0x35a3, 0x242a, 0x16b1, 0x0738,
+ 0xffcf, 0xee46, 0xdcdd, 0xcd54, 0xb9eb, 0xa862, 0x9af9, 0x8b70,
+ 0x8408, 0x9581, 0xa71a, 0xb693, 0xc22c, 0xd3a5, 0xe13e, 0xf0b7,
+ 0x0840, 0x19c9, 0x2b52, 0x3adb, 0x4e64, 0x5fed, 0x6d76, 0x7cff,
+ 0x9489, 0x8500, 0xb79b, 0xa612, 0xd2ad, 0xc324, 0xf1bf, 0xe036,
+ 0x18c1, 0x0948, 0x3bd3, 0x2a5a, 0x5ee5, 0x4f6c, 0x7df7, 0x6c7e,
+ 0xa50a, 0xb483, 0x8618, 0x9791, 0xe32e, 0xf2a7, 0xc03c, 0xd1b5,
+ 0x2942, 0x38cb, 0x0a50, 0x1bd9, 0x6f66, 0x7eef, 0x4c74, 0x5dfd,
+ 0xb58b, 0xa402, 0x9699, 0x8710, 0xf3af, 0xe226, 0xd0bd, 0xc134,
+ 0x39c3, 0x284a, 0x1ad1, 0x0b58, 0x7fe7, 0x6e6e, 0x5cf5, 0x4d7c,
+ 0xc60c, 0xd785, 0xe51e, 0xf497, 0x8028, 0x91a1, 0xa33a, 0xb2b3,
+ 0x4a44, 0x5bcd, 0x6956, 0x78df, 0x0c60, 0x1de9, 0x2f72, 0x3efb,
+ 0xd68d, 0xc704, 0xf59f, 0xe416, 0x90a9, 0x8120, 0xb3bb, 0xa232,
+ 0x5ac5, 0x4b4c, 0x79d7, 0x685e, 0x1ce1, 0x0d68, 0x3ff3, 0x2e7a,
+ 0xe70e, 0xf687, 0xc41c, 0xd595, 0xa12a, 0xb0a3, 0x8238, 0x93b1,
+ 0x6b46, 0x7acf, 0x4854, 0x59dd, 0x2d62, 0x3ceb, 0x0e70, 0x1ff9,
+ 0xf78f, 0xe606, 0xd49d, 0xc514, 0xb1ab, 0xa022, 0x92b9, 0x8330,
+ 0x7bc7, 0x6a4e, 0x58d5, 0x495c, 0x3de3, 0x2c6a, 0x1ef1, 0x0f78
+};
+EXPORT_SYMBOL(crc_ccitt_table);
+
+/**
+ * crc_ccitt - recompute the CRC for the data buffer
+ * @crc: previous CRC value
+ * @buffer: data pointer
+ * @len: number of bytes in the buffer
+ */
+u16 crc_ccitt(u16 crc, u8 const *buffer, size_t len)
+{
+ while (len--)
+ crc = crc_ccitt_byte(crc, *buffer++);
+ return crc;
+}
+EXPORT_SYMBOL(crc_ccitt);
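+
+/*
+ * Typical use is to seed with 0xffff and feed the whole frame, as in
+ * the 16-bit FCS of PPP/HDLC (a usage sketch; buffer and length are
+ * the caller's):
+ *
+ *	u16 fcs = crc_ccitt(0xffff, data, len);
+ */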
+
+MODULE_DESCRIPTION("CRC-CCITT calculations");
+MODULE_LICENSE("GPL");
--- /dev/null
+#
+# Makefile for the linux memory manager.
+#
+
+mmu-y := nommu.o
+mmu-$(CONFIG_MMU) := fremap.o highmem.o madvise.o memory.o mincore.o \
+ mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
+ shmem.o vmalloc.o
+
+obj-y := bootmem.o filemap.o mempool.o oom_kill.o fadvise.o \
+ page_alloc.o page-writeback.o pdflush.o prio_tree.o \
+ readahead.o slab.o swap.o truncate.o vmscan.o \
+ $(mmu-y)
+
+obj-$(CONFIG_SWAP) += page_io.o swap_state.o swapfile.o
+obj-$(CONFIG_HUGETLBFS) += hugetlb.o
+obj-$(CONFIG_NUMA) += mempolicy.o
--- /dev/null
+/*
+ * mm/mmap.c
+ *
+ * Written by obz.
+ *
+ * Address space accounting code <alan@redhat.com>
+ */
+
+#include <linux/slab.h>
+#include <linux/shm.h>
+#include <linux/mman.h>
+#include <linux/pagemap.h>
+#include <linux/swap.h>
+#include <linux/syscalls.h>
+#include <linux/init.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/personality.h>
+#include <linux/security.h>
+#include <linux/hugetlb.h>
+#include <linux/profile.h>
+#include <linux/module.h>
+#include <linux/mount.h>
+#include <linux/mempolicy.h>
+#include <linux/rmap.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/cacheflush.h>
+#include <asm/tlb.h>
+
+/*
+ * WARNING: the debugging code uses recursive algorithms, so never enable
+ * this unless you know what you are doing.
+ */
+#undef DEBUG_MM_RB
+
+/* Description of the effects of mapping type and prot in the current
+ * implementation.  This is due to the limited x86 page protection
+ * hardware.  The expected behavior is in parens:
+ *
+ * map_type prot
+ * PROT_NONE PROT_READ PROT_WRITE PROT_EXEC
+ * MAP_SHARED r: (no) no r: (yes) yes r: (no) yes r: (no) yes
+ * w: (no) no w: (no) no w: (yes) yes w: (no) no
+ * x: (no) no x: (no) yes x: (no) yes x: (yes) yes
+ *
+ * MAP_PRIVATE r: (no) no r: (yes) yes r: (no) yes r: (no) yes
+ * w: (no) no w: (no) no w: (copy) copy w: (no) no
+ * x: (no) no x: (no) yes x: (no) yes x: (yes) yes
+ *
+ */
+pgprot_t protection_map[16] = {
+ __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
+ __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
+};
+
+int sysctl_overcommit_memory = 0; /* default is heuristic overcommit */
+int sysctl_overcommit_ratio = 50; /* default is 50% */
+int sysctl_max_map_count = DEFAULT_MAX_MAP_COUNT;
+atomic_t vm_committed_space = ATOMIC_INIT(0);
+
+EXPORT_SYMBOL(sysctl_overcommit_memory);
+EXPORT_SYMBOL(sysctl_overcommit_ratio);
+EXPORT_SYMBOL(sysctl_max_map_count);
+EXPORT_SYMBOL(vm_committed_space);
+
+/*
+ * Requires inode->i_mapping->i_mmap_lock
+ */
+static void __remove_shared_vm_struct(struct vm_area_struct *vma,
+ struct file *file, struct address_space *mapping)
+{
+ if (vma->vm_flags & VM_DENYWRITE)
+ atomic_inc(&file->f_dentry->d_inode->i_writecount);
+ if (vma->vm_flags & VM_SHARED)
+ mapping->i_mmap_writable--;
+
+ flush_dcache_mmap_lock(mapping);
+ if (unlikely(vma->vm_flags & VM_NONLINEAR))
+ list_del_init(&vma->shared.vm_set.list);
+ else
+ vma_prio_tree_remove(vma, &mapping->i_mmap);
+ flush_dcache_mmap_unlock(mapping);
+}
+
+/*
+ * Remove one vm structure and free it.
+ */
+static void remove_vm_struct(struct vm_area_struct *vma)
+{
+ struct file *file = vma->vm_file;
+
+ if (file) {
+ struct address_space *mapping = file->f_mapping;
+ spin_lock(&mapping->i_mmap_lock);
+ __remove_shared_vm_struct(vma, file, mapping);
+ spin_unlock(&mapping->i_mmap_lock);
+ }
+ if (vma->vm_ops && vma->vm_ops->close)
+ vma->vm_ops->close(vma);
+ if (file)
+ fput(file);
+ anon_vma_unlink(vma);
+ mpol_free(vma_policy(vma));
+ kmem_cache_free(vm_area_cachep, vma);
+}
+
+/*
+ * sys_brk() for the most part doesn't need the global kernel
+ * lock, except when an application is doing something nasty
+ * like trying to un-brk an area that has already been mapped
+ * to a regular file.  In this case, the unmapping will need
+ * to invoke file system routines that need the global lock.
+ */
+asmlinkage unsigned long sys_brk(unsigned long brk)
+{
+ unsigned long rlim, retval;
+ unsigned long newbrk, oldbrk;
+ struct mm_struct *mm = current->mm;
+
+ down_write(&mm->mmap_sem);
+
+ if (brk < mm->end_code)
+ goto out;
+ newbrk = PAGE_ALIGN(brk);
+ oldbrk = PAGE_ALIGN(mm->brk);
+ if (oldbrk == newbrk)
+ goto set_brk;
+
+ /* Always allow shrinking brk. */
+ if (brk <= mm->brk) {
+ if (!do_munmap(mm, newbrk, oldbrk-newbrk))
+ goto set_brk;
+ goto out;
+ }
+
+ /* Check against rlimit.. */
+ rlim = current->rlim[RLIMIT_DATA].rlim_cur;
+ if (rlim < RLIM_INFINITY && brk - mm->start_data > rlim)
+ goto out;
+
+ /* Check against existing mmap mappings. */
+ if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))
+ goto out;
+
+ /* Ok, looks good - let it rip. */
+ if (do_brk(oldbrk, newbrk-oldbrk) != oldbrk)
+ goto out;
+set_brk:
+ mm->brk = brk;
+out:
+ retval = mm->brk;
+ up_write(&mm->mmap_sem);
+ return retval;
+}
+
+#ifdef DEBUG_MM_RB
+static int browse_rb(struct rb_root *root)
+{
+ int i = 0, j;
+ struct rb_node *nd, *pn = NULL;
+ unsigned long prev = 0, pend = 0;
+
+ for (nd = rb_first(root); nd; nd = rb_next(nd)) {
+ struct vm_area_struct *vma;
+ vma = rb_entry(nd, struct vm_area_struct, vm_rb);
+ if (vma->vm_start < prev)
+ printk("vm_start %lx prev %lx\n", vma->vm_start, prev), i = -1;
+ if (vma->vm_start < pend)
+ printk("vm_start %lx pend %lx\n", vma->vm_start, pend);
+ if (vma->vm_start > vma->vm_end)
+ printk("vm_end %lx < vm_start %lx\n", vma->vm_end, vma->vm_start);
+ i++;
+ pn = nd;
+ }
+ j = 0;
+ for (nd = pn; nd; nd = rb_prev(nd)) {
+ j++;
+ }
+ if (i != j)
+ printk("backwards %d, forwards %d\n", j, i), i = 0;
+ return i;
+}
+
+void validate_mm(struct mm_struct *mm)
+{
+ int bug = 0;
+ int i = 0;
+ struct vm_area_struct *tmp = mm->mmap;
+ while (tmp) {
+ tmp = tmp->vm_next;
+ i++;
+ }
+ if (i != mm->map_count)
+ printk("map_count %d vm_next %d\n", mm->map_count, i), bug = 1;
+ i = browse_rb(&mm->mm_rb);
+ if (i != mm->map_count)
+ printk("map_count %d rb %d\n", mm->map_count, i), bug = 1;
+ if (bug)
+ BUG();
+}
+#else
+#define validate_mm(mm) do { } while (0)
+#endif
+
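+/*
+ * Look up the first vma with vm_end above addr.  If no existing vma
+ * covers addr, also return through *pprev the preceding vma and
+ * through *rb_link/*rb_parent the rbtree slot where a new vma starting
+ * at addr would be linked.
+ */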
+static struct vm_area_struct *
+find_vma_prepare(struct mm_struct *mm, unsigned long addr,
+ struct vm_area_struct **pprev, struct rb_node ***rb_link,
+ struct rb_node ** rb_parent)
+{
+ struct vm_area_struct * vma;
+ struct rb_node ** __rb_link, * __rb_parent, * rb_prev;
+
+ __rb_link = &mm->mm_rb.rb_node;
+ rb_prev = __rb_parent = NULL;
+ vma = NULL;
+
+ while (*__rb_link) {
+ struct vm_area_struct *vma_tmp;
+
+ __rb_parent = *__rb_link;
+ vma_tmp = rb_entry(__rb_parent, struct vm_area_struct, vm_rb);
+
+ if (vma_tmp->vm_end > addr) {
+ vma = vma_tmp;
+ if (vma_tmp->vm_start <= addr)
+ return vma;
+ __rb_link = &__rb_parent->rb_left;
+ } else {
+ rb_prev = __rb_parent;
+ __rb_link = &__rb_parent->rb_right;
+ }
+ }
+
+ *pprev = NULL;
+ if (rb_prev)
+ *pprev = rb_entry(rb_prev, struct vm_area_struct, vm_rb);
+ *rb_link = __rb_link;
+ *rb_parent = __rb_parent;
+ return vma;
+}
+
+static inline void
+__vma_link_list(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev, struct rb_node *rb_parent)
+{
+ if (prev) {
+ vma->vm_next = prev->vm_next;
+ prev->vm_next = vma;
+ } else {
+ mm->mmap = vma;
+ if (rb_parent)
+ vma->vm_next = rb_entry(rb_parent,
+ struct vm_area_struct, vm_rb);
+ else
+ vma->vm_next = NULL;
+ }
+}
+
+void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct rb_node **rb_link, struct rb_node *rb_parent)
+{
+ rb_link_node(&vma->vm_rb, rb_parent, rb_link);
+ rb_insert_color(&vma->vm_rb, &mm->mm_rb);
+}
+
+static inline void __vma_link_file(struct vm_area_struct *vma)
+{
+ struct file * file;
+
+ file = vma->vm_file;
+ if (file) {
+ struct address_space *mapping = file->f_mapping;
+
+ if (vma->vm_flags & VM_DENYWRITE)
+ atomic_dec(&file->f_dentry->d_inode->i_writecount);
+ if (vma->vm_flags & VM_SHARED)
+ mapping->i_mmap_writable++;
+
+ flush_dcache_mmap_lock(mapping);
+ if (unlikely(vma->vm_flags & VM_NONLINEAR))
+ list_add_tail(&vma->shared.vm_set.list,
+ &mapping->i_mmap_nonlinear);
+ else
+ vma_prio_tree_insert(vma, &mapping->i_mmap);
+ flush_dcache_mmap_unlock(mapping);
+ }
+}
+
+static void
+__vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev, struct rb_node **rb_link,
+ struct rb_node *rb_parent)
+{
+ __vma_link_list(mm, vma, prev, rb_parent);
+ __vma_link_rb(mm, vma, rb_link, rb_parent);
+ __anon_vma_link(vma);
+}
+
+static void vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev, struct rb_node **rb_link,
+ struct rb_node *rb_parent)
+{
+ struct address_space *mapping = NULL;
+
+ if (vma->vm_file)
+ mapping = vma->vm_file->f_mapping;
+
+ if (mapping)
+ spin_lock(&mapping->i_mmap_lock);
+ anon_vma_lock(vma);
+
+ __vma_link(mm, vma, prev, rb_link, rb_parent);
+ __vma_link_file(vma);
+
+ anon_vma_unlock(vma);
+ if (mapping)
+ spin_unlock(&mapping->i_mmap_lock);
+
+ mark_mm_hugetlb(mm, vma);
+ mm->map_count++;
+ validate_mm(mm);
+}
+
+/*
+ * Helper for vma_adjust in the split_vma insert case:
+ * insert vm structure into list and rbtree and anon_vma,
+ * but it has already been inserted into prio_tree earlier.
+ */
+static void
+__insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
+{
+ struct vm_area_struct * __vma, * prev;
+ struct rb_node ** rb_link, * rb_parent;
+
+	__vma = find_vma_prepare(mm, vma->vm_start, &prev, &rb_link, &rb_parent);
+ if (__vma && __vma->vm_start < vma->vm_end)
+ BUG();
+ __vma_link(mm, vma, prev, rb_link, rb_parent);
+ mm->map_count++;
+}
+
+static inline void
+__vma_unlink(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev)
+{
+ prev->vm_next = vma->vm_next;
+ rb_erase(&vma->vm_rb, &mm->mm_rb);
+ if (mm->mmap_cache == vma)
+ mm->mmap_cache = prev;
+}
+
+/*
+ * We cannot adjust vm_start, vm_end, vm_pgoff fields of a vma that
+ * is already present in an i_mmap tree without adjusting the tree.
+ * The following helper function should be used when such adjustments
+ * are necessary. The "insert" vma (if any) is to be inserted
+ * before we drop the necessary locks.
+ */
+void vma_adjust(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, pgoff_t pgoff, struct vm_area_struct *insert)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *next = vma->vm_next;
+ struct address_space *mapping = NULL;
+ struct prio_tree_root *root = NULL;
+ struct file *file = vma->vm_file;
+ struct anon_vma *anon_vma = NULL;
+ long adjust_next = 0;
+ int remove_next = 0;
+
+ if (next && !insert) {
+ if (end >= next->vm_end) {
+ /*
+ * vma expands, overlapping all the next, and
+ * perhaps the one after too (mprotect case 6).
+ */
+again: remove_next = 1 + (end > next->vm_end);
+ end = next->vm_end;
+ anon_vma = next->anon_vma;
+ } else if (end > next->vm_start) {
+ /*
+ * vma expands, overlapping part of the next:
+ * mprotect case 5 shifting the boundary up.
+ */
+ adjust_next = (end - next->vm_start) >> PAGE_SHIFT;
+ anon_vma = next->anon_vma;
+ } else if (end < vma->vm_end) {
+ /*
+ * vma shrinks, and !insert tells it's not
+ * split_vma inserting another: so it must be
+ * mprotect case 4 shifting the boundary down.
+ */
+ adjust_next = - ((vma->vm_end - end) >> PAGE_SHIFT);
+ anon_vma = next->anon_vma;
+ }
+ }
+
+ if (file) {
+ mapping = file->f_mapping;
+ if (!(vma->vm_flags & VM_NONLINEAR))
+ root = &mapping->i_mmap;
+ spin_lock(&mapping->i_mmap_lock);
+ if (insert) {
+ /*
+ * Put into prio_tree now, so instantiated pages
+ * are visible to arm/parisc __flush_dcache_page
+ * throughout; but we cannot insert into address
+ * space until vma start or end is updated.
+ */
+ __vma_link_file(insert);
+ }
+ }
+
+ /*
+ * When changing only vma->vm_end, we don't really need
+ * anon_vma lock: but is that case worth optimizing out?
+ */
+ if (vma->anon_vma)
+ anon_vma = vma->anon_vma;
+ if (anon_vma)
+ spin_lock(&anon_vma->lock);
+
+ if (root) {
+ flush_dcache_mmap_lock(mapping);
+ vma_prio_tree_remove(vma, root);
+ if (adjust_next)
+ vma_prio_tree_remove(next, root);
+ }
+
+ vma->vm_start = start;
+ vma->vm_end = end;
+ vma->vm_pgoff = pgoff;
+ if (adjust_next) {
+ next->vm_start += adjust_next << PAGE_SHIFT;
+ next->vm_pgoff += adjust_next;
+ }
+
+ if (root) {
+ if (adjust_next) {
+ vma_prio_tree_init(next);
+ vma_prio_tree_insert(next, root);
+ }
+ vma_prio_tree_init(vma);
+ vma_prio_tree_insert(vma, root);
+ flush_dcache_mmap_unlock(mapping);
+ }
+
+ if (remove_next) {
+ /*
+ * vma_merge has merged next into vma, and needs
+ * us to remove next before dropping the locks.
+ */
+ __vma_unlink(mm, next, vma);
+ if (file)
+ __remove_shared_vm_struct(next, file, mapping);
+ if (next->anon_vma)
+ __anon_vma_merge(vma, next);
+ } else if (insert) {
+ /*
+ * split_vma has split insert from vma, and needs
+ * us to insert it before dropping the locks
+ * (it may either follow vma or precede it).
+ */
+ __insert_vm_struct(mm, insert);
+ }
+
+ if (anon_vma)
+ spin_unlock(&anon_vma->lock);
+ if (mapping)
+ spin_unlock(&mapping->i_mmap_lock);
+
+ if (remove_next) {
+ if (file)
+ fput(file);
+ mm->map_count--;
+ mpol_free(vma_policy(next));
+ kmem_cache_free(vm_area_cachep, next);
+ /*
+ * In mprotect's case 6 (see comments on vma_merge),
+ * we must remove another next too. It would clutter
+ * up the code too much to do both in one go.
+ */
+ if (remove_next == 2) {
+ next = vma->vm_next;
+ goto again;
+ }
+ }
+
+ validate_mm(mm);
+}
+
+/*
+ * If the vma has a ->close operation then the driver probably needs to release
+ * per-vma resources, so we don't attempt to merge those.
+ */
+#define VM_SPECIAL (VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_RESERVED)
+
+static inline int is_mergeable_vma(struct vm_area_struct *vma,
+ struct file *file, unsigned long vm_flags)
+{
+ if (vma->vm_flags != vm_flags)
+ return 0;
+ if (vma->vm_file != file)
+ return 0;
+ if (vma->vm_ops && vma->vm_ops->close)
+ return 0;
+ return 1;
+}
+
+static inline int is_mergeable_anon_vma(struct anon_vma *anon_vma1,
+ struct anon_vma *anon_vma2)
+{
+ return !anon_vma1 || !anon_vma2 || (anon_vma1 == anon_vma2);
+}
+
+/*
+ * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * in front of (at a lower virtual address and file offset than) the vma.
+ *
+ * We cannot merge two vmas if they have differently assigned (non-NULL)
+ * anon_vmas, nor if same anon_vma is assigned but offsets incompatible.
+ *
+ * We don't check here for the merged mmap wrapping around the end of pagecache
+ * indices (16TB on ia32) because do_mmap_pgoff() does not permit mmap's which
+ * wrap, nor mmaps which cover the final page at index -1UL.
+ */
+static int
+can_vma_merge_before(struct vm_area_struct *vma, unsigned long vm_flags,
+ struct anon_vma *anon_vma, struct file *file, pgoff_t vm_pgoff)
+{
+ if (is_mergeable_vma(vma, file, vm_flags) &&
+ is_mergeable_anon_vma(anon_vma, vma->anon_vma)) {
+ if (vma->vm_pgoff == vm_pgoff)
+ return 1;
+ }
+ return 0;
+}
+
+/*
+ * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * beyond (at a higher virtual address and file offset than) the vma.
+ *
+ * We cannot merge two vmas if they have differently assigned (non-NULL)
+ * anon_vmas, nor if same anon_vma is assigned but offsets incompatible.
+ */
+static int
+can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
+ struct anon_vma *anon_vma, struct file *file, pgoff_t vm_pgoff)
+{
+ if (is_mergeable_vma(vma, file, vm_flags) &&
+ is_mergeable_anon_vma(anon_vma, vma->anon_vma)) {
+ pgoff_t vm_pglen;
+ vm_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+ if (vma->vm_pgoff + vm_pglen == vm_pgoff)
+ return 1;
+ }
+ return 0;
+}
+
+/*
+ * Given a mapping request (addr,end,vm_flags,file,pgoff), figure out
+ * whether that can be merged with its predecessor or its successor.
+ * Or both (it neatly fills a hole).
+ *
+ * In most cases - when called for mmap, brk or mremap - [addr,end) is
+ * certain not to be mapped by the time vma_merge is called; but when
+ * called for mprotect, it is certain to be already mapped (either at
+ * an offset within prev, or at the start of next), and the flags of
+ * this area are about to be changed to vm_flags - and the no-change
+ * case has already been eliminated.
+ *
+ * The following mprotect cases have to be considered, where AAAA is
+ * the area passed down from mprotect_fixup, never extending beyond one
+ * vma, PPPPPP is the prev vma specified, and NNNNNN the next vma after:
+ *
+ * AAAA AAAA AAAA AAAA
+ * PPPPPPNNNNNN PPPPPPNNNNNN PPPPPPNNNNNN PPPPNNNNXXXX
+ * cannot merge might become might become might become
+ * PPNNNNNNNNNN PPPPPPPPPPNN PPPPPPPPPPPP 6 or
+ * mmap, brk or case 4 below case 5 below PPPPPPPPXXXX 7 or
+ * mremap move: PPPPNNNNNNNN 8
+ * AAAA
+ * PPPP NNNN PPPPPPPPPPPP PPPPPPPPNNNN PPPPNNNNNNNN
+ * might become case 1 below case 2 below case 3 below
+ *
+ * Odd one out? Case 8, because it extends NNNN but needs flags of XXXX:
+ * mprotect_fixup updates vm_flags & vm_page_prot on successful return.
+ */
+struct vm_area_struct *vma_merge(struct mm_struct *mm,
+ struct vm_area_struct *prev, unsigned long addr,
+ unsigned long end, unsigned long vm_flags,
+ struct anon_vma *anon_vma, struct file *file,
+ pgoff_t pgoff, struct mempolicy *policy)
+{
+ pgoff_t pglen = (end - addr) >> PAGE_SHIFT;
+ struct vm_area_struct *area, *next;
+
+ /*
+ * We later require that vma->vm_flags == vm_flags,
+ * so this tests vma->vm_flags & VM_SPECIAL, too.
+ */
+ if (vm_flags & VM_SPECIAL)
+ return NULL;
+
+ if (prev)
+ next = prev->vm_next;
+ else
+ next = mm->mmap;
+ area = next;
+ if (next && next->vm_end == end) /* cases 6, 7, 8 */
+ next = next->vm_next;
+
+ /*
+ * Can it merge with the predecessor?
+ */
+ if (prev && prev->vm_end == addr &&
+ mpol_equal(vma_policy(prev), policy) &&
+ can_vma_merge_after(prev, vm_flags,
+ anon_vma, file, pgoff)) {
+ /*
+ * OK, it can. Can we now merge in the successor as well?
+ */
+ if (next && end == next->vm_start &&
+ mpol_equal(policy, vma_policy(next)) &&
+ can_vma_merge_before(next, vm_flags,
+ anon_vma, file, pgoff+pglen) &&
+ is_mergeable_anon_vma(prev->anon_vma,
+ next->anon_vma)) {
+ /* cases 1, 6 */
+ vma_adjust(prev, prev->vm_start,
+ next->vm_end, prev->vm_pgoff, NULL);
+ } else /* cases 2, 5, 7 */
+ vma_adjust(prev, prev->vm_start,
+ end, prev->vm_pgoff, NULL);
+ return prev;
+ }
+
+ /*
+ * Can this new request be merged in front of next?
+ */
+ if (next && end == next->vm_start &&
+ mpol_equal(policy, vma_policy(next)) &&
+ can_vma_merge_before(next, vm_flags,
+ anon_vma, file, pgoff+pglen)) {
+ if (prev && addr < prev->vm_end) /* case 4 */
+ vma_adjust(prev, prev->vm_start,
+ addr, prev->vm_pgoff, NULL);
+ else /* cases 3, 8 */
+ vma_adjust(area, addr, next->vm_end,
+ next->vm_pgoff - pglen, NULL);
+ return area;
+ }
+
+ return NULL;
+}
+
+/*
+ * find_mergeable_anon_vma is used by anon_vma_prepare, to check
+ * neighbouring vmas for a suitable anon_vma, before it goes off
+ * to allocate a new anon_vma. It checks because a repetitive
+ * sequence of mprotects and faults may otherwise lead to distinct
+ * anon_vmas being allocated, preventing vma merge in subsequent
+ * mprotect.
+ */
+struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
+{
+ struct vm_area_struct *near;
+ unsigned long vm_flags;
+
+ near = vma->vm_next;
+ if (!near)
+ goto try_prev;
+
+ /*
+ * Since only mprotect tries to remerge vmas, match flags
+ * which might be mprotected into each other later on.
+ * Neither mlock nor madvise tries to remerge at present,
+ * so leave their flags as obstructing a merge.
+ */
+ vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC);
+ vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC);
+
+ if (near->anon_vma && vma->vm_end == near->vm_start &&
+ mpol_equal(vma_policy(vma), vma_policy(near)) &&
+ can_vma_merge_before(near, vm_flags,
+ NULL, vma->vm_file, vma->vm_pgoff +
+ ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT)))
+ return near->anon_vma;
+try_prev:
+ /*
+ * It is potentially slow to have to call find_vma_prev here.
+ * But it's only on the first write fault on the vma, not
+ * every time, and we could devise a way to avoid it later
+ * (e.g. stash info in next's anon_vma_node when assigning
+ * an anon_vma, or when trying vma_merge). Another time.
+ */
+ if (find_vma_prev(vma->vm_mm, vma->vm_start, &near) != vma)
+ BUG();
+ if (!near)
+ goto none;
+
+ vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC);
+ vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC);
+
+ if (near->anon_vma && near->vm_end == vma->vm_start &&
+ mpol_equal(vma_policy(near), vma_policy(vma)) &&
+ can_vma_merge_after(near, vm_flags,
+ NULL, vma->vm_file, vma->vm_pgoff))
+ return near->anon_vma;
+none:
+ /*
+ * There's no absolute need to look only at touching neighbours:
+ * we could search further afield for "compatible" anon_vmas.
+ * But it would probably just be a waste of time searching,
+ * or lead to too many vmas hanging off the same anon_vma.
+ * We're trying to allow mprotect remerging later on,
+ * not trying to minimize memory used for anon_vmas.
+ */
+ return NULL;
+}
+
+/*
+ * The caller must hold down_write(current->mm->mmap_sem).
+ */
+
+unsigned long do_mmap_pgoff(struct file * file, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, unsigned long pgoff)
+{
+ struct mm_struct * mm = current->mm;
+ struct vm_area_struct * vma, * prev;
+ struct inode *inode;
+ unsigned int vm_flags;
+ int correct_wcount = 0;
+ int error;
+ struct rb_node ** rb_link, * rb_parent;
+ int accountable = 1;
+ unsigned long charged = 0;
+
+ if (file) {
+ if (is_file_hugepages(file))
+ accountable = 0;
+
+ if (!file->f_op || !file->f_op->mmap)
+ return -ENODEV;
+
+ if ((prot & PROT_EXEC) &&
+ (file->f_vfsmnt->mnt_flags & MNT_NOEXEC))
+ return -EPERM;
+ }
+
+ if (!len)
+ return addr;
+
+ /* Careful about overflows.. */
+ len = PAGE_ALIGN(len);
+ if (!len || len > TASK_SIZE)
+ return -EINVAL;
+
+ /* offset overflow? */
+ if ((pgoff + (len >> PAGE_SHIFT)) < pgoff)
+ return -EINVAL;
+
+ /* Too many mappings? */
+ if (mm->map_count > sysctl_max_map_count)
+ return -ENOMEM;
+
+	/* Obtain the address to map to.  We verify (or select) it and ensure
+ * that it represents a valid section of the address space.
+ */
+ addr = get_unmapped_area(file, addr, len, pgoff, flags);
+ if (addr & ~PAGE_MASK)
+ return addr;
+
+ /* Do simple checking here so the lower-level routines won't have
+	 * to.  We assume access permissions have been handled by the open
+ * of the memory object, so we don't do any here.
+ */
+ vm_flags = calc_vm_prot_bits(prot) | calc_vm_flag_bits(flags) |
+ mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
+
+ if (flags & MAP_LOCKED) {
+ if (!capable(CAP_IPC_LOCK))
+ return -EPERM;
+ vm_flags |= VM_LOCKED;
+ }
+ /* mlock MCL_FUTURE? */
+ if (vm_flags & VM_LOCKED) {
+ unsigned long locked = mm->locked_vm << PAGE_SHIFT;
+ locked += len;
+ if (locked > current->rlim[RLIMIT_MEMLOCK].rlim_cur)
+ return -EAGAIN;
+ }
+
+ inode = file ? file->f_dentry->d_inode : NULL;
+
+ if (file) {
+ switch (flags & MAP_TYPE) {
+ case MAP_SHARED:
+ if ((prot&PROT_WRITE) && !(file->f_mode&FMODE_WRITE))
+ return -EACCES;
+
+ /*
+ * Make sure we don't allow writing to an append-only
+ * file..
+ */
+ if (IS_APPEND(inode) && (file->f_mode & FMODE_WRITE))
+ return -EACCES;
+
+ /*
+ * Make sure there are no mandatory locks on the file.
+ */
+ if (locks_verify_locked(inode))
+ return -EAGAIN;
+
+ vm_flags |= VM_SHARED | VM_MAYSHARE;
+ if (!(file->f_mode & FMODE_WRITE))
+ vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
+
+ /* fall through */
+ case MAP_PRIVATE:
+ if (!(file->f_mode & FMODE_READ))
+ return -EACCES;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ } else {
+ switch (flags & MAP_TYPE) {
+ case MAP_SHARED:
+ vm_flags |= VM_SHARED | VM_MAYSHARE;
+ break;
+ case MAP_PRIVATE:
+ /*
+ * Set pgoff according to addr for anon_vma.
+ */
+ pgoff = addr >> PAGE_SHIFT;
+ break;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ error = security_file_mmap(file, prot, flags);
+ if (error)
+ return error;
+
+ /* Clear old maps */
+ error = -ENOMEM;
+munmap_back:
+ vma = find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent);
+ if (vma && vma->vm_start < addr + len) {
+ if (do_munmap(mm, addr, len))
+ return -ENOMEM;
+ goto munmap_back;
+ }
+
+ /* Check against address space limit. */
+ if ((mm->total_vm << PAGE_SHIFT) + len
+ > current->rlim[RLIMIT_AS].rlim_cur)
+ return -ENOMEM;
+
+ if (accountable && (!(flags & MAP_NORESERVE) ||
+ sysctl_overcommit_memory > 1)) {
+ if (vm_flags & VM_SHARED) {
+ /* Check memory availability in shmem_file_setup? */
+ vm_flags |= VM_ACCOUNT;
+ } else if (vm_flags & VM_WRITE) {
+ /*
+ * Private writable mapping: check memory availability
+ */
+ charged = len >> PAGE_SHIFT;
+ if (security_vm_enough_memory(charged))
+ return -ENOMEM;
+ vm_flags |= VM_ACCOUNT;
+ }
+ }
+
+ /*
+ * Can we just expand an old private anonymous mapping?
+ * The VM_SHARED test is necessary because shmem_zero_setup
+ * will create the file object for a shared anonymous map below.
+ */
+ if (!file && !(vm_flags & VM_SHARED) &&
+ vma_merge(mm, prev, addr, addr + len, vm_flags,
+ NULL, NULL, pgoff, NULL))
+ goto out;
+
+ /*
+ * Determine the object being mapped and call the appropriate
+	 * specific mapper.  The address has already been validated but
+	 * not unmapped; any overlapping maps have already been removed.
+ */
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (!vma) {
+ error = -ENOMEM;
+ goto unacct_error;
+ }
+ memset(vma, 0, sizeof(*vma));
+
+ vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + len;
+ vma->vm_flags = vm_flags;
+ vma->vm_page_prot = protection_map[vm_flags & 0x0f];
+ vma->vm_pgoff = pgoff;
+
+ if (file) {
+ error = -EINVAL;
+ if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP))
+ goto free_vma;
+ if (vm_flags & VM_DENYWRITE) {
+ error = deny_write_access(file);
+ if (error)
+ goto free_vma;
+ correct_wcount = 1;
+ }
+ vma->vm_file = file;
+ get_file(file);
+ error = file->f_op->mmap(file, vma);
+ if (error)
+ goto unmap_and_free_vma;
+ } else if (vm_flags & VM_SHARED) {
+ error = shmem_zero_setup(vma);
+ if (error)
+ goto free_vma;
+ }
+
+ /* We set VM_ACCOUNT in a shared mapping's vm_flags, to inform
+ * shmem_zero_setup (perhaps called through /dev/zero's ->mmap)
+ * that memory reservation must be checked; but that reservation
+ * belongs to shared memory object, not to vma: so now clear it.
+ */
+ if ((vm_flags & (VM_SHARED|VM_ACCOUNT)) == (VM_SHARED|VM_ACCOUNT))
+ vma->vm_flags &= ~VM_ACCOUNT;
+
+ /* Can addr have changed??
+ *
+ * Answer: Yes, several device drivers can do it in their
+ * f_op->mmap method. -DaveM
+ */
+ addr = vma->vm_start;
+
+ if (!file || !vma_merge(mm, prev, addr, vma->vm_end,
+ vma->vm_flags, NULL, file, pgoff, vma_policy(vma))) {
+ vma_link(mm, vma, prev, rb_link, rb_parent);
+ if (correct_wcount)
+ atomic_inc(&inode->i_writecount);
+ } else {
+ if (file) {
+ if (correct_wcount)
+ atomic_inc(&inode->i_writecount);
+ fput(file);
+ }
+ mpol_free(vma_policy(vma));
+ kmem_cache_free(vm_area_cachep, vma);
+ }
+out:
+ mm->total_vm += len >> PAGE_SHIFT;
+ if (vm_flags & VM_LOCKED) {
+ mm->locked_vm += len >> PAGE_SHIFT;
+ make_pages_present(addr, addr + len);
+ }
+ if (flags & MAP_POPULATE) {
+ up_write(&mm->mmap_sem);
+ sys_remap_file_pages(addr, len, 0,
+ pgoff, flags & MAP_NONBLOCK);
+ down_write(&mm->mmap_sem);
+ }
+ return addr;
+
+unmap_and_free_vma:
+ if (correct_wcount)
+ atomic_inc(&inode->i_writecount);
+ vma->vm_file = NULL;
+ fput(file);
+
+ /* Undo any partial mapping done by a device driver. */
+ zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start, NULL);
+free_vma:
+ kmem_cache_free(vm_area_cachep, vma);
+unacct_error:
+ if (charged)
+ vm_unacct_memory(charged);
+ return error;
+}
+
+EXPORT_SYMBOL(do_mmap_pgoff);
+
+/* Get an address range which is currently unmapped.
+ * For shmat() with addr=0.
+ *
+ * Ugly calling convention alert:
+ * Return value with the low bits set means error value,
+ * ie
+ * if (ret & ~PAGE_MASK)
+ * error = ret;
+ *
+ * This function "knows" that -ENOMEM has the bits set.
+ */
+#ifndef HAVE_ARCH_UNMAPPED_AREA
+static inline unsigned long
+arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ unsigned long start_addr;
+
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+ start_addr = addr = mm->free_area_cache;
+
+full_search:
+ for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
+ /* At this point: (!vma || addr < vma->vm_end). */
+ if (TASK_SIZE - len < addr) {
+ /*
+ * Start a new search - just in case we missed
+ * some holes.
+ */
+ if (start_addr != TASK_UNMAPPED_BASE) {
+ start_addr = addr = TASK_UNMAPPED_BASE;
+ goto full_search;
+ }
+ return -ENOMEM;
+ }
+ if (!vma || addr + len <= vma->vm_start) {
+ /*
+ * Remember the place where we stopped the search:
+ */
+ mm->free_area_cache = addr + len;
+ return addr;
+ }
+ addr = vma->vm_end;
+ }
+}
+#else
+extern unsigned long
+arch_get_unmapped_area(struct file *, unsigned long, unsigned long,
+ unsigned long, unsigned long);
+#endif
+
+unsigned long
+get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ if (flags & MAP_FIXED) {
+ unsigned long ret;
+
+ if (addr > TASK_SIZE - len)
+ return -ENOMEM;
+ if (addr & ~PAGE_MASK)
+ return -EINVAL;
+ if (file && is_file_hugepages(file)) {
+ /*
+ * Check if the given range is hugepage aligned, and
+ * can be made suitable for hugepages.
+ */
+ ret = prepare_hugepage_range(addr, len);
+ } else {
+ /*
+ * Ensure that a normal request is not falling in a
+ * reserved hugepage range. For some archs like IA-64,
+ * there is a separate region for hugepages.
+ */
+ ret = is_hugepage_only_range(addr, len);
+ }
+ if (ret)
+ return -EINVAL;
+ return addr;
+ }
+
+ if (file && file->f_op && file->f_op->get_unmapped_area)
+ return file->f_op->get_unmapped_area(file, addr, len,
+ pgoff, flags);
+
+ return arch_get_unmapped_area(file, addr, len, pgoff, flags);
+}
+
+EXPORT_SYMBOL(get_unmapped_area);
+
+/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
+struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr)
+{
+ struct vm_area_struct *vma = NULL;
+
+ if (mm) {
+ /* Check the cache first. */
+ /* (Cache hit rate is typically around 35%.) */
+ vma = mm->mmap_cache;
+ if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
+ struct rb_node * rb_node;
+
+ rb_node = mm->mm_rb.rb_node;
+ vma = NULL;
+
+ while (rb_node) {
+ struct vm_area_struct * vma_tmp;
+
+ vma_tmp = rb_entry(rb_node,
+ struct vm_area_struct, vm_rb);
+
+ if (vma_tmp->vm_end > addr) {
+ vma = vma_tmp;
+ if (vma_tmp->vm_start <= addr)
+ break;
+ rb_node = rb_node->rb_left;
+ } else
+ rb_node = rb_node->rb_right;
+ }
+ if (vma)
+ mm->mmap_cache = vma;
+ }
+ }
+ return vma;
+}
+
+EXPORT_SYMBOL(find_vma);
+
+/* Same as find_vma, but also return a pointer to the previous VMA in *pprev. */
+struct vm_area_struct *
+find_vma_prev(struct mm_struct *mm, unsigned long addr,
+ struct vm_area_struct **pprev)
+{
+ struct vm_area_struct *vma = NULL, *prev = NULL;
+ struct rb_node * rb_node;
+ if (!mm)
+ goto out;
+
+ /* Guard against addr being lower than the first VMA */
+ vma = mm->mmap;
+
+ /* Go through the RB tree quickly. */
+ rb_node = mm->mm_rb.rb_node;
+
+ while (rb_node) {
+ struct vm_area_struct *vma_tmp;
+ vma_tmp = rb_entry(rb_node, struct vm_area_struct, vm_rb);
+
+ if (addr < vma_tmp->vm_end) {
+ rb_node = rb_node->rb_left;
+ } else {
+ prev = vma_tmp;
+ if (!prev->vm_next || (addr < prev->vm_next->vm_end))
+ break;
+ rb_node = rb_node->rb_right;
+ }
+ }
+
+out:
+ *pprev = prev;
+ return prev ? prev->vm_next : vma;
+}
+
+#ifdef CONFIG_STACK_GROWSUP
+/*
+ * vma is the first one with address > vma->vm_end. Have to extend vma.
+ */
+int expand_stack(struct vm_area_struct * vma, unsigned long address)
+{
+ unsigned long grow;
+
+ if (!(vma->vm_flags & VM_GROWSUP))
+ return -EFAULT;
+
+ /*
+ * We must make sure the anon_vma is allocated
+ * so that the anon_vma locking is not a noop.
+ */
+ if (unlikely(anon_vma_prepare(vma)))
+ return -ENOMEM;
+ anon_vma_lock(vma);
+
+ /*
+ * vma->vm_start/vm_end cannot change under us because the caller
+ * is required to hold the mmap_sem in read mode. We need the
+ * anon_vma lock to serialize against concurrent expand_stacks.
+ */
+ address += 4 + PAGE_SIZE - 1;
+ address &= PAGE_MASK;
+ grow = (address - vma->vm_end) >> PAGE_SHIFT;
+
+ /* Overcommit.. */
+ if (security_vm_enough_memory(grow)) {
+ anon_vma_unlock(vma);
+ return -ENOMEM;
+ }
+
+ if (address - vma->vm_start > current->rlim[RLIMIT_STACK].rlim_cur ||
+ ((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) >
+ current->rlim[RLIMIT_AS].rlim_cur) {
+ anon_vma_unlock(vma);
+ vm_unacct_memory(grow);
+ return -ENOMEM;
+ }
+ vma->vm_end = address;
+ vma->vm_mm->total_vm += grow;
+ if (vma->vm_flags & VM_LOCKED)
+ vma->vm_mm->locked_vm += grow;
+ anon_vma_unlock(vma);
+ return 0;
+}
+
+struct vm_area_struct *
+find_extend_vma(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma, *prev;
+
+ addr &= PAGE_MASK;
+ vma = find_vma_prev(mm, addr, &prev);
+ if (vma && (vma->vm_start <= addr))
+ return vma;
+ if (!prev || expand_stack(prev, addr))
+ return NULL;
+ if (prev->vm_flags & VM_LOCKED) {
+ make_pages_present(addr, prev->vm_end);
+ }
+ return prev;
+}
+#else
+/*
+ * vma is the first one with address < vma->vm_start. Have to extend vma.
+ */
+int expand_stack(struct vm_area_struct *vma, unsigned long address)
+{
+ unsigned long grow;
+
+ /*
+ * We must make sure the anon_vma is allocated
+ * so that the anon_vma locking is not a noop.
+ */
+ if (unlikely(anon_vma_prepare(vma)))
+ return -ENOMEM;
+ anon_vma_lock(vma);
+
+ /*
+ * vma->vm_start/vm_end cannot change under us because the caller
+ * is required to hold the mmap_sem in read mode. We need the
+ * anon_vma lock to serialize against concurrent expand_stacks.
+ */
+ address &= PAGE_MASK;
+ grow = (vma->vm_start - address) >> PAGE_SHIFT;
+
+ /* Overcommit.. */
+ if (security_vm_enough_memory(grow)) {
+ anon_vma_unlock(vma);
+ return -ENOMEM;
+ }
+
+ if (vma->vm_end - address > current->rlim[RLIMIT_STACK].rlim_cur ||
+ ((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) >
+ current->rlim[RLIMIT_AS].rlim_cur) {
+ anon_vma_unlock(vma);
+ vm_unacct_memory(grow);
+ return -ENOMEM;
+ }
+ vma->vm_start = address;
+ vma->vm_pgoff -= grow;
+ vma->vm_mm->total_vm += grow;
+ if (vma->vm_flags & VM_LOCKED)
+ vma->vm_mm->locked_vm += grow;
+ anon_vma_unlock(vma);
+ return 0;
+}
+
+struct vm_area_struct *
+find_extend_vma(struct mm_struct * mm, unsigned long addr)
+{
+ struct vm_area_struct * vma;
+ unsigned long start;
+
+ addr &= PAGE_MASK;
+	vma = find_vma(mm, addr);
+ if (!vma)
+ return NULL;
+ if (vma->vm_start <= addr)
+ return vma;
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ return NULL;
+ start = vma->vm_start;
+ if (expand_stack(vma, addr))
+ return NULL;
+ if (vma->vm_flags & VM_LOCKED) {
+ make_pages_present(addr, start);
+ }
+ return vma;
+}
+#endif
+
+/*
+ * Try to free as many page directory entries as we can,
+ * without having to work very hard at actually scanning
+ * the page tables themselves.
+ *
+ * Right now we try to free page tables if we have a nice
+ * PGDIR-aligned area that got free'd up. We could be more
+ * granular if we want to, but this is fast and simple,
+ * and covers the bad cases.
+ *
+ * "prev", if it exists, points to a vma before the one
+ * we just free'd - but there's no telling how much before.
+ */
+static void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *prev,
+ unsigned long start, unsigned long end)
+{
+ unsigned long first = start & PGDIR_MASK;
+ unsigned long last = end + PGDIR_SIZE - 1;
+ unsigned long start_index, end_index;
+ struct mm_struct *mm = tlb->mm;
+
+ if (!prev) {
+ prev = mm->mmap;
+ if (!prev)
+ goto no_mmaps;
+ if (prev->vm_end > start) {
+ if (last > prev->vm_start)
+ last = prev->vm_start;
+ goto no_mmaps;
+ }
+ }
+ for (;;) {
+ struct vm_area_struct *next = prev->vm_next;
+
+ if (next) {
+ if (next->vm_start < start) {
+ prev = next;
+ continue;
+ }
+ if (last > next->vm_start)
+ last = next->vm_start;
+ }
+ if (prev->vm_end > first)
+ first = prev->vm_end + PGDIR_SIZE - 1;
+ break;
+ }
+no_mmaps:
+ if (last < first) /* for arches with discontiguous pgd indices */
+ return;
+ /*
+ * If the PGD bits are not consecutive in the virtual address, the
+	 * old method of shifting the VA by PGDIR_SHIFT doesn't work.
+ */
+ start_index = pgd_index(first);
+ if (start_index < FIRST_USER_PGD_NR)
+ start_index = FIRST_USER_PGD_NR;
+ end_index = pgd_index(last);
+ if (end_index > start_index) {
+ clear_page_tables(tlb, start_index, end_index - start_index);
+ flush_tlb_pgtables(mm, first & PGDIR_MASK, last & PGDIR_MASK);
+ }
+}
+
+/* Normal function to fix up a mapping
+ * This function is the default for when an area has no specific
+ * function. This may be used as part of a more specific routine.
+ *
+ * By the time this function is called, the area struct has been
+ * removed from the process mapping list.
+ */
+static void unmap_vma(struct mm_struct *mm, struct vm_area_struct *area)
+{
+ size_t len = area->vm_end - area->vm_start;
+
+ area->vm_mm->total_vm -= len >> PAGE_SHIFT;
+ if (area->vm_flags & VM_LOCKED)
+ area->vm_mm->locked_vm -= len >> PAGE_SHIFT;
+ /*
+ * Is this a new hole at the lowest possible address?
+ */
+ if (area->vm_start >= TASK_UNMAPPED_BASE &&
+ area->vm_start < area->vm_mm->free_area_cache)
+ area->vm_mm->free_area_cache = area->vm_start;
+
+ remove_vm_struct(area);
+}
+
+/*
+ * Update the VMA and inode share lists.
+ *
+ * Ok - we have the memory areas we should free on the 'free' list,
+ * so release them, and do the vma updates.
+ */
+static void unmap_vma_list(struct mm_struct *mm,
+ struct vm_area_struct *mpnt)
+{
+ do {
+ struct vm_area_struct *next = mpnt->vm_next;
+ unmap_vma(mm, mpnt);
+ mpnt = next;
+ } while (mpnt != NULL);
+ validate_mm(mm);
+}
+
+/*
+ * Get rid of page table information in the indicated region.
+ *
+ * Called with the page table lock held.
+ */
+static void unmap_region(struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ struct vm_area_struct *prev,
+ unsigned long start,
+ unsigned long end)
+{
+ struct mmu_gather *tlb;
+ unsigned long nr_accounted = 0;
+
+ lru_add_drain();
+ tlb = tlb_gather_mmu(mm, 0);
+ unmap_vmas(&tlb, mm, vma, start, end, &nr_accounted, NULL);
+ vm_unacct_memory(nr_accounted);
+
+ if (is_hugepage_only_range(start, end - start))
+ hugetlb_free_pgtables(tlb, prev, start, end);
+ else
+ free_pgtables(tlb, prev, start, end);
+ tlb_finish_mmu(tlb, start, end);
+}
+
+/*
+ * Create a list of vma's touched by the unmap, removing them from the mm's
+ * vma list as we go..
+ */
+static void
+detach_vmas_to_be_unmapped(struct mm_struct *mm, struct vm_area_struct *vma,
+ struct vm_area_struct *prev, unsigned long end)
+{
+ struct vm_area_struct **insertion_point;
+ struct vm_area_struct *tail_vma = NULL;
+
+ insertion_point = (prev ? &prev->vm_next : &mm->mmap);
+ do {
+ rb_erase(&vma->vm_rb, &mm->mm_rb);
+ mm->map_count--;
+ tail_vma = vma;
+ vma = vma->vm_next;
+ } while (vma && vma->vm_start < end);
+ *insertion_point = vma;
+ tail_vma->vm_next = NULL;
+ mm->mmap_cache = NULL; /* Kill the cache. */
+}
+
+/*
+ * Split a vma into two pieces at address 'addr', a new vma is allocated
+ * either for the first part or the tail.
+ */
+int split_vma(struct mm_struct * mm, struct vm_area_struct * vma,
+ unsigned long addr, int new_below)
+{
+ struct mempolicy *pol;
+ struct vm_area_struct *new;
+
+ if (mm->map_count >= sysctl_max_map_count)
+ return -ENOMEM;
+
+ new = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (!new)
+ return -ENOMEM;
+
+ /* most fields are the same, copy all, and then fixup */
+ *new = *vma;
+ vma_prio_tree_init(new);
+
+ if (new_below)
+ new->vm_end = addr;
+ else {
+ new->vm_start = addr;
+ new->vm_pgoff += ((addr - vma->vm_start) >> PAGE_SHIFT);
+ }
+
+ pol = mpol_copy(vma_policy(vma));
+ if (IS_ERR(pol)) {
+ kmem_cache_free(vm_area_cachep, new);
+ return PTR_ERR(pol);
+ }
+ vma_set_policy(new, pol);
+
+ if (new->vm_file)
+ get_file(new->vm_file);
+
+ if (new->vm_ops && new->vm_ops->open)
+ new->vm_ops->open(new);
+
+ if (new_below)
+ vma_adjust(vma, addr, vma->vm_end, vma->vm_pgoff +
+ ((addr - new->vm_start) >> PAGE_SHIFT), new);
+ else
+ vma_adjust(vma, vma->vm_start, addr, vma->vm_pgoff, new);
+
+ return 0;
+}
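+
+/*
+ * Example: for a vma spanning [0x8000000, 0x8400000), a call to
+ * split_vma(mm, vma, 0x8200000, 0) leaves the original vma covering
+ * [0x8000000, 0x8200000) and links in a new vma for
+ * [0x8200000, 0x8400000), with vm_pgoff advanced accordingly; with
+ * new_below != 0 the new vma takes the lower half instead.
+ */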
+
+/* Munmap is split into 2 main parts -- this part which finds
+ * what needs doing, and the areas themselves, which do the
+ * work. This now handles partial unmappings.
+ * Jeremy Fitzhardinge <jeremy@goop.org>
+ */
+int do_munmap(struct mm_struct *mm, unsigned long start, size_t len)
+{
+ unsigned long end;
+ struct vm_area_struct *mpnt, *prev, *last;
+
+ if ((start & ~PAGE_MASK) || start > TASK_SIZE || len > TASK_SIZE-start)
+ return -EINVAL;
+
+ if ((len = PAGE_ALIGN(len)) == 0)
+ return -EINVAL;
+
+ /* Find the first overlapping VMA */
+ mpnt = find_vma_prev(mm, start, &prev);
+ if (!mpnt)
+ return 0;
+ /* we have start < mpnt->vm_end */
+
+ if (is_vm_hugetlb_page(mpnt)) {
+ int ret = is_aligned_hugepage_range(start, len);
+
+ if (ret)
+ return ret;
+ }
+
+ /* if it doesn't overlap, we have nothing.. */
+ end = start + len;
+ if (mpnt->vm_start >= end)
+ return 0;
+
+ /* Something will probably happen, so notify. */
+ if (mpnt->vm_file && (mpnt->vm_flags & VM_EXEC))
+ profile_exec_unmap(mm);
+
+ /*
+ * If we need to split any vma, do it now to save pain later.
+ *
+ * Note: mremap's move_vma VM_ACCOUNT handling assumes a partially
+ * unmapped vm_area_struct will remain in use: so lower split_vma
+ * places tmp vma above, and higher split_vma places tmp vma below.
+ */
+ if (start > mpnt->vm_start) {
+ if (split_vma(mm, mpnt, start, 0))
+ return -ENOMEM;
+ prev = mpnt;
+ }
+
+ /* Does it split the last one? */
+ last = find_vma(mm, end);
+ if (last && end > last->vm_start) {
+ if (split_vma(mm, last, end, 1))
+ return -ENOMEM;
+ }
+ mpnt = prev? prev->vm_next: mm->mmap;
+
+ /*
+ * Remove the vma's, and unmap the actual pages
+ */
+ detach_vmas_to_be_unmapped(mm, mpnt, prev, end);
+ spin_lock(&mm->page_table_lock);
+ unmap_region(mm, mpnt, prev, start, end);
+ spin_unlock(&mm->page_table_lock);
+
+ /* Fix up all other VM information */
+ unmap_vma_list(mm, mpnt);
+
+ return 0;
+}
+
+EXPORT_SYMBOL(do_munmap);
+
+asmlinkage long sys_munmap(unsigned long addr, size_t len)
+{
+ int ret;
+ struct mm_struct *mm = current->mm;
+
+ down_write(&mm->mmap_sem);
+ ret = do_munmap(mm, addr, len);
+ up_write(&mm->mmap_sem);
+ return ret;
+}
+
+/*
+ * this is really a simplified "do_mmap". it only handles
+ * anonymous maps. eventually we may be able to do some
+ * brk-specific accounting here.
+ */
+unsigned long do_brk(unsigned long addr, unsigned long len)
+{
+ struct mm_struct * mm = current->mm;
+ struct vm_area_struct * vma, * prev;
+ unsigned long flags;
+ struct rb_node ** rb_link, * rb_parent;
+ pgoff_t pgoff = addr >> PAGE_SHIFT;
+
+ len = PAGE_ALIGN(len);
+ if (!len)
+ return addr;
+
+ if ((addr + len) > TASK_SIZE || (addr + len) < addr)
+ return -EINVAL;
+
+ /*
+ * mlock MCL_FUTURE?
+ */
+ if (mm->def_flags & VM_LOCKED) {
+ unsigned long locked = mm->locked_vm << PAGE_SHIFT;
+ locked += len;
+ if (locked > current->rlim[RLIMIT_MEMLOCK].rlim_cur)
+ return -EAGAIN;
+ }
+
+ /*
+ * Clear old maps. this also does some error checking for us
+ */
+ munmap_back:
+ vma = find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent);
+ if (vma && vma->vm_start < addr + len) {
+ if (do_munmap(mm, addr, len))
+ return -ENOMEM;
+ goto munmap_back;
+ }
+
+ /* Check against address space limits *after* clearing old maps... */
+ if ((mm->total_vm << PAGE_SHIFT) + len
+ > current->rlim[RLIMIT_AS].rlim_cur)
+ return -ENOMEM;
+
+ if (mm->map_count > sysctl_max_map_count)
+ return -ENOMEM;
+
+ if (security_vm_enough_memory(len >> PAGE_SHIFT))
+ return -ENOMEM;
+
+ flags = VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
+
+ /* Can we just expand an old private anonymous mapping? */
+ if (vma_merge(mm, prev, addr, addr + len, flags,
+ NULL, NULL, pgoff, NULL))
+ goto out;
+
+ /*
+ * create a vma struct for an anonymous mapping
+ */
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (!vma) {
+ vm_unacct_memory(len >> PAGE_SHIFT);
+ return -ENOMEM;
+ }
+ memset(vma, 0, sizeof(*vma));
+
+ vma->vm_mm = mm;
+ vma->vm_start = addr;
+ vma->vm_end = addr + len;
+ vma->vm_pgoff = pgoff;
+ vma->vm_flags = flags;
+ vma->vm_page_prot = protection_map[flags & 0x0f];
+ vma_link(mm, vma, prev, rb_link, rb_parent);
+out:
+ mm->total_vm += len >> PAGE_SHIFT;
+ if (flags & VM_LOCKED) {
+ mm->locked_vm += len >> PAGE_SHIFT;
+ make_pages_present(addr, addr + len);
+ }
+ return addr;
+}
+
+EXPORT_SYMBOL(do_brk);
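+
+/*
+ * Typical use (a sketch, not defined in this file): sys_brk() grows
+ * the heap by calling do_brk() with mmap_sem held for writing, e.g.
+ *
+ *	down_write(&mm->mmap_sem);
+ *	ret = do_brk(oldbrk, newbrk - oldbrk);
+ *	up_write(&mm->mmap_sem);
+ *
+ * and the binfmt loaders use it similarly to set up the bss.
+ */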
+
+/* Release all mmaps. */
+void exit_mmap(struct mm_struct *mm)
+{
+ struct mmu_gather *tlb;
+ struct vm_area_struct *vma;
+ unsigned long nr_accounted = 0;
+
+ profile_exit_mmap(mm);
+
+ lru_add_drain();
+
+ spin_lock(&mm->page_table_lock);
+
+ tlb = tlb_gather_mmu(mm, 1);
+ flush_cache_mm(mm);
+ /* Use ~0UL here to ensure all VMAs in the mm are unmapped */
+ mm->map_count -= unmap_vmas(&tlb, mm, mm->mmap, 0,
+ ~0UL, &nr_accounted, NULL);
+ vm_unacct_memory(nr_accounted);
+ BUG_ON(mm->map_count); /* This is just debugging */
+ clear_page_tables(tlb, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD);
+ tlb_finish_mmu(tlb, 0, MM_VM_SIZE(mm));
+
+ vma = mm->mmap;
+ mm->mmap = mm->mmap_cache = NULL;
+ mm->mm_rb = RB_ROOT;
+ mm->rss = 0;
+ mm->total_vm = 0;
+ mm->locked_vm = 0;
+
+ spin_unlock(&mm->page_table_lock);
+
+ /*
+ * Walk the list again, actually closing and freeing it
+ * without holding any MM locks.
+ */
+ while (vma) {
+ struct vm_area_struct *next = vma->vm_next;
+ remove_vm_struct(vma);
+ vma = next;
+ }
+}
+
+/* Insert vm structure into process list sorted by address
+ * and into the inode's i_mmap tree. If vm_file is non-NULL
+ * then i_mmap_lock is taken here.
+ */
+void insert_vm_struct(struct mm_struct * mm, struct vm_area_struct * vma)
+{
+ struct vm_area_struct * __vma, * prev;
+ struct rb_node ** rb_link, * rb_parent;
+
+ /*
+ * The vm_pgoff of a purely anonymous vma should be irrelevant
+ * until its first write fault, when the page's anon_vma and index
+ * are set. But now set the vm_pgoff it will almost certainly
+ * end up with (unless mremap moves it elsewhere before that
+ * first write fault), so /proc/pid/maps tells a consistent story.
+ *
+ * By setting it to reflect the virtual start address of the
+ * vma, merges and splits can happen in a seamless way, just
+ * using the existing file pgoff checks and manipulations.
+ * Similarly in do_mmap_pgoff and in do_brk.
+ */
+ if (!vma->vm_file) {
+ BUG_ON(vma->anon_vma);
+ vma->vm_pgoff = vma->vm_start >> PAGE_SHIFT;
+ }
+ __vma = find_vma_prepare(mm,vma->vm_start,&prev,&rb_link,&rb_parent);
+ if (__vma && __vma->vm_start < vma->vm_end)
+ BUG();
+ vma_link(mm, vma, prev, rb_link, rb_parent);
+}
+
+/*
+ * Copy the vma structure to a new location in the same mm,
+ * prior to moving page table entries, to effect an mremap move.
+ */
+struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+ unsigned long addr, unsigned long len, pgoff_t pgoff)
+{
+ struct vm_area_struct *vma = *vmap;
+ unsigned long vma_start = vma->vm_start;
+ struct mm_struct *mm = vma->vm_mm;
+ struct vm_area_struct *new_vma, *prev;
+ struct rb_node **rb_link, *rb_parent;
+ struct mempolicy *pol;
+
+ /*
+ * If anonymous vma has not yet been faulted, update new pgoff
+ * to match new location, to increase its chance of merging.
+ */
+ if (!vma->vm_file && !vma->anon_vma)
+ pgoff = addr >> PAGE_SHIFT;
+
+ find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent);
+ new_vma = vma_merge(mm, prev, addr, addr + len, vma->vm_flags,
+ vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma));
+ if (new_vma) {
+ /*
+ * Source vma may have been merged into new_vma
+ */
+ if (vma_start >= new_vma->vm_start &&
+ vma_start < new_vma->vm_end)
+ *vmap = new_vma;
+ } else {
+ new_vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (new_vma) {
+ *new_vma = *vma;
+ vma_prio_tree_init(new_vma);
+ pol = mpol_copy(vma_policy(vma));
+ if (IS_ERR(pol)) {
+ kmem_cache_free(vm_area_cachep, new_vma);
+ return NULL;
+ }
+ vma_set_policy(new_vma, pol);
+ new_vma->vm_start = addr;
+ new_vma->vm_end = addr + len;
+ new_vma->vm_pgoff = pgoff;
+ if (new_vma->vm_file)
+ get_file(new_vma->vm_file);
+ if (new_vma->vm_ops && new_vma->vm_ops->open)
+ new_vma->vm_ops->open(new_vma);
+ vma_link(mm, new_vma, prev, rb_link, rb_parent);
+ }
+ }
+ return new_vma;
+}
--- /dev/null
+/*
+ * mm/mprotect.c
+ *
+ * (C) Copyright 1994 Linus Torvalds
+ * (C) Copyright 2002 Christoph Hellwig
+ *
+ * Address space accounting code <alan@redhat.com>
+ * (C) Copyright 2002 Red Hat Inc, All Rights Reserved
+ */
+
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <linux/slab.h>
+#include <linux/shm.h>
+#include <linux/mman.h>
+#include <linux/fs.h>
+#include <linux/highmem.h>
+#include <linux/security.h>
+#include <linux/mempolicy.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/cacheflush.h>
+#include <asm/tlbflush.h>
+
+static inline void
+change_pte_range(pmd_t *pmd, unsigned long address,
+ unsigned long size, pgprot_t newprot)
+{
+ pte_t * pte;
+ unsigned long end;
+
+ if (pmd_none(*pmd))
+ return;
+ if (pmd_bad(*pmd)) {
+ pmd_ERROR(*pmd);
+ pmd_clear(pmd);
+ return;
+ }
+ pte = pte_offset_map(pmd, address);
+ address &= ~PMD_MASK;
+ end = address + size;
+ if (end > PMD_SIZE)
+ end = PMD_SIZE;
+ do {
+ if (pte_present(*pte)) {
+ pte_t entry;
+
+ /* Avoid an SMP race with hardware updated dirty/clean
+ * bits by wiping the pte and then setting the new pte
+ * into place.
+ */
+ entry = ptep_get_and_clear(pte);
+ set_pte(pte, pte_modify(entry, newprot));
+ }
+ address += PAGE_SIZE;
+ pte++;
+ } while (address && (address < end));
+ pte_unmap(pte - 1);
+}
+
+static inline void
+change_pmd_range(pgd_t *pgd, unsigned long address,
+ unsigned long size, pgprot_t newprot)
+{
+ pmd_t * pmd;
+ unsigned long end;
+
+ if (pgd_none(*pgd))
+ return;
+ if (pgd_bad(*pgd)) {
+ pgd_ERROR(*pgd);
+ pgd_clear(pgd);
+ return;
+ }
+ pmd = pmd_offset(pgd, address);
+ address &= ~PGDIR_MASK;
+ end = address + size;
+ if (end > PGDIR_SIZE)
+ end = PGDIR_SIZE;
+ do {
+ change_pte_range(pmd, address, end - address, newprot);
+ address = (address + PMD_SIZE) & PMD_MASK;
+ pmd++;
+ } while (address && (address < end));
+}
+
+static void
+change_protection(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, pgprot_t newprot)
+{
+ pgd_t *dir;
+ unsigned long beg = start;
+
+ dir = pgd_offset(current->mm, start);
+ flush_cache_range(vma, beg, end);
+ if (start >= end)
+ BUG();
+ spin_lock(&current->mm->page_table_lock);
+ do {
+ change_pmd_range(dir, start, end - start, newprot);
+ start = (start + PGDIR_SIZE) & PGDIR_MASK;
+ dir++;
+ } while (start && (start < end));
+ flush_tlb_range(vma, beg, end);
+ spin_unlock(&current->mm->page_table_lock);
+ return;
+}
+
+static int
+mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
+ unsigned long start, unsigned long end, unsigned int newflags)
+{
+ struct mm_struct * mm = vma->vm_mm;
+ unsigned long charged = 0;
+ pgprot_t newprot;
+ pgoff_t pgoff;
+ int error;
+
+ if (newflags == vma->vm_flags) {
+ *pprev = vma;
+ return 0;
+ }
+
+ /*
+ * If we make a private mapping writable we increase our commit;
+ * but (without finer accounting) cannot reduce our commit if we
+ * make it unwritable again.
+ *
+ * FIXME? We haven't defined a VM_NORESERVE flag, so mprotecting
+ * a MAP_NORESERVE private mapping to writable will now reserve.
+ */
+ if (newflags & VM_WRITE) {
+ if (!(vma->vm_flags & (VM_ACCOUNT|VM_WRITE|VM_SHARED|VM_HUGETLB))) {
+ charged = (end - start) >> PAGE_SHIFT;
+ if (security_vm_enough_memory(charged))
+ return -ENOMEM;
+ newflags |= VM_ACCOUNT;
+ }
+ }
+
+ newprot = protection_map[newflags & 0xf];
+
+ /*
+ * First try to merge with previous and/or next vma.
+ */
+ pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
+ *pprev = vma_merge(mm, *pprev, start, end, newflags,
+ vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma));
+ if (*pprev) {
+ vma = *pprev;
+ goto success;
+ }
+
+ if (start != vma->vm_start) {
+ error = split_vma(mm, vma, start, 1);
+ if (error)
+ goto fail;
+ }
+ /*
+ * Unless it returns an error, this function always sets *pprev to
+ * the first vma for which vma->vm_end >= end.
+ */
+ *pprev = vma;
+
+ if (end != vma->vm_end) {
+ error = split_vma(mm, vma, end, 0);
+ if (error)
+ goto fail;
+ }
+
+success:
+ /*
+ * vm_flags and vm_page_prot are protected by the mmap_sem
+ * held in write mode.
+ */
+ vma->vm_flags = newflags;
+ vma->vm_page_prot = newprot;
+ change_protection(vma, start, end, newprot);
+ return 0;
+
+fail:
+ vm_unacct_memory(charged);
+ return error;
+}
+
+asmlinkage long
+sys_mprotect(unsigned long start, size_t len, unsigned long prot)
+{
+ unsigned long vm_flags, nstart, end, tmp;
+ struct vm_area_struct *vma, *prev;
+ int error = -EINVAL;
+ const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
+ prot &= ~(PROT_GROWSDOWN|PROT_GROWSUP);
+ if (grows == (PROT_GROWSDOWN|PROT_GROWSUP)) /* can't be both */
+ return -EINVAL;
+
+ if (start & ~PAGE_MASK)
+ return -EINVAL;
+ len = PAGE_ALIGN(len);
+ end = start + len;
+ if (end < start)
+ return -ENOMEM;
+ if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM))
+ return -EINVAL;
+ if (end == start)
+ return 0;
+
+ vm_flags = calc_vm_prot_bits(prot);
+
+ down_write(&current->mm->mmap_sem);
+
+ vma = find_vma_prev(current->mm, start, &prev);
+ error = -ENOMEM;
+ if (!vma)
+ goto out;
+ if (unlikely(grows & PROT_GROWSDOWN)) {
+ if (vma->vm_start >= end)
+ goto out;
+ start = vma->vm_start;
+ error = -EINVAL;
+ if (!(vma->vm_flags & VM_GROWSDOWN))
+ goto out;
+ }
+ else {
+ if (vma->vm_start > start)
+ goto out;
+ if (unlikely(grows & PROT_GROWSUP)) {
+ end = vma->vm_end;
+ error = -EINVAL;
+ if (!(vma->vm_flags & VM_GROWSUP))
+ goto out;
+ }
+ }
+ if (start > vma->vm_start)
+ prev = vma;
+
+ for (nstart = start ; ; ) {
+ unsigned int newflags;
+
+ /* Here we know that vma->vm_start <= nstart < vma->vm_end. */
+
+ if (is_vm_hugetlb_page(vma)) {
+ error = -EACCES;
+ goto out;
+ }
+
+ newflags = vm_flags | (vma->vm_flags & ~(VM_READ | VM_WRITE | VM_EXEC));
+
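+ /*
+ * newflags >> 4 lines the VM_MAYREAD/MAYWRITE/MAYEXEC bits up
+ * with VM_READ/VM_WRITE/VM_EXEC, so this rejects any requested
+ * permission the vma does not allow.
+ */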
+ if ((newflags & ~(newflags >> 4)) & 0xf) {
+ error = -EACCES;
+ goto out;
+ }
+
+ error = security_file_mprotect(vma, prot);
+ if (error)
+ goto out;
+
+ tmp = vma->vm_end;
+ if (tmp > end)
+ tmp = end;
+ error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
+ if (error)
+ goto out;
+ nstart = tmp;
+
+ if (nstart < prev->vm_end)
+ nstart = prev->vm_end;
+ if (nstart >= end)
+ goto out;
+
+ vma = prev->vm_next;
+ if (!vma || vma->vm_start != nstart) {
+ error = -ENOMEM;
+ goto out;
+ }
+ }
+out:
+ up_write(&current->mm->mmap_sem);
+ return error;
+}
--- /dev/null
+/*
+ * linux/mm/page_alloc.c
+ *
+ * Manages the free list; the system allocates free pages here.
+ * Note that kmalloc() lives in slab.c
+ *
+ * Copyright (C) 1991, 1992, 1993, 1994 Linus Torvalds
+ * Swap reorganised 29.12.95, Stephen Tweedie
+ * Support of BIGMEM added by Gerhard Wichert, Siemens AG, July 1999
+ * Reshaped it to be a zoned allocator, Ingo Molnar, Red Hat, 1999
+ * Discontiguous memory support, Kanoj Sarcar, SGI, Nov 1999
+ * Zone balancing, Kanoj Sarcar, SGI, Jan 2000
+ * Per cpu hot/cold page lists, bulk allocation, Martin J. Bligh, Sept 2002
+ * (lots of bits borrowed from Ingo Molnar & Andrew Morton)
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/interrupt.h>
+#include <linux/pagemap.h>
+#include <linux/bootmem.h>
+#include <linux/compiler.h>
+#include <linux/module.h>
+#include <linux/suspend.h>
+#include <linux/pagevec.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/notifier.h>
+#include <linux/topology.h>
+#include <linux/sysctl.h>
+#include <linux/cpu.h>
+
+#include <asm/tlbflush.h>
+
+DECLARE_BITMAP(node_online_map, MAX_NUMNODES);
+struct pglist_data *pgdat_list;
+unsigned long totalram_pages;
+unsigned long totalhigh_pages;
+int nr_swap_pages;
+int numnodes = 1;
+int sysctl_lower_zone_protection = 0;
+
+EXPORT_SYMBOL(totalram_pages);
+EXPORT_SYMBOL(nr_swap_pages);
+
+/*
+ * Used by page_zone() to look up the address of the struct zone whose
+ * id is encoded in the upper bits of page->flags
+ */
+struct zone *zone_table[1 << (ZONES_SHIFT + NODES_SHIFT)];
+EXPORT_SYMBOL(zone_table);
+
+static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" };
+int min_free_kbytes = 1024;
+
+/*
+ * Temporary debugging check for pages not lying within a given zone.
+ */
+static int bad_range(struct zone *zone, struct page *page)
+{
+ if (page_to_pfn(page) >= zone->zone_start_pfn + zone->spanned_pages)
+ return 1;
+ if (page_to_pfn(page) < zone->zone_start_pfn)
+ return 1;
+ if (zone != page_zone(page))
+ return 1;
+ return 0;
+}
+
+static void bad_page(const char *function, struct page *page)
+{
+ printk(KERN_EMERG "Bad page state at %s (in process '%s', page %p)\n",
+ function, current->comm, page);
+ printk(KERN_EMERG "flags:0x%08lx mapping:%p mapcount:%d count:%d\n",
+ (unsigned long)page->flags, page->mapping,
+ (int)page->mapcount, page_count(page));
+ printk(KERN_EMERG "Backtrace:\n");
+ dump_stack();
+ printk(KERN_EMERG "Trying to fix it up, but a reboot is needed\n");
+ page->flags &= ~(1 << PG_private |
+ 1 << PG_locked |
+ 1 << PG_lru |
+ 1 << PG_active |
+ 1 << PG_dirty |
+ 1 << PG_maplock |
+ 1 << PG_anon |
+ 1 << PG_swapcache |
+ 1 << PG_writeback);
+ set_page_count(page, 0);
+ page->mapping = NULL;
+ page->mapcount = 0;
+}
+
+#ifndef CONFIG_HUGETLB_PAGE
+#define prep_compound_page(page, order) do { } while (0)
+#define destroy_compound_page(page, order) do { } while (0)
+#else
+/*
+ * Higher-order pages are called "compound pages". They are structured thusly:
+ *
+ * The first PAGE_SIZE page is called the "head page".
+ *
+ * The remaining PAGE_SIZE pages are called "tail pages".
+ *
+ * All pages have PG_compound set. All pages have their ->private pointing at
+ * the head page (even the head page has this).
+ *
+ * The first tail page's ->mapping, if non-zero, holds the address of the
+ * compound page's put_page() function.
+ *
+ * The order of the allocation is stored in the first tail page's ->index
+ * This is only for debug at present. This usage means that zero-order pages
+ * may not be compound.
+ */
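+
+/*
+ * For example, an order-2 compound page spans four struct pages:
+ * page[0] is the head, page[1].index holds the order (2), and all
+ * four have PG_compound set with ->private pointing back at page[0].
+ */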
+static void prep_compound_page(struct page *page, unsigned long order)
+{
+ int i;
+ int nr_pages = 1 << order;
+
+ page[1].mapping = NULL;
+ page[1].index = order;
+ for (i = 0; i < nr_pages; i++) {
+ struct page *p = page + i;
+
+ SetPageCompound(p);
+ p->private = (unsigned long)page;
+ }
+}
+
+static void destroy_compound_page(struct page *page, unsigned long order)
+{
+ int i;
+ int nr_pages = 1 << order;
+
+ if (!PageCompound(page))
+ return;
+
+ if (page[1].index != order)
+ bad_page(__FUNCTION__, page);
+
+ for (i = 0; i < nr_pages; i++) {
+ struct page *p = page + i;
+
+ if (!PageCompound(p))
+ bad_page(__FUNCTION__, page);
+ if (p->private != (unsigned long)page)
+ bad_page(__FUNCTION__, page);
+ ClearPageCompound(p);
+ }
+}
+#endif /* CONFIG_HUGETLB_PAGE */
+
+/*
+ * Freeing function for a buddy system allocator.
+ *
+ * The concept of a buddy system is to maintain direct-mapped table
+ * (containing bit values) for memory blocks of various "orders".
+ * The bottom level table contains the map for the smallest allocatable
+ * units of memory (here, pages), and each level above it describes
+ * pairs of units from the levels below, hence, "buddies".
+ * At a high level, all that happens here is marking the table entry
+ * at the bottom level available, and propagating the changes upward
+ * as necessary, plus some accounting needed to play nicely with other
+ * parts of the VM system.
+ * At each level, we keep one bit for each pair of blocks, which
+ * is set to 1 iff only one of the pair is allocated. So when we
+ * are allocating or freeing one, we can derive the state of the
+ * other. That is, if we allocate a small block, and both were
+ * free, the remainder of the region must be split into blocks.
+ * If a block is freed, and its buddy is also free, then this
+ * triggers coalescing into a block of larger size.
+ *
+ * -- wli
+ */
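+
+/*
+ * Concretely (using the identity -mask = 1+~mask below): freeing an
+ * order-0 page at page_idx 5 gives mask = ~0UL, so its buddy is
+ * page_idx 5 ^ 1 = 4; if 4 is also free the pair merges, mask becomes
+ * ~0UL << 1, page_idx drops to 4, and the next buddy checked is
+ * 4 ^ 2 = 6, and so on up the orders. Note that "free_pages -= mask"
+ * relies on two's complement: subtracting ~0UL << order adds 1 << order.
+ */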
+
+static inline void __free_pages_bulk (struct page *page, struct page *base,
+ struct zone *zone, struct free_area *area, unsigned long mask,
+ unsigned int order)
+{
+ unsigned long page_idx, index;
+
+ if (order)
+ destroy_compound_page(page, order);
+ page_idx = page - base;
+ if (page_idx & ~mask)
+ BUG();
+ index = page_idx >> (1 + order);
+
+ zone->free_pages -= mask;
+ while (mask + (1 << (MAX_ORDER-1))) {
+ struct page *buddy1, *buddy2;
+
+ BUG_ON(area >= zone->free_area + MAX_ORDER);
+ if (!__test_and_change_bit(index, area->map))
+ /*
+ * the buddy page is still allocated.
+ */
+ break;
+ /*
+ * Move the buddy up one level.
+ * This code is taking advantage of the identity:
+ * -mask = 1+~mask
+ */
+ buddy1 = base + (page_idx ^ -mask);
+ buddy2 = base + page_idx;
+ BUG_ON(bad_range(zone, buddy1));
+ BUG_ON(bad_range(zone, buddy2));
+ list_del(&buddy1->lru);
+ mask <<= 1;
+ area++;
+ index >>= 1;
+ page_idx &= mask;
+ }
+ list_add(&(base + page_idx)->lru, &area->free_list);
+}
+
+static inline void free_pages_check(const char *function, struct page *page)
+{
+ if ( page_mapped(page) ||
+ page->mapping != NULL ||
+ page_count(page) != 0 ||
+ (page->flags & (
+ 1 << PG_lru |
+ 1 << PG_private |
+ 1 << PG_locked |
+ 1 << PG_active |
+ 1 << PG_reclaim |
+ 1 << PG_slab |
+ 1 << PG_maplock |
+ 1 << PG_anon |
+ 1 << PG_swapcache |
+ 1 << PG_writeback )))
+ bad_page(function, page);
+ if (PageDirty(page))
+ ClearPageDirty(page);
+}
+
+/*
+ * Frees a list of pages.
+ * Assumes all pages on list are in same zone, and of same order.
+ * count is the number of pages to free, or 0 for all on the list.
+ *
+ * If the zone was previously in an "all pages pinned" state then look to
+ * see if this freeing clears that state.
+ *
+ * And clear the zone's pages_scanned counter, to hold off the "all pages are
+ * pinned" detection logic.
+ */
+static int
+free_pages_bulk(struct zone *zone, int count,
+ struct list_head *list, unsigned int order)
+{
+ unsigned long mask, flags;
+ struct free_area *area;
+ struct page *base, *page = NULL;
+ int ret = 0;
+
+ mask = (~0UL) << order;
+ base = zone->zone_mem_map;
+ area = zone->free_area + order;
+ spin_lock_irqsave(&zone->lock, flags);
+ zone->all_unreclaimable = 0;
+ zone->pages_scanned = 0;
+ while (!list_empty(list) && count--) {
+ page = list_entry(list->prev, struct page, lru);
+ /* have to delete it as __free_pages_bulk list manipulates */
+ list_del(&page->lru);
+ __free_pages_bulk(page, base, zone, area, mask, order);
+ ret++;
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ return ret;
+}
+
+void __free_pages_ok(struct page *page, unsigned int order)
+{
+ LIST_HEAD(list);
+ int i;
+
+ mod_page_state(pgfree, 1 << order);
+ for (i = 0 ; i < (1 << order) ; ++i)
+ free_pages_check(__FUNCTION__, page + i);
+ list_add(&page->lru, &list);
+ kernel_map_pages(page, 1<<order, 0);
+ free_pages_bulk(page_zone(page), 1, &list, order);
+}
+
+#define MARK_USED(index, order, area) \
+ __change_bit((index) >> (1+(order)), (area)->map)
+
+static inline struct page *
+expand(struct zone *zone, struct page *page,
+ unsigned long index, int low, int high, struct free_area *area)
+{
+ unsigned long size = 1 << high;
+
+ while (high > low) {
+ BUG_ON(bad_range(zone, page));
+ area--;
+ high--;
+ size >>= 1;
+ list_add(&page->lru, &area->free_list);
+ MARK_USED(index, high, area);
+ index += size;
+ page += size;
+ }
+ return page;
+}
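+
+/*
+ * Example: carving an order-0 page out of an order-2 block of four
+ * pages frees the first two pages as an order-1 block, then the third
+ * page as an order-0 block, and returns the fourth page to the caller.
+ */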
+
+static inline void set_page_refs(struct page *page, int order)
+{
+#ifdef CONFIG_MMU
+ set_page_count(page, 1);
+#else
+ int i;
+
+ /*
+ * We need to reference all the pages for this order, otherwise if
+ * anyone accesses one of the pages with get_page/put_page it will be freed.
+ */
+ for (i = 0; i < (1 << order); i++)
+ set_page_count(page+i, 1);
+#endif /* CONFIG_MMU */
+}
+
+/*
+ * This page is about to be returned from the page allocator
+ */
+static void prep_new_page(struct page *page, int order)
+{
+ if (page->mapping || page_mapped(page) ||
+ (page->flags & (
+ 1 << PG_private |
+ 1 << PG_locked |
+ 1 << PG_lru |
+ 1 << PG_active |
+ 1 << PG_dirty |
+ 1 << PG_reclaim |
+ 1 << PG_maplock |
+ 1 << PG_anon |
+ 1 << PG_swapcache |
+ 1 << PG_writeback )))
+ bad_page(__FUNCTION__, page);
+
+ page->flags &= ~(1 << PG_uptodate | 1 << PG_error |
+ 1 << PG_referenced | 1 << PG_arch_1 |
+ 1 << PG_checked | 1 << PG_mappedtodisk);
+ page->private = 0;
+ set_page_refs(page, order);
+}
+
+/*
+ * Do the hard work of removing an element from the buddy allocator.
+ * Call me with the zone->lock already held.
+ */
+static struct page *__rmqueue(struct zone *zone, unsigned int order)
+{
+ struct free_area * area;
+ unsigned int current_order;
+ struct page *page;
+ unsigned int index;
+
+ for (current_order = order; current_order < MAX_ORDER; ++current_order) {
+ area = zone->free_area + current_order;
+ if (list_empty(&area->free_list))
+ continue;
+
+ page = list_entry(area->free_list.next, struct page, lru);
+ list_del(&page->lru);
+ index = page - zone->zone_mem_map;
+ if (current_order != MAX_ORDER-1)
+ MARK_USED(index, current_order, area);
+ zone->free_pages -= 1UL << order;
+ return expand(zone, page, index, order, current_order, area);
+ }
+
+ return NULL;
+}
+
+/*
+ * Obtain a specified number of elements from the buddy allocator, all under
+ * a single hold of the lock, for efficiency. Add them to the supplied list.
+ * Returns the number of new pages which were placed at *list.
+ */
+static int rmqueue_bulk(struct zone *zone, unsigned int order,
+ unsigned long count, struct list_head *list)
+{
+ unsigned long flags;
+ int i;
+ int allocated = 0;
+ struct page *page;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ for (i = 0; i < count; ++i) {
+ page = __rmqueue(zone, order);
+ if (page == NULL)
+ break;
+ allocated++;
+ list_add_tail(&page->lru, list);
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ return allocated;
+}
+
+#if defined(CONFIG_PM) || defined(CONFIG_HOTPLUG_CPU)
+static void __drain_pages(unsigned int cpu)
+{
+ struct zone *zone;
+ int i;
+
+ for_each_zone(zone) {
+ struct per_cpu_pageset *pset;
+
+ pset = &zone->pageset[cpu];
+ for (i = 0; i < ARRAY_SIZE(pset->pcp); i++) {
+ struct per_cpu_pages *pcp;
+
+ pcp = &pset->pcp[i];
+ pcp->count -= free_pages_bulk(zone, pcp->count,
+ &pcp->list, 0);
+ }
+ }
+}
+#endif /* CONFIG_PM || CONFIG_HOTPLUG_CPU */
+
+#ifdef CONFIG_PM
+int is_head_of_free_region(struct page *page)
+{
+ struct zone *zone = page_zone(page);
+ unsigned long flags;
+ int order;
+ struct list_head *curr;
+
+ /*
+ * Should not matter as we need quiescent system for
+ * suspend anyway, but...
+ */
+ spin_lock_irqsave(&zone->lock, flags);
+ for (order = MAX_ORDER - 1; order >= 0; --order)
+ list_for_each(curr, &zone->free_area[order].free_list)
+ if (page == list_entry(curr, struct page, lru)) {
+ spin_unlock_irqrestore(&zone->lock, flags);
+ return 1 << order;
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ return 0;
+}
+
+/*
+ * Spill all of this CPU's per-cpu pages back into the buddy allocator.
+ */
+void drain_local_pages(void)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ __drain_pages(smp_processor_id());
+ local_irq_restore(flags);
+}
+#endif /* CONFIG_PM */
+
+static void zone_statistics(struct zonelist *zonelist, struct zone *z)
+{
+#ifdef CONFIG_NUMA
+ unsigned long flags;
+ int cpu;
+ pg_data_t *pg = z->zone_pgdat;
+ pg_data_t *orig = zonelist->zones[0]->zone_pgdat;
+ struct per_cpu_pageset *p;
+
+ local_irq_save(flags);
+ cpu = smp_processor_id();
+ p = &z->pageset[cpu];
+ if (pg == orig) {
+ z->pageset[cpu].numa_hit++;
+ } else {
+ p->numa_miss++;
+ zonelist->zones[0]->pageset[cpu].numa_foreign++;
+ }
+ if (pg == NODE_DATA(numa_node_id()))
+ p->local_node++;
+ else
+ p->other_node++;
+ local_irq_restore(flags);
+#endif
+}
+
+/*
+ * Free a 0-order page
+ */
+static void FASTCALL(free_hot_cold_page(struct page *page, int cold));
+static void fastcall free_hot_cold_page(struct page *page, int cold)
+{
+ struct zone *zone = page_zone(page);
+ struct per_cpu_pages *pcp;
+ unsigned long flags;
+
+ kernel_map_pages(page, 1, 0);
+ inc_page_state(pgfree);
+ free_pages_check(__FUNCTION__, page);
+ pcp = &zone->pageset[get_cpu()].pcp[cold];
+ local_irq_save(flags);
+ if (pcp->count >= pcp->high)
+ pcp->count -= free_pages_bulk(zone, pcp->batch, &pcp->list, 0);
+ list_add(&page->lru, &pcp->list);
+ pcp->count++;
+ local_irq_restore(flags);
+ put_cpu();
+}
+
+void fastcall free_hot_page(struct page *page)
+{
+ free_hot_cold_page(page, 0);
+}
+
+void fastcall free_cold_page(struct page *page)
+{
+ free_hot_cold_page(page, 1);
+}
+
+/*
+ * Really, prep_compound_page() should be called from __rmqueue_bulk(). But
+ * we cheat by calling it from here, in the order > 0 path. Saves a branch
+ * or two.
+ */
+
+static struct page *
+buffered_rmqueue(struct zone *zone, int order, int gfp_flags)
+{
+ unsigned long flags;
+ struct page *page = NULL;
+ int cold = !!(gfp_flags & __GFP_COLD);
+
+ if (order == 0) {
+ struct per_cpu_pages *pcp;
+
+ pcp = &zone->pageset[get_cpu()].pcp[cold];
+ local_irq_save(flags);
+ if (pcp->count <= pcp->low)
+ pcp->count += rmqueue_bulk(zone, 0,
+ pcp->batch, &pcp->list);
+ if (pcp->count) {
+ page = list_entry(pcp->list.next, struct page, lru);
+ list_del(&page->lru);
+ pcp->count--;
+ }
+ local_irq_restore(flags);
+ put_cpu();
+ }
+
+ if (page == NULL) {
+ spin_lock_irqsave(&zone->lock, flags);
+ page = __rmqueue(zone, order);
+ spin_unlock_irqrestore(&zone->lock, flags);
+ }
+
+ if (page != NULL) {
+ BUG_ON(bad_range(zone, page));
+ mod_page_state_zone(zone, pgalloc, 1 << order);
+ prep_new_page(page, order);
+ if (order && (gfp_flags & __GFP_COMP))
+ prep_compound_page(page, order);
+ }
+ return page;
+}
+
+/*
+ * This is the 'heart' of the zoned buddy allocator.
+ *
+ * Herein lies the mysterious "incremental min". That's the
+ *
+ * local_low = z->pages_low;
+ * min += local_low;
+ *
+ * thing. The intent here is to provide additional protection to low zones for
+ * allocation requests which _could_ use higher zones. So a GFP_HIGHMEM
+ * request is not allowed to dip as deeply into the normal zone as a GFP_KERNEL
+ * request. This preserves additional space in those lower zones for requests
+ * which really do need memory from those zones. It means that on a decent
+ * sized machine, GFP_HIGHMEM and GFP_KERNEL requests basically leave the DMA
+ * zone untouched.
+ */
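+
+/*
+ * Illustrative numbers (hypothetical): if ZONE_DMA has
+ * protection[ZONE_DMA] = 0, protection[ZONE_NORMAL] = 256 and
+ * protection[ZONE_HIGHMEM] = 1024, then a GFP_KERNEL allocation may
+ * take DMA pages down to roughly 256 free, while a GFP_HIGHMEM one
+ * backs off already at 1024, keeping the DMA zone for those who
+ * really need it.
+ */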
+struct page * fastcall
+__alloc_pages(unsigned int gfp_mask, unsigned int order,
+ struct zonelist *zonelist)
+{
+ const int wait = gfp_mask & __GFP_WAIT;
+ unsigned long min;
+ struct zone **zones;
+ struct page *page;
+ struct reclaim_state reclaim_state;
+ struct task_struct *p = current;
+ int i;
+ int alloc_type;
+ int do_retry;
+
+ might_sleep_if(wait);
+
+ zones = zonelist->zones; /* the list of zones suitable for gfp_mask */
+ if (zones[0] == NULL) /* no zones in the zonelist */
+ return NULL;
+
+ alloc_type = zone_idx(zones[0]);
+
+ /* Go through the zonelist once, looking for a zone with enough free */
+ for (i = 0; zones[i] != NULL; i++) {
+ struct zone *z = zones[i];
+
+ min = (1<<order) + z->protection[alloc_type];
+
+ /*
+ * We let real-time tasks dip their real-time paws a little
+ * deeper into reserves.
+ */
+ if (rt_task(p))
+ min -= z->pages_low >> 1;
+
+ if (z->free_pages >= min ||
+ (!wait && z->free_pages >= z->pages_high)) {
+ page = buffered_rmqueue(z, order, gfp_mask);
+ if (page) {
+ zone_statistics(zonelist, z);
+ goto got_pg;
+ }
+ }
+ }
+
+ /* we're somewhat low on memory, failed to find what we needed */
+ for (i = 0; zones[i] != NULL; i++)
+ wakeup_kswapd(zones[i]);
+
+ /* Go through the zonelist again, taking __GFP_HIGH into account */
+ for (i = 0; zones[i] != NULL; i++) {
+ struct zone *z = zones[i];
+
+ min = (1<<order) + z->protection[alloc_type];
+
+ if (gfp_mask & __GFP_HIGH)
+ min -= z->pages_low >> 2;
+ if (rt_task(p))
+ min -= z->pages_low >> 1;
+
+ if (z->free_pages >= min ||
+ (!wait && z->free_pages >= z->pages_high)) {
+ page = buffered_rmqueue(z, order, gfp_mask);
+ if (page) {
+ zone_statistics(zonelist, z);
+ goto got_pg;
+ }
+ }
+ }
+
+ /* here we're in the low on memory slow path */
+
+rebalance:
+ if ((p->flags & (PF_MEMALLOC | PF_MEMDIE)) && !in_interrupt()) {
+ /* go through the zonelist yet again, ignoring mins */
+ for (i = 0; zones[i] != NULL; i++) {
+ struct zone *z = zones[i];
+
+ page = buffered_rmqueue(z, order, gfp_mask);
+ if (page) {
+ zone_statistics(zonelist, z);
+ goto got_pg;
+ }
+ }
+ goto nopage;
+ }
+
+ /* Atomic allocations - we can't balance anything */
+ if (!wait)
+ goto nopage;
+
+ p->flags |= PF_MEMALLOC;
+ reclaim_state.reclaimed_slab = 0;
+ p->reclaim_state = &reclaim_state;
+
+ try_to_free_pages(zones, gfp_mask, order);
+
+ p->reclaim_state = NULL;
+ p->flags &= ~PF_MEMALLOC;
+
+ /* go through the zonelist yet one more time */
+ for (i = 0; zones[i] != NULL; i++) {
+ struct zone *z = zones[i];
+
+ min = (1UL << order) + z->protection[alloc_type];
+
+ if (z->free_pages >= min ||
+ (!wait && z->free_pages >= z->pages_high)) {
+ page = buffered_rmqueue(z, order, gfp_mask);
+ if (page) {
+ zone_statistics(zonelist, z);
+ goto got_pg;
+ }
+ }
+ }
+
+ /*
+ * Don't let big-order allocations loop unless the caller explicitly
+ * requests that. Wait for some write requests to complete then retry.
+ *
+ * In this implementation, __GFP_REPEAT means __GFP_NOFAIL, but that
+ * may not be true in other implementations.
+ */
+ do_retry = 0;
+ if (!(gfp_mask & __GFP_NORETRY)) {
+ if ((order <= 3) || (gfp_mask & __GFP_REPEAT))
+ do_retry = 1;
+ if (gfp_mask & __GFP_NOFAIL)
+ do_retry = 1;
+ }
+ if (do_retry) {
+ blk_congestion_wait(WRITE, HZ/50);
+ goto rebalance;
+ }
+
+nopage:
+ if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
+ printk(KERN_WARNING "%s: page allocation failure."
+ " order:%d, mode:0x%x\n",
+ p->comm, order, gfp_mask);
+ dump_stack();
+ }
+ return NULL;
+got_pg:
+ kernel_map_pages(page, 1 << order, 1);
+ return page;
+}
+
+EXPORT_SYMBOL(__alloc_pages);
+
+#ifdef CONFIG_NUMA
+/* Early boot: Everything is done by one cpu, but the data structures will be
+ * used by all cpus - spread them on all nodes.
+ */
+static __init unsigned long get_boot_pages(unsigned int gfp_mask, unsigned int order)
+{
+ static int nodenr;
+ int i = nodenr;
+ struct page *page;
+
+ for (;;) {
+ if (i > nodenr + numnodes)
+ return 0;
+ if (node_present_pages(i%numnodes)) {
+ struct zone **z;
+ /* The node contains memory. Check that there is
+ * memory in the intended zonelist.
+ */
+ z = NODE_DATA(i%numnodes)->node_zonelists[gfp_mask & GFP_ZONEMASK].zones;
+ while (*z) {
+ if ( (*z)->free_pages > (1UL<<order))
+ goto found_node;
+ z++;
+ }
+ }
+ i++;
+ }
+found_node:
+ nodenr = i+1;
+ page = alloc_pages_node(i%numnodes, gfp_mask, order);
+ if (!page)
+ return 0;
+ return (unsigned long) page_address(page);
+}
+#endif
+
+/*
+ * Common helper functions.
+ */
+fastcall unsigned long __get_free_pages(unsigned int gfp_mask, unsigned int order)
+{
+ struct page * page;
+
+#ifdef CONFIG_NUMA
+ if (unlikely(system_state == SYSTEM_BOOTING))
+ return get_boot_pages(gfp_mask, order);
+#endif
+ page = alloc_pages(gfp_mask, order);
+ if (!page)
+ return 0;
+ return (unsigned long) page_address(page);
+}
+
+EXPORT_SYMBOL(__get_free_pages);
+
+fastcall unsigned long get_zeroed_page(unsigned int gfp_mask)
+{
+ struct page * page;
+
+ /*
+ * get_zeroed_page() returns a 32-bit address, which cannot represent
+ * a highmem page
+ */
+ BUG_ON(gfp_mask & __GFP_HIGHMEM);
+
+ page = alloc_pages(gfp_mask, 0);
+ if (page) {
+ void *address = page_address(page);
+ clear_page(address);
+ return (unsigned long) address;
+ }
+ return 0;
+}
+
+EXPORT_SYMBOL(get_zeroed_page);
+
+void __pagevec_free(struct pagevec *pvec)
+{
+ int i = pagevec_count(pvec);
+
+ while (--i >= 0)
+ free_hot_cold_page(pvec->pages[i], pvec->cold);
+}
+
+fastcall void __free_pages(struct page *page, unsigned int order)
+{
+ if (!PageReserved(page) && put_page_testzero(page)) {
+ if (order == 0)
+ free_hot_page(page);
+ else
+ __free_pages_ok(page, order);
+ }
+}
+
+EXPORT_SYMBOL(__free_pages);
+
+fastcall void free_pages(unsigned long addr, unsigned int order)
+{
+ if (addr != 0) {
+ BUG_ON(!virt_addr_valid(addr));
+ __free_pages(virt_to_page(addr), order);
+ }
+}
+
+EXPORT_SYMBOL(free_pages);
+
+/*
+ * Total amount of free (allocatable) RAM:
+ */
+unsigned int nr_free_pages(void)
+{
+ unsigned int sum = 0;
+ struct zone *zone;
+
+ for_each_zone(zone)
+ sum += zone->free_pages;
+
+ return sum;
+}
+
+EXPORT_SYMBOL(nr_free_pages);
+
+unsigned int nr_used_zone_pages(void)
+{
+ unsigned int pages = 0;
+ struct zone *zone;
+
+ for_each_zone(zone)
+ pages += zone->nr_active + zone->nr_inactive;
+
+ return pages;
+}
+
+#ifdef CONFIG_NUMA
+unsigned int nr_free_pages_pgdat(pg_data_t *pgdat)
+{
+ unsigned int i, sum = 0;
+
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ sum += pgdat->node_zones[i].free_pages;
+
+ return sum;
+}
+#endif
+
+static unsigned int nr_free_zone_pages(int offset)
+{
+ pg_data_t *pgdat;
+ unsigned int sum = 0;
+
+ for_each_pgdat(pgdat) {
+ struct zonelist *zonelist = pgdat->node_zonelists + offset;
+ struct zone **zonep = zonelist->zones;
+ struct zone *zone;
+
+ for (zone = *zonep++; zone; zone = *zonep++) {
+ unsigned long size = zone->present_pages;
+ unsigned long high = zone->pages_high;
+ if (size > high)
+ sum += size - high;
+ }
+ }
+
+ return sum;
+}
+
+/*
+ * Amount of free RAM allocatable within ZONE_DMA and ZONE_NORMAL
+ */
+unsigned int nr_free_buffer_pages(void)
+{
+ return nr_free_zone_pages(GFP_USER & GFP_ZONEMASK);
+}
+
+/*
+ * Amount of free RAM allocatable within all zones
+ */
+unsigned int nr_free_pagecache_pages(void)
+{
+ return nr_free_zone_pages(GFP_HIGHUSER & GFP_ZONEMASK);
+}
+
+#ifdef CONFIG_HIGHMEM
+unsigned int nr_free_highpages (void)
+{
+ pg_data_t *pgdat;
+ unsigned int pages = 0;
+
+ for_each_pgdat(pgdat)
+ pages += pgdat->node_zones[ZONE_HIGHMEM].free_pages;
+
+ return pages;
+}
+#endif
+
+#ifdef CONFIG_NUMA
+static void show_node(struct zone *zone)
+{
+ printk("Node %d ", zone->zone_pgdat->node_id);
+}
+#else
+#define show_node(zone) do { } while (0)
+#endif
+
+/*
+ * Accumulate the page_state information across all CPUs.
+ * The result is unavoidably approximate - it can change
+ * during and after execution of this function.
+ */
+DEFINE_PER_CPU(struct page_state, page_states) = {0};
+EXPORT_PER_CPU_SYMBOL(page_states);
+
+atomic_t nr_pagecache = ATOMIC_INIT(0);
+EXPORT_SYMBOL(nr_pagecache);
+#ifdef CONFIG_SMP
+DEFINE_PER_CPU(long, nr_pagecache_local) = 0;
+#endif
+
+void __get_page_state(struct page_state *ret, int nr)
+{
+ int cpu = 0;
+
+ memset(ret, 0, sizeof(*ret));
+ while (cpu < NR_CPUS) {
+ unsigned long *in, *out, off;
+
+ if (!cpu_possible(cpu)) {
+ cpu++;
+ continue;
+ }
+
+ in = (unsigned long *)&per_cpu(page_states, cpu);
+ cpu++;
+ if (cpu < NR_CPUS && cpu_possible(cpu))
+ prefetch(&per_cpu(page_states, cpu));
+ out = (unsigned long *)ret;
+ for (off = 0; off < nr; off++)
+ *out++ += *in++;
+ }
+}
+
+void get_page_state(struct page_state *ret)
+{
+ int nr;
+
+ nr = offsetof(struct page_state, GET_PAGE_STATE_LAST);
+ nr /= sizeof(unsigned long);
+
+ __get_page_state(ret, nr + 1);
+}
+
+void get_full_page_state(struct page_state *ret)
+{
+ __get_page_state(ret, sizeof(*ret) / sizeof(unsigned long));
+}
+
+unsigned long __read_page_state(unsigned offset)
+{
+ unsigned long ret = 0;
+ int cpu;
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ unsigned long in;
+
+ if (!cpu_possible(cpu))
+ continue;
+
+ in = (unsigned long)&per_cpu(page_states, cpu) + offset;
+ ret += *((unsigned long *)in);
+ }
+ return ret;
+}
+
+void get_zone_counts(unsigned long *active,
+ unsigned long *inactive, unsigned long *free)
+{
+ struct zone *zone;
+
+ *active = 0;
+ *inactive = 0;
+ *free = 0;
+ for_each_zone(zone) {
+ *active += zone->nr_active;
+ *inactive += zone->nr_inactive;
+ *free += zone->free_pages;
+ }
+}
+
+void si_meminfo(struct sysinfo *val)
+{
+ val->totalram = totalram_pages;
+ val->sharedram = 0;
+ val->freeram = nr_free_pages();
+ val->bufferram = nr_blockdev_pages();
+#ifdef CONFIG_HIGHMEM
+ val->totalhigh = totalhigh_pages;
+ val->freehigh = nr_free_highpages();
+#else
+ val->totalhigh = 0;
+ val->freehigh = 0;
+#endif
+ val->mem_unit = PAGE_SIZE;
+}
+
+EXPORT_SYMBOL(si_meminfo);
+
+#ifdef CONFIG_NUMA
+void si_meminfo_node(struct sysinfo *val, int nid)
+{
+ pg_data_t *pgdat = NODE_DATA(nid);
+
+ val->totalram = pgdat->node_present_pages;
+ val->freeram = nr_free_pages_pgdat(pgdat);
+ val->totalhigh = pgdat->node_zones[ZONE_HIGHMEM].present_pages;
+ val->freehigh = pgdat->node_zones[ZONE_HIGHMEM].free_pages;
+ val->mem_unit = PAGE_SIZE;
+}
+#endif
+
+#define K(x) ((x) << (PAGE_SHIFT-10))
+
+/*
+ * Show the free area list (used e.g. by the Shift+Scroll Lock show-mem hotkey).
+ * We also calculate the percentage fragmentation. We do this by counting the
+ * memory on each free list with the exception of the first item on the list.
+ */
+void show_free_areas(void)
+{
+ struct page_state ps;
+ int cpu, temperature;
+ unsigned long active;
+ unsigned long inactive;
+ unsigned long free;
+ struct zone *zone;
+
+ for_each_zone(zone) {
+ show_node(zone);
+ printk("%s per-cpu:", zone->name);
+
+ if (!zone->present_pages) {
+ printk(" empty\n");
+ continue;
+ } else
+ printk("\n");
+
+ for (cpu = 0; cpu < NR_CPUS; ++cpu) {
+ struct per_cpu_pageset *pageset;
+
+ if (!cpu_possible(cpu))
+ continue;
+
+ pageset = zone->pageset + cpu;
+
+ for (temperature = 0; temperature < 2; temperature++)
+ printk("cpu %d %s: low %d, high %d, batch %d\n",
+ cpu,
+ temperature ? "cold" : "hot",
+ pageset->pcp[temperature].low,
+ pageset->pcp[temperature].high,
+ pageset->pcp[temperature].batch);
+ }
+ }
+
+ get_page_state(&ps);
+ get_zone_counts(&active, &inactive, &free);
+
+ printk("\nFree pages: %11ukB (%ukB HighMem)\n",
+ K(nr_free_pages()),
+ K(nr_free_highpages()));
+
+ printk("Active:%lu inactive:%lu dirty:%lu writeback:%lu "
+ "unstable:%lu free:%u slab:%lu mapped:%lu pagetables:%lu\n",
+ active,
+ inactive,
+ ps.nr_dirty,
+ ps.nr_writeback,
+ ps.nr_unstable,
+ nr_free_pages(),
+ ps.nr_slab,
+ ps.nr_mapped,
+ ps.nr_page_table_pages);
+
+ for_each_zone(zone) {
+ int i;
+
+ show_node(zone);
+ printk("%s"
+ " free:%lukB"
+ " min:%lukB"
+ " low:%lukB"
+ " high:%lukB"
+ " active:%lukB"
+ " inactive:%lukB"
+ " present:%lukB"
+ "\n",
+ zone->name,
+ K(zone->free_pages),
+ K(zone->pages_min),
+ K(zone->pages_low),
+ K(zone->pages_high),
+ K(zone->nr_active),
+ K(zone->nr_inactive),
+ K(zone->present_pages)
+ );
+ printk("protections[]:");
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ printk(" %lu", zone->protection[i]);
+ printk("\n");
+ }
+
+ for_each_zone(zone) {
+ struct list_head *elem;
+ unsigned long nr, flags, order, total = 0;
+
+ show_node(zone);
+ printk("%s: ", zone->name);
+ if (!zone->present_pages) {
+ printk("empty\n");
+ continue;
+ }
+
+ spin_lock_irqsave(&zone->lock, flags);
+ for (order = 0; order < MAX_ORDER; order++) {
+ nr = 0;
+ list_for_each(elem, &zone->free_area[order].free_list)
+ ++nr;
+ total += nr << order;
+ printk("%lu*%lukB ", nr, K(1UL) << order);
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ printk("= %lukB\n", K(total));
+ }
+
+ show_swap_cache_info();
+}
+
+/*
+ * Builds allocation fallback zone lists.
+ *
+ * Note the deliberate fall-through in the switch below: a ZONE_HIGHMEM
+ * request falls back to ZONE_NORMAL and then to ZONE_DMA.
+ */
+static int __init build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist, int j, int k)
+{
+ switch (k) {
+ struct zone *zone;
+ default:
+ BUG();
+ case ZONE_HIGHMEM:
+ zone = pgdat->node_zones + ZONE_HIGHMEM;
+ if (zone->present_pages) {
+#ifndef CONFIG_HIGHMEM
+ BUG();
+#endif
+ zonelist->zones[j++] = zone;
+ }
+ case ZONE_NORMAL:
+ zone = pgdat->node_zones + ZONE_NORMAL;
+ if (zone->present_pages)
+ zonelist->zones[j++] = zone;
+ case ZONE_DMA:
+ zone = pgdat->node_zones + ZONE_DMA;
+ if (zone->present_pages)
+ zonelist->zones[j++] = zone;
+ }
+
+ return j;
+}
+
+#ifdef CONFIG_NUMA
+#define MAX_NODE_LOAD (numnodes)
+static int __initdata node_load[MAX_NUMNODES];
+/**
+ * find_next_best_node - find the next node that should appear in a given
+ * node's fallback list
+ * @node: node whose fallback list we're appending
+ * @used_node_mask: pointer to the bitmap of already used nodes
+ *
+ * We use a number of factors to determine which is the next node that should
+ * appear on a given node's fallback list. The node should not have appeared
+ * already in @node's fallback list, and it should be the next closest node
+ * according to the distance array (which contains arbitrary distance values
+ * from each node to each node in the system), and should also prefer nodes
+ * with no CPUs, since presumably they'll have very little allocation pressure
+ * on them otherwise.
+ * It returns -1 if no node is found.
+ */
+static int __init find_next_best_node(int node, void *used_node_mask)
+{
+ int i, n, val;
+ int min_val = INT_MAX;
+ int best_node = -1;
+
+ for (i = 0; i < numnodes; i++) {
+ cpumask_t tmp;
+
+ /* Start from local node */
+ n = (node+i)%numnodes;
+
+ /* Don't want a node to appear more than once */
+ if (test_bit(n, used_node_mask))
+ continue;
+
+ /* Use the distance array to find the distance */
+ val = node_distance(node, n);
+
+ /* Give preference to headless and unused nodes */
+ tmp = node_to_cpumask(n);
+ if (!cpus_empty(tmp))
+ val += PENALTY_FOR_NODE_WITH_CPUS;
+
+ /* Slight preference for less loaded node */
+ val *= (MAX_NODE_LOAD*MAX_NUMNODES);
+ val += node_load[n];
+
+ if (val < min_val) {
+ min_val = val;
+ best_node = n;
+ }
+ }
+
+ if (best_node >= 0)
+ set_bit(best_node, used_node_mask);
+
+ return best_node;
+}
+
+static void __init build_zonelists(pg_data_t *pgdat)
+{
+ int i, j, k, node, local_node;
+ int prev_node, load;
+ struct zonelist *zonelist;
+ DECLARE_BITMAP(used_mask, MAX_NUMNODES);
+
+ /* initialize zonelists */
+ for (i = 0; i < MAX_NR_ZONES; i++) {
+ zonelist = pgdat->node_zonelists + i;
+ memset(zonelist, 0, sizeof(*zonelist));
+ zonelist->zones[0] = NULL;
+ }
+
+ /* NUMA-aware ordering of nodes */
+ local_node = pgdat->node_id;
+ load = numnodes;
+ prev_node = local_node;
+ bitmap_zero(used_mask, MAX_NUMNODES);
+ while ((node = find_next_best_node(local_node, used_mask)) >= 0) {
+ /*
+ * We don't want to pressure a particular node.
+ * So adding penalty to the first node in same
+ * distance group to make it round-robin.
+ */
+ if (node_distance(local_node, node) !=
+ node_distance(local_node, prev_node))
+ node_load[node] += load;
+ prev_node = node;
+ load--;
+ for (i = 0; i < MAX_NR_ZONES; i++) {
+ zonelist = pgdat->node_zonelists + i;
+ for (j = 0; zonelist->zones[j] != NULL; j++);
+
+ k = ZONE_NORMAL;
+ if (i & __GFP_HIGHMEM)
+ k = ZONE_HIGHMEM;
+ if (i & __GFP_DMA)
+ k = ZONE_DMA;
+
+ j = build_zonelists_node(NODE_DATA(node), zonelist, j, k);
+ zonelist->zones[j] = NULL;
+ }
+ }
+}
+
+#else /* CONFIG_NUMA */
+
+static void __init build_zonelists(pg_data_t *pgdat)
+{
+ int i, j, k, node, local_node;
+
+ local_node = pgdat->node_id;
+ for (i = 0; i < MAX_NR_ZONES; i++) {
+ struct zonelist *zonelist;
+
+ zonelist = pgdat->node_zonelists + i;
+ memset(zonelist, 0, sizeof(*zonelist));
+
+ j = 0;
+ k = ZONE_NORMAL;
+ if (i & __GFP_HIGHMEM)
+ k = ZONE_HIGHMEM;
+ if (i & __GFP_DMA)
+ k = ZONE_DMA;
+
+ j = build_zonelists_node(pgdat, zonelist, j, k);
+ /*
+ * Now we build the zonelist so that it contains the zones
+ * of all the other nodes.
+ * We don't want to pressure a particular node, so when
+ * building the zones for node N, we make sure that the
+ * zones coming right after the local ones are those from
+ * node N+1 (modulo N)
+ */
+ for (node = local_node + 1; node < numnodes; node++)
+ j = build_zonelists_node(NODE_DATA(node), zonelist, j, k);
+ for (node = 0; node < local_node; node++)
+ j = build_zonelists_node(NODE_DATA(node), zonelist, j, k);
+
+ zonelist->zones[j] = NULL;
+ }
+}
+
+#endif /* CONFIG_NUMA */
+
+void __init build_all_zonelists(void)
+{
+ int i;
+
+ for(i = 0 ; i < numnodes ; i++)
+ build_zonelists(NODE_DATA(i));
+ printk("Built %i zonelists\n", numnodes);
+}
+
+/*
+ * Helper functions to size the waitqueue hash table.
+ * Essentially these want to choose hash table sizes sufficiently
+ * large so that collisions trying to wait on pages are rare.
+ * But in fact, the number of active page waitqueues on typical
+ * systems is ridiculously low, less than 200. So this is even
+ * conservative, even though it seems large.
+ *
+ * The constant PAGES_PER_WAITQUEUE specifies the ratio of pages to
+ * waitqueues, i.e. the size of the waitq table given the number of pages.
+ */
+#define PAGES_PER_WAITQUEUE 256
+
+static inline unsigned long wait_table_size(unsigned long pages)
+{
+ unsigned long size = 1;
+
+ pages /= PAGES_PER_WAITQUEUE;
+
+ while (size < pages)
+ size <<= 1;
+
+ /*
+ * Once we have dozens or even hundreds of threads sleeping
+ * on IO we've got bigger problems than wait queue collision.
+ * Limit the size of the wait table to a reasonable size.
+ */
+ size = min(size, 4096UL);
+
+ return max(size, 4UL);
+}
+
+/*
+ * This is an integer logarithm so that shifts can be used later
+ * to extract the more random high bits from the multiplicative
+ * hash function before the remainder is taken.
+ */
+static inline unsigned long wait_table_bits(unsigned long size)
+{
+ return ffz(~size);
+}
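+
+/*
+ * Example: a 512MB zone of 4kB pages has 131072 pages, so
+ * wait_table_size() returns 131072 / 256 = 512 rounded to the next
+ * power of two (already 512), and wait_table_bits(512) = 9.
+ */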
+
+#define LONG_ALIGN(x) (((x)+(sizeof(long))-1)&~((sizeof(long))-1))
+
+static void __init calculate_zone_totalpages(struct pglist_data *pgdat,
+ unsigned long *zones_size, unsigned long *zholes_size)
+{
+ unsigned long realtotalpages, totalpages = 0;
+ int i;
+
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ totalpages += zones_size[i];
+ pgdat->node_spanned_pages = totalpages;
+
+ realtotalpages = totalpages;
+ if (zholes_size)
+ for (i = 0; i < MAX_NR_ZONES; i++)
+ realtotalpages -= zholes_size[i];
+ pgdat->node_present_pages = realtotalpages;
+ printk("On node %d totalpages: %lu\n", pgdat->node_id, realtotalpages);
+}
+
+
+/*
+ * Initially all pages are reserved - free ones are freed
+ * up by free_all_bootmem() once the early boot process is
+ * done. Non-atomic initialization, single-pass.
+ */
+void __init memmap_init_zone(struct page *start, unsigned long size, int nid,
+ unsigned long zone, unsigned long start_pfn)
+{
+ struct page *page;
+
+ for (page = start; page < (start + size); page++) {
+ set_page_zone(page, NODEZONE(nid, zone));
+ set_page_count(page, 0);
+ SetPageReserved(page);
+ INIT_LIST_HEAD(&page->lru);
+#ifdef WANT_PAGE_VIRTUAL
+ /* The shift won't overflow because ZONE_NORMAL is below 4G. */
+ if (zone != ZONE_HIGHMEM)
+ set_page_address(page, __va(start_pfn << PAGE_SHIFT));
+#endif
+ start_pfn++;
+ }
+}
+
+#ifndef __HAVE_ARCH_MEMMAP_INIT
+#define memmap_init(start, size, nid, zone, start_pfn) \
+ memmap_init_zone((start), (size), (nid), (zone), (start_pfn))
+#endif
+
+/*
+ * Set up the zone data structures:
+ * - mark all pages reserved
+ * - mark all memory queues empty
+ * - clear the memory bitmaps
+ */
+static void __init free_area_init_core(struct pglist_data *pgdat,
+ unsigned long *zones_size, unsigned long *zholes_size)
+{
+ unsigned long i, j;
+ const unsigned long zone_required_alignment = 1UL << (MAX_ORDER-1);
+ int cpu, nid = pgdat->node_id;
+ struct page *lmem_map = pgdat->node_mem_map;
+ unsigned long zone_start_pfn = pgdat->node_start_pfn;
+
+ pgdat->nr_zones = 0;
+ init_waitqueue_head(&pgdat->kswapd_wait);
+
+ for (j = 0; j < MAX_NR_ZONES; j++) {
+ struct zone *zone = pgdat->node_zones + j;
+ unsigned long size, realsize;
+ unsigned long batch;
+
+ zone_table[NODEZONE(nid, j)] = zone;
+ realsize = size = zones_size[j];
+ if (zholes_size)
+ realsize -= zholes_size[j];
+
+ zone->spanned_pages = size;
+ zone->present_pages = realsize;
+ zone->name = zone_names[j];
+ spin_lock_init(&zone->lock);
+ spin_lock_init(&zone->lru_lock);
+ zone->zone_pgdat = pgdat;
+ zone->free_pages = 0;
+
+ zone->temp_priority = zone->prev_priority = DEF_PRIORITY;
+
+ /*
+ * The per-cpu-pages pools are set to around 1000th of the
+ * size of the zone. But no more than 1/4 of a meg - there's
+ * no point in going beyond the size of L2 cache.
+ *
+ * OK, so we don't know how big the cache is. So guess.
+ */
+ batch = zone->present_pages / 1024;
+ if (batch * PAGE_SIZE > 256 * 1024)
+ batch = (256 * 1024) / PAGE_SIZE;
+ batch /= 4; /* We effectively *= 4 below */
+ if (batch < 1)
+ batch = 1;
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ struct per_cpu_pages *pcp;
+
+ pcp = &zone->pageset[cpu].pcp[0]; /* hot */
+ pcp->count = 0;
+ pcp->low = 2 * batch;
+ pcp->high = 6 * batch;
+ pcp->batch = 1 * batch;
+ INIT_LIST_HEAD(&pcp->list);
+
+ pcp = &zone->pageset[cpu].pcp[1]; /* cold */
+ pcp->count = 0;
+ pcp->low = 0;
+ pcp->high = 2 * batch;
+ pcp->batch = 1 * batch;
+ INIT_LIST_HEAD(&pcp->list);
+ }
+ printk(" %s zone: %lu pages, LIFO batch:%lu\n",
+ zone_names[j], realsize, batch);
+ INIT_LIST_HEAD(&zone->active_list);
+ INIT_LIST_HEAD(&zone->inactive_list);
+ atomic_set(&zone->nr_scan_active, 0);
+ atomic_set(&zone->nr_scan_inactive, 0);
+ zone->nr_active = 0;
+ zone->nr_inactive = 0;
+ if (!size)
+ continue;
+
+ /*
+ * The per-page waitqueue mechanism uses hashed waitqueues
+ * per zone.
+ */
+ zone->wait_table_size = wait_table_size(size);
+ zone->wait_table_bits =
+ wait_table_bits(zone->wait_table_size);
+ zone->wait_table = (wait_queue_head_t *)
+ alloc_bootmem_node(pgdat, zone->wait_table_size
+ * sizeof(wait_queue_head_t));
+
+ for(i = 0; i < zone->wait_table_size; ++i)
+ init_waitqueue_head(zone->wait_table + i);
+
+ pgdat->nr_zones = j+1;
+
+ zone->zone_mem_map = lmem_map;
+ zone->zone_start_pfn = zone_start_pfn;
+
+ if ((zone_start_pfn) & (zone_required_alignment-1))
+ printk("BUG: wrong zone alignment, it will crash\n");
+
+ memmap_init(lmem_map, size, nid, j, zone_start_pfn);
+
+ zone_start_pfn += size;
+ lmem_map += size;
+
+ for (i = 0; ; i++) {
+ unsigned long bitmap_size;
+
+ INIT_LIST_HEAD(&zone->free_area[i].free_list);
+ if (i == MAX_ORDER-1) {
+ zone->free_area[i].map = NULL;
+ break;
+ }
+
+ /*
+ * Page buddy system uses "index >> (i+1)",
+ * where "index" is at most "size-1".
+ *
+ * The extra "+3" is to round down to byte
+ * size (8 bits per byte assumption). Thus
+ * we get "(size-1) >> (i+4)" as the last byte
+ * we can access.
+ *
+ * The "+1" is because we want to round the
+ * byte allocation up rather than down. So
+ * we should have had a "+7" before we shifted
+ * down by three. Also, we have to add one as
+ * we actually _use_ the last bit (it's [0,n]
+ * inclusive, not [0,n[).
+ *
+ * So we actually had +7+1 before we shift
+ * down by 3. But (n+8) >> 3 == (n >> 3) + 1
+ * (modulo overflows, which we do not have).
+ *
+ * Finally, we LONG_ALIGN because all bitmap
+ * operations are on longs.
+ */
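+			/*
+			 * Worked example (illustrative): for a zone of
+			 * size = 1048576 pages at order i = 0 this works
+			 * out to (1048575 >> 4) + 1 = 65536 bytes, i.e.
+			 * one bit for each buddy pair of order-0 pages.
+			 */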
+ bitmap_size = (size-1) >> (i+4);
+ bitmap_size = LONG_ALIGN(bitmap_size+1);
+ zone->free_area[i].map =
+ (unsigned long *) alloc_bootmem_node(pgdat, bitmap_size);
+ }
+ }
+}
+
+void __init free_area_init_node(int nid, struct pglist_data *pgdat,
+ struct page *node_mem_map, unsigned long *zones_size,
+ unsigned long node_start_pfn, unsigned long *zholes_size)
+{
+ unsigned long size;
+
+ pgdat->node_id = nid;
+ pgdat->node_start_pfn = node_start_pfn;
+ calculate_zone_totalpages(pgdat, zones_size, zholes_size);
+ if (!node_mem_map) {
+ size = (pgdat->node_spanned_pages + 1) * sizeof(struct page);
+ node_mem_map = alloc_bootmem_node(pgdat, size);
+ }
+ pgdat->node_mem_map = node_mem_map;
+
+ free_area_init_core(pgdat, zones_size, zholes_size);
+}
+
+#ifndef CONFIG_DISCONTIGMEM
+static bootmem_data_t contig_bootmem_data;
+struct pglist_data contig_page_data = { .bdata = &contig_bootmem_data };
+
+EXPORT_SYMBOL(contig_page_data);
+
+void __init free_area_init(unsigned long *zones_size)
+{
+ free_area_init_node(0, &contig_page_data, NULL, zones_size,
+ __pa(PAGE_OFFSET) >> PAGE_SHIFT, NULL);
+ mem_map = contig_page_data.node_mem_map;
+}
+#endif
+
+#ifdef CONFIG_PROC_FS
+
+#include <linux/seq_file.h>
+
+static void *frag_start(struct seq_file *m, loff_t *pos)
+{
+ pg_data_t *pgdat;
+ loff_t node = *pos;
+
+ for (pgdat = pgdat_list; pgdat && node; pgdat = pgdat->pgdat_next)
+ --node;
+
+ return pgdat;
+}
+
+static void *frag_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ pg_data_t *pgdat = (pg_data_t *)arg;
+
+ (*pos)++;
+ return pgdat->pgdat_next;
+}
+
+static void frag_stop(struct seq_file *m, void *arg)
+{
+}
+
+/*
+ * This walks the freelist for each zone. Whilst this is slow, I'd rather
+ * be slow here than slow down the fast path by keeping stats - mjbligh
+ */
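+/*
+ * Each /proc/buddyinfo line has the form (values illustrative):
+ *   Node 0, zone   Normal    216     54     23     10      4      1 ...
+ * one free-block count per order, produced by the loop below.
+ */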
+static int frag_show(struct seq_file *m, void *arg)
+{
+ pg_data_t *pgdat = (pg_data_t *)arg;
+ struct zone *zone;
+ struct zone *node_zones = pgdat->node_zones;
+ unsigned long flags;
+ int order;
+
+ for (zone = node_zones; zone - node_zones < MAX_NR_ZONES; ++zone) {
+ if (!zone->present_pages)
+ continue;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
+ for (order = 0; order < MAX_ORDER; ++order) {
+ unsigned long nr_bufs = 0;
+ struct list_head *elem;
+
+ list_for_each(elem, &(zone->free_area[order].free_list))
+ ++nr_bufs;
+ seq_printf(m, "%6lu ", nr_bufs);
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+ seq_putc(m, '\n');
+ }
+ return 0;
+}
+
+struct seq_operations fragmentation_op = {
+ .start = frag_start,
+ .next = frag_next,
+ .stop = frag_stop,
+ .show = frag_show,
+};
+
+static char *vmstat_text[] = {
+ "nr_dirty",
+ "nr_writeback",
+ "nr_unstable",
+ "nr_page_table_pages",
+ "nr_mapped",
+ "nr_slab",
+
+ "pgpgin",
+ "pgpgout",
+ "pswpin",
+ "pswpout",
+ "pgalloc_high",
+
+ "pgalloc_normal",
+ "pgalloc_dma",
+ "pgfree",
+ "pgactivate",
+ "pgdeactivate",
+
+ "pgfault",
+ "pgmajfault",
+ "pgrefill_high",
+ "pgrefill_normal",
+ "pgrefill_dma",
+
+ "pgsteal_high",
+ "pgsteal_normal",
+ "pgsteal_dma",
+ "pgscan_kswapd_high",
+ "pgscan_kswapd_normal",
+
+ "pgscan_kswapd_dma",
+ "pgscan_direct_high",
+ "pgscan_direct_normal",
+ "pgscan_direct_dma",
+ "pginodesteal",
+
+ "slabs_scanned",
+ "kswapd_steal",
+ "kswapd_inodesteal",
+ "pageoutrun",
+ "allocstall",
+
+ "pgrotated",
+};
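+/*
+ * /proc/vmstat emits one "<name> <value>" pair per line for each of
+ * the counters named above, e.g. (value illustrative):
+ *   nr_dirty 109
+ * Note that pgpgin/pgpgout are converted from sectors to kbytes first.
+ */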
+
+static void *vmstat_start(struct seq_file *m, loff_t *pos)
+{
+ struct page_state *ps;
+
+ if (*pos >= ARRAY_SIZE(vmstat_text))
+ return NULL;
+
+ ps = kmalloc(sizeof(*ps), GFP_KERNEL);
+ m->private = ps;
+ if (!ps)
+ return ERR_PTR(-ENOMEM);
+ get_full_page_state(ps);
+ ps->pgpgin /= 2; /* sectors -> kbytes */
+ ps->pgpgout /= 2;
+ return (unsigned long *)ps + *pos;
+}
+
+static void *vmstat_next(struct seq_file *m, void *arg, loff_t *pos)
+{
+ (*pos)++;
+ if (*pos >= ARRAY_SIZE(vmstat_text))
+ return NULL;
+ return (unsigned long *)m->private + *pos;
+}
+
+static int vmstat_show(struct seq_file *m, void *arg)
+{
+ unsigned long *l = arg;
+ unsigned long off = l - (unsigned long *)m->private;
+
+ seq_printf(m, "%s %lu\n", vmstat_text[off], *l);
+ return 0;
+}
+
+static void vmstat_stop(struct seq_file *m, void *arg)
+{
+ kfree(m->private);
+ m->private = NULL;
+}
+
+struct seq_operations vmstat_op = {
+ .start = vmstat_start,
+ .next = vmstat_next,
+ .stop = vmstat_stop,
+ .show = vmstat_show,
+};
+
+#endif /* CONFIG_PROC_FS */
+
+#ifdef CONFIG_HOTPLUG_CPU
+static int page_alloc_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ int cpu = (unsigned long)hcpu;
+ long *count;
+
+ if (action == CPU_DEAD) {
+ /* Drain local pagecache count. */
+ count = &per_cpu(nr_pagecache_local, cpu);
+ atomic_add(*count, &nr_pagecache);
+ *count = 0;
+ local_irq_disable();
+ __drain_pages(cpu);
+ local_irq_enable();
+ }
+ return NOTIFY_OK;
+}
+#endif /* CONFIG_HOTPLUG_CPU */
+
+void __init page_alloc_init(void)
+{
+ hotcpu_notifier(page_alloc_cpu_notify, 0);
+}
+
+static unsigned long higherzone_val(struct zone *z, int max_zone,
+ int alloc_type)
+{
+ int z_idx = zone_idx(z);
+ struct zone *higherzone;
+ unsigned long pages;
+
+ /* there is no higher zone to get a contribution from */
+ if (z_idx == MAX_NR_ZONES-1)
+ return 0;
+
+ higherzone = &z->zone_pgdat->node_zones[z_idx+1];
+
+ /* We always start with the higher zone's protection value */
+ pages = higherzone->protection[alloc_type];
+
+ /*
+ * We get a lower-zone-protection contribution only if there are
+ * pages in the higher zone and if we're not the highest zone
+ * in the current zonelist. e.g., never happens for GFP_DMA. Happens
+ * only for ZONE_DMA in a GFP_KERNEL allocation and happens for ZONE_DMA
+ * and ZONE_NORMAL for a GFP_HIGHMEM allocation.
+ */
+ if (higherzone->present_pages && z_idx < alloc_type)
+ pages += higherzone->pages_low * sysctl_lower_zone_protection;
+
+ return pages;
+}
+
+/*
+ * setup_per_zone_protection - called whenever min_free_kbytes or
+ * sysctl_lower_zone_protection changes. Ensures that each zone
+ * has a correct pages_protected value, so an adequate number of
+ * pages are left in the zone after a successful __alloc_pages().
+ *
+ * This algorithm is way confusing. It tries to keep the same behavior
+ * as we had with the incremental min iterative algorithm.
+ */
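+/*
+ * Illustrative example: on a node with only ZONE_DMA and ZONE_NORMAL
+ * populated, a GFP_KERNEL allocation ends up with
+ *   NORMAL.protection = NORMAL.pages_low
+ *   DMA.protection    = DMA.pages_low +
+ *                       NORMAL.pages_low * (1 + sysctl_lower_zone_protection)
+ * so fallback into ZONE_DMA succeeds only while the DMA zone retains
+ * that many free pages.
+ */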
+static void setup_per_zone_protection(void)
+{
+ struct pglist_data *pgdat;
+ struct zone *zones, *zone;
+ int max_zone;
+ int i, j;
+
+ for_each_pgdat(pgdat) {
+ zones = pgdat->node_zones;
+
+ for (i = 0, max_zone = 0; i < MAX_NR_ZONES; i++)
+ if (zones[i].present_pages)
+ max_zone = i;
+
+ /*
+ * For each of the different allocation types:
+ * GFP_DMA -> GFP_KERNEL -> GFP_HIGHMEM
+ */
+ for (i = 0; i < MAX_NR_ZONES; i++) {
+ /*
+ * For each of the zones:
+ * ZONE_HIGHMEM -> ZONE_NORMAL -> ZONE_DMA
+ */
+ for (j = MAX_NR_ZONES-1; j >= 0; j--) {
+ zone = &zones[j];
+
+ /*
+ * We never protect zones that don't have memory
+ * in them (j>max_zone) or zones that aren't in
+ * the zonelists for a certain type of
+ * allocation (j>i). We have to assign these to
+ * zero because the lower zones take
+ * contributions from the higher zones.
+ */
+ if (j > max_zone || j > i) {
+ zone->protection[i] = 0;
+ continue;
+ }
+ /*
+ * The contribution of the next higher zone
+ */
+ zone->protection[i] = higherzone_val(zone,
+ max_zone, i);
+ zone->protection[i] += zone->pages_low;
+ }
+ }
+ }
+}
+
+/*
+ * setup_per_zone_pages_min - called when min_free_kbytes changes. Ensures
+ * that the pages_{min,low,high} values for each zone are set correctly
+ * with respect to min_free_kbytes.
+ */
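+/*
+ * Example (assuming 4 KiB pages): min_free_kbytes = 1024 yields
+ * pages_min = 256, distributed across the lowmem zones in proportion
+ * to their size; each zone then gets pages_low = 2 * its pages_min
+ * and pages_high = 3 * its pages_min.
+ */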
+static void setup_per_zone_pages_min(void)
+{
+ unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
+ unsigned long lowmem_pages = 0;
+ struct zone *zone;
+ unsigned long flags;
+
+ /* Calculate total number of !ZONE_HIGHMEM pages */
+ for_each_zone(zone) {
+ if (!is_highmem(zone))
+ lowmem_pages += zone->present_pages;
+ }
+
+ for_each_zone(zone) {
+ spin_lock_irqsave(&zone->lru_lock, flags);
+ if (is_highmem(zone)) {
+ /*
+ * Often, highmem doesn't need to reserve any pages.
+ * But the pages_min/low/high values are also used for
+ * batching up page reclaim activity so we need a
+ * decent value here.
+ */
+ int min_pages;
+
+ min_pages = zone->present_pages / 1024;
+ if (min_pages < SWAP_CLUSTER_MAX)
+ min_pages = SWAP_CLUSTER_MAX;
+ if (min_pages > 128)
+ min_pages = 128;
+ zone->pages_min = min_pages;
+ } else {
+ /* if it's a lowmem zone, reserve a number of pages
+ * proportionate to the zone's size.
+ */
+ zone->pages_min = (pages_min * zone->present_pages) /
+ lowmem_pages;
+ }
+
+ zone->pages_low = zone->pages_min * 2;
+ zone->pages_high = zone->pages_min * 3;
+ spin_unlock_irqrestore(&zone->lru_lock, flags);
+ }
+}
+
+/*
+ * Initialise min_free_kbytes.
+ *
+ * For small machines we want it small (128k min). For large machines
+ * we want it large (16MB max). But it is not linear, because network
+ * bandwidth does not increase linearly with machine size. We use
+ *
+ * min_free_kbytes = sqrt(lowmem_kbytes)
+ *
+ * which yields
+ *
+ * 16MB: 128k
+ * 32MB: 181k
+ * 64MB: 256k
+ * 128MB: 362k
+ * 256MB: 512k
+ * 512MB: 724k
+ * 1024MB: 1024k
+ * 2048MB: 1448k
+ * 4096MB: 2048k
+ * 8192MB: 2896k
+ * 16384MB: 4096k
+ */
+static int __init init_per_zone_pages_min(void)
+{
+ unsigned long lowmem_kbytes;
+
+ lowmem_kbytes = nr_free_buffer_pages() * (PAGE_SIZE >> 10);
+
+ min_free_kbytes = int_sqrt(lowmem_kbytes);
+ if (min_free_kbytes < 128)
+ min_free_kbytes = 128;
+ if (min_free_kbytes > 16384)
+ min_free_kbytes = 16384;
+ setup_per_zone_pages_min();
+ setup_per_zone_protection();
+ return 0;
+}
+module_init(init_per_zone_pages_min)
+
+/*
+ * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
+ * that we can call two helper functions whenever min_free_kbytes
+ * changes.
+ */
+int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
+ struct file *file, void __user *buffer, size_t *length)
+{
+ proc_dointvec(table, write, file, buffer, length);
+ setup_per_zone_pages_min();
+ setup_per_zone_protection();
+ return 0;
+}
+
+/*
+ * lower_zone_protection_sysctl_handler - just a wrapper around
+ * proc_dointvec() so that we can call setup_per_zone_protection()
+ * whenever sysctl_lower_zone_protection changes.
+ */
+int lower_zone_protection_sysctl_handler(ctl_table *table, int write,
+ struct file *file, void __user *buffer, size_t *length)
+{
+ proc_dointvec_minmax(table, write, file, buffer, length);
+ setup_per_zone_protection();
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/mm.h"
+#include "linux/init.h"
+#include "linux/proc_fs.h"
+#include "linux/proc_mm.h"
+#include "linux/file.h"
+#include "asm/uaccess.h"
+#include "asm/mmu_context.h"
+
+static struct file_operations proc_mm_fops;
+
+struct mm_struct *proc_mm_get_mm(int fd)
+{
+ struct mm_struct *ret = ERR_PTR(-EBADF);
+ struct file *file;
+
+ file = fget(fd);
+ if (!file)
+ goto out;
+
+ ret = ERR_PTR(-EINVAL);
+ if(file->f_op != &proc_mm_fops)
+ goto out_fput;
+
+ ret = file->private_data;
+ out_fput:
+ fput(file);
+ out:
+ return(ret);
+}
+
+extern long do_mmap2(struct mm_struct *mm, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, unsigned long fd,
+ unsigned long pgoff);
+
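+/*
+ * Each write to an open /proc/mm file descriptor carries exactly one
+ * struct proc_mm_op describing an mmap/munmap/mprotect request (or a
+ * request to copy segments from another /proc/mm fd). This is the
+ * external address space manipulation interface used by UML's skas
+ * mode.
+ */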
+static ssize_t write_proc_mm(struct file *file, const char *buffer,
+ size_t count, loff_t *ppos)
+{
+ struct mm_struct *mm = file->private_data;
+ struct proc_mm_op req;
+ int n, ret;
+
+ if(count > sizeof(req))
+ return(-EINVAL);
+
+ n = copy_from_user(&req, buffer, count);
+ if(n != 0)
+ return(-EFAULT);
+
+ ret = count;
+ switch(req.op){
+ case MM_MMAP: {
+ struct mm_mmap *map = &req.u.mmap;
+
+ ret = do_mmap2(mm, map->addr, map->len, map->prot,
+ map->flags, map->fd, map->offset >> PAGE_SHIFT);
+ if((ret & ~PAGE_MASK) == 0)
+ ret = count;
+
+ break;
+ }
+ case MM_MUNMAP: {
+ struct mm_munmap *unmap = &req.u.munmap;
+
+ down_write(&mm->mmap_sem);
+ ret = do_munmap(mm, unmap->addr, unmap->len);
+ up_write(&mm->mmap_sem);
+
+ if(ret == 0)
+ ret = count;
+ break;
+ }
+ case MM_MPROTECT: {
+ struct mm_mprotect *protect = &req.u.mprotect;
+
+ ret = do_mprotect(mm, protect->addr, protect->len,
+ protect->prot);
+ if(ret == 0)
+ ret = count;
+ break;
+ }
+
+ case MM_COPY_SEGMENTS: {
+ struct mm_struct *from = proc_mm_get_mm(req.u.copy_segments);
+
+ if(IS_ERR(from)){
+ ret = PTR_ERR(from);
+ break;
+ }
+
+ mm_copy_segments(from, mm);
+ break;
+ }
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return(ret);
+}
+
+static int open_proc_mm(struct inode *inode, struct file *file)
+{
+ struct mm_struct *mm = mm_alloc();
+ int ret;
+
+ ret = -ENOMEM;
+ if(mm == NULL)
+ goto out_mem;
+
+ ret = init_new_context(current, mm);
+ if(ret)
+ goto out_free;
+
+ spin_lock(&mmlist_lock);
+	list_add(&mm->mmlist, &current->mm->mmlist);
+ mmlist_nr++;
+ spin_unlock(&mmlist_lock);
+
+ file->private_data = mm;
+
+ return(0);
+
+ out_free:
+ mmput(mm);
+ out_mem:
+ return(ret);
+}
+
+static int release_proc_mm(struct inode *inode, struct file *file)
+{
+ struct mm_struct *mm = file->private_data;
+
+ mmput(mm);
+ return(0);
+}
+
+static struct file_operations proc_mm_fops = {
+ .open = open_proc_mm,
+ .release = release_proc_mm,
+ .write = write_proc_mm,
+};
+
+static int make_proc_mm(void)
+{
+ struct proc_dir_entry *ent;
+
+ ent = create_proc_entry("mm", 0222, &proc_root);
+ if(ent == NULL){
+ printk("make_proc_mm : Failed to register /proc/mm\n");
+ return(0);
+ }
+ ent->proc_fops = &proc_mm_fops;
+
+ return(0);
+}
+
+__initcall(make_proc_mm);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+config BT_HIDP
+ tristate "HIDP protocol support"
+ depends on BT && BT_L2CAP
+ select INPUT
+ help
+ HIDP (Human Interface Device Protocol) is a transport layer
+ for HID reports. HIDP is required for the Bluetooth Human
+ Interface Device Profile.
+
+ Say Y here to compile HIDP support into the kernel or say M to
+	  compile it as a module (hidp).
+
--- /dev/null
+#
+# Makefile for the Linux Bluetooth HIDP layer
+#
+
+obj-$(CONFIG_BT_HIDP) += hidp.o
+
+hidp-objs := core.o sock.o
--- /dev/null
+/*
+ HIDP implementation for Linux Bluetooth stack (BlueZ).
+ Copyright (C) 2003-2004 Marcel Holtmann <marcel@holtmann.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License version 2 as
+ published by the Free Software Foundation;
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS.
+ IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) AND AUTHOR(S) BE LIABLE FOR ANY
+ CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+ ALL LIABILITY, INCLUDING LIABILITY FOR INFRINGEMENT OF ANY PATENTS,
+ COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS, RELATING TO USE OF THIS
+ SOFTWARE IS DISCLAIMED.
+*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/poll.h>
+#include <linux/fcntl.h>
+#include <linux/skbuff.h>
+#include <linux/socket.h>
+#include <linux/ioctl.h>
+#include <linux/file.h>
+#include <linux/init.h>
+#include <net/sock.h>
+
+#include <linux/input.h>
+
+#include <net/bluetooth/bluetooth.h>
+#include <net/bluetooth/l2cap.h>
+
+#include "hidp.h"
+
+#ifndef CONFIG_BT_HIDP_DEBUG
+#undef BT_DBG
+#define BT_DBG(D...)
+#endif
+
+#define VERSION "1.0"
+
+static DECLARE_RWSEM(hidp_session_sem);
+static LIST_HEAD(hidp_session_list);
+
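+/*
+ * Maps HID keyboard/keypad usage codes (as carried in boot protocol
+ * reports) to Linux input keycodes; unassigned usages map to 0.
+ */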
+static unsigned char hidp_keycode[256] = {
+ 0, 0, 0, 0, 30, 48, 46, 32, 18, 33, 34, 35, 23, 36, 37, 38,
+ 50, 49, 24, 25, 16, 19, 31, 20, 22, 47, 17, 45, 21, 44, 2, 3,
+ 4, 5, 6, 7, 8, 9, 10, 11, 28, 1, 14, 15, 57, 12, 13, 26,
+ 27, 43, 43, 39, 40, 41, 51, 52, 53, 58, 59, 60, 61, 62, 63, 64,
+ 65, 66, 67, 68, 87, 88, 99, 70,119,110,102,104,111,107,109,106,
+ 105,108,103, 69, 98, 55, 74, 78, 96, 79, 80, 81, 75, 76, 77, 71,
+ 72, 73, 82, 83, 86,127,116,117,183,184,185,186,187,188,189,190,
+ 191,192,193,194,134,138,130,132,128,129,131,137,133,135,136,113,
+ 115,114, 0, 0, 0,121, 0, 89, 93,124, 92, 94, 95, 0, 0, 0,
+ 122,123, 90, 91, 85, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 29, 42, 56,125, 97, 54,100,126,164,166,165,163,161,115,114,113,
+ 150,158,159,128,136,177,178,176,142,152,173,140
+};
+
+static struct hidp_session *__hidp_get_session(bdaddr_t *bdaddr)
+{
+ struct hidp_session *session;
+ struct list_head *p;
+
+ BT_DBG("");
+
+ list_for_each(p, &hidp_session_list) {
+ session = list_entry(p, struct hidp_session, list);
+ if (!bacmp(bdaddr, &session->bdaddr))
+ return session;
+ }
+ return NULL;
+}
+
+static void __hidp_link_session(struct hidp_session *session)
+{
+ __module_get(THIS_MODULE);
+ list_add(&session->list, &hidp_session_list);
+}
+
+static void __hidp_unlink_session(struct hidp_session *session)
+{
+ list_del(&session->list);
+ module_put(THIS_MODULE);
+}
+
+static void __hidp_copy_session(struct hidp_session *session, struct hidp_conninfo *ci)
+{
+ bacpy(&ci->bdaddr, &session->bdaddr);
+
+ ci->flags = session->flags;
+ ci->state = session->state;
+
+ ci->vendor = 0x0000;
+ ci->product = 0x0000;
+ ci->version = 0x0000;
+ memset(ci->name, 0, 128);
+
+ if (session->input) {
+ ci->vendor = session->input->id.vendor;
+ ci->product = session->input->id.product;
+ ci->version = session->input->id.version;
+ if (session->input->name)
+ strncpy(ci->name, session->input->name, 128);
+ else
+ strncpy(ci->name, "HID Boot Device", 128);
+ }
+}
+
+static int hidp_input_event(struct input_dev *dev, unsigned int type, unsigned int code, int value)
+{
+ struct hidp_session *session = dev->private;
+ struct sk_buff *skb;
+ unsigned char newleds;
+
+ BT_DBG("session %p hid %p data %p size %d", session, device, data, size);
+
+ if (type != EV_LED)
+ return -1;
+
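+	/*
+	 * HID boot protocol LED bitmap: bit 0 NumLock, bit 1 CapsLock,
+	 * bit 2 ScrollLock, bit 3 Compose, bit 4 Kana.
+	 */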
+	newleds = (!!test_bit(LED_KANA, dev->led) << 4) |
+		  (!!test_bit(LED_COMPOSE, dev->led) << 3) |
+ (!!test_bit(LED_SCROLLL, dev->led) << 2) |
+ (!!test_bit(LED_CAPSL, dev->led) << 1) |
+ (!!test_bit(LED_NUML, dev->led));
+
+ if (session->leds == newleds)
+ return 0;
+
+ session->leds = newleds;
+
+ if (!(skb = alloc_skb(3, GFP_ATOMIC))) {
+ BT_ERR("Can't allocate memory for new frame");
+ return -ENOMEM;
+ }
+
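+	/* 0xa2 = HIDP DATA | output report, 0x01 = keyboard report id */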
+ *skb_put(skb, 1) = 0xa2;
+ *skb_put(skb, 1) = 0x01;
+ *skb_put(skb, 1) = newleds;
+
+ skb_queue_tail(&session->intr_transmit, skb);
+
+ hidp_schedule(session);
+
+ return 0;
+}
+
+static void hidp_input_report(struct hidp_session *session, struct sk_buff *skb)
+{
+ struct input_dev *dev = session->input;
+ unsigned char *keys = session->keys;
+ unsigned char *udata = skb->data + 1;
+ signed char *sdata = skb->data + 1;
+ int i, size = skb->len - 1;
+
+ switch (skb->data[0]) {
+ case 0x01: /* Keyboard report */
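+		/*
+		 * Boot protocol keyboard report: byte 0 is the modifier
+		 * bitmap (usages 224-231), byte 1 is reserved and bytes
+		 * 2-7 hold up to six concurrently pressed usage codes.
+		 */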
+ for (i = 0; i < 8; i++)
+ input_report_key(dev, hidp_keycode[i + 224], (udata[0] >> i) & 1);
+
+ for (i = 2; i < 8; i++) {
+ if (keys[i] > 3 && memscan(udata + 2, keys[i], 6) == udata + 8) {
+ if (hidp_keycode[keys[i]])
+ input_report_key(dev, hidp_keycode[keys[i]], 0);
+ else
+ BT_ERR("Unknown key (scancode %#x) released.", keys[i]);
+ }
+
+ if (udata[i] > 3 && memscan(keys + 2, udata[i], 6) == keys + 8) {
+ if (hidp_keycode[udata[i]])
+ input_report_key(dev, hidp_keycode[udata[i]], 1);
+ else
+ BT_ERR("Unknown key (scancode %#x) pressed.", udata[i]);
+ }
+ }
+
+ memcpy(keys, udata, 8);
+ break;
+
+ case 0x02: /* Mouse report */
+ input_report_key(dev, BTN_LEFT, sdata[0] & 0x01);
+ input_report_key(dev, BTN_RIGHT, sdata[0] & 0x02);
+ input_report_key(dev, BTN_MIDDLE, sdata[0] & 0x04);
+ input_report_key(dev, BTN_SIDE, sdata[0] & 0x08);
+ input_report_key(dev, BTN_EXTRA, sdata[0] & 0x10);
+
+ input_report_rel(dev, REL_X, sdata[1]);
+ input_report_rel(dev, REL_Y, sdata[2]);
+
+ if (size > 3)
+ input_report_rel(dev, REL_WHEEL, sdata[3]);
+ break;
+ }
+
+ input_sync(dev);
+}
+
+static void hidp_idle_timeout(unsigned long arg)
+{
+ struct hidp_session *session = (struct hidp_session *) arg;
+
+ atomic_inc(&session->terminate);
+ hidp_schedule(session);
+}
+
+static inline void hidp_set_timer(struct hidp_session *session)
+{
+ if (session->idle_to > 0)
+ mod_timer(&session->timer, jiffies + HZ * session->idle_to);
+}
+
+static inline void hidp_del_timer(struct hidp_session *session)
+{
+ if (session->idle_to > 0)
+ del_timer(&session->timer);
+}
+
+static inline void hidp_send_message(struct hidp_session *session, unsigned char hdr)
+{
+ struct sk_buff *skb;
+
+ BT_DBG("session %p", session);
+
+ if (!(skb = alloc_skb(1, GFP_ATOMIC))) {
+ BT_ERR("Can't allocate memory for message");
+ return;
+ }
+
+ *skb_put(skb, 1) = hdr;
+
+ skb_queue_tail(&session->ctrl_transmit, skb);
+
+ hidp_schedule(session);
+}
+
+static inline int hidp_recv_frame(struct hidp_session *session, struct sk_buff *skb)
+{
+ __u8 hdr;
+
+ BT_DBG("session %p skb %p len %d", session, skb, skb->len);
+
+ hdr = skb->data[0];
+ skb_pull(skb, 1);
+
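+	/*
+	 * The HIDP header byte carries the transaction type in its high
+	 * nibble and a parameter in the low one; 0xa1 is a DATA
+	 * transaction delivering an input report.
+	 */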
+ if (hdr == 0xa1) {
+ hidp_set_timer(session);
+
+ if (session->input)
+ hidp_input_report(session, skb);
+ } else {
+ BT_DBG("Unsupported protocol header 0x%02x", hdr);
+ }
+
+ kfree_skb(skb);
+ return 0;
+}
+
+static int hidp_send_frame(struct socket *sock, unsigned char *data, int len)
+{
+ struct iovec iv = { data, len };
+ struct msghdr msg;
+
+ BT_DBG("sock %p data %p len %d", sock, data, len);
+
+ if (!len)
+ return 0;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.msg_iovlen = 1;
+ msg.msg_iov = &iv;
+
+ return sock_sendmsg(sock, &msg, len);
+}
+
+static int hidp_process_transmit(struct hidp_session *session)
+{
+ struct sk_buff *skb;
+
+ BT_DBG("session %p", session);
+
+ while ((skb = skb_dequeue(&session->ctrl_transmit))) {
+ if (hidp_send_frame(session->ctrl_sock, skb->data, skb->len) < 0) {
+ skb_queue_head(&session->ctrl_transmit, skb);
+ break;
+ }
+
+ hidp_set_timer(session);
+ kfree_skb(skb);
+ }
+
+ while ((skb = skb_dequeue(&session->intr_transmit))) {
+ if (hidp_send_frame(session->intr_sock, skb->data, skb->len) < 0) {
+ skb_queue_head(&session->intr_transmit, skb);
+ break;
+ }
+
+ hidp_set_timer(session);
+ kfree_skb(skb);
+ }
+
+ return skb_queue_len(&session->ctrl_transmit) +
+ skb_queue_len(&session->intr_transmit);
+}
+
+static int hidp_session(void *arg)
+{
+ struct hidp_session *session = arg;
+ struct sock *ctrl_sk = session->ctrl_sock->sk;
+ struct sock *intr_sk = session->intr_sock->sk;
+ struct sk_buff *skb;
+ int vendor = 0x0000, product = 0x0000;
+ wait_queue_t ctrl_wait, intr_wait;
+ unsigned long timeo = HZ;
+
+ BT_DBG("session %p", session);
+
+ if (session->input) {
+ vendor = session->input->id.vendor;
+ product = session->input->id.product;
+ }
+
+ daemonize("khidpd_%04x%04x", vendor, product);
+ set_user_nice(current, -15);
+ current->flags |= PF_NOFREEZE;
+
+ set_fs(KERNEL_DS);
+
+ init_waitqueue_entry(&ctrl_wait, current);
+ init_waitqueue_entry(&intr_wait, current);
+ add_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
+ add_wait_queue(intr_sk->sk_sleep, &intr_wait);
+ while (!atomic_read(&session->terminate)) {
+ set_current_state(TASK_INTERRUPTIBLE);
+
+ if (ctrl_sk->sk_state != BT_CONNECTED || intr_sk->sk_state != BT_CONNECTED)
+ break;
+
+ while ((skb = skb_dequeue(&ctrl_sk->sk_receive_queue))) {
+ skb_orphan(skb);
+ hidp_recv_frame(session, skb);
+ }
+
+ while ((skb = skb_dequeue(&intr_sk->sk_receive_queue))) {
+ skb_orphan(skb);
+ hidp_recv_frame(session, skb);
+ }
+
+ hidp_process_transmit(session);
+
+ schedule();
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(intr_sk->sk_sleep, &intr_wait);
+ remove_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
+
+ down_write(&hidp_session_sem);
+
+ hidp_del_timer(session);
+
+ if (intr_sk->sk_state != BT_CONNECTED) {
+ init_waitqueue_entry(&ctrl_wait, current);
+ add_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
+ while (timeo && ctrl_sk->sk_state != BT_CLOSED) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ timeo = schedule_timeout(timeo);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(ctrl_sk->sk_sleep, &ctrl_wait);
+ timeo = HZ;
+ }
+
+ fput(session->ctrl_sock->file);
+
+ init_waitqueue_entry(&intr_wait, current);
+ add_wait_queue(intr_sk->sk_sleep, &intr_wait);
+ while (timeo && intr_sk->sk_state != BT_CLOSED) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ timeo = schedule_timeout(timeo);
+ }
+ set_current_state(TASK_RUNNING);
+ remove_wait_queue(intr_sk->sk_sleep, &intr_wait);
+
+ fput(session->intr_sock->file);
+
+ __hidp_unlink_session(session);
+
+ if (session->input) {
+ input_unregister_device(session->input);
+ kfree(session->input);
+ }
+
+ up_write(&hidp_session_sem);
+
+ kfree(session);
+ return 0;
+}
+
+static inline void hidp_setup_input(struct hidp_session *session, struct hidp_connadd_req *req)
+{
+ struct input_dev *input = session->input;
+ int i;
+
+ input->private = session;
+
+ input->id.bustype = BUS_BLUETOOTH;
+ input->id.vendor = req->vendor;
+ input->id.product = req->product;
+ input->id.version = req->version;
+
+ if (req->subclass & 0x40) {
+ set_bit(EV_KEY, input->evbit);
+ set_bit(EV_LED, input->evbit);
+ set_bit(EV_REP, input->evbit);
+
+ set_bit(LED_NUML, input->ledbit);
+ set_bit(LED_CAPSL, input->ledbit);
+ set_bit(LED_SCROLLL, input->ledbit);
+ set_bit(LED_COMPOSE, input->ledbit);
+ set_bit(LED_KANA, input->ledbit);
+
+ for (i = 0; i < sizeof(hidp_keycode); i++)
+ set_bit(hidp_keycode[i], input->keybit);
+ clear_bit(0, input->keybit);
+ }
+
+ if (req->subclass & 0x80) {
+ input->evbit[0] = BIT(EV_KEY) | BIT(EV_REL);
+ input->keybit[LONG(BTN_MOUSE)] = BIT(BTN_LEFT) | BIT(BTN_RIGHT) | BIT(BTN_MIDDLE);
+ input->relbit[0] = BIT(REL_X) | BIT(REL_Y);
+ input->keybit[LONG(BTN_MOUSE)] |= BIT(BTN_SIDE) | BIT(BTN_EXTRA);
+ input->relbit[0] |= BIT(REL_WHEEL);
+ }
+
+ input->event = hidp_input_event;
+
+ input_register_device(input);
+}
+
+int hidp_add_connection(struct hidp_connadd_req *req, struct socket *ctrl_sock, struct socket *intr_sock)
+{
+ struct hidp_session *session, *s;
+ int err;
+
+ BT_DBG("");
+
+ if (bacmp(&bt_sk(ctrl_sock->sk)->src, &bt_sk(intr_sock->sk)->src) ||
+ bacmp(&bt_sk(ctrl_sock->sk)->dst, &bt_sk(intr_sock->sk)->dst))
+ return -ENOTUNIQ;
+
+ session = kmalloc(sizeof(struct hidp_session), GFP_KERNEL);
+ if (!session)
+ return -ENOMEM;
+ memset(session, 0, sizeof(struct hidp_session));
+
+ session->input = kmalloc(sizeof(struct input_dev), GFP_KERNEL);
+ if (!session->input) {
+ kfree(session);
+ return -ENOMEM;
+ }
+ memset(session->input, 0, sizeof(struct input_dev));
+
+ down_write(&hidp_session_sem);
+
+ s = __hidp_get_session(&bt_sk(ctrl_sock->sk)->dst);
+ if (s && s->state == BT_CONNECTED) {
+ err = -EEXIST;
+ goto failed;
+ }
+
+ bacpy(&session->bdaddr, &bt_sk(ctrl_sock->sk)->dst);
+
+ session->ctrl_mtu = min_t(uint, l2cap_pi(ctrl_sock->sk)->omtu, l2cap_pi(ctrl_sock->sk)->imtu);
+ session->intr_mtu = min_t(uint, l2cap_pi(intr_sock->sk)->omtu, l2cap_pi(intr_sock->sk)->imtu);
+
+ BT_DBG("ctrl mtu %d intr mtu %d", session->ctrl_mtu, session->intr_mtu);
+
+ session->ctrl_sock = ctrl_sock;
+ session->intr_sock = intr_sock;
+ session->state = BT_CONNECTED;
+
+ init_timer(&session->timer);
+
+ session->timer.function = hidp_idle_timeout;
+ session->timer.data = (unsigned long) session;
+
+ skb_queue_head_init(&session->ctrl_transmit);
+ skb_queue_head_init(&session->intr_transmit);
+
+ session->flags = req->flags & (1 << HIDP_BLUETOOTH_VENDOR_ID);
+ session->idle_to = req->idle_to;
+
+ if (session->input)
+ hidp_setup_input(session, req);
+
+ __hidp_link_session(session);
+
+ hidp_set_timer(session);
+
+ err = kernel_thread(hidp_session, session, CLONE_KERNEL);
+ if (err < 0)
+ goto unlink;
+
+ if (session->input) {
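+		/* 0x70 = SET_PROTOCOL request, parameter 0: boot protocol */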
+ hidp_send_message(session, 0x70);
+ session->flags |= (1 << HIDP_BOOT_PROTOCOL_MODE);
+
+ session->leds = 0xff;
+ hidp_input_event(session->input, EV_LED, 0, 0);
+ }
+
+ up_write(&hidp_session_sem);
+ return 0;
+
+unlink:
+ hidp_del_timer(session);
+
+ __hidp_unlink_session(session);
+
+ if (session->input)
+ input_unregister_device(session->input);
+
+failed:
+ up_write(&hidp_session_sem);
+
+ if (session->input)
+ kfree(session->input);
+
+ kfree(session);
+ return err;
+}
+
+int hidp_del_connection(struct hidp_conndel_req *req)
+{
+ struct hidp_session *session;
+ int err = 0;
+
+ BT_DBG("");
+
+ down_read(&hidp_session_sem);
+
+ session = __hidp_get_session(&req->bdaddr);
+ if (session) {
+ if (req->flags & (1 << HIDP_VIRTUAL_CABLE_UNPLUG)) {
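+			/* 0x15 = HID_CONTROL, parameter VIRTUAL_CABLE_UNPLUG */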
+ hidp_send_message(session, 0x15);
+ } else {
+ /* Flush the transmit queues */
+ skb_queue_purge(&session->ctrl_transmit);
+ skb_queue_purge(&session->intr_transmit);
+
+ /* Kill session thread */
+ atomic_inc(&session->terminate);
+ hidp_schedule(session);
+ }
+ } else
+ err = -ENOENT;
+
+ up_read(&hidp_session_sem);
+ return err;
+}
+
+int hidp_get_connlist(struct hidp_connlist_req *req)
+{
+ struct list_head *p;
+ int err = 0, n = 0;
+
+ BT_DBG("");
+
+ down_read(&hidp_session_sem);
+
+ list_for_each(p, &hidp_session_list) {
+ struct hidp_session *session;
+ struct hidp_conninfo ci;
+
+ session = list_entry(p, struct hidp_session, list);
+
+ __hidp_copy_session(session, &ci);
+
+ if (copy_to_user(req->ci, &ci, sizeof(ci))) {
+ err = -EFAULT;
+ break;
+ }
+
+ if (++n >= req->cnum)
+ break;
+
+ req->ci++;
+ }
+ req->cnum = n;
+
+ up_read(&hidp_session_sem);
+ return err;
+}
+
+int hidp_get_conninfo(struct hidp_conninfo *ci)
+{
+ struct hidp_session *session;
+ int err = 0;
+
+ down_read(&hidp_session_sem);
+
+ session = __hidp_get_session(&ci->bdaddr);
+ if (session)
+ __hidp_copy_session(session, ci);
+ else
+ err = -ENOENT;
+
+ up_read(&hidp_session_sem);
+ return err;
+}
+
+static int __init hidp_init(void)
+{
+ l2cap_load();
+
+ BT_INFO("HIDP (Human Interface Emulation) ver %s", VERSION);
+
+ return hidp_init_sockets();
+}
+
+static void __exit hidp_exit(void)
+{
+ hidp_cleanup_sockets();
+}
+
+module_init(hidp_init);
+module_exit(hidp_exit);
+
+MODULE_AUTHOR("Marcel Holtmann <marcel@holtmann.org>");
+MODULE_DESCRIPTION("Bluetooth HIDP ver " VERSION);
+MODULE_VERSION(VERSION);
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("bt-proto-6");
--- /dev/null
+/*
+ HIDP implementation for Linux Bluetooth stack (BlueZ).
+ Copyright (C) 2003-2004 Marcel Holtmann <marcel@holtmann.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License version 2 as
+ published by the Free Software Foundation;
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS.
+ IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) AND AUTHOR(S) BE LIABLE FOR ANY
+ CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+ ALL LIABILITY, INCLUDING LIABILITY FOR INFRINGEMENT OF ANY PATENTS,
+ COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS, RELATING TO USE OF THIS
+ SOFTWARE IS DISCLAIMED.
+*/
+
+#ifndef __HIDP_H
+#define __HIDP_H
+
+#include <linux/types.h>
+#include <net/bluetooth/bluetooth.h>
+
+/* HIDP ioctl defines */
+#define HIDPCONNADD _IOW('H', 200, int)
+#define HIDPCONNDEL _IOW('H', 201, int)
+#define HIDPGETCONNLIST _IOR('H', 210, int)
+#define HIDPGETCONNINFO _IOR('H', 211, int)
+
+#define HIDP_VIRTUAL_CABLE_UNPLUG 0
+#define HIDP_BOOT_PROTOCOL_MODE 1
+#define HIDP_BLUETOOTH_VENDOR_ID 9
+
+struct hidp_connadd_req {
+ int ctrl_sock; // Connected control socket
+	int intr_sock;	// Connected interrupt socket
+ __u16 parser;
+ __u16 rd_size;
+ __u8 *rd_data;
+ __u8 country;
+ __u8 subclass;
+ __u16 vendor;
+ __u16 product;
+ __u16 version;
+ __u32 flags;
+ __u32 idle_to;
+ char name[128];
+};
+
+struct hidp_conndel_req {
+ bdaddr_t bdaddr;
+ __u32 flags;
+};
+
+struct hidp_conninfo {
+ bdaddr_t bdaddr;
+ __u32 flags;
+ __u16 state;
+ __u16 vendor;
+ __u16 product;
+ __u16 version;
+ char name[128];
+};
+
+struct hidp_connlist_req {
+ __u32 cnum;
+ struct hidp_conninfo __user *ci;
+};
+
+int hidp_add_connection(struct hidp_connadd_req *req, struct socket *ctrl_sock, struct socket *intr_sock);
+int hidp_del_connection(struct hidp_conndel_req *req);
+int hidp_get_connlist(struct hidp_connlist_req *req);
+int hidp_get_conninfo(struct hidp_conninfo *ci);
+
+/* HIDP session defines */
+struct hidp_session {
+ struct list_head list;
+
+ struct socket *ctrl_sock;
+ struct socket *intr_sock;
+
+ bdaddr_t bdaddr;
+
+ unsigned long state;
+ unsigned long flags;
+ unsigned long idle_to;
+
+ uint ctrl_mtu;
+ uint intr_mtu;
+
+ atomic_t terminate;
+
+ unsigned char keys[8];
+ unsigned char leds;
+
+ struct input_dev *input;
+
+ struct timer_list timer;
+
+ struct sk_buff_head ctrl_transmit;
+ struct sk_buff_head intr_transmit;
+};
+
+static inline void hidp_schedule(struct hidp_session *session)
+{
+ struct sock *ctrl_sk = session->ctrl_sock->sk;
+ struct sock *intr_sk = session->intr_sock->sk;
+
+ wake_up_interruptible(ctrl_sk->sk_sleep);
+ wake_up_interruptible(intr_sk->sk_sleep);
+}
+
+/* HIDP init defines */
+extern int __init hidp_init_sockets(void);
+extern void __exit hidp_cleanup_sockets(void);
+
+#endif /* __HIDP_H */
--- /dev/null
+/*
+ HIDP implementation for Linux Bluetooth stack (BlueZ).
+ Copyright (C) 2003-2004 Marcel Holtmann <marcel@holtmann.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License version 2 as
+ published by the Free Software Foundation;
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS.
+ IN NO EVENT SHALL THE COPYRIGHT HOLDER(S) AND AUTHOR(S) BE LIABLE FOR ANY
+ CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+ ALL LIABILITY, INCLUDING LIABILITY FOR INFRINGEMENT OF ANY PATENTS,
+ COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS, RELATING TO USE OF THIS
+ SOFTWARE IS DISCLAIMED.
+*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/poll.h>
+#include <linux/fcntl.h>
+#include <linux/skbuff.h>
+#include <linux/socket.h>
+#include <linux/ioctl.h>
+#include <linux/file.h>
+#include <linux/init.h>
+#include <net/sock.h>
+
+#include "hidp.h"
+
+#ifndef CONFIG_BT_HIDP_DEBUG
+#undef BT_DBG
+#define BT_DBG(D...)
+#endif
+
+static int hidp_sock_release(struct socket *sock)
+{
+ struct sock *sk = sock->sk;
+
+ BT_DBG("sock %p sk %p", sock, sk);
+
+ if (!sk)
+ return 0;
+
+ sock_orphan(sk);
+ sock_put(sk);
+
+ return 0;
+}
+
+static int hidp_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+{
+ void __user *argp = (void __user *) arg;
+ struct hidp_connadd_req ca;
+ struct hidp_conndel_req cd;
+ struct hidp_connlist_req cl;
+ struct hidp_conninfo ci;
+ struct socket *csock;
+ struct socket *isock;
+ int err;
+
+ BT_DBG("cmd %x arg %lx", cmd, arg);
+
+ switch (cmd) {
+ case HIDPCONNADD:
+ if (!capable(CAP_NET_ADMIN))
+ return -EACCES;
+
+ if (copy_from_user(&ca, argp, sizeof(ca)))
+ return -EFAULT;
+
+ csock = sockfd_lookup(ca.ctrl_sock, &err);
+ if (!csock)
+ return err;
+
+ isock = sockfd_lookup(ca.intr_sock, &err);
+ if (!isock) {
+ fput(csock->file);
+ return err;
+ }
+
+ if (csock->sk->sk_state != BT_CONNECTED || isock->sk->sk_state != BT_CONNECTED) {
+ fput(csock->file);
+ fput(isock->file);
+ return -EBADFD;
+ }
+
+ err = hidp_add_connection(&ca, csock, isock);
+ if (!err) {
+ if (copy_to_user(argp, &ca, sizeof(ca)))
+ err = -EFAULT;
+ } else {
+ fput(csock->file);
+ fput(isock->file);
+ }
+
+ return err;
+
+ case HIDPCONNDEL:
+ if (!capable(CAP_NET_ADMIN))
+ return -EACCES;
+
+ if (copy_from_user(&cd, argp, sizeof(cd)))
+ return -EFAULT;
+
+ return hidp_del_connection(&cd);
+
+ case HIDPGETCONNLIST:
+ if (copy_from_user(&cl, argp, sizeof(cl)))
+ return -EFAULT;
+
+ if (cl.cnum <= 0)
+ return -EINVAL;
+
+ err = hidp_get_connlist(&cl);
+ if (!err && copy_to_user(argp, &cl, sizeof(cl)))
+ return -EFAULT;
+
+ return err;
+
+ case HIDPGETCONNINFO:
+ if (copy_from_user(&ci, argp, sizeof(ci)))
+ return -EFAULT;
+
+ err = hidp_get_conninfo(&ci);
+ if (!err && copy_to_user(argp, &ci, sizeof(ci)))
+ return -EFAULT;
+
+ return err;
+ }
+
+ return -EINVAL;
+}
+
+static struct proto_ops hidp_sock_ops = {
+ .family = PF_BLUETOOTH,
+ .owner = THIS_MODULE,
+ .release = hidp_sock_release,
+ .ioctl = hidp_sock_ioctl,
+ .bind = sock_no_bind,
+ .getname = sock_no_getname,
+ .sendmsg = sock_no_sendmsg,
+ .recvmsg = sock_no_recvmsg,
+ .poll = sock_no_poll,
+ .listen = sock_no_listen,
+ .shutdown = sock_no_shutdown,
+ .setsockopt = sock_no_setsockopt,
+ .getsockopt = sock_no_getsockopt,
+ .connect = sock_no_connect,
+ .socketpair = sock_no_socketpair,
+ .accept = sock_no_accept,
+ .mmap = sock_no_mmap
+};
+
+static int hidp_sock_create(struct socket *sock, int protocol)
+{
+ struct sock *sk;
+
+ BT_DBG("sock %p", sock);
+
+ if (sock->type != SOCK_RAW)
+ return -ESOCKTNOSUPPORT;
+
+ if (!(sk = bt_sock_alloc(sock, PF_BLUETOOTH, 0, GFP_KERNEL)))
+ return -ENOMEM;
+
+ sk_set_owner(sk, THIS_MODULE);
+
+ sock->ops = &hidp_sock_ops;
+
+ sock->state = SS_UNCONNECTED;
+
+ sk->sk_destruct = NULL;
+ sk->sk_protocol = protocol;
+
+ return 0;
+}
+
+static struct net_proto_family hidp_sock_family_ops = {
+ .family = PF_BLUETOOTH,
+ .owner = THIS_MODULE,
+ .create = hidp_sock_create
+};
+
+int __init hidp_init_sockets(void)
+{
+ int err;
+
+ err = bt_sock_register(BTPROTO_HIDP, &hidp_sock_family_ops);
+ if (err < 0)
+ BT_ERR("Can't register HIDP socket");
+
+ return err;
+}
+
+void __exit hidp_cleanup_sockets(void)
+{
+ if (bt_sock_unregister(BTPROTO_HIDP) < 0)
+ BT_ERR("Can't unregister HIDP socket");
+}
--- /dev/null
+/*
+ *	Sysfs attributes of bridges
+ * Linux ethernet bridge
+ *
+ * Authors:
+ * Stephen Hemminger <shemminger@osdl.org>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/if_bridge.h>
+#include <linux/rtnetlink.h>
+#include <linux/spinlock.h>
+#include <linux/times.h>
+
+#include "br_private.h"
+
+#define to_class_dev(obj) container_of(obj,struct class_device,kobj)
+#define to_net_dev(class) container_of(class, struct net_device, class_dev)
+#define to_bridge(cd) ((struct net_bridge *)(to_net_dev(cd)->priv))
+
+/*
+ * Common code for storing bridge parameters.
+ */
+static ssize_t store_bridge_parm(struct class_device *cd,
+ const char *buf, size_t len,
+ void (*set)(struct net_bridge *, unsigned long))
+{
+ struct net_bridge *br = to_bridge(cd);
+ char *endp;
+ unsigned long val;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ val = simple_strtoul(buf, &endp, 0);
+ if (endp == buf)
+ return -EINVAL;
+
+ spin_lock_bh(&br->lock);
+ (*set)(br, val);
+ spin_unlock_bh(&br->lock);
+ return len;
+}
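+/*
+ * Example from userspace (assuming a bridge named br0); values are in
+ * USER_HZ ticks, so this sets a 15 second forward delay:
+ *   echo 1500 > /sys/class/net/br0/bridge/forward_delay
+ */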
+
+
+static ssize_t show_forward_delay(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%lu\n", jiffies_to_clock_t(br->forward_delay));
+}
+
+static void set_forward_delay(struct net_bridge *br, unsigned long val)
+{
+ unsigned long delay = clock_t_to_jiffies(val);
+ br->forward_delay = delay;
+ if (br_is_root_bridge(br))
+ br->bridge_forward_delay = delay;
+}
+
+static ssize_t store_forward_delay(struct class_device *cd, const char *buf,
+ size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_forward_delay);
+}
+static CLASS_DEVICE_ATTR(forward_delay, S_IRUGO | S_IWUSR,
+ show_forward_delay, store_forward_delay);
+
+static ssize_t show_hello_time(struct class_device *cd, char *buf)
+{
+ return sprintf(buf, "%lu\n",
+ jiffies_to_clock_t(to_bridge(cd)->hello_time));
+}
+
+static void set_hello_time(struct net_bridge *br, unsigned long val)
+{
+ unsigned long t = clock_t_to_jiffies(val);
+ br->hello_time = t;
+ if (br_is_root_bridge(br))
+ br->bridge_hello_time = t;
+}
+
+static ssize_t store_hello_time(struct class_device *cd, const char *buf,
+ size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_hello_time);
+}
+
+static CLASS_DEVICE_ATTR(hello_time, S_IRUGO | S_IWUSR, show_hello_time,
+ store_hello_time);
+
+static ssize_t show_max_age(struct class_device *cd, char *buf)
+{
+ return sprintf(buf, "%lu\n",
+ jiffies_to_clock_t(to_bridge(cd)->max_age));
+}
+
+static void set_max_age(struct net_bridge *br, unsigned long val)
+{
+ unsigned long t = clock_t_to_jiffies(val);
+ br->max_age = t;
+ if (br_is_root_bridge(br))
+ br->bridge_max_age = t;
+}
+
+static ssize_t store_max_age(struct class_device *cd, const char *buf,
+ size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_max_age);
+}
+
+static CLASS_DEVICE_ATTR(max_age, S_IRUGO | S_IWUSR, show_max_age,
+ store_max_age);
+
+static ssize_t show_ageing_time(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%lu\n", jiffies_to_clock_t(br->ageing_time));
+}
+
+static void set_ageing_time(struct net_bridge *br, unsigned long val)
+{
+ br->ageing_time = clock_t_to_jiffies(val);
+}
+
+static ssize_t store_ageing_time(struct class_device *cd, const char *buf,
+ size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_ageing_time);
+}
+
+static CLASS_DEVICE_ATTR(ageing_time, S_IRUGO | S_IWUSR, show_ageing_time,
+ store_ageing_time);
+static ssize_t show_stp_state(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%d\n", br->stp_enabled);
+}
+
+static void set_stp_state(struct net_bridge *br, unsigned long val)
+{
+ br->stp_enabled = val;
+}
+
+static ssize_t store_stp_state(struct class_device *cd,
+ const char *buf, size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_stp_state);
+}
+
+static CLASS_DEVICE_ATTR(stp_state, S_IRUGO | S_IWUSR, show_stp_state,
+ store_stp_state);
+
+static ssize_t show_priority(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%d\n",
+ (br->bridge_id.prio[0] << 8) | br->bridge_id.prio[1]);
+}
+
+static void set_priority(struct net_bridge *br, unsigned long val)
+{
+ br_stp_set_bridge_priority(br, (u16) val);
+}
+
+static ssize_t store_priority(struct class_device *cd,
+ const char *buf, size_t len)
+{
+ return store_bridge_parm(cd, buf, len, set_priority);
+}
+static CLASS_DEVICE_ATTR(priority, S_IRUGO | S_IWUSR, show_priority,
+ store_priority);
+
+static ssize_t show_root_id(struct class_device *cd, char *buf)
+{
+ return br_show_bridge_id(buf, &to_bridge(cd)->designated_root);
+}
+static CLASS_DEVICE_ATTR(root_id, S_IRUGO, show_root_id, NULL);
+
+static ssize_t show_bridge_id(struct class_device *cd, char *buf)
+{
+ return br_show_bridge_id(buf, &to_bridge(cd)->bridge_id);
+}
+static CLASS_DEVICE_ATTR(bridge_id, S_IRUGO, show_bridge_id, NULL);
+
+static ssize_t show_root_port(struct class_device *cd, char *buf)
+{
+ return sprintf(buf, "%d\n", to_bridge(cd)->root_port);
+}
+static CLASS_DEVICE_ATTR(root_port, S_IRUGO, show_root_port, NULL);
+
+static ssize_t show_root_path_cost(struct class_device *cd, char *buf)
+{
+ return sprintf(buf, "%d\n", to_bridge(cd)->root_path_cost);
+}
+static CLASS_DEVICE_ATTR(root_path_cost, S_IRUGO, show_root_path_cost, NULL);
+
+static ssize_t show_topology_change(struct class_device *cd, char *buf)
+{
+ return sprintf(buf, "%d\n", to_bridge(cd)->topology_change);
+}
+static CLASS_DEVICE_ATTR(topology_change, S_IRUGO, show_topology_change, NULL);
+
+static ssize_t show_topology_change_detected(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%d\n", br->topology_change_detected);
+}
+static CLASS_DEVICE_ATTR(topology_change_detected, S_IRUGO, show_topology_change_detected, NULL);
+
+static ssize_t show_hello_timer(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%ld\n", br_timer_value(&br->hello_timer));
+}
+static CLASS_DEVICE_ATTR(hello_timer, S_IRUGO, show_hello_timer, NULL);
+
+static ssize_t show_tcn_timer(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%ld\n", br_timer_value(&br->tcn_timer));
+}
+static CLASS_DEVICE_ATTR(tcn_timer, S_IRUGO, show_tcn_timer, NULL);
+
+static ssize_t show_topology_change_timer(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%ld\n", br_timer_value(&br->topology_change_timer));
+}
+static CLASS_DEVICE_ATTR(topology_change_timer, S_IRUGO, show_topology_change_timer, NULL);
+
+static ssize_t show_gc_timer(struct class_device *cd, char *buf)
+{
+ struct net_bridge *br = to_bridge(cd);
+ return sprintf(buf, "%ld\n", br_timer_value(&br->gc_timer));
+}
+static CLASS_DEVICE_ATTR(gc_timer, S_IRUGO, show_gc_timer, NULL);
+
+static struct attribute *bridge_attrs[] = {
+ &class_device_attr_forward_delay.attr,
+ &class_device_attr_hello_time.attr,
+ &class_device_attr_max_age.attr,
+ &class_device_attr_ageing_time.attr,
+ &class_device_attr_stp_state.attr,
+ &class_device_attr_priority.attr,
+ &class_device_attr_bridge_id.attr,
+ &class_device_attr_root_id.attr,
+ &class_device_attr_root_path_cost.attr,
+ &class_device_attr_root_port.attr,
+ &class_device_attr_topology_change.attr,
+ &class_device_attr_topology_change_detected.attr,
+ &class_device_attr_hello_timer.attr,
+ &class_device_attr_tcn_timer.attr,
+ &class_device_attr_topology_change_timer.attr,
+ &class_device_attr_gc_timer.attr,
+ NULL
+};
+
+static struct attribute_group bridge_group = {
+ .name = SYSFS_BRIDGE_ATTR,
+ .attrs = bridge_attrs,
+};
+
+/*
+ * Export the forwarding information table as a binary file
+ * The records are struct __fdb_entry.
+ *
+ * Returns the number of bytes read.
+ */
+static ssize_t brforward_read(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = to_class_dev(kobj);
+ struct net_bridge *br = to_bridge(cdev);
+ int n;
+
+ /* must read whole records */
+ if (off % sizeof(struct __fdb_entry) != 0)
+ return -EINVAL;
+
+ n = br_fdb_fillbuf(br, buf,
+ count / sizeof(struct __fdb_entry),
+ off / sizeof(struct __fdb_entry));
+
+ if (n > 0)
+ n *= sizeof(struct __fdb_entry);
+
+ return n;
+}
+
+static struct bin_attribute bridge_forward = {
+ .attr = { .name = SYSFS_BRIDGE_FDB,
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE, },
+ .read = brforward_read,
+};
+
+
+/*
+ * This is a dummy kset so bridge objects don't cause
+ * hotplug events
+ */
+struct subsystem bridge_subsys = {
+ .kset = { .hotplug_ops = NULL },
+};
+
+void br_sysfs_init(void)
+{
+ subsystem_register(&bridge_subsys);
+}
+
+void br_sysfs_fini(void)
+{
+ subsystem_unregister(&bridge_subsys);
+}
+
+/*
+ * Add entries in sysfs onto the existing network class device
+ * for the bridge:
+ *   an attribute group "bridge" containing tuning parameters,
+ *   a binary attribute containing the forwarding table, and
+ *   a subdirectory to hold links to interfaces.
+ *
+ * Note: the ifobj exists only to be a subdirectory to hold links.
+ * The ifobj lives in the same data structure as its parent, the
+ * bridge, so reference counting works.
+ */
+int br_sysfs_addbr(struct net_device *dev)
+{
+ struct kobject *brobj = &dev->class_dev.kobj;
+ struct net_bridge *br = netdev_priv(dev);
+ int err;
+
+ err = sysfs_create_group(brobj, &bridge_group);
+ if (err) {
+ pr_info("%s: can't create group %s/%s\n",
+ __FUNCTION__, dev->name, bridge_group.name);
+ goto out1;
+ }
+
+ err = sysfs_create_bin_file(brobj, &bridge_forward);
+ if (err) {
+ pr_info("%s: can't create attribue file %s/%s\n",
+ __FUNCTION__, dev->name, bridge_forward.attr.name);
+ goto out2;
+ }
+
+
+ kobject_set_name(&br->ifobj, SYSFS_BRIDGE_PORT_SUBDIR);
+ br->ifobj.ktype = NULL;
+ br->ifobj.kset = &bridge_subsys.kset;
+ br->ifobj.parent = brobj;
+
+ err = kobject_register(&br->ifobj);
+ if (err) {
+ pr_info("%s: can't add kobject (directory) %s/%s\n",
+ __FUNCTION__, dev->name, br->ifobj.name);
+ goto out3;
+ }
+ return 0;
+ out3:
+ sysfs_remove_bin_file(&dev->class_dev.kobj, &bridge_forward);
+ out2:
+ sysfs_remove_group(&dev->class_dev.kobj, &bridge_group);
+ out1:
+ return err;
+
+}
+
+void br_sysfs_delbr(struct net_device *dev)
+{
+ struct kobject *kobj = &dev->class_dev.kobj;
+ struct net_bridge *br = netdev_priv(dev);
+
+ kobject_unregister(&br->ifobj);
+ sysfs_remove_bin_file(kobj, &bridge_forward);
+ sysfs_remove_group(kobj, &bridge_group);
+}
--- /dev/null
+/*
+ * SUCS NET3:
+ *
+ * Generic stream handling routines. These are generic for most
+ * protocols. Even IP. Tonight 8-).
+ * This is used because TCP, LLC (others too) layer all have mostly
+ * identical sendmsg() and recvmsg() code.
+ * So we (will) share it here.
+ *
+ * Authors: Arnaldo Carvalho de Melo <acme@conectiva.com.br>
+ * (from old tcp.c code)
+ * Alan Cox <alan@redhat.com> (Borrowed comments 8-))
+ */
+
+#include <linux/module.h>
+#include <linux/net.h>
+#include <linux/signal.h>
+#include <linux/wait.h>
+#include <net/sock.h>
+
+/**
+ * sk_stream_write_space - stream socket write_space callback.
+ * @sk: socket
+ *
+ * Called when write space becomes available on @sk: clears the
+ * SOCK_NOSPACE flag and wakes any process sleeping on the socket,
+ * plus async (SIGIO) waiters, so blocked senders can make progress.
+ */
+void sk_stream_write_space(struct sock *sk)
+{
+ struct socket *sock = sk->sk_socket;
+
+ if (sk_stream_wspace(sk) >= sk_stream_min_wspace(sk) && sock) {
+ clear_bit(SOCK_NOSPACE, &sock->flags);
+
+ if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
+ wake_up_interruptible(sk->sk_sleep);
+ if (sock->fasync_list && !(sk->sk_shutdown & SEND_SHUTDOWN))
+ sock_wake_async(sock, 2, POLL_OUT);
+ }
+}
+
+EXPORT_SYMBOL(sk_stream_write_space);
--- /dev/null
+/*
+ * common UDP/RAW code
+ * Linux INET implementation
+ *
+ * Authors:
+ * Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/ip.h>
+#include <linux/in.h>
+#include <net/sock.h>
+#include <net/tcp.h>
+#include <net/route.h>
+
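+/*
+ * connect() for UDP and raw sockets: resolve a route to the peer, fix
+ * up the source address if it is still unset, and cache the route so
+ * later sends on this socket skip the per-packet route lookup.
+ */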
+int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+{
+ struct inet_opt *inet = inet_sk(sk);
+ struct sockaddr_in *usin = (struct sockaddr_in *) uaddr;
+ struct rtable *rt;
+ u32 saddr;
+ int oif;
+ int err;
+
+
+ if (addr_len < sizeof(*usin))
+ return -EINVAL;
+
+ if (usin->sin_family != AF_INET)
+ return -EAFNOSUPPORT;
+
+ sk_dst_reset(sk);
+
+ oif = sk->sk_bound_dev_if;
+ saddr = inet->saddr;
+ if (MULTICAST(usin->sin_addr.s_addr)) {
+ if (!oif)
+ oif = inet->mc_index;
+ if (!saddr)
+ saddr = inet->mc_addr;
+ }
+ err = ip_route_connect(&rt, usin->sin_addr.s_addr, saddr,
+ RT_CONN_FLAGS(sk), oif,
+ sk->sk_protocol,
+ inet->sport, usin->sin_port, sk);
+ if (err)
+ return err;
+ if ((rt->rt_flags & RTCF_BROADCAST) && !sock_flag(sk, SOCK_BROADCAST)) {
+ ip_rt_put(rt);
+ return -EACCES;
+ }
+ if (!inet->saddr)
+ inet->saddr = rt->rt_src; /* Update source address */
+ if (!inet->rcv_saddr)
+ inet->rcv_saddr = rt->rt_src;
+ inet->daddr = rt->rt_dst;
+ inet->dport = usin->sin_port;
+ sk->sk_state = TCP_ESTABLISHED;
+ inet->id = jiffies;
+
+ sk_dst_set(sk, &rt->u.dst);
+ return(0);
+}
+
+EXPORT_SYMBOL(ip4_datagram_connect);
+
--- /dev/null
+/*
+ * iptables module to match inet_addr_type() of an ip.
+ *
+ * Copyright (c) 2004 Patrick McHardy <kaber@trash.net>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/ip.h>
+#include <net/route.h>
+
+#include <linux/netfilter_ipv4/ipt_addrtype.h>
+#include <linux/netfilter_ipv4/ip_tables.h>
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
+MODULE_DESCRIPTION("iptables addrtype match");
+
+static inline int match_type(u_int32_t addr, u_int16_t mask)
+{
+ return !!(mask & (1 << inet_addr_type(addr)));
+}
+
+static int match(const struct sk_buff *skb, const struct net_device *in,
+ const struct net_device *out, const void *matchinfo,
+ int offset, int *hotdrop)
+{
+ const struct ipt_addrtype_info *info = matchinfo;
+ const struct iphdr *iph = skb->nh.iph;
+ int ret = 1;
+
+ if (info->source)
+ ret &= match_type(iph->saddr, info->source)^info->invert_source;
+ if (info->dest)
+ ret &= match_type(iph->daddr, info->dest)^info->invert_dest;
+
+ return ret;
+}
+
+static int checkentry(const char *tablename, const struct ipt_ip *ip,
+ void *matchinfo, unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ if (matchsize != IPT_ALIGN(sizeof(struct ipt_addrtype_info))) {
+ printk(KERN_ERR "ipt_addrtype: invalid size (%u != %Zu)\n.",
+ matchsize, IPT_ALIGN(sizeof(struct ipt_addrtype_info)));
+ return 0;
+ }
+
+ return 1;
+}
+
+static struct ipt_match addrtype_match = {
+ .name = "addrtype",
+ .match = match,
+ .checkentry = checkentry,
+ .me = THIS_MODULE
+};
+
+static int __init init(void)
+{
+ return ipt_register_match(&addrtype_match);
+}
+
+static void __exit fini(void)
+{
+ ipt_unregister_match(&addrtype_match);
+}
+
+module_init(init);
+module_exit(fini);
--- /dev/null
+/* IP tables module for matching the routing realm
+ *
+ * $Id: ipt_realm.c,v 1.3 2004/03/05 13:25:40 laforge Exp $
+ *
+ * (C) 2003 by Sampsa Ranta <sampsa@netsonic.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <net/route.h>
+
+#include <linux/netfilter_ipv4/ipt_realm.h>
+#include <linux/netfilter_ipv4/ip_tables.h>
+
+MODULE_AUTHOR("Sampsa Ranta <sampsa@netsonic.fi>");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("iptables realm match");
+
+static int
+match(const struct sk_buff *skb,
+ const struct net_device *in,
+ const struct net_device *out,
+ const void *matchinfo,
+ int offset,
+ int *hotdrop)
+{
+ const struct ipt_realm_info *info = matchinfo;
+ struct dst_entry *dst = skb->dst;
+
+ return (info->id == (dst->tclassid & info->mask)) ^ info->invert;
+}
+
+static int check(const char *tablename,
+ const struct ipt_ip *ip,
+ void *matchinfo,
+ unsigned int matchsize,
+ unsigned int hook_mask)
+{
+ if (hook_mask
+ & ~((1 << NF_IP_POST_ROUTING) | (1 << NF_IP_FORWARD) |
+ (1 << NF_IP_LOCAL_OUT) | (1 << NF_IP_LOCAL_IN))) {
+ printk("ipt_realm: only valid for POST_ROUTING, LOCAL_OUT, "
+ "LOCAL_IN or FORWARD.\n");
+ return 0;
+ }
+ if (matchsize != IPT_ALIGN(sizeof(struct ipt_realm_info))) {
+ printk("ipt_realm: invalid matchsize.\n");
+ return 0;
+ }
+ return 1;
+}
+
+static struct ipt_match realm_match = {
+ .name = "realm",
+ .match = match,
+ .checkentry = check,
+ .me = THIS_MODULE
+};
+
+static int __init init(void)
+{
+ return ipt_register_match(&realm_match);
+}
+
+static void __exit fini(void)
+{
+ ipt_unregister_match(&realm_match);
+}
+
+module_init(init);
+module_exit(fini);
--- /dev/null
+/*
+ * xfrm4_output.c - Common IPsec encapsulation code for IPv4.
+ * Copyright (c) 2004 Herbert Xu <herbert@gondor.apana.org.au>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <net/inet_ecn.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+
+/* Add encapsulation header.
+ *
+ * In transport mode, the IP header will be moved forward to make space
+ * for the encapsulation header.
+ *
+ * In tunnel mode, the top IP header will be constructed per RFC 2401.
+ * The following fields in it shall be filled in by x->type->output:
+ * tot_len
+ * check
+ *
+ * On exit, skb->h will be set to the start of the payload to be processed
+ * by x->type->output and skb->nh will be set to the top IP header.
+ */
+static void xfrm4_encap(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+
+ iph = skb->nh.iph;
+ skb->h.ipiph = iph;
+
+ skb->nh.raw = skb_push(skb, x->props.header_len);
+ top_iph = skb->nh.iph;
+
+ if (!x->props.mode) {
+ skb->h.raw += iph->ihl*4;
+ memmove(top_iph, iph, iph->ihl*4);
+ return;
+ }
+
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+
+ /* DS disclosed */
+ top_iph->tos = INET_ECN_encapsulate(iph->tos, iph->tos);
+ if (x->props.flags & XFRM_STATE_NOECN)
+ IP_ECN_clear(top_iph);
+
+ top_iph->frag_off = iph->frag_off & htons(IP_DF);
+ if (!top_iph->frag_off)
+ __ip_select_ident(top_iph, dst, 0);
+
+ /* TTL disclosed */
+ top_iph->ttl = iph->ttl;
+
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ top_iph->protocol = IPPROTO_IPIP;
+
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+}
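+
+/* Resulting layouts, as a sketch:
+ *
+ * transport: [ orig IP hdr ][ encap hdr ][ payload ]
+ * tunnel: [ new IP hdr ][ encap hdr ][ orig IP hdr ][ payload ]
+ *
+ * where the encap header itself (e.g. ESP) is written later by
+ * x->type->output, which also fills tot_len and check of the new
+ * header in tunnel mode.
+ */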
+
+int xfrm4_output(struct sk_buff **pskb)
+{
+ struct sk_buff *skb = *pskb;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ int err;
+
+ if (skb->ip_summed == CHECKSUM_HW) {
+ err = skb_checksum_help(pskb, 0);
+ skb = *pskb;
+ if (err)
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
+ if (x->props.mode) {
+ err = xfrm4_tunnel_check_size(skb);
+ if (err)
+ goto error;
+ }
+
+ xfrm4_encap(skb);
+
+ err = x->type->output(pskb);
+ skb = *pskb;
+ if (err)
+ goto error;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock_bh(&x->lock);
+
+ if (!(skb->dst = dst_pop(dst))) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ err = NET_XMIT_BYPASS;
+
+out_exit:
+ return err;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ goto out_exit;
+}
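+
+/* Output path in outline: resolve hardware checksums first (the bits
+ * are about to be transformed), validate the state and the tunnel
+ * size under x->lock, add the encapsulation header, run the
+ * type-specific transform, account the lifetime counters, then pop
+ * this dst from the bundle and return NET_XMIT_BYPASS so that
+ * dst_output() re-invokes output for the next dst in the chain.
+ */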
--- /dev/null
+/*
+ * xfrm6_output.c - Common IPsec encapsulation code for IPv6.
+ * Copyright (C) 2002 USAGI/WIDE Project
+ * Copyright (c) 2004 Herbert Xu <herbert@gondor.apana.org.au>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/icmpv6.h>
+#include <net/inet_ecn.h>
+#include <net/ipv6.h>
+#include <net/xfrm.h>
+
+/* Add encapsulation header.
+ *
+ * In transport mode, the IP header and mutable extension headers will be moved
+ * forward to make space for the encapsulation header.
+ *
+ * In tunnel mode, the top IP header will be constructed per RFC 2401.
+ * The following fields in it shall be filled in by x->type->output:
+ * payload_len
+ *
+ * On exit, skb->h will be set to the start of the encapsulation header to be
+ * filled in by x->type->output and skb->nh will be set to the nextheader field
+ * of the extension header directly preceding the encapsulation header, or in
+ * its absence, that of the top IP header. The value of skb->data will always
+ * point to the top IP header.
+ */
+static void xfrm6_encap(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct ipv6hdr *iph, *top_iph;
+
+ skb_push(skb, x->props.header_len);
+ iph = skb->nh.ipv6h;
+
+ if (!x->props.mode) {
+ u8 *prevhdr;
+ int hdr_len;
+
+ hdr_len = ip6_find_1stfragopt(skb, &prevhdr);
+ skb->nh.raw = prevhdr - x->props.header_len;
+ skb->h.raw = skb->data + hdr_len;
+ memmove(skb->data, iph, hdr_len);
+ return;
+ }
+
+ skb->nh.raw = skb->data;
+ top_iph = skb->nh.ipv6h;
+ skb->nh.raw = &top_iph->nexthdr;
+ skb->h.ipv6h = top_iph + 1;
+
+ top_iph->version = 6;
+ top_iph->priority = iph->priority;
+ if (x->props.flags & XFRM_STATE_NOECN)
+ IP6_ECN_clear(top_iph);
+ top_iph->flow_lbl[0] = iph->flow_lbl[0];
+ top_iph->flow_lbl[1] = iph->flow_lbl[1];
+ top_iph->flow_lbl[2] = iph->flow_lbl[2];
+ top_iph->nexthdr = IPPROTO_IPV6;
+ top_iph->hop_limit = iph->hop_limit;
+ ipv6_addr_copy(&top_iph->saddr, (struct in6_addr *)&x->props.saddr);
+ ipv6_addr_copy(&top_iph->daddr, (struct in6_addr *)&x->id.daddr);
+}
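+
+/* Resulting tunnel-mode layout, as a sketch: the outer header sits at
+ * skb->data, skb->nh points at its nexthdr field (so extension
+ * headers can be spliced in after it), and skb->h points just past
+ * the outer header, where x->type->output writes the encapsulation
+ * header. Transport mode instead slides the original header plus any
+ * mutable extension headers forward by header_len.
+ */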
+
+static int xfrm6_tunnel_check_size(struct sk_buff *skb)
+{
+ int mtu, ret = 0;
+ struct dst_entry *dst = skb->dst;
+
+ mtu = dst_pmtu(dst) - sizeof(struct ipv6hdr);
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+
+ if (skb->len > mtu) {
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ ret = -EMSGSIZE;
+ }
+
+ return ret;
+}
+
+int xfrm6_output(struct sk_buff **pskb)
+{
+ struct sk_buff *skb = *pskb;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ int err;
+
+ if (skb->ip_summed == CHECKSUM_HW) {
+ err = skb_checksum_help(pskb, 0);
+ skb = *pskb;
+ if (err)
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
+ if (x->props.mode) {
+ err = xfrm6_tunnel_check_size(skb);
+ if (err)
+ goto error;
+ }
+
+ xfrm6_encap(skb);
+
+ err = x->type->output(pskb);
+ skb = *pskb;
+ if (err)
+ goto error;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock_bh(&x->lock);
+
+ skb->nh.raw = skb->data;
+
+ if (!(skb->dst = dst_pop(dst))) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ err = NET_XMIT_BYPASS;
+
+out_exit:
+ return err;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ goto out_exit;
+}
--- /dev/null
+/*
+ * Copyright (C)2003,2004 USAGI/WIDE Project
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Authors Mitsuru KANDA <mk@linux-ipv6.org>
+ * YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
+ *
+ * Based on net/ipv4/xfrm4_tunnel.c
+ *
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/xfrm.h>
+#include <linux/list.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/icmp.h>
+#include <net/ipv6.h>
+#include <linux/ipv6.h>
+#include <linux/icmpv6.h>
+
+#ifdef CONFIG_IPV6_XFRM6_TUNNEL_DEBUG
+# define X6TDEBUG 3
+#else
+# define X6TDEBUG 1
+#endif
+
+#define X6TPRINTK(fmt, args...) printk(fmt, ## args)
+#define X6TNOPRINTK(fmt, args...) do { ; } while(0)
+
+#if X6TDEBUG >= 1
+# define X6TPRINTK1 X6TPRINTK
+#else
+# define X6TPRINTK1 X6TNOPRINTK
+#endif
+
+#if X6TDEBUG >= 3
+# define X6TPRINTK3 X6TPRINTK
+#else
+# define X6TPRINTK3 X6TNOPRINTK
+#endif
+
+/*
+ * xfrm_tunnel_spi things are for allocating unique id ("spi")
+ * per xfrm_address_t.
+ */
+struct xfrm6_tunnel_spi {
+ struct hlist_node list_byaddr;
+ struct hlist_node list_byspi;
+ xfrm_address_t addr;
+ u32 spi;
+ atomic_t refcnt;
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+ u32 magic;
+#endif
+};
+
+#ifdef CONFIG_IPV6_XFRM6_TUNNEL_DEBUG
+# define XFRM6_TUNNEL_SPI_MAGIC 0xdeadbeef
+#endif
+
+static rwlock_t xfrm6_tunnel_spi_lock = RW_LOCK_UNLOCKED;
+
+static u32 xfrm6_tunnel_spi;
+
+#define XFRM6_TUNNEL_SPI_MIN 1
+#define XFRM6_TUNNEL_SPI_MAX 0xffffffff
+
+static kmem_cache_t *xfrm6_tunnel_spi_kmem;
+
+#define XFRM6_TUNNEL_SPI_BYADDR_HSIZE 256
+#define XFRM6_TUNNEL_SPI_BYSPI_HSIZE 256
+
+static struct hlist_head xfrm6_tunnel_spi_byaddr[XFRM6_TUNNEL_SPI_BYADDR_HSIZE];
+static struct hlist_head xfrm6_tunnel_spi_byspi[XFRM6_TUNNEL_SPI_BYSPI_HSIZE];
+
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+static int x6spi_check_magic(const struct xfrm6_tunnel_spi *x6spi,
+ const char *name)
+{
+ if (unlikely(x6spi->magic != XFRM6_TUNNEL_SPI_MAGIC)) {
+ X6TPRINTK3(KERN_DEBUG "%s(): x6spi object "
+ "at %p has corrupted magic %08x "
+ "(should be %08x)\n",
+ name, x6spi, x6spi->magic, XFRM6_TUNNEL_SPI_MAGIC);
+ return -1;
+ }
+ return 0;
+}
+#else
+static inline int x6spi_check_magic(const struct xfrm6_tunnel_spi *x6spi,
+ const char *name)
+{
+ return 0;
+}
+#endif
+
+#define X6SPI_CHECK_MAGIC(x6spi) x6spi_check_magic((x6spi), __FUNCTION__)
+
+
+static inline unsigned xfrm6_tunnel_spi_hash_byaddr(xfrm_address_t *addr)
+{
+ unsigned h;
+
+ X6TPRINTK3(KERN_DEBUG "%s(addr=%p)\n", __FUNCTION__, addr);
+
+ h = addr->a6[0] ^ addr->a6[1] ^ addr->a6[2] ^ addr->a6[3];
+ h ^= h >> 16;
+ h ^= h >> 8;
+ h &= XFRM6_TUNNEL_SPI_BYADDR_HSIZE - 1;
+
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, h);
+
+ return h;
+}
+
+static inline unsigned xfrm6_tunnel_spi_hash_byspi(u32 spi)
+{
+ return spi % XFRM6_TUNNEL_SPI_BYSPI_HSIZE;
+}
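+
+/* Two tables index the same xfrm6_tunnel_spi objects: _byaddr answers
+ * "which SPI was handed out for this source address?" (lookup and
+ * refcounting), while _byspi answers "is this SPI value still in
+ * use?" during allocation. For example, with 256 buckets SPI 260
+ * hashes to bucket 260 % 256 == 4 of the _byspi table.
+ */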
+
+
+static int xfrm6_tunnel_spi_init(void)
+{
+ int i;
+
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ xfrm6_tunnel_spi = 0;
+ xfrm6_tunnel_spi_kmem = kmem_cache_create("xfrm6_tunnel_spi",
+ sizeof(struct xfrm6_tunnel_spi),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+ if (!xfrm6_tunnel_spi_kmem) {
+ X6TPRINTK1(KERN_ERR
+ "%s(): failed to allocate xfrm6_tunnel_spi_kmem\n",
+ __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
+ INIT_HLIST_HEAD(&xfrm6_tunnel_spi_byaddr[i]);
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYSPI_HSIZE; i++)
+ INIT_HLIST_HEAD(&xfrm6_tunnel_spi_byspi[i]);
+ return 0;
+}
+
+static void xfrm6_tunnel_spi_fini(void)
+{
+ int i;
+
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++) {
+ if (!hlist_empty(&xfrm6_tunnel_spi_byaddr[i]))
+ goto err;
+ }
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYSPI_HSIZE; i++) {
+ if (!hlist_empty(&xfrm6_tunnel_spi_byspi[i]))
+ goto err;
+ }
+ kmem_cache_destroy(xfrm6_tunnel_spi_kmem);
+ xfrm6_tunnel_spi_kmem = NULL;
+ return;
+err:
+ X6TPRINTK1(KERN_ERR "%s(): table is not empty\n", __FUNCTION__);
+ return;
+}
+
+static struct xfrm6_tunnel_spi *__xfrm6_tunnel_spi_lookup(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byaddr[xfrm6_tunnel_spi_hash_byaddr(saddr)],
+ list_byaddr) {
+ if (memcmp(&x6spi->addr, saddr, sizeof(x6spi->addr)) == 0) {
+ X6SPI_CHECK_MAGIC(x6spi);
+ X6TPRINTK3(KERN_DEBUG "%s() = %p(%u)\n", __FUNCTION__, x6spi, x6spi->spi);
+ return x6spi;
+ }
+ }
+
+ X6TPRINTK3(KERN_DEBUG "%s() = NULL(0)\n", __FUNCTION__);
+ return NULL;
+}
+
+u32 xfrm6_tunnel_spi_lookup(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ u32 spi;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ read_lock_bh(&xfrm6_tunnel_spi_lock);
+ x6spi = __xfrm6_tunnel_spi_lookup(saddr);
+ spi = x6spi ? x6spi->spi : 0;
+ read_unlock_bh(&xfrm6_tunnel_spi_lock);
+ return spi;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_spi_lookup);
+
+static u32 __xfrm6_tunnel_alloc_spi(xfrm_address_t *saddr)
+{
+ u32 spi;
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos;
+ unsigned index;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ if (xfrm6_tunnel_spi < XFRM6_TUNNEL_SPI_MIN ||
+ xfrm6_tunnel_spi >= XFRM6_TUNNEL_SPI_MAX)
+ xfrm6_tunnel_spi = XFRM6_TUNNEL_SPI_MIN;
+ else
+ xfrm6_tunnel_spi++;
+
+ for (spi = xfrm6_tunnel_spi; spi <= XFRM6_TUNNEL_SPI_MAX; spi++) {
+ index = xfrm6_tunnel_spi_hash_byspi(spi);
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byspi[index],
+ list_byspi) {
+ if (x6spi->spi == spi)
+ goto try_next_1;
+ }
+ xfrm6_tunnel_spi = spi;
+ goto alloc_spi;
+try_next_1:;
+ }
+ for (spi = XFRM6_TUNNEL_SPI_MIN; spi < xfrm6_tunnel_spi; spi++) {
+ index = xfrm6_tunnel_spi_hash_byspi(spi);
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byspi[index],
+ list_byspi) {
+ if (x6spi->spi == spi)
+ goto try_next_2;
+ }
+ xfrm6_tunnel_spi = spi;
+ goto alloc_spi;
+try_next_2:;
+ }
+ spi = 0;
+ goto out;
+alloc_spi:
+ X6TPRINTK3(KERN_DEBUG "%s(): allocate new spi for "
+ "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ __FUNCTION__,
+ NIP6(*(struct in6_addr *)saddr));
+ x6spi = kmem_cache_alloc(xfrm6_tunnel_spi_kmem, SLAB_ATOMIC);
+ if (!x6spi) {
+ X6TPRINTK1(KERN_ERR "%s(): kmem_cache_alloc() failed\n",
+ __FUNCTION__);
+ goto out;
+ }
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+ x6spi->magic = XFRM6_TUNNEL_SPI_MAGIC;
+#endif
+ memcpy(&x6spi->addr, saddr, sizeof(x6spi->addr));
+ x6spi->spi = spi;
+ atomic_set(&x6spi->refcnt, 1);
+
+ hlist_add_head(&x6spi->list_byspi, &xfrm6_tunnel_spi_byspi[index]);
+
+ index = xfrm6_tunnel_spi_hash_byaddr(saddr);
+ hlist_add_head(&x6spi->list_byaddr, &xfrm6_tunnel_spi_byaddr[index]);
+ X6SPI_CHECK_MAGIC(x6spi);
+out:
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, spi);
+ return spi;
+}
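+
+/* The allocator above scans in two passes to get wrap-around: first
+ * from the most recently issued SPI up to XFRM6_TUNNEL_SPI_MAX, then
+ * from XFRM6_TUNNEL_SPI_MIN back up to the starting point. E.g. if
+ * the counter sits near the top and those values are taken, the
+ * second pass tries 1, 2, ... until a free value turns up; 0 is
+ * returned only if the whole space is exhausted.
+ */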
+
+u32 xfrm6_tunnel_alloc_spi(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ u32 spi;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ write_lock_bh(&xfrm6_tunnel_spi_lock);
+ x6spi = __xfrm6_tunnel_spi_lookup(saddr);
+ if (x6spi) {
+ atomic_inc(&x6spi->refcnt);
+ spi = x6spi->spi;
+ } else
+ spi = __xfrm6_tunnel_alloc_spi(saddr);
+ write_unlock_bh(&xfrm6_tunnel_spi_lock);
+
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, spi);
+
+ return spi;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
+
+void xfrm6_tunnel_free_spi(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos, *n;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ write_lock_bh(&xfrm6_tunnel_spi_lock);
+
+ hlist_for_each_entry_safe(x6spi, pos, n,
+ &xfrm6_tunnel_spi_byaddr[xfrm6_tunnel_spi_hash_byaddr(saddr)],
+ list_byaddr)
+ {
+ if (memcmp(&x6spi->addr, saddr, sizeof(x6spi->addr)) == 0) {
+ X6TPRINTK3(KERN_DEBUG "%s(): x6spi object "
+ "for %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x "
+ "found at %p\n",
+ __FUNCTION__,
+ NIP6(*(struct in6_addr *)saddr),
+ x6spi);
+ X6SPI_CHECK_MAGIC(x6spi);
+ if (atomic_dec_and_test(&x6spi->refcnt)) {
+ hlist_del(&x6spi->list_byaddr);
+ hlist_del(&x6spi->list_byspi);
+ kmem_cache_free(xfrm6_tunnel_spi_kmem, x6spi);
+ break;
+ }
+ }
+ }
+ write_unlock_bh(&xfrm6_tunnel_spi_lock);
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_free_spi);
+
+int xfrm6_tunnel_check_size(struct sk_buff *skb)
+{
+ int mtu, ret = 0;
+ struct dst_entry *dst = skb->dst;
+
+ mtu = dst_pmtu(dst) - sizeof(struct ipv6hdr);
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+
+ if (skb->len > mtu) {
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ ret = -EMSGSIZE;
+ }
+
+ return ret;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_check_size);
+
+static int xfrm6_tunnel_output(struct sk_buff **pskb)
+{
+ struct sk_buff *skb = *pskb;
+ struct ipv6hdr *top_iph;
+
+ /* xfrm6_output() has already taken x->lock, checked the tunnel
+ * size and built the outer header via xfrm6_encap(); per the
+ * contract described there, only payload_len is left for this
+ * type to fill in.
+ */
+ top_iph = (struct ipv6hdr *)skb->data;
+ top_iph->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+
+ return 0;
+}
+
+static int xfrm6_tunnel_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ if (!pskb_may_pull(skb, sizeof(struct ipv6hdr)))
+ return -EINVAL;
+
+ skb->mac.raw = skb->nh.raw;
+ skb->nh.raw = skb->data;
+ dst_release(skb->dst);
+ skb->dst = NULL;
+ skb->protocol = htons(ETH_P_IPV6);
+ skb->pkt_type = PACKET_HOST;
+ netif_rx(skb);
+
+ return 0;
+}
+
+static struct xfrm6_tunnel *xfrm6_tunnel_handler;
+static DECLARE_MUTEX(xfrm6_tunnel_sem);
+
+int xfrm6_tunnel_register(struct xfrm6_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm6_tunnel_sem);
+ ret = 0;
+ if (xfrm6_tunnel_handler != NULL)
+ ret = -EINVAL;
+ if (!ret)
+ xfrm6_tunnel_handler = handler;
+ up(&xfrm6_tunnel_sem);
+
+ return ret;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_register);
+
+int xfrm6_tunnel_deregister(struct xfrm6_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm6_tunnel_sem);
+ ret = 0;
+ if (xfrm6_tunnel_handler != handler)
+ ret = -EINVAL;
+ if (!ret)
+ xfrm6_tunnel_handler = NULL;
+ up(&xfrm6_tunnel_sem);
+
+ synchronize_net();
+
+ return ret;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_deregister);
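+
+/* Only one external handler can be registered at a time; the receive
+ * and error paths below consult it before falling back to the native
+ * xfrm6_tunnel processing. The synchronize_net() on deregistration
+ * ensures no CPU is still running through the old handler pointer.
+ */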
+
+static int xfrm6_tunnel_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
+{
+ struct sk_buff *skb = *pskb;
+ struct xfrm6_tunnel *handler = xfrm6_tunnel_handler;
+ struct xfrm_state *x = NULL;
+ struct ipv6hdr *iph = skb->nh.ipv6h;
+ int err = 0;
+ u32 spi;
+
+ /* try the device-like ip6ip6 handler first */
+ if (handler) {
+ err = handler->handler(pskb, nhoffp);
+ if (!err)
+ goto out;
+ }
+
+ spi = xfrm6_tunnel_spi_lookup((xfrm_address_t *)&iph->saddr);
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr,
+ spi,
+ IPPROTO_IPV6, AF_INET6);
+
+ if (!x)
+ goto drop;
+
+ spin_lock(&x->lock);
+
+ if (unlikely(x->km.state != XFRM_STATE_VALID))
+ goto drop_unlock;
+
+ err = xfrm6_tunnel_input(x, NULL, skb);
+ if (err)
+ goto drop_unlock;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+
+out:
+ return 0;
+
+drop_unlock:
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+drop:
+ kfree_skb(skb);
+ return -1;
+}
+
+static void xfrm6_tunnel_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ int type, int code, int offset, __u32 info)
+{
+ struct xfrm6_tunnel *handler = xfrm6_tunnel_handler;
+
+ /* call here first for device-like ip6ip6 err handling */
+ if (handler) {
+ handler->err_handler(skb, opt, type, code, offset, info);
+ return;
+ }
+
+ /* xfrm6_tunnel native err handling */
+ switch (type) {
+ case ICMPV6_DEST_UNREACH:
+ switch (code) {
+ case ICMPV6_NOROUTE:
+ case ICMPV6_ADM_PROHIBITED:
+ case ICMPV6_NOT_NEIGHBOUR:
+ case ICMPV6_ADDR_UNREACH:
+ case ICMPV6_PORT_UNREACH:
+ default:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Destination Unreach.\n");
+ break;
+ }
+ break;
+ case ICMPV6_PKT_TOOBIG:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Packet Too Big.\n");
+ break;
+ case ICMPV6_TIME_EXCEED:
+ switch (code) {
+ case ICMPV6_EXC_HOPLIMIT:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Too small Hoplimit.\n");
+ break;
+ case ICMPV6_EXC_FRAGTIME:
+ default:
+ break;
+ }
+ break;
+ case ICMPV6_PARAMPROB:
+ switch (code) {
+ case ICMPV6_HDR_FIELD: break;
+ case ICMPV6_UNK_NEXTHDR: break;
+ case ICMPV6_UNK_OPTION: break;
+ }
+ break;
+ default:
+ break;
+ }
+ return;
+}
+
+static int xfrm6_tunnel_init_state(struct xfrm_state *x, void *args)
+{
+ if (!x->props.mode)
+ return -EINVAL;
+
+ x->props.header_len = sizeof(struct ipv6hdr);
+
+ return 0;
+}
+
+static void xfrm6_tunnel_destroy(struct xfrm_state *x)
+{
+ xfrm6_tunnel_free_spi((xfrm_address_t *)&x->props.saddr);
+}
+
+static struct xfrm_type xfrm6_tunnel_type = {
+ .description = "IP6IP6",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_IPV6,
+ .init_state = xfrm6_tunnel_init_state,
+ .destructor = xfrm6_tunnel_destroy,
+ .input = xfrm6_tunnel_input,
+ .output = xfrm6_tunnel_output,
+};
+
+static struct inet6_protocol xfrm6_tunnel_protocol = {
+ .handler = xfrm6_tunnel_rcv,
+ .err_handler = xfrm6_tunnel_err,
+ .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
+};
+
+void __init xfrm6_tunnel_init(void)
+{
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ if (xfrm_register_type(&xfrm6_tunnel_type, AF_INET6) < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init: can't add xfrm type\n");
+ return;
+ }
+ if (inet6_add_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6) < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init(): can't add protocol\n");
+ xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
+ return;
+ }
+ if (xfrm6_tunnel_spi_init() < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init: failed to initialize spi\n");
+ inet6_del_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6);
+ xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
+ return;
+ }
+}
+
+void __exit xfrm6_tunnel_fini(void)
+{
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ xfrm6_tunnel_spi_fini();
+ if (inet6_del_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6) < 0)
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel close: can't remove protocol\n");
+ if (xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6) < 0)
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel close: can't remove xfrm type\n");
+}
--- /dev/null
+/*
+ * net/sched/act_api.c Packet action API.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Author: Jamal Hadi Salim
+ *
+ *
+ */
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/bitops.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/socket.h>
+#include <linux/sockios.h>
+#include <linux/in.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/init.h>
+#include <linux/kmod.h>
+#include <net/sock.h>
+#include <net/pkt_sched.h>
+
+#if 1 /* control */
+#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define DPRINTK(format,args...)
+#endif
+#if 0 /* data */
+#define D2PRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define D2PRINTK(format,args...)
+#endif
+
+static struct tc_action_ops *act_base = NULL;
+static rwlock_t act_mod_lock = RW_LOCK_UNLOCKED;
+
+int tcf_register_action(struct tc_action_ops *act)
+{
+ struct tc_action_ops *a, **ap;
+
+ write_lock(&act_mod_lock);
+ for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next) {
+ if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
+ write_unlock(&act_mod_lock);
+ return -EEXIST;
+ }
+ }
+
+ act->next = NULL;
+ *ap = act;
+
+ write_unlock(&act_mod_lock);
+
+ return 0;
+}
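+
+/* A minimal registration from an action module would look roughly
+ * like the sketch below. "example", example_*() and TCA_ACT_EXAMPLE
+ * are hypothetical names; only the ops actually consulted by this
+ * file are shown:
+ *
+ * static struct tc_action_ops example_ops = {
+ * .kind = "example",
+ * .type = TCA_ACT_EXAMPLE,
+ * .owner = THIS_MODULE,
+ * .act = example_act,
+ * .init = example_init,
+ * .cleanup = example_cleanup,
+ * .lookup = example_lookup,
+ * .walk = example_walk,
+ * .dump = example_dump,
+ * };
+ *
+ * return tcf_register_action(&example_ops);
+ *
+ * Registration fails with -EEXIST if either the numeric type or the
+ * kind string is already taken.
+ */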
+
+int tcf_unregister_action(struct tc_action_ops *act)
+{
+ struct tc_action_ops *a, **ap;
+ int err = -ENOENT;
+
+ write_lock(&act_mod_lock);
+ for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next)
+ if(a == act)
+ break;
+
+ if (a) {
+ *ap = a->next;
+ a->next = NULL;
+ err = 0;
+ }
+ write_unlock(&act_mod_lock);
+ return err;
+}
+
+/* lookup by name */
+struct tc_action_ops *tc_lookup_action_n(char *kind)
+{
+
+ struct tc_action_ops *a = NULL;
+
+ if (kind) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+ if (strcmp(kind,a->kind) == 0) {
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+
+/* lookup by rtattr */
+struct tc_action_ops *tc_lookup_action(struct rtattr *kind)
+{
+
+ struct tc_action_ops *a = NULL;
+
+ if (kind) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+
+ if (strcmp((char*)RTA_DATA(kind),a->kind) == 0){
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+
+/* lookup by id */
+struct tc_action_ops *tc_lookup_action_id(u32 type)
+{
+ struct tc_action_ops *a = NULL;
+
+ if (type) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+ if (a->type == type) {
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+
+int tcf_action_exec(struct sk_buff *skb,struct tc_action *act)
+{
+
+ struct tc_action *a;
+ int ret = -1;
+
+ if (skb->tc_verd & TC_NCLS) {
+ skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
+ D2PRINTK("(%p)tcf_action_exec: cleared TC_NCLS in %s out %s\n",skb,skb->input_dev?skb->input_dev->name:"xxx",skb->dev->name);
+ return TC_ACT_OK;
+ }
+ while ((a = act) != NULL) {
+repeat:
+ if (a->ops && a->ops->act) {
+ ret = a->ops->act(&skb,a);
+ if (TC_MUNGED & skb->tc_verd) {
+ /* copied already, allow trampling */
+ skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
+ skb->tc_verd = CLR_TC_MUNGED(skb->tc_verd);
+ }
+
+ if (ret == TC_ACT_REPEAT)
+ goto repeat; /* we need a ttl - JHS */
+ if (ret != TC_ACT_PIPE)
+ goto exec_done;
+
+ }
+ act = a->next;
+ }
+
+exec_done:
+
+ return ret;
+}
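+
+/* Verdict handling in the loop above: TC_ACT_REPEAT re-runs the same
+ * action (with no ttl yet, as the comment warns), TC_ACT_PIPE falls
+ * through to the next action in the chain, and any other verdict
+ * (OK, SHOT, ...) ends the walk and is returned to the caller.
+ */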
+
+void tcf_action_destroy(struct tc_action *act, int bind)
+{
+ struct tc_action *a;
+
+ for (a = act; act; a = act) {
+ if (a && a->ops && a->ops->cleanup) {
+ DPRINTK("tcf_action_destroy destroying %p next %p\n", a,a->next?a->next:NULL);
+ act = act->next;
+ if (ACT_P_DELETED == a->ops->cleanup(a, bind)) {
+ module_put(a->ops->owner);
+ }
+
+ a->ops = NULL;
+ kfree(a);
+ } else { /*FIXME: Remove later - catch insertion bugs*/
+ printk("tcf_action_destroy: BUG? destroying NULL ops \n");
+ if (a) {
+ act = act->next;
+ kfree(a);
+ } else {
+ printk("tcf_action_destroy: BUG? destroying NULL action! \n");
+ break;
+ }
+ }
+ }
+}
+
+int tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+{
+ int err = -EINVAL;
+
+
+ if ( (NULL == a) || (NULL == a->ops)
+ || (NULL == a->ops->dump) )
+ return err;
+ return a->ops->dump(skb, a, bind, ref);
+
+}
+
+
+int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+{
+ int err = -EINVAL;
+ unsigned char *b = skb->tail;
+ struct rtattr *r;
+
+
+ if ( (NULL == a) || (NULL == a->ops)
+ || (NULL == a->ops->dump) || (NULL == a->ops->kind))
+ return err;
+
+
+ RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind);
+ if (tcf_action_copy_stats(skb,a))
+ goto rtattr_failure;
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_OPTIONS, 0, NULL);
+ if ((err = tcf_action_dump_old(skb, a, bind, ref)) > 0) {
+ r->rta_len = skb->tail - (u8*)r;
+ return err;
+ }
+
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+
+}
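+
+/* The pattern above is the usual rtnetlink nesting idiom: emit a
+ * zero-length attribute header (TCA_OPTIONS), let the action append
+ * its payload, then patch rta_len to cover whatever was written.
+ */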
+
+int tcf_action_dump(struct sk_buff *skb, struct tc_action *act, int bind, int ref)
+{
+ struct tc_action *a;
+ int err = -EINVAL;
+ unsigned char *b = skb->tail;
+ struct rtattr *r ;
+
+ while ((a = act) != NULL) {
+ r = (struct rtattr*) skb->tail;
+ act = a->next;
+ RTA_PUT(skb, a->order, 0, NULL);
+ err = tcf_action_dump_1(skb, a, bind, ref);
+ if (0 > err)
+ goto rtattr_failure;
+
+ r->rta_len = skb->tail - (u8*)r;
+ }
+
+ return 0;
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -err;
+
+}
+
+int tcf_action_init_1(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr, int bind )
+{
+ struct tc_action_ops *a_o;
+ char act_name[4 + IFNAMSIZ + 1];
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+
+ int err = -EINVAL;
+
+ if (NULL == name) {
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ goto err_out;
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL != kind) {
+ /* check the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("Action kind is too long\n");
+ goto err_out;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+ } else {
+ printk("Action bad kind\n");
+ goto err_out;
+ }
+ a_o = tc_lookup_action(kind);
+ } else {
+ sprintf(act_name, "%s", name);
+ DPRINTK("tcf_action_init_1: finding %s\n",act_name);
+ a_o = tc_lookup_action_n(name);
+ }
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ DPRINTK("tcf_action_init_1: trying to load module %s\n",act_name);
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+
+#endif
+ if (NULL == a_o) {
+ printk("failed to find %s\n",act_name);
+ goto err_out;
+ }
+
+ if (NULL == a) {
+ goto err_mod;
+ }
+
+ /* backward compatibility for policer */
+ if (NULL == name) {
+ err = a_o->init(tb[TCA_ACT_OPTIONS-1], est, a, ovr, bind);
+ if (0 > err ) {
+ err = -EINVAL;
+ goto err_mod;
+ }
+ } else {
+ err = a_o->init(rta, est, a, ovr, bind);
+ if (0 > err ) {
+ err = -EINVAL;
+ goto err_mod;
+ }
+ }
+
+ /* The module count goes up only when a brand new policy is created;
+ * if it already exists and is only bound to in a_o->init(), then
+ * ACT_P_CREATED is not returned (zero is).
+ */
+ if (ACT_P_CREATED != err) {
+ module_put(a_o->owner);
+ }
+ a->ops = a_o;
+ DPRINTK("tcf_action_init_1: successfull %s \n",act_name);
+
+ return 0;
+err_mod:
+ module_put(a_o->owner);
+err_out:
+ return err;
+}
+
+int tcf_action_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr , int bind)
+{
+ struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
+ int i;
+ struct tc_action *act = a, *a_s = a;
+
+ int err = -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ return err;
+
+ for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
+ if (tb[i]) {
+ if (NULL == act) {
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act) {
+ err = -ENOMEM;
+ goto bad_ret;
+ }
+ memset(act, 0,sizeof(*act));
+ }
+ act->next = NULL;
+ if (0 > tcf_action_init_1(tb[i],est,act,name,ovr,bind)) {
+ printk("Error processing action order %d\n",i);
+ return err;
+ }
+
+ act->order = i+1;
+ if (a_s != act) {
+ a_s->next = act;
+ a_s = act;
+ }
+ act = NULL;
+ }
+
+ }
+
+ return 0;
+bad_ret:
+ tcf_action_destroy(a, bind);
+ return err;
+}
+
+int tcf_action_copy_stats (struct sk_buff *skb,struct tc_action *a)
+{
+#ifdef CONFIG_KMOD
+ /* place holder */
+#endif
+
+ if (NULL == a->ops || NULL == a->ops->get_stats)
+ return 1;
+
+ return a->ops->get_stats(skb,a);
+}
+
+
+static int
+tca_get_fill(struct sk_buff *skb, struct tc_action *a,
+ u32 pid, u32 seq, unsigned flags, int event, int bind, int ref)
+{
+ struct tcamsg *t;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+ struct rtattr *x;
+
+ nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*t));
+ nlh->nlmsg_flags = flags;
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ if (0 > tcf_action_dump(skb, a, bind, ref)) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8*)x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ return skb->len;
+
+rtattr_failure:
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int act_get_notify(u32 pid, struct nlmsghdr *n,
+ struct tc_action *a, int event)
+{
+ struct sk_buff *skb;
+
+ int err = 0;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb)
+ return -ENOBUFS;
+
+ if (tca_get_fill(skb, a, pid, n->nlmsg_seq, 0, event, 0, 0) <= 0) {
+ kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ err = netlink_unicast(rtnl,skb, pid, MSG_DONTWAIT);
+ if (err > 0)
+ err = 0;
+ return err;
+}
+
+int tcf_action_get_1(struct rtattr *rta, struct tc_action *a, struct nlmsghdr *n, u32 pid)
+{
+ struct tc_action_ops *a_o;
+ char act_name[4 + IFNAMSIZ + 1];
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+ int index;
+
+ int err = -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ goto err_out;
+
+
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL != kind) {
+ /* check the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("tcf_action_get_1: action kind is too long\n");
+ goto err_out;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+ } else {
+ printk("tcf_action_get_1: action bad kind\n");
+ goto err_out;
+ }
+
+ if (tb[TCA_ACT_INDEX - 1]) {
+ index = *(int *)RTA_DATA(tb[TCA_ACT_INDEX - 1]);
+ } else {
+ printk("tcf_action_get_1: index not received\n");
+ goto err_out;
+ }
+
+ a_o = tc_lookup_action(kind);
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+
+#endif
+ if (NULL == a_o) {
+ printk("failed to find %s\n",act_name);
+ goto err_out;
+ }
+
+ if (NULL == a) {
+ goto err_mod;
+ }
+
+ a->ops = a_o;
+
+ if (NULL == a_o->lookup || 0 == a_o->lookup(a, index)) {
+ a->ops = NULL;
+ err = -EINVAL;
+ goto err_mod;
+ }
+
+ module_put(a_o->owner);
+ return 0;
+err_mod:
+ module_put(a_o->owner);
+err_out:
+ return err;
+}
+
+void cleanup_a (struct tc_action *act)
+{
+ struct tc_action *a;
+
+ for (a = act; act; a = act) {
+ if (a) {
+ act = act->next;
+ a->ops = NULL;
+ a->priv = NULL;
+ kfree(a);
+ } else {
+ printk("cleanup_a: BUG? empty action\n");
+ }
+ }
+}
+
+struct tc_action_ops *get_ao(struct rtattr *kind, struct tc_action *a)
+{
+ char act_name[4 + IFNAMSIZ + 1];
+ struct tc_action_ops *a_o = NULL;
+
+ if (NULL != kind) {
+ /* check the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("get_ao: action kind is too long\n");
+ return NULL;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+ } else {
+ printk("get_ao: action bad kind\n");
+ return NULL;
+ }
+
+ a_o = tc_lookup_action(kind);
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ DPRINTK("get_ao: trying to load module %s\n",act_name);
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+#endif
+
+ if (NULL == a_o) {
+ printk("get_ao: failed to find %s\n",act_name);
+ return NULL;
+ }
+
+ a->ops = a_o;
+ return a_o;
+}
+
+struct tc_action *create_a(int i)
+{
+ struct tc_action *act = NULL;
+
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act) { /* grrr .. */
+ printk("create_a: failed to alloc! \n");
+ return NULL;
+ }
+
+ memset(act, 0,sizeof(*act));
+
+ act->order = i;
+
+ return act;
+}
+
+int tca_action_flush(struct rtattr *rta, struct nlmsghdr *n, u32 pid)
+{
+ struct sk_buff *skb;
+ unsigned char *b;
+ struct nlmsghdr *nlh;
+ struct tcamsg *t;
+ struct netlink_callback dcb;
+ struct rtattr *x;
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+ struct tc_action *a = create_a(0);
+ int err = -EINVAL;
+
+ if (NULL == a) {
+ printk("tca_action_flush: couldnt create tc_action\n");
+ return err;
+ }
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb) {
+ printk("tca_action_flush: failed skb alloc\n");
+ kfree(a);
+ return -ENOBUFS;
+ }
+
+ b = (unsigned char *)skb->tail;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
+ goto err_out;
+ }
+
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL == get_ao(kind, a)) {
+ goto err_out;
+ }
+
+ nlh = NLMSG_PUT(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof (*t));
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr *) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ err = a->ops->walk(skb, &dcb, RTM_DELACTION, a);
+ if (0 > err ) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8 *) x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ nlh->nlmsg_flags |= NLM_F_ROOT;
+ module_put(a->ops->owner);
+ kfree(a);
+ err = rtnetlink_send(skb, pid, RTMGRP_TC, n->nlmsg_flags&NLM_F_ECHO);
+ if (err > 0)
+ return 0;
+
+ return err;
+
+
+rtattr_failure:
+ module_put(a->ops->owner);
+nlmsg_failure:
+err_out:
+ kfree_skb(skb);
+ kfree(a);
+ return err;
+}
+
+int tca_action_gd(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int event )
+{
+
+ int s = 0;
+ int i, ret = 0;
+ struct tc_action *act = NULL;
+ struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
+ struct tc_action *a = NULL, *a_s = NULL;
+
+ if (event != RTM_GETACTION && event != RTM_DELACTION)
+ ret = -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
+ ret = -EINVAL;
+ goto nlmsg_failure;
+ }
+
+ if (event == RTM_DELACTION && n->nlmsg_flags&NLM_F_ROOT) {
+ if (NULL != tb[0] && NULL == tb[1]) {
+ return tca_action_flush(tb[0],n,pid);
+ }
+ }
+
+ for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
+
+ if (NULL == tb[i])
+ break;
+
+ act = create_a(i+1);
+ if (NULL != a && a != act) {
+ a->next = act;
+ a = act;
+ } else {
+ a = act;
+ }
+
+ if (!s) {
+ s = 1;
+ a_s = a;
+ }
+
+ ret = tcf_action_get_1(tb[i],act,n,pid);
+ if (ret < 0) {
+ printk("tcf_action_get: failed to get! \n");
+ ret = -EINVAL;
+ goto rtattr_failure;
+ }
+
+ }
+
+
+ if (RTM_GETACTION == event) {
+ ret = act_get_notify(pid, n, a_s, event);
+ } else { /* delete */
+
+ struct sk_buff *skb;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb) {
+ ret = -ENOBUFS;
+ goto nlmsg_failure;
+ }
+
+ if (tca_get_fill(skb, a_s, pid, n->nlmsg_seq, 0, event, 0 , 1) <= 0) {
+ kfree_skb(skb);
+ ret = -EINVAL;
+ goto nlmsg_failure;
+ }
+
+ /* now do the delete */
+ tcf_action_destroy(a_s, 0);
+
+ ret = rtnetlink_send(skb, pid, RTMGRP_TC, n->nlmsg_flags&NLM_F_ECHO);
+ if (ret > 0)
+ return 0;
+ return ret;
+ }
+rtattr_failure:
+nlmsg_failure:
+ cleanup_a(a_s);
+ return ret;
+}
+
+
+int tcf_add_notify(struct tc_action *a, u32 pid, u32 seq, int event, unsigned flags)
+{
+ struct tcamsg *t;
+ struct nlmsghdr *nlh;
+ struct sk_buff *skb;
+ struct rtattr *x;
+ unsigned char *b;
+
+
+ int err = 0;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb)
+ return -ENOBUFS;
+
+ b = (unsigned char *)skb->tail;
+
+ nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*t));
+ nlh->nlmsg_flags = flags;
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ if (0 > tcf_action_dump(skb, a, 0, 0)) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8*)x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ NETLINK_CB(skb).dst_groups = RTMGRP_TC;
+
+ err = rtnetlink_send(skb, pid, RTMGRP_TC, flags&NLM_F_ECHO);
+ if (err > 0)
+ err = 0;
+
+ return err;
+
+rtattr_failure:
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+
+int tcf_action_add(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int ovr )
+{
+ int ret = 0;
+ struct tc_action *act = NULL;
+ struct tc_action *a = NULL;
+ u32 seq = n->nlmsg_seq;
+
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act)
+ return -ENOMEM;
+
+ memset(act, 0, sizeof(*act));
+
+ ret = tcf_action_init(rta, NULL,act,NULL,ovr,0);
+ /* NOTE: We have an all-or-none model:
+ * if any of the actions fail to update,
+ * then all are undone.
+ */
+ if (0 > ret) {
+ tcf_action_destroy(act, 0);
+ goto done;
+ }
+
+ /* dump then free all the actions after update; inserted policy
+ * stays intact
+ */
+ ret = tcf_add_notify(act, pid, seq, RTM_NEWACTION, n->nlmsg_flags);
+ for (a = act; act; a = act) {
+ if (a) {
+ act = act->next;
+ a->ops = NULL;
+ a->priv = NULL;
+ kfree(a);
+ } else {
+ printk("tcf_action_add: BUG? empty action\n");
+ }
+ }
+done:
+
+ return ret;
+}
+
+static int tc_ctl_action(struct sk_buff *skb, struct nlmsghdr *n, void *arg)
+{
+ struct rtattr **tca = arg;
+ u32 pid = skb ? NETLINK_CB(skb).pid : 0;
+
+ int ret = 0, ovr = 0;
+
+ if (NULL == tca[TCA_ACT_TAB-1]) {
+ printk("tc_ctl_action: received NO action attribs\n");
+ return -EINVAL;
+ }
+
+ /* n->nlmsg_flags&NLM_F_CREATE
+ * */
+ switch (n->nlmsg_type) {
+ case RTM_NEWACTION:
+ /* we are going to assume all other flags
+ * imply create only if it doesn't exist.
+ * Note that CREATE | EXCL implies that,
+ * but since we want to avoid ambiguity (eg when flags
+ * is zero) we just set it here.
+ */
+ if (n->nlmsg_flags&NLM_F_REPLACE) {
+ ovr = 1;
+ }
+ ret = tcf_action_add(tca[TCA_ACT_TAB-1], n, pid, ovr);
+ break;
+ case RTM_DELACTION:
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_DELACTION);
+ break;
+ case RTM_GETACTION:
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_GETACTION);
+ break;
+ default:
+ printk(" Unknown cmd was detected\n");
+ break;
+ }
+
+ return ret;
+}
+
+char *
+find_dump_kind(struct nlmsghdr *n)
+{
+ struct rtattr *tb1, *tb2[TCA_ACT_MAX+1];
+ struct rtattr *tb[TCA_ACT_MAX_PRIO + 1];
+ struct rtattr *rta[TCAA_MAX + 1];
+ struct rtattr *kind = NULL;
+ int min_len = NLMSG_LENGTH(sizeof (struct tcamsg));
+
+ int attrlen = n->nlmsg_len - NLMSG_ALIGN(min_len);
+ struct rtattr *attr = (void *) n + NLMSG_ALIGN(min_len);
+
+ if (rtattr_parse(rta, TCAA_MAX, attr, attrlen) < 0)
+ return NULL;
+ tb1 = rta[TCA_ACT_TAB - 1];
+ if (NULL == tb1) {
+ return NULL;
+ }
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(tb1), NLMSG_ALIGN(RTA_PAYLOAD(tb1))) < 0)
+ return NULL;
+ if (NULL == tb[0])
+ return NULL;
+
+ if (rtattr_parse(tb2, TCA_ACT_MAX, RTA_DATA(tb[0]), RTA_PAYLOAD(tb[0]))<0)
+ return NULL;
+ kind = tb2[TCA_ACT_KIND-1];
+
+ return (char *) RTA_DATA(kind);
+}
+
+static int
+tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+ struct rtattr *x;
+ struct tc_action_ops *a_o;
+ struct tc_action a;
+ int ret = 0;
+
+ struct tcamsg *t = (struct tcamsg *) NLMSG_DATA(cb->nlh);
+ char *kind = find_dump_kind(cb->nlh);
+ if (NULL == kind) {
+ printk("tc_dump_action: action bad kind\n");
+ return 0;
+ }
+
+ a_o = tc_lookup_action_n(kind);
+
+ if (NULL == a_o) {
+ printk("failed to find %s\n", kind);
+ return 0;
+ }
+
+ memset(&a,0,sizeof(struct tc_action));
+ a.ops = a_o;
+
+ if (NULL == a_o->walk) {
+ printk("tc_dump_action: %s !capable of dumping table\n",kind);
+ goto rtattr_failure;
+ }
+
+ nlh = NLMSG_PUT(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, cb->nlh->nlmsg_type, sizeof (*t));
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr *) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ ret = a_o->walk(skb, cb, RTM_GETACTION, &a);
+ if (0 > ret ) {
+ goto rtattr_failure;
+ }
+
+ if (ret > 0) {
+ x->rta_len = skb->tail - (u8 *) x;
+ ret = skb->len;
+ } else {
+ skb_trim(skb, (u8*)x - skb->data);
+ }
+
+ nlh->nlmsg_len = skb->tail - b;
+ if (NETLINK_CB(cb->skb).pid && ret)
+ nlh->nlmsg_flags |= NLM_F_MULTI;
+ module_put(a_o->owner);
+ return skb->len;
+
+rtattr_failure:
+nlmsg_failure:
+ module_put(a_o->owner);
+ skb_trim(skb, b - skb->data);
+ return skb->len;
+}
+
+static int __init tc_action_init(void)
+{
+ struct rtnetlink_link *link_p = rtnetlink_links[PF_UNSPEC];
+
+ if (link_p) {
+ link_p[RTM_NEWACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_DELACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_GETACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_GETACTION-RTM_BASE].dumpit = tc_dump_action;
+ }
+
+ printk("TC classifier action (bugs to netdev@oss.sgi.com cc hadi@cyberus.ca)\n");
+ return 0;
+}
+
+subsys_initcall(tc_action_init);
+
+EXPORT_SYMBOL(tcf_register_action);
+EXPORT_SYMBOL(tcf_unregister_action);
+EXPORT_SYMBOL(tcf_action_init_1);
+EXPORT_SYMBOL(tcf_action_init);
+EXPORT_SYMBOL(tcf_action_destroy);
+EXPORT_SYMBOL(tcf_action_exec);
+EXPORT_SYMBOL(tcf_action_copy_stats);
+EXPORT_SYMBOL(tcf_action_dump);
+EXPORT_SYMBOL(tcf_action_dump_1);
+EXPORT_SYMBOL(tcf_action_dump_old);
--- /dev/null
+/*
+ * net/sched/sch_netem.c Network emulator
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Authors: Stephen Hemminger <shemminger@osdl.org>
+ * Catalin(ux aka Dino) BOIE <catab at umbrella dot ro>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <asm/bitops.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+
+#include <net/pkt_sched.h>
+
+/* Network emulator
+ *
+ * This scheduler can alter the spacing and order of packets,
+ * similar to NISTnet and BSD Dummynet.
+ */
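+
+/* A typical configuration, assuming the usual iproute2 syntax:
+ *
+ * tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%
+ *
+ * i.e. 100ms +/- 10ms of delay with 1% packet loss.
+ */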
+
+struct netem_sched_data {
+ struct Qdisc *qdisc;
+ struct sk_buff_head delayed;
+ struct timer_list timer;
+
+ u32 latency;
+ u32 loss;
+ u32 limit;
+ u32 counter;
+ u32 gap;
+ u32 jitter;
+};
+
+/* Time stamp put into socket buffer control block */
+struct netem_skb_cb {
+ psched_time_t time_to_send;
+};
+
+/* This is the distribution table for the normal distribution produced
+ * with NISTnet tools.
+ * The entries represent a scaled inverse of the cumulative distribution
+ * function.
+ */
+#define TABLESIZE 2048
+#define TABLEFACTOR 8192
+
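+/* Rough use of the table (see tabledist() below): draw a uniformly
+ * distributed random index and rescale, i.e. approximately
+ *
+ * value = mu + disttable[random % TABLESIZE] * sigma / TABLEFACTOR
+ *
+ * so sigma stretches the table and mu recenters it.
+ */
+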
+static const short disttable[TABLESIZE] = {
+ -31473, -26739, -25226, -24269,
+ -23560, -22993, -22518, -22109,
+ -21749, -21426, -21133, -20865,
+ -20618, -20389, -20174, -19972,
+ -19782, -19601, -19430, -19267,
+ -19112, -18962, -18819, -18681,
+ -18549, -18421, -18298, -18178,
+ -18062, -17950, -17841, -17735,
+ -17632, -17532, -17434, -17339,
+ -17245, -17155, -17066, -16979,
+ -16894, -16811, -16729, -16649,
+ -16571, -16494, -16419, -16345,
+ -16272, -16201, -16130, -16061,
+ -15993, -15926, -15861, -15796,
+ -15732, -15669, -15607, -15546,
+ -15486, -15426, -15368, -15310,
+ -15253, -15196, -15140, -15086,
+ -15031, -14977, -14925, -14872,
+ -14821, -14769, -14719, -14669,
+ -14619, -14570, -14522, -14473,
+ -14426, -14379, -14332, -14286,
+ -14241, -14196, -14150, -14106,
+ -14062, -14019, -13976, -13933,
+ -13890, -13848, -13807, -13765,
+ -13724, -13684, -13643, -13604,
+ -13564, -13525, -13486, -13447,
+ -13408, -13370, -13332, -13295,
+ -13258, -13221, -13184, -13147,
+ -13111, -13075, -13040, -13004,
+ -12969, -12934, -12899, -12865,
+ -12830, -12796, -12762, -12729,
+ -12695, -12662, -12629, -12596,
+ -12564, -12531, -12499, -12467,
+ -12435, -12404, -12372, -12341,
+ -12310, -12279, -12248, -12218,
+ -12187, -12157, -12127, -12097,
+ -12067, -12038, -12008, -11979,
+ -11950, -11921, -11892, -11863,
+ -11835, -11806, -11778, -11750,
+ -11722, -11694, -11666, -11639,
+ -11611, -11584, -11557, -11530,
+ -11503, -11476, -11450, -11423,
+ -11396, -11370, -11344, -11318,
+ -11292, -11266, -11240, -11214,
+ -11189, -11164, -11138, -11113,
+ -11088, -11063, -11038, -11013,
+ -10988, -10964, -10939, -10915,
+ -10891, -10866, -10843, -10818,
+ -10794, -10770, -10747, -10723,
+ -10700, -10676, -10652, -10630,
+ -10606, -10583, -10560, -10537,
+ -10514, -10491, -10469, -10446,
+ -10424, -10401, -10378, -10356,
+ -10334, -10312, -10290, -10267,
+ -10246, -10224, -10202, -10180,
+ -10158, -10137, -10115, -10094,
+ -10072, -10051, -10030, -10009,
+ -9988, -9967, -9945, -9925,
+ -9904, -9883, -9862, -9842,
+ -9821, -9800, -9780, -9760,
+ -9739, -9719, -9699, -9678,
+ -9658, -9638, -9618, -9599,
+ -9578, -9559, -9539, -9519,
+ -9499, -9480, -9461, -9441,
+ -9422, -9402, -9383, -9363,
+ -9344, -9325, -9306, -9287,
+ -9268, -9249, -9230, -9211,
+ -9192, -9173, -9155, -9136,
+ -9117, -9098, -9080, -9062,
+ -9043, -9025, -9006, -8988,
+ -8970, -8951, -8933, -8915,
+ -8897, -8879, -8861, -8843,
+ -8825, -8807, -8789, -8772,
+ -8754, -8736, -8718, -8701,
+ -8683, -8665, -8648, -8630,
+ -8613, -8595, -8578, -8561,
+ -8543, -8526, -8509, -8492,
+ -8475, -8458, -8441, -8423,
+ -8407, -8390, -8373, -8356,
+ -8339, -8322, -8305, -8289,
+ -8272, -8255, -8239, -8222,
+ -8206, -8189, -8172, -8156,
+ -8140, -8123, -8107, -8090,
+ -8074, -8058, -8042, -8025,
+ -8009, -7993, -7977, -7961,
+ -7945, -7929, -7913, -7897,
+ -7881, -7865, -7849, -7833,
+ -7817, -7802, -7786, -7770,
+ -7754, -7739, -7723, -7707,
+ -7692, -7676, -7661, -7645,
+ -7630, -7614, -7599, -7583,
+ -7568, -7553, -7537, -7522,
+ -7507, -7492, -7476, -7461,
+ -7446, -7431, -7416, -7401,
+ -7385, -7370, -7356, -7340,
+ -7325, -7311, -7296, -7281,
+ -7266, -7251, -7236, -7221,
+ -7207, -7192, -7177, -7162,
+ -7148, -7133, -7118, -7104,
+ -7089, -7075, -7060, -7046,
+ -7031, -7016, -7002, -6988,
+ -6973, -6959, -6944, -6930,
+ -6916, -6901, -6887, -6873,
+ -6859, -6844, -6830, -6816,
+ -6802, -6788, -6774, -6760,
+ -6746, -6731, -6717, -6704,
+ -6690, -6675, -6661, -6647,
+ -6633, -6620, -6606, -6592,
+ -6578, -6564, -6550, -6537,
+ -6523, -6509, -6495, -6482,
+ -6468, -6454, -6441, -6427,
+ -6413, -6400, -6386, -6373,
+ -6359, -6346, -6332, -6318,
+ -6305, -6291, -6278, -6264,
+ -6251, -6238, -6224, -6211,
+ -6198, -6184, -6171, -6158,
+ -6144, -6131, -6118, -6105,
+ -6091, -6078, -6065, -6052,
+ -6039, -6025, -6012, -5999,
+ -5986, -5973, -5960, -5947,
+ -5934, -5921, -5908, -5895,
+ -5882, -5869, -5856, -5843,
+ -5830, -5817, -5804, -5791,
+ -5779, -5766, -5753, -5740,
+ -5727, -5714, -5702, -5689,
+ -5676, -5663, -5650, -5638,
+ -5625, -5612, -5600, -5587,
+ -5575, -5562, -5549, -5537,
+ -5524, -5512, -5499, -5486,
+ -5474, -5461, -5449, -5436,
+ -5424, -5411, -5399, -5386,
+ -5374, -5362, -5349, -5337,
+ -5324, -5312, -5299, -5287,
+ -5275, -5263, -5250, -5238,
+ -5226, -5213, -5201, -5189,
+ -5177, -5164, -5152, -5140,
+ -5128, -5115, -5103, -5091,
+ -5079, -5067, -5055, -5043,
+ -5030, -5018, -5006, -4994,
+ -4982, -4970, -4958, -4946,
+ -4934, -4922, -4910, -4898,
+ -4886, -4874, -4862, -4850,
+ -4838, -4826, -4814, -4803,
+ -4791, -4778, -4767, -4755,
+ -4743, -4731, -4719, -4708,
+ -4696, -4684, -4672, -4660,
+ -4649, -4637, -4625, -4613,
+ -4601, -4590, -4578, -4566,
+ -4554, -4543, -4531, -4520,
+ -4508, -4496, -4484, -4473,
+ -4461, -4449, -4438, -4427,
+ -4415, -4403, -4392, -4380,
+ -4368, -4357, -4345, -4334,
+ -4322, -4311, -4299, -4288,
+ -4276, -4265, -4253, -4242,
+ -4230, -4219, -4207, -4196,
+ -4184, -4173, -4162, -4150,
+ -4139, -4128, -4116, -4105,
+ -4094, -4082, -4071, -4060,
+ -4048, -4037, -4026, -4014,
+ -4003, -3992, -3980, -3969,
+ -3958, -3946, -3935, -3924,
+ -3913, -3901, -3890, -3879,
+ -3868, -3857, -3845, -3834,
+ -3823, -3812, -3801, -3790,
+ -3779, -3767, -3756, -3745,
+ -3734, -3723, -3712, -3700,
+ -3689, -3678, -3667, -3656,
+ -3645, -3634, -3623, -3612,
+ -3601, -3590, -3579, -3568,
+ -3557, -3545, -3535, -3524,
+ -3513, -3502, -3491, -3480,
+ -3469, -3458, -3447, -3436,
+ -3425, -3414, -3403, -3392,
+ -3381, -3370, -3360, -3348,
+ -3337, -3327, -3316, -3305,
+ -3294, -3283, -3272, -3262,
+ -3251, -3240, -3229, -3218,
+ -3207, -3197, -3185, -3175,
+ -3164, -3153, -3142, -3132,
+ -3121, -3110, -3099, -3088,
+ -3078, -3067, -3056, -3045,
+ -3035, -3024, -3013, -3003,
+ -2992, -2981, -2970, -2960,
+ -2949, -2938, -2928, -2917,
+ -2906, -2895, -2885, -2874,
+ -2864, -2853, -2842, -2832,
+ -2821, -2810, -2800, -2789,
+ -2778, -2768, -2757, -2747,
+ -2736, -2725, -2715, -2704,
+ -2694, -2683, -2673, -2662,
+ -2651, -2641, -2630, -2620,
+ -2609, -2599, -2588, -2578,
+ -2567, -2556, -2546, -2535,
+ -2525, -2515, -2504, -2493,
+ -2483, -2472, -2462, -2451,
+ -2441, -2431, -2420, -2410,
+ -2399, -2389, -2378, -2367,
+ -2357, -2347, -2336, -2326,
+ -2315, -2305, -2295, -2284,
+ -2274, -2263, -2253, -2243,
+ -2232, -2222, -2211, -2201,
+ -2191, -2180, -2170, -2159,
+ -2149, -2139, -2128, -2118,
+ -2107, -2097, -2087, -2076,
+ -2066, -2056, -2046, -2035,
+ -2025, -2014, -2004, -1994,
+ -1983, -1973, -1963, -1953,
+ -1942, -1932, -1921, -1911,
+ -1901, -1891, -1880, -1870,
+ -1860, -1849, -1839, -1829,
+ -1819, -1808, -1798, -1788,
+ -1778, -1767, -1757, -1747,
+ -1736, -1726, -1716, -1706,
+ -1695, -1685, -1675, -1665,
+ -1654, -1644, -1634, -1624,
+ -1613, -1603, -1593, -1583,
+ -1573, -1563, -1552, -1542,
+ -1532, -1522, -1511, -1501,
+ -1491, -1481, -1471, -1461,
+ -1450, -1440, -1430, -1420,
+ -1409, -1400, -1389, -1379,
+ -1369, -1359, -1348, -1339,
+ -1328, -1318, -1308, -1298,
+ -1288, -1278, -1267, -1257,
+ -1247, -1237, -1227, -1217,
+ -1207, -1196, -1186, -1176,
+ -1166, -1156, -1146, -1135,
+ -1126, -1115, -1105, -1095,
+ -1085, -1075, -1065, -1055,
+ -1044, -1034, -1024, -1014,
+ -1004, -994, -984, -974,
+ -964, -954, -944, -933,
+ -923, -913, -903, -893,
+ -883, -873, -863, -853,
+ -843, -833, -822, -812,
+ -802, -792, -782, -772,
+ -762, -752, -742, -732,
+ -722, -712, -702, -691,
+ -682, -671, -662, -651,
+ -641, -631, -621, -611,
+ -601, -591, -581, -571,
+ -561, -551, -541, -531,
+ -521, -511, -501, -491,
+ -480, -471, -460, -451,
+ -440, -430, -420, -410,
+ -400, -390, -380, -370,
+ -360, -350, -340, -330,
+ -320, -310, -300, -290,
+ -280, -270, -260, -250,
+ -240, -230, -220, -210,
+ -199, -190, -179, -170,
+ -159, -150, -139, -129,
+ -119, -109, -99, -89,
+ -79, -69, -59, -49,
+ -39, -29, -19, -9,
+ 1, 11, 21, 31,
+ 41, 51, 61, 71,
+ 81, 91, 101, 111,
+ 121, 131, 141, 152,
+ 161, 172, 181, 192,
+ 202, 212, 222, 232,
+ 242, 252, 262, 272,
+ 282, 292, 302, 312,
+ 322, 332, 342, 352,
+ 362, 372, 382, 392,
+ 402, 412, 422, 433,
+ 442, 453, 462, 473,
+ 483, 493, 503, 513,
+ 523, 533, 543, 553,
+ 563, 573, 583, 593,
+ 603, 613, 623, 633,
+ 643, 653, 664, 673,
+ 684, 694, 704, 714,
+ 724, 734, 744, 754,
+ 764, 774, 784, 794,
+ 804, 815, 825, 835,
+ 845, 855, 865, 875,
+ 885, 895, 905, 915,
+ 925, 936, 946, 956,
+ 966, 976, 986, 996,
+ 1006, 1016, 1026, 1037,
+ 1047, 1057, 1067, 1077,
+ 1087, 1097, 1107, 1117,
+ 1128, 1138, 1148, 1158,
+ 1168, 1178, 1188, 1198,
+ 1209, 1219, 1229, 1239,
+ 1249, 1259, 1269, 1280,
+ 1290, 1300, 1310, 1320,
+ 1330, 1341, 1351, 1361,
+ 1371, 1381, 1391, 1402,
+ 1412, 1422, 1432, 1442,
+ 1452, 1463, 1473, 1483,
+ 1493, 1503, 1513, 1524,
+ 1534, 1544, 1554, 1565,
+ 1575, 1585, 1595, 1606,
+ 1616, 1626, 1636, 1647,
+ 1656, 1667, 1677, 1687,
+ 1697, 1708, 1718, 1729,
+ 1739, 1749, 1759, 1769,
+ 1780, 1790, 1800, 1810,
+ 1821, 1831, 1841, 1851,
+ 1862, 1872, 1883, 1893,
+ 1903, 1913, 1923, 1934,
+ 1944, 1955, 1965, 1975,
+ 1985, 1996, 2006, 2016,
+ 2027, 2037, 2048, 2058,
+ 2068, 2079, 2089, 2099,
+ 2110, 2120, 2130, 2141,
+ 2151, 2161, 2172, 2182,
+ 2193, 2203, 2213, 2224,
+ 2234, 2245, 2255, 2265,
+ 2276, 2286, 2297, 2307,
+ 2318, 2328, 2338, 2349,
+ 2359, 2370, 2380, 2391,
+ 2401, 2412, 2422, 2433,
+ 2443, 2454, 2464, 2475,
+ 2485, 2496, 2506, 2517,
+ 2527, 2537, 2548, 2559,
+ 2569, 2580, 2590, 2601,
+ 2612, 2622, 2632, 2643,
+ 2654, 2664, 2675, 2685,
+ 2696, 2707, 2717, 2728,
+ 2738, 2749, 2759, 2770,
+ 2781, 2791, 2802, 2813,
+ 2823, 2834, 2845, 2855,
+ 2866, 2877, 2887, 2898,
+ 2909, 2919, 2930, 2941,
+ 2951, 2962, 2973, 2984,
+ 2994, 3005, 3015, 3027,
+ 3037, 3048, 3058, 3069,
+ 3080, 3091, 3101, 3113,
+ 3123, 3134, 3145, 3156,
+ 3166, 3177, 3188, 3199,
+ 3210, 3220, 3231, 3242,
+ 3253, 3264, 3275, 3285,
+ 3296, 3307, 3318, 3329,
+ 3340, 3351, 3362, 3373,
+ 3384, 3394, 3405, 3416,
+ 3427, 3438, 3449, 3460,
+ 3471, 3482, 3493, 3504,
+ 3515, 3526, 3537, 3548,
+ 3559, 3570, 3581, 3592,
+ 3603, 3614, 3625, 3636,
+ 3647, 3659, 3670, 3681,
+ 3692, 3703, 3714, 3725,
+ 3736, 3747, 3758, 3770,
+ 3781, 3792, 3803, 3814,
+ 3825, 3837, 3848, 3859,
+ 3870, 3881, 3893, 3904,
+ 3915, 3926, 3937, 3949,
+ 3960, 3971, 3983, 3994,
+ 4005, 4017, 4028, 4039,
+ 4051, 4062, 4073, 4085,
+ 4096, 4107, 4119, 4130,
+ 4141, 4153, 4164, 4175,
+ 4187, 4198, 4210, 4221,
+ 4233, 4244, 4256, 4267,
+ 4279, 4290, 4302, 4313,
+ 4325, 4336, 4348, 4359,
+ 4371, 4382, 4394, 4406,
+ 4417, 4429, 4440, 4452,
+ 4464, 4475, 4487, 4499,
+ 4510, 4522, 4533, 4545,
+ 4557, 4569, 4581, 4592,
+ 4604, 4616, 4627, 4639,
+ 4651, 4663, 4674, 4686,
+ 4698, 4710, 4722, 4734,
+ 4746, 4758, 4769, 4781,
+ 4793, 4805, 4817, 4829,
+ 4841, 4853, 4865, 4877,
+ 4889, 4900, 4913, 4925,
+ 4936, 4949, 4961, 4973,
+ 4985, 4997, 5009, 5021,
+ 5033, 5045, 5057, 5070,
+ 5081, 5094, 5106, 5118,
+ 5130, 5143, 5155, 5167,
+ 5179, 5191, 5204, 5216,
+ 5228, 5240, 5253, 5265,
+ 5278, 5290, 5302, 5315,
+ 5327, 5340, 5352, 5364,
+ 5377, 5389, 5401, 5414,
+ 5426, 5439, 5451, 5464,
+ 5476, 5489, 5502, 5514,
+ 5527, 5539, 5552, 5564,
+ 5577, 5590, 5603, 5615,
+ 5628, 5641, 5653, 5666,
+ 5679, 5691, 5704, 5717,
+ 5730, 5743, 5756, 5768,
+ 5781, 5794, 5807, 5820,
+ 5833, 5846, 5859, 5872,
+ 5885, 5897, 5911, 5924,
+ 5937, 5950, 5963, 5976,
+ 5989, 6002, 6015, 6028,
+ 6042, 6055, 6068, 6081,
+ 6094, 6108, 6121, 6134,
+ 6147, 6160, 6174, 6187,
+ 6201, 6214, 6227, 6241,
+ 6254, 6267, 6281, 6294,
+ 6308, 6321, 6335, 6348,
+ 6362, 6375, 6389, 6403,
+ 6416, 6430, 6443, 6457,
+ 6471, 6485, 6498, 6512,
+ 6526, 6540, 6554, 6567,
+ 6581, 6595, 6609, 6623,
+ 6637, 6651, 6665, 6679,
+ 6692, 6706, 6721, 6735,
+ 6749, 6763, 6777, 6791,
+ 6805, 6819, 6833, 6848,
+ 6862, 6876, 6890, 6905,
+ 6919, 6933, 6948, 6962,
+ 6976, 6991, 7005, 7020,
+ 7034, 7049, 7064, 7078,
+ 7093, 7107, 7122, 7136,
+ 7151, 7166, 7180, 7195,
+ 7210, 7225, 7240, 7254,
+ 7269, 7284, 7299, 7314,
+ 7329, 7344, 7359, 7374,
+ 7389, 7404, 7419, 7434,
+ 7449, 7465, 7480, 7495,
+ 7510, 7526, 7541, 7556,
+ 7571, 7587, 7602, 7618,
+ 7633, 7648, 7664, 7680,
+ 7695, 7711, 7726, 7742,
+ 7758, 7773, 7789, 7805,
+ 7821, 7836, 7852, 7868,
+ 7884, 7900, 7916, 7932,
+ 7948, 7964, 7981, 7997,
+ 8013, 8029, 8045, 8061,
+ 8078, 8094, 8110, 8127,
+ 8143, 8160, 8176, 8193,
+ 8209, 8226, 8242, 8259,
+ 8276, 8292, 8309, 8326,
+ 8343, 8360, 8377, 8394,
+ 8410, 8428, 8444, 8462,
+ 8479, 8496, 8513, 8530,
+ 8548, 8565, 8582, 8600,
+ 8617, 8634, 8652, 8670,
+ 8687, 8704, 8722, 8740,
+ 8758, 8775, 8793, 8811,
+ 8829, 8847, 8865, 8883,
+ 8901, 8919, 8937, 8955,
+ 8974, 8992, 9010, 9029,
+ 9047, 9066, 9084, 9103,
+ 9121, 9140, 9159, 9177,
+ 9196, 9215, 9234, 9253,
+ 9272, 9291, 9310, 9329,
+ 9349, 9368, 9387, 9406,
+ 9426, 9445, 9465, 9484,
+ 9504, 9524, 9544, 9563,
+ 9583, 9603, 9623, 9643,
+ 9663, 9683, 9703, 9723,
+ 9744, 9764, 9785, 9805,
+ 9826, 9846, 9867, 9888,
+ 9909, 9930, 9950, 9971,
+ 9993, 10013, 10035, 10056,
+ 10077, 10099, 10120, 10142,
+ 10163, 10185, 10207, 10229,
+ 10251, 10273, 10294, 10317,
+ 10339, 10361, 10384, 10406,
+ 10428, 10451, 10474, 10496,
+ 10519, 10542, 10565, 10588,
+ 10612, 10635, 10658, 10682,
+ 10705, 10729, 10752, 10776,
+ 10800, 10824, 10848, 10872,
+ 10896, 10921, 10945, 10969,
+ 10994, 11019, 11044, 11069,
+ 11094, 11119, 11144, 11169,
+ 11195, 11221, 11246, 11272,
+ 11298, 11324, 11350, 11376,
+ 11402, 11429, 11456, 11482,
+ 11509, 11536, 11563, 11590,
+ 11618, 11645, 11673, 11701,
+ 11728, 11756, 11785, 11813,
+ 11842, 11870, 11899, 11928,
+ 11957, 11986, 12015, 12045,
+ 12074, 12104, 12134, 12164,
+ 12194, 12225, 12255, 12286,
+ 12317, 12348, 12380, 12411,
+ 12443, 12475, 12507, 12539,
+ 12571, 12604, 12637, 12670,
+ 12703, 12737, 12771, 12804,
+ 12839, 12873, 12907, 12942,
+ 12977, 13013, 13048, 13084,
+ 13120, 13156, 13192, 13229,
+ 13267, 13304, 13341, 13379,
+ 13418, 13456, 13495, 13534,
+ 13573, 13613, 13653, 13693,
+ 13734, 13775, 13817, 13858,
+ 13901, 13943, 13986, 14029,
+ 14073, 14117, 14162, 14206,
+ 14252, 14297, 14343, 14390,
+ 14437, 14485, 14533, 14582,
+ 14631, 14680, 14731, 14782,
+ 14833, 14885, 14937, 14991,
+ 15044, 15099, 15154, 15210,
+ 15266, 15324, 15382, 15441,
+ 15500, 15561, 15622, 15684,
+ 15747, 15811, 15877, 15943,
+ 16010, 16078, 16148, 16218,
+ 16290, 16363, 16437, 16513,
+ 16590, 16669, 16749, 16831,
+ 16915, 17000, 17088, 17177,
+ 17268, 17362, 17458, 17556,
+ 17657, 17761, 17868, 17977,
+ 18090, 18207, 18328, 18452,
+ 18581, 18715, 18854, 18998,
+ 19149, 19307, 19472, 19645,
+ 19828, 20021, 20226, 20444,
+ 20678, 20930, 21204, 21503,
+ 21835, 22206, 22630, 23124,
+ 23721, 24478, 25529, 27316,
+};
+
+/* tabledist - return a pseudo-randomly distributed value with mean mu and
+ * std deviation sigma. Uses table lookup to approximate the desired
+ * distribution, and a uniformly-distributed pseudo-random source.
+ */
+static inline int tabledist(int mu, int sigma)
+{
+ int x;
+ int index;
+ int sigmamod, sigmadiv;
+
+ if (sigma == 0)
+ return mu;
+
+ index = (net_random() & (TABLESIZE-1));
+ sigmamod = sigma%TABLEFACTOR;
+ sigmadiv = sigma/TABLEFACTOR;
+ x = sigmamod*disttable[index];
+
+ if (x >= 0)
+ x += TABLEFACTOR/2;
+ else
+ x -= TABLEFACTOR/2;
+
+ x /= TABLEFACTOR;
+ x += sigmadiv*disttable[index];
+ x += mu;
+ return x;
+}
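+
+/* Illustrative sketch only (not part of the module): the fixed-point
+ * split above computes approximately
+ *
+ * mu + (sigma * disttable[index]) / TABLEFACTOR
+ *
+ * without overflowing 32 bits, by handling sigma's quotient and
+ * remainder with respect to TABLEFACTOR separately. A hypothetical
+ * restatement of the same arithmetic, assuming the TABLEFACTOR and
+ * disttable definitions above (t stands for disttable[index]):
+ */
+static inline int tabledist_restated(int mu, int sigma, int t)
+{
+ int q = sigma / TABLEFACTOR; /* whole multiples of the scale */
+ int r = sigma % TABLEFACTOR; /* fractional remainder */
+ int x = r * t;
+
+ /* round the scaled remainder to nearest rather than truncating */
+ x += (x >= 0) ? TABLEFACTOR/2 : -(TABLEFACTOR/2);
+ return mu + q * t + x / TABLEFACTOR;
+}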
+
+/* Enqueue packets with underlying discipline (fifo)
+ * but mark them with current time first.
+ */
+static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ struct netem_skb_cb *cb = (struct netem_skb_cb *)skb->cb;
+ psched_time_t now;
+ long delay;
+
+ pr_debug("netem_enqueue skb=%p @%lu\n", skb, jiffies);
+
+ /* Random packet drop 0 => none, ~0 => all */
+ if (q->loss && q->loss >= net_random()) {
+ kfree_skb(skb); /* free the packet rather than leaking it */
+ sch->stats.drops++;
+ return 0; /* lie about loss so TCP doesn't know */
+ }
+
+ /* If doing simple delay then gap == 0, so all packets
+ * go into the delayed holding queue.
+ * Otherwise, when doing out-of-order delivery, only "1 out of gap"
+ * packets will be delayed.
+ */
+ if (q->counter < q->gap) {
+ int ret;
+
+ ++q->counter;
+ ret = q->qdisc->enqueue(skb, q->qdisc);
+ if (ret)
+ sch->stats.drops++;
+ return ret;
+ }
+
+ q->counter = 0;
+
+ PSCHED_GET_TIME(now);
+ if (q->jitter)
+ delay = tabledist(q->latency, q->jitter);
+ else
+ delay = q->latency;
+
+ PSCHED_TADD2(now, delay, cb->time_to_send);
+
+ /* Always queue at tail to keep packets in order */
+ __skb_queue_tail(&q->delayed, skb);
+ sch->q.qlen++;
+ sch->stats.bytes += skb->len;
+ sch->stats.packets++;
+ return 0;
+}
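+
+/* Worked example (hypothetical configuration): q->loss is a drop
+ * probability scaled to the full 32-bit range of net_random(), so a
+ * requested loss rate of p percent corresponds to roughly
+ *
+ * loss = p * 0xFFFFFFFF / 100
+ *
+ * e.g. a 1% rate gives loss ~= 42949672, and the comparison
+ * "q->loss >= net_random()" then succeeds for about 1 in 100 packets.
+ */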
+
+/* Requeue packets but don't change time stamp */
+static int netem_requeue(struct sk_buff *skb, struct Qdisc *sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ int ret;
+
+ if ((ret = q->qdisc->ops->requeue(skb, q->qdisc)) == 0)
+ sch->q.qlen++;
+
+ return ret;
+}
+
+static unsigned int netem_drop(struct Qdisc* sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ unsigned int len;
+
+ if ((len = q->qdisc->ops->drop(q->qdisc)) != 0) {
+ sch->q.qlen--;
+ sch->stats.drops++;
+ }
+ return len;
+}
+
+/* Dequeue packet.
+ * Move all packets that are ready to send from the delay holding
+ * list to the underlying qdisc, then just call dequeue
+ */
+static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ struct sk_buff *skb;
+ psched_time_t now;
+
+ PSCHED_GET_TIME(now);
+ while ((skb = skb_peek(&q->delayed)) != NULL) {
+ const struct netem_skb_cb *cb
+ = (const struct netem_skb_cb *)skb->cb;
+ long delay
+ = PSCHED_US2JIFFIE(PSCHED_TDIFF(cb->time_to_send, now));
+ pr_debug("netem_dequeue: delay queue %p@%lu %ld\n",
+ skb, jiffies, delay);
+
+ /* still time remaining? re-arm the timer and stop */
+ if (delay > 0) {
+ mod_timer(&q->timer, jiffies + delay);
+ break;
+ }
+ __skb_unlink(skb, &q->delayed);
+
+ if (q->qdisc->enqueue(skb, q->qdisc))
+ sch->stats.drops++;
+ }
+
+ skb = q->qdisc->dequeue(q->qdisc);
+ if (skb)
+ sch->q.qlen--;
+ return skb;
+}
+
+static void netem_watchdog(unsigned long arg)
+{
+ struct Qdisc *sch = (struct Qdisc *)arg;
+
+ pr_debug("netem_watchdog: fired @%lu\n", jiffies);
+ netif_schedule(sch->dev);
+}
+
+static void netem_reset(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+
+ qdisc_reset(q->qdisc);
+ skb_queue_purge(&q->delayed);
+
+ sch->q.qlen = 0;
+ del_timer_sync(&q->timer);
+}
+
+static int set_fifo_limit(struct Qdisc *q, int limit)
+{
+ struct rtattr *rta;
+ int ret = -ENOMEM;
+
+ rta = kmalloc(RTA_LENGTH(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
+ if (rta) {
+ rta->rta_type = RTM_NEWQDISC;
+ rta->rta_len = RTA_LENGTH(sizeof(struct tc_fifo_qopt));
+ ((struct tc_fifo_qopt *)RTA_DATA(rta))->limit = limit;
+
+ ret = q->ops->change(q, rta);
+ kfree(rta);
+ }
+ return ret;
+}
+
+static int netem_change(struct Qdisc *sch, struct rtattr *opt)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ struct tc_netem_qopt *qopt = RTA_DATA(opt);
+ struct Qdisc *child;
+ int ret;
+
+ if (opt->rta_len < RTA_LENGTH(sizeof(*qopt)))
+ return -EINVAL;
+
+ child = qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops);
+ if (!child)
+ return -EINVAL;
+
+ ret = set_fifo_limit(child, qopt->limit);
+ if (ret) {
+ qdisc_destroy(child);
+ return ret;
+ }
+
+ sch_tree_lock(sch);
+ /* child cannot be NULL here; it was checked right after creation */
+ child = xchg(&q->qdisc, child);
+ if (child != &noop_qdisc)
+ qdisc_destroy(child);
+
+ q->latency = qopt->latency;
+ q->jitter = qopt->jitter;
+ q->limit = qopt->limit;
+ q->gap = qopt->gap;
+ q->loss = qopt->loss;
+ sch_tree_unlock(sch);
+
+ return 0;
+}
+
+static int netem_init(struct Qdisc *sch, struct rtattr *opt)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+
+ if (!opt)
+ return -EINVAL;
+
+ skb_queue_head_init(&q->delayed);
+ q->qdisc = &noop_qdisc;
+
+ init_timer(&q->timer);
+ q->timer.function = netem_watchdog;
+ q->timer.data = (unsigned long) sch;
+ q->counter = 0;
+
+ return netem_change(sch, opt);
+}
+
+static void netem_destroy(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+
+ del_timer_sync(&q->timer);
+}
+
+static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
+{
+ struct netem_sched_data *q = (struct netem_sched_data *)sch->data;
+ unsigned char *b = skb->tail;
+ struct tc_netem_qopt qopt;
+
+ qopt.latency = q->latency;
+ qopt.jitter = q->jitter;
+ qopt.limit = sch->dev->tx_queue_len;
+ qopt.loss = q->loss;
+ qopt.gap = q->gap;
+
+ RTA_PUT(skb, TCA_OPTIONS, sizeof(qopt), &qopt);
+
+ return skb->len;
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static struct Qdisc_ops netem_qdisc_ops = {
+ .id = "netem",
+ .priv_size = sizeof(struct netem_sched_data),
+ .enqueue = netem_enqueue,
+ .dequeue = netem_dequeue,
+ .requeue = netem_requeue,
+ .drop = netem_drop,
+ .init = netem_init,
+ .reset = netem_reset,
+ .destroy = netem_destroy,
+ .change = netem_change,
+ .dump = netem_dump,
+ .owner = THIS_MODULE,
+};
+
+static int __init netem_module_init(void)
+{
+ return register_qdisc(&netem_qdisc_ops);
+}
+static void __exit netem_module_exit(void)
+{
+ unregister_qdisc(&netem_qdisc_ops);
+}
+module_init(netem_module_init)
+module_exit(netem_module_exit)
+MODULE_LICENSE("GPL");
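+
+/* Illustrative usage (assuming an iproute2 build with netem support):
+ *
+ * tc qdisc add dev eth0 root netem delay 100ms 10ms loss 1%
+ *
+ * configures 100ms latency with 10ms jitter and 1% random loss via
+ * the tc_netem_qopt options handled in netem_change() above.
+ */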
--- /dev/null
+#!/usr/bin/perl
+
+# Check the stack usage of functions
+#
+# Copyright Joern Engel <joern@wh.fh-wedel.de>
+# Inspired by Linus Torvalds
+# Original idea maybe from Keith Owens
+# s390 port and big speedup by Arnd Bergmann <arnd@bergmann-dalldorf.de>
+# Mips port by Juan Quintela <quintela@mandrakesoft.com>
+# IA64 port via Andreas Dilger
+# Arm port by Holger Schurig
+# Random bits by Matt Mackall <mpm@selenic.com>
+#
+# Usage:
+# objdump -d vmlinux | stackcheck_ppc.pl [arch]
+#
+# TODO : Port to all architectures (one regex per arch)
+
+# check for arch
+#
+# $re is used for two matches:
+# $& (whole re) matches the complete objdump line with the stack growth
+# $1 (first bracket) matches the size of the stack growth
+#
+# use anything else and feel the pain ;)
+{
+ my $arch = shift;
+ if ($arch eq "") {
+ $arch = `uname -m`;
+ }
+
+ $x = "[0-9a-f]"; # hex character
+ $xs = "[0-9a-f ]"; # hex character or space
+ if ($arch =~ /^arm$/) {
+ #c0008ffc: e24dd064 sub sp, sp, #100 ; 0x64
+ $re = qr/.*sub.*sp, sp, #(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch =~ /^i[3456]86$/) {
+ #c0105234: 81 ec ac 05 00 00 sub $0x5ac,%esp
+ $re = qr/^.*[as][du][db] \$(0x$x{1,8}),\%esp$/o;
+ } elsif ($arch =~ /^ia64$/) {
+ #e0000000044011fc: 01 0f fc 8c adds r12=-384,r12
+ $re = qr/.*adds.*r12=-(([0-9]{2}|[3-9])[0-9]{2}),r12/o;
+ } elsif ($arch =~ /^mips64$/) {
+ #8800402c: 67bdfff0 daddiu sp,sp,-16
+ $re = qr/.*daddiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch =~ /^mips$/) {
+ #88003254: 27bdffe0 addiu sp,sp,-32
+ $re = qr/.*addiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch =~ /^ppc$/) {
+ #c00029f4: 94 21 ff 30 stwu r1,-208(r1)
+ $re = qr/.*stwu.*r1,-($x{1,8})\(r1\)/o;
+ } elsif ($arch =~ /^ppc64$/) {
+ #XXX
+ $re = qr/.*stdu.*r1,-($x{1,8})\(r1\)/o;
+ } elsif ($arch =~ /^s390x?$/) {
+ # 11160: a7 fb ff 60 aghi %r15,-160
+ $re = qr/.*ag?hi.*\%r15,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } else {
+ print("wrong or unknown architecture\n");
+ exit
+ }
+}
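+
+# Illustrative example (hypothetical input): on i386, an objdump line
+# such as
+#   c0105234: 81 ec ac 05 00 00 sub $0x5ac,%esp
+# matches the i386 regex above with $1 = "0x5ac"; the main loop below
+# then converts it with hex() to a stack growth of 1452 bytes.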
+
+sub bysize($) {
+ ($asize = $a) =~ s/.* +(.*)$/$1/;
+ ($bsize = $b) =~ s/.* +(.*)$/$1/;
+ $bsize <=> $asize
+}
+
+#
+# main()
+#
+$funcre = qr/^$x* \<(.*)\>:$/;
+while ($line = <STDIN>) {
+ if ($line =~ m/$funcre/) {
+ $func = $1;
+ }
+ if ($line =~ m/$re/) {
+ my $size = $1;
+ $size = hex($size) if ($size =~ /^0x/);
+
+ if ($size > 0x80000000) {
+ $size = - $size;
+ $size += 0x80000000;
+ $size += 0x80000000;
+ }
+
+ $line =~ m/^($xs*).*/;
+ my $addr = $1;
+ $addr =~ s/ /0/g;
+ $addr = "0x$addr";
+
+ my $intro = "$addr $func:";
+ my $padlen = 56 - length($intro);
+ while ($padlen > 0) {
+ $intro .= "\t";
+ $padlen -= 8;
+ }
+ next if ($size < 100);
+ $stack[@stack] = "$intro$size\n";
+ }
+}
+
+@sortedstack = sort bysize @stack;
+
+foreach $i (@sortedstack) {
+ print("$i");
+}
--- /dev/null
+#define KERNEL_ELFCLASS ELFCLASS32
+#define KERNEL_ELFDATA ELFDATA2LSB
+#define HOST_ELFCLASS ELFCLASS32
+#define HOST_ELFDATA ELFDATA2LSB
+#define MODULE_SYMBOL_PREFIX ""
--- /dev/null
+#!/bin/sh
+# Generates a small Makefile used in the root of the output
+# directory, to allow make to be started from there.
+# The Makefile also allows for more convenient building of external modules
+
+# Usage
+# $1 - Kernel src directory
+# $2 - Output directory
+# $3 - version
+# $4 - patchlevel
+
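+# Illustrative example (hypothetical paths):
+#
+#   sh mkmakefile /usr/src/linux /home/user/kbuild 2 6 > Makefile
+#
+# writes a Makefile with VERSION = 2 and PATCHLEVEL = 6 that forwards
+# every target to the real source tree via "make -C ... O=...".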
+
+cat << EOF
+# Automatically generated by $0: don't edit
+
+VERSION = $3
+PATCHLEVEL = $4
+
+KERNELSRC := $1
+KERNELOUTPUT := $2
+
+MAKEFLAGS += --no-print-directory
+
+all:
+ \$(MAKE) -C \$(KERNELSRC) O=\$(KERNELOUTPUT)
+
+%::
+ \$(MAKE) -C \$(KERNELSRC) O=\$(KERNELOUTPUT) \$@
+
+EOF
+
--- /dev/null
+host-progs := modpost mk_elfconfig
+always := $(host-progs) empty.o
+
+modpost-objs := modpost.o file2alias.o sumversion.o
+
+# dependencies on generated files need to be listed explicitly
+
+$(obj)/modpost.o $(obj)/file2alias.o $(obj)/sumversion.o: $(obj)/elfconfig.h
+
+quiet_cmd_elfconfig = MKELF $@
+ cmd_elfconfig = $(obj)/mk_elfconfig $(ARCH) < $< > $@
+
+$(obj)/elfconfig.h: $(obj)/empty.o $(obj)/mk_elfconfig FORCE
+ $(call if_changed,elfconfig)
+
+targets += elfconfig.h
--- /dev/null
+/* empty file to figure out endianness / word size */
--- /dev/null
+/* Simple code to turn various tables in an ELF file into alias definitions.
+ * This deals with kernel datastructures where they should be
+ * dealt with: in the kernel source.
+ *
+ * Copyright 2002-2003 Rusty Russell, IBM Corporation
+ * 2003 Kai Germaschewski
+ *
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License, incorporated herein by reference.
+ */
+
+#include "modpost.h"
+
+/* We use the ELF typedefs, since we can't rely on stdint.h being present. */
+
+#if KERNEL_ELFCLASS == ELFCLASS32
+typedef Elf32_Addr kernel_ulong_t;
+#else
+typedef Elf64_Addr kernel_ulong_t;
+#endif
+
+typedef Elf32_Word __u32;
+typedef Elf32_Half __u16;
+typedef unsigned char __u8;
+
+/* This is the big exception to the "don't include kernel headers in
+ * userspace" rule: mod_devicetable.h is shared with the kernel, which
+ * potentially has different endianness and word size, and we handle
+ * those differences explicitly below */
+#include "../../include/linux/mod_devicetable.h"
+
+#define ADD(str, sep, cond, field) \
+do { \
+ strcat(str, sep); \
+ if (cond) \
+ sprintf(str + strlen(str), \
+ sizeof(field) == 1 ? "%02X" : \
+ sizeof(field) == 2 ? "%04X" : \
+ sizeof(field) == 4 ? "%08X" : "", \
+ field); \
+ else \
+ sprintf(str + strlen(str), "*"); \
+} while(0)
+
+/* Looks like "usb:vNpNdlNdhNdcNdscNdpNicNiscNipN" */
+static int do_usb_entry(const char *filename,
+ struct usb_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->idVendor = TO_NATIVE(id->idVendor);
+ id->idProduct = TO_NATIVE(id->idProduct);
+ id->bcdDevice_lo = TO_NATIVE(id->bcdDevice_lo);
+ id->bcdDevice_hi = TO_NATIVE(id->bcdDevice_hi);
+
+ /*
+ * Some modules (visor) have empty slots as placeholders for
+ * run-time specification; processing them would emit a bogus
+ * catch-all alias, so skip them.
+ */
+ if (!(id->idVendor | id->bDeviceClass | id->bInterfaceClass))
+ return 0;
+
+ strcpy(alias, "usb:");
+ ADD(alias, "v", id->match_flags&USB_DEVICE_ID_MATCH_VENDOR,
+ id->idVendor);
+ ADD(alias, "p", id->match_flags&USB_DEVICE_ID_MATCH_PRODUCT,
+ id->idProduct);
+ ADD(alias, "dl", id->match_flags&USB_DEVICE_ID_MATCH_DEV_LO,
+ id->bcdDevice_lo);
+ ADD(alias, "dh", id->match_flags&USB_DEVICE_ID_MATCH_DEV_HI,
+ id->bcdDevice_hi);
+ ADD(alias, "dc", id->match_flags&USB_DEVICE_ID_MATCH_DEV_CLASS,
+ id->bDeviceClass);
+ ADD(alias, "dsc",
+ id->match_flags&USB_DEVICE_ID_MATCH_DEV_SUBCLASS,
+ id->bDeviceSubClass);
+ ADD(alias, "dp",
+ id->match_flags&USB_DEVICE_ID_MATCH_DEV_PROTOCOL,
+ id->bDeviceProtocol);
+ ADD(alias, "ic",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_CLASS,
+ id->bInterfaceClass);
+ ADD(alias, "isc",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_SUBCLASS,
+ id->bInterfaceSubClass);
+ ADD(alias, "ip",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_PROTOCOL,
+ id->bInterfaceProtocol);
+ return 1;
+}
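+
+/* Worked example (hypothetical device): for an entry with match_flags
+ * covering vendor and product only, idVendor = 0x067b and
+ * idProduct = 0x2303, the calls above produce
+ *
+ * usb:v067Bp2303dl*dh*dc*dsc*dp*ic*isc*ip*
+ *
+ * unset fields print as "*" so modprobe can wildcard-match them.
+ */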
+
+/* Looks like: ieee1394:venNmoNspNverN */
+static int do_ieee1394_entry(const char *filename,
+ struct ieee1394_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->vendor_id = TO_NATIVE(id->vendor_id);
+ id->model_id = TO_NATIVE(id->model_id);
+ id->specifier_id = TO_NATIVE(id->specifier_id);
+ id->version = TO_NATIVE(id->version);
+
+ strcpy(alias, "ieee1394:");
+ ADD(alias, "ven", id->match_flags & IEEE1394_MATCH_VENDOR_ID,
+ id->vendor_id);
+ ADD(alias, "mo", id->match_flags & IEEE1394_MATCH_MODEL_ID,
+ id->model_id);
+ ADD(alias, "sp", id->match_flags & IEEE1394_MATCH_SPECIFIER_ID,
+ id->specifier_id);
+ ADD(alias, "ver", id->match_flags & IEEE1394_MATCH_VERSION,
+ id->version);
+
+ return 1;
+}
+
+/* Looks like: pci:vNdNsvNsdNbcNscNiN. */
+static int do_pci_entry(const char *filename,
+ struct pci_device_id *id, char *alias)
+{
+ /* Class field can be divided into these three. */
+ unsigned char baseclass, subclass, interface,
+ baseclass_mask, subclass_mask, interface_mask;
+
+ id->vendor = TO_NATIVE(id->vendor);
+ id->device = TO_NATIVE(id->device);
+ id->subvendor = TO_NATIVE(id->subvendor);
+ id->subdevice = TO_NATIVE(id->subdevice);
+ id->class = TO_NATIVE(id->class);
+ id->class_mask = TO_NATIVE(id->class_mask);
+
+ strcpy(alias, "pci:");
+ ADD(alias, "v", id->vendor != PCI_ANY_ID, id->vendor);
+ ADD(alias, "d", id->device != PCI_ANY_ID, id->device);
+ ADD(alias, "sv", id->subvendor != PCI_ANY_ID, id->subvendor);
+ ADD(alias, "sd", id->subdevice != PCI_ANY_ID, id->subdevice);
+
+ baseclass = (id->class) >> 16;
+ baseclass_mask = (id->class_mask) >> 16;
+ subclass = (id->class) >> 8;
+ subclass_mask = (id->class_mask) >> 8;
+ interface = id->class;
+ interface_mask = id->class_mask;
+
+ if ((baseclass_mask != 0 && baseclass_mask != 0xFF)
+ || (subclass_mask != 0 && subclass_mask != 0xFF)
+ || (interface_mask != 0 && interface_mask != 0xFF)) {
+ fprintf(stderr,
+ "*** Warning: Can't handle masks in %s:%04X\n",
+ filename, id->class_mask);
+ return 0;
+ }
+
+ ADD(alias, "bc", baseclass_mask == 0xFF, baseclass);
+ ADD(alias, "sc", subclass_mask == 0xFF, subclass);
+ ADD(alias, "i", interface_mask == 0xFF, interface);
+ return 1;
+}
+
+/* looks like: "ccw:tNmNdtNdmN" */
+static int do_ccw_entry(const char *filename,
+ struct ccw_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->cu_type = TO_NATIVE(id->cu_type);
+ id->cu_model = TO_NATIVE(id->cu_model);
+ id->dev_type = TO_NATIVE(id->dev_type);
+ id->dev_model = TO_NATIVE(id->dev_model);
+
+ strcpy(alias, "ccw:");
+ ADD(alias, "t", id->match_flags&CCW_DEVICE_ID_MATCH_CU_TYPE,
+ id->cu_type);
+ ADD(alias, "m", id->match_flags&CCW_DEVICE_ID_MATCH_CU_MODEL,
+ id->cu_model);
+ ADD(alias, "dt", id->match_flags&CCW_DEVICE_ID_MATCH_DEVICE_TYPE,
+ id->dev_type);
+ ADD(alias, "dm", id->match_flags&CCW_DEVICE_ID_MATCH_DEVICE_TYPE,
+ id->dev_model);
+ return 1;
+}
+
+/* looks like: "pnp:dD" */
+static int do_pnp_entry(const char *filename,
+ struct pnp_device_id *id, char *alias)
+{
+ sprintf(alias, "pnp:d%s", id->id);
+ return 1;
+}
+
+/* looks like: "pnp:cCdD..." */
+static int do_pnp_card_entry(const char *filename,
+ struct pnp_card_device_id *id, char *alias)
+{
+ int i;
+
+ sprintf(alias, "pnp:c%s", id->id);
+ for (i = 0; i < PNP_MAX_DEVICES; i++) {
+ if (! *id->devs[i].id)
+ break;
+ sprintf(alias + strlen(alias), "d%s", id->devs[i].id);
+ }
+ return 1;
+}
+
+/* Ignore any prefix, e.g. v850 prepends _ */
+static inline int sym_is(const char *symbol, const char *name)
+{
+ const char *match;
+
+ match = strstr(symbol, name);
+ if (!match)
+ return 0;
+ return match[strlen(name)] == '\0'; /* name must end the symbol */
+}
+
+static void do_table(void *symval, unsigned long size,
+ unsigned long id_size,
+ void *function,
+ struct module *mod)
+{
+ unsigned int i;
+ char alias[500];
+ int (*do_entry)(const char *, void *entry, char *alias) = function;
+
+ if (size % id_size || size < id_size) {
+ fprintf(stderr, "*** Warning: %s ids %lu bad size "
+ "(each on %lu)\n", mod->name, size, id_size);
+ }
+ /* Leave last one: it's the terminator. */
+ size -= id_size;
+
+ for (i = 0; i < size; i += id_size) {
+ if (do_entry(mod->name, symval+i, alias)) {
+ /* Always end in a wildcard, for future extension */
+ if (alias[strlen(alias)-1] != '*')
+ strcat(alias, "*");
+ buf_printf(&mod->dev_table_buf,
+ "MODULE_ALIAS(\"%s\");\n", alias);
+ }
+ }
+}
+
+/* Create MODULE_ALIAS() statements.
+ * At this point we cannot write the actual output C source yet,
+ * so we write into the mod->dev_table_buf buffer instead. */
+void handle_moddevtable(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname)
+{
+ void *symval;
+
+ /* We're looking for a section relative symbol */
+ if (!sym->st_shndx || sym->st_shndx >= info->hdr->e_shnum)
+ return;
+
+ symval = (void *)info->hdr
+ + info->sechdrs[sym->st_shndx].sh_offset
+ + sym->st_value;
+
+ if (sym_is(symname, "__mod_pci_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pci_device_id),
+ do_pci_entry, mod);
+ else if (sym_is(symname, "__mod_usb_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct usb_device_id),
+ do_usb_entry, mod);
+ else if (sym_is(symname, "__mod_ieee1394_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct ieee1394_device_id),
+ do_ieee1394_entry, mod);
+ else if (sym_is(symname, "__mod_ccw_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct ccw_device_id),
+ do_ccw_entry, mod);
+ else if (sym_is(symname, "__mod_pnp_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pnp_device_id),
+ do_pnp_entry, mod);
+ else if (sym_is(symname, "__mod_pnp_card_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pnp_card_device_id),
+ do_pnp_card_entry, mod);
+}
+
+/* Now add our buffered information to the generated C source */
+void add_moddevtable(struct buffer *buf, struct module *mod)
+{
+ buf_printf(buf, "\n");
+ buf_write(buf, mod->dev_table_buf.p, mod->dev_table_buf.pos);
+ free(mod->dev_table_buf.p);
+}
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <elf.h>
+
+int
+main(int argc, char **argv)
+{
+ unsigned char ei[EI_NIDENT];
+ union { short s; char c[2]; } endian_test;
+
+ if (argc != 2) {
+ fprintf(stderr, "Error: no arch\n");
+ return 1;
+ }
+ if (fread(ei, 1, EI_NIDENT, stdin) != EI_NIDENT) {
+ fprintf(stderr, "Error: input truncated\n");
+ return 1;
+ }
+ if (memcmp(ei, ELFMAG, SELFMAG) != 0) {
+ fprintf(stderr, "Error: not ELF\n");
+ return 1;
+ }
+ switch (ei[EI_CLASS]) {
+ case ELFCLASS32:
+ printf("#define KERNEL_ELFCLASS ELFCLASS32\n");
+ break;
+ case ELFCLASS64:
+ printf("#define KERNEL_ELFCLASS ELFCLASS64\n");
+ break;
+ default:
+ abort();
+ }
+ switch (ei[EI_DATA]) {
+ case ELFDATA2LSB:
+ printf("#define KERNEL_ELFDATA ELFDATA2LSB\n");
+ break;
+ case ELFDATA2MSB:
+ printf("#define KERNEL_ELFDATA ELFDATA2MSB\n");
+ break;
+ default:
+ abort();
+ }
+
+ if (sizeof(unsigned long) == 4) {
+ printf("#define HOST_ELFCLASS ELFCLASS32\n");
+ } else if (sizeof(unsigned long) == 8) {
+ printf("#define HOST_ELFCLASS ELFCLASS64\n");
+ } else {
+ abort();
+ }
+
+ endian_test.s = 0x0102;
+ if (memcmp(endian_test.c, "\x01\x02", 2) == 0)
+ printf("#define HOST_ELFDATA ELFDATA2MSB\n");
+ else if (memcmp(endian_test.c, "\x02\x01", 2) == 0)
+ printf("#define HOST_ELFDATA ELFDATA2LSB\n");
+ else
+ abort();
+
+ if ((strcmp(argv[1], "v850") == 0) || (strcmp(argv[1], "h8300") == 0))
+ printf("#define MODULE_SYMBOL_PREFIX \"_\"\n");
+ else
+ printf("#define MODULE_SYMBOL_PREFIX \"\"\n");
+
+ return 0;
+}
+
--- /dev/null
+/* Postprocess module symbol versions
+ *
+ * Copyright 2003 Kai Germaschewski
+ * 2002-2003 Rusty Russell, IBM Corporation
+ *
+ * Based in part on module-init-tools/depmod.c,file2alias
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License, incorporated herein by reference.
+ *
+ * Usage: modpost vmlinux module1.o module2.o ...
+ */
+
+#include <ctype.h>
+#include "modpost.h"
+
+/* Are we using CONFIG_MODVERSIONS? */
+int modversions = 0;
+/* Warn about undefined symbols? (do so if we have vmlinux) */
+int have_vmlinux = 0;
+
+void
+fatal(const char *fmt, ...)
+{
+ va_list arglist;
+
+ fprintf(stderr, "FATAL: ");
+
+ va_start(arglist, fmt);
+ vfprintf(stderr, fmt, arglist);
+ va_end(arglist);
+
+ exit(1);
+}
+
+void
+warn(const char *fmt, ...)
+{
+ va_list arglist;
+
+ fprintf(stderr, "WARNING: ");
+
+ va_start(arglist, fmt);
+ vfprintf(stderr, fmt, arglist);
+ va_end(arglist);
+}
+
+void *do_nofail(void *ptr, const char *file, int line, const char *expr)
+{
+ if (!ptr) {
+ fatal("Memory allocation failure %s line %d: %s.\n",
+ file, line, expr);
+ }
+ return ptr;
+}
+
+/* A list of all modules we processed */
+
+static struct module *modules;
+
+struct module *
+find_module(char *modname)
+{
+ struct module *mod;
+
+ for (mod = modules; mod; mod = mod->next)
+ if (strcmp(mod->name, modname) == 0)
+ break;
+ return mod;
+}
+
+struct module *
+new_module(char *modname)
+{
+ struct module *mod;
+ char *p, *s;
+
+ mod = NOFAIL(malloc(sizeof(*mod)));
+ memset(mod, 0, sizeof(*mod));
+ p = NOFAIL(strdup(modname));
+
+ /* strip trailing .o */
+ if ((s = strrchr(p, '.')) != NULL)
+ if (strcmp(s, ".o") == 0)
+ *s = '\0';
+
+ /* add to list */
+ mod->name = p;
+ mod->next = modules;
+ modules = mod;
+
+ return mod;
+}
+
+/* A hash of all exported symbols,
+ * struct symbol is also used for lists of unresolved symbols */
+
+#define SYMBOL_HASH_SIZE 1024
+
+struct symbol {
+ struct symbol *next;
+ struct module *module;
+ unsigned int crc;
+ int crc_valid;
+ char name[0];
+};
+
+static struct symbol *symbolhash[SYMBOL_HASH_SIZE];
+
+/* This is based on the hash algorithm from gdbm, via tdb */
+static inline unsigned int tdb_hash(const char *name)
+{
+ unsigned value; /* Used to compute the hash value. */
+ unsigned i; /* Used to cycle through random values. */
+
+ /* Set the initial value from the key size. */
+ for (value = 0x238F13AF * strlen(name), i=0; name[i]; i++)
+ value = (value + (((unsigned char *)name)[i] << (i*5 % 24)));
+
+ return (1103515243 * value + 12345);
+}
+
+/* Allocate a new symbol for use in the hash of exported symbols or
+ * the list of unresolved symbols per module */
+
+struct symbol *
+alloc_symbol(const char *name, struct symbol *next)
+{
+ struct symbol *s = NOFAIL(malloc(sizeof(*s) + strlen(name) + 1));
+
+ memset(s, 0, sizeof(*s));
+ strcpy(s->name, name);
+ s->next = next;
+ return s;
+}
+
+/* For the hash of exported symbols */
+
+void
+new_symbol(const char *name, struct module *module, unsigned int *crc)
+{
+ unsigned int hash;
+ struct symbol *new;
+
+ hash = tdb_hash(name) % SYMBOL_HASH_SIZE;
+ new = symbolhash[hash] = alloc_symbol(name, symbolhash[hash]);
+ new->module = module;
+ if (crc) {
+ new->crc = *crc;
+ new->crc_valid = 1;
+ }
+}
+
+struct symbol *
+find_symbol(const char *name)
+{
+ struct symbol *s;
+
+ /* For our purposes, .foo matches foo. PPC64 needs this. */
+ if (name[0] == '.')
+ name++;
+
+ for (s = symbolhash[tdb_hash(name) % SYMBOL_HASH_SIZE]; s; s=s->next) {
+ if (strcmp(s->name, name) == 0)
+ return s;
+ }
+ return NULL;
+}
+
+/* Add an exported symbol - it may have already been added without a
+ * CRC; in that case, just update the CRC */
+void
+add_exported_symbol(const char *name, struct module *module, unsigned int *crc)
+{
+ struct symbol *s = find_symbol(name);
+
+ if (!s) {
+ new_symbol(name, module, crc);
+ return;
+ }
+ if (crc) {
+ s->crc = *crc;
+ s->crc_valid = 1;
+ }
+}
+
+void *
+grab_file(const char *filename, unsigned long *size)
+{
+ struct stat st;
+ void *map;
+ int fd;
+
+ fd = open(filename, O_RDONLY);
+ if (fd < 0 || fstat(fd, &st) != 0)
+ return NULL;
+
+ *size = st.st_size;
+ map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+ close(fd);
+
+ if (map == MAP_FAILED)
+ return NULL;
+ return map;
+}
+
+/*
+ Return a copy of the next line in a mmap'ed file.
+ Spaces at the beginning of the line are trimmed away.
+ Returns a pointer to a static buffer.
+*/
+char*
+get_next_line(unsigned long *pos, void *file, unsigned long size)
+{
+ static char line[4096];
+ int skip = 1;
+ size_t len = 0;
+ char *p = (char *)file + *pos;
+ char *s = line;
+
+ for (; *pos < size ; (*pos)++)
+ {
+ if (skip && isspace(*p)) {
+ p++;
+ continue;
+ }
+ skip = 0;
+ if (*p != '\n' && (*pos < size)) {
+ len++;
+ *s++ = *p++;
+ if (len > 4095)
+ break; /* Too long, stop */
+ } else {
+ /* End of string */
+ *s = '\0';
+ return line;
+ }
+ }
+ /* End of buffer */
+ return NULL;
+}
+
+void
+release_file(void *file, unsigned long size)
+{
+ munmap(file, size);
+}
+
+void
+parse_elf(struct elf_info *info, const char *filename)
+{
+ unsigned int i;
+ Elf_Ehdr *hdr = info->hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *sym;
+
+ hdr = grab_file(filename, &info->size);
+ if (!hdr) {
+ perror(filename);
+ abort();
+ }
+ info->hdr = hdr;
+ if (info->size < sizeof(*hdr))
+ goto truncated;
+
+ /* Fix endianness in ELF header */
+ hdr->e_shoff = TO_NATIVE(hdr->e_shoff);
+ hdr->e_shstrndx = TO_NATIVE(hdr->e_shstrndx);
+ hdr->e_shnum = TO_NATIVE(hdr->e_shnum);
+ hdr->e_machine = TO_NATIVE(hdr->e_machine);
+ sechdrs = (void *)hdr + hdr->e_shoff;
+ info->sechdrs = sechdrs;
+
+ /* Fix endianness in section headers */
+ for (i = 0; i < hdr->e_shnum; i++) {
+ sechdrs[i].sh_type = TO_NATIVE(sechdrs[i].sh_type);
+ sechdrs[i].sh_offset = TO_NATIVE(sechdrs[i].sh_offset);
+ sechdrs[i].sh_size = TO_NATIVE(sechdrs[i].sh_size);
+ sechdrs[i].sh_link = TO_NATIVE(sechdrs[i].sh_link);
+ sechdrs[i].sh_name = TO_NATIVE(sechdrs[i].sh_name);
+ }
+ /* Find symbol table. */
+ for (i = 1; i < hdr->e_shnum; i++) {
+ const char *secstrings
+ = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ if (sechdrs[i].sh_offset > info->size)
+ goto truncated;
+ if (strcmp(secstrings+sechdrs[i].sh_name, ".modinfo") == 0) {
+ info->modinfo = (void *)hdr + sechdrs[i].sh_offset;
+ info->modinfo_len = sechdrs[i].sh_size;
+ }
+ if (sechdrs[i].sh_type != SHT_SYMTAB)
+ continue;
+
+ info->symtab_start = (void *)hdr + sechdrs[i].sh_offset;
+ info->symtab_stop = (void *)hdr + sechdrs[i].sh_offset
+ + sechdrs[i].sh_size;
+ info->strtab = (void *)hdr +
+ sechdrs[sechdrs[i].sh_link].sh_offset;
+ }
+ if (!info->symtab_start) {
+ fprintf(stderr, "modpost: %s no symtab?\n", filename);
+ abort();
+ }
+ /* Fix endianness in symbols */
+ for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
+ sym->st_shndx = TO_NATIVE(sym->st_shndx);
+ sym->st_name = TO_NATIVE(sym->st_name);
+ sym->st_value = TO_NATIVE(sym->st_value);
+ sym->st_size = TO_NATIVE(sym->st_size);
+ }
+ return;
+
+ truncated:
+ fprintf(stderr, "modpost: %s is truncated.\n", filename);
+ abort();
+}
+
+void
+parse_elf_finish(struct elf_info *info)
+{
+ release_file(info->hdr, info->size);
+}
+
+#define CRC_PFX MODULE_SYMBOL_PREFIX "__crc_"
+#define KSYMTAB_PFX MODULE_SYMBOL_PREFIX "__ksymtab_"
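+
+/* For reference: EXPORT_SYMBOL(foo) in the kernel emits a
+ * "__ksymtab_foo" table entry, and with CONFIG_MODVERSIONS an
+ * absolute "__crc_foo" symbol whose st_value is the interface
+ * checksum; stripping the prefixes above recovers the plain "foo".
+ */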
+
+void
+handle_modversions(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname)
+{
+ unsigned int crc;
+
+ switch (sym->st_shndx) {
+ case SHN_COMMON:
+ fprintf(stderr, "*** Warning: \"%s\" [%s] is COMMON symbol\n",
+ symname, mod->name);
+ break;
+ case SHN_ABS:
+ /* CRC'd symbol */
+ if (memcmp(symname, CRC_PFX, strlen(CRC_PFX)) == 0) {
+ crc = (unsigned int) sym->st_value;
+ add_exported_symbol(symname + strlen(CRC_PFX),
+ mod, &crc);
+ modversions = 1;
+ }
+ break;
+ case SHN_UNDEF:
+ /* undefined symbol */
+ if (ELF_ST_BIND(sym->st_info) != STB_GLOBAL)
+ break;
+ /* ignore global offset table */
+ if (strcmp(symname, "_GLOBAL_OFFSET_TABLE_") == 0)
+ break;
+ /* ignore __this_module, it will be resolved shortly */
+ if (strcmp(symname, MODULE_SYMBOL_PREFIX "__this_module") == 0)
+ break;
+#ifdef STT_REGISTER
+ if (info->hdr->e_machine == EM_SPARC ||
+ info->hdr->e_machine == EM_SPARCV9) {
+ /* Ignore register directives. */
+ if (ELF_ST_TYPE(sym->st_info) == STT_REGISTER)
+ break;
+ }
+#endif
+
+ if (memcmp(symname, MODULE_SYMBOL_PREFIX,
+ strlen(MODULE_SYMBOL_PREFIX)) == 0)
+ mod->unres = alloc_symbol(symname +
+ strlen(MODULE_SYMBOL_PREFIX),
+ mod->unres);
+ break;
+ default:
+ /* All exported symbols */
+ if (memcmp(symname, KSYMTAB_PFX, strlen(KSYMTAB_PFX)) == 0) {
+ add_exported_symbol(symname + strlen(KSYMTAB_PFX),
+ mod, NULL);
+ }
+ break;
+ }
+}
+
+int
+is_vmlinux(const char *modname)
+{
+ const char *myname;
+
+ if ((myname = strrchr(modname, '/')))
+ myname++;
+ else
+ myname = modname;
+
+ return strcmp(myname, "vmlinux") == 0;
+}
+
+void
+read_symbols(char *modname)
+{
+ const char *symname;
+ struct module *mod;
+ struct elf_info info = { };
+ Elf_Sym *sym;
+
+ parse_elf(&info, modname);
+
+ mod = new_module(modname);
+
+ /* When there's no vmlinux, don't print warnings about
+ * unresolved symbols (since there'll be too many ;) */
+ if (is_vmlinux(modname)) {
+ unsigned int fake_crc = 0;
+ have_vmlinux = 1;
+ /* May not have this if !CONFIG_MODULE_UNLOAD: fake it.
+ If it appears, we'll get the real CRC. */
+ add_exported_symbol("cleanup_module", mod, &fake_crc);
+ add_exported_symbol("struct_module", mod, &fake_crc);
+ mod->skip = 1;
+ }
+
+ for (sym = info.symtab_start; sym < info.symtab_stop; sym++) {
+ symname = info.strtab + sym->st_name;
+
+ handle_modversions(mod, &info, sym, symname);
+ handle_moddevtable(mod, &info, sym, symname);
+ }
+ maybe_frob_version(modname, info.modinfo, info.modinfo_len,
+ (void *)info.modinfo - (void *)info.hdr);
+ parse_elf_finish(&info);
+
+ /* Our trick to get versioning for struct_module - it's
+ * never passed as an argument to an exported function, so
+ * the automatic versioning doesn't pick it up, but it's really
+ * important anyhow */
+ if (modversions) {
+ mod->unres = alloc_symbol("struct_module", mod->unres);
+
+ /* Always version init_module and cleanup_module, in
+ * case module doesn't have its own. */
+ mod->unres = alloc_symbol("init_module", mod->unres);
+ mod->unres = alloc_symbol("cleanup_module", mod->unres);
+ }
+}
+
+#define SZ 500
+
+/* We first write the generated file into memory using the
+ * following helper, then compare to the file on disk and
+ * only update the latter if anything changed */
+
+void __attribute__((format(printf, 2, 3)))
+buf_printf(struct buffer *buf, const char *fmt, ...)
+{
+ char tmp[SZ];
+ int len;
+ va_list ap;
+
+ va_start(ap, fmt);
+ len = vsnprintf(tmp, SZ, fmt, ap);
+ if (buf->size - buf->pos < len + 1) {
+ buf->size += len + 1 + 128; /* grow enough even for long lines */
+ buf->p = realloc(buf->p, buf->size);
+ }
+ strncpy(buf->p + buf->pos, tmp, len + 1);
+ buf->pos += len;
+ va_end(ap);
+}
+
+void
+buf_write(struct buffer *buf, const char *s, int len)
+{
+ if (buf->size - buf->pos < len) {
+ buf->size += len;
+ buf->p = realloc(buf->p, buf->size);
+ }
+ strncpy(buf->p + buf->pos, s, len);
+ buf->pos += len;
+}
+
+/* Header for the generated file */
+
+void
+add_header(struct buffer *b)
+{
+ buf_printf(b, "#include <linux/module.h>\n");
+ buf_printf(b, "#include <linux/vermagic.h>\n");
+ buf_printf(b, "#include <linux/compiler.h>\n");
+ buf_printf(b, "\n");
+ buf_printf(b, "MODULE_INFO(vermagic, VERMAGIC_STRING);\n");
+ buf_printf(b, "\n");
+ buf_printf(b, "#undef unix\n"); /* We have a module called "unix" */
+ buf_printf(b, "struct module __this_module\n");
+ buf_printf(b, "__attribute__((section(\".gnu.linkonce.this_module\"))) = {\n");
+ buf_printf(b, " .name = __stringify(KBUILD_MODNAME),\n");
+ buf_printf(b, " .init = init_module,\n");
+ buf_printf(b, "#ifdef CONFIG_MODULE_UNLOAD\n");
+ buf_printf(b, " .exit = cleanup_module,\n");
+ buf_printf(b, "#endif\n");
+ buf_printf(b, "};\n");
+}
+
+/* Record CRCs for unresolved symbols */
+
+void
+add_versions(struct buffer *b, struct module *mod)
+{
+ struct symbol *s, *exp;
+
+ for (s = mod->unres; s; s = s->next) {
+ exp = find_symbol(s->name);
+ if (!exp || exp->module == mod) {
+ if (have_vmlinux)
+ fprintf(stderr, "*** Warning: \"%s\" [%s.ko] "
+ "undefined!\n", s->name, mod->name);
+ continue;
+ }
+ s->module = exp->module;
+ s->crc_valid = exp->crc_valid;
+ s->crc = exp->crc;
+ }
+
+ if (!modversions)
+ return;
+
+ buf_printf(b, "\n");
+ buf_printf(b, "static const struct modversion_info ____versions[]\n");
+ buf_printf(b, "__attribute_used__\n");
+ buf_printf(b, "__attribute__((section(\"__versions\"))) = {\n");
+
+ for (s = mod->unres; s; s = s->next) {
+ if (!s->module) {
+ continue;
+ }
+ if (!s->crc_valid) {
+ fprintf(stderr, "*** Warning: \"%s\" [%s.ko] "
+ "has no CRC!\n",
+ s->name, mod->name);
+ continue;
+ }
+ buf_printf(b, "\t{ %#8x, \"%s\" },\n", s->crc, s->name);
+ }
+
+ buf_printf(b, "};\n");
+}
+
+void
+add_depends(struct buffer *b, struct module *mod, struct module *modules)
+{
+ struct symbol *s;
+ struct module *m;
+ int first = 1;
+
+ for (m = modules; m; m = m->next) {
+ m->seen = is_vmlinux(m->name);
+ }
+
+ buf_printf(b, "\n");
+ buf_printf(b, "static const char __module_depends[]\n");
+ buf_printf(b, "__attribute_used__\n");
+ buf_printf(b, "__attribute__((section(\".modinfo\"))) =\n");
+ buf_printf(b, "\"depends=");
+ for (s = mod->unres; s; s = s->next) {
+ if (!s->module)
+ continue;
+
+ if (s->module->seen)
+ continue;
+
+ s->module->seen = 1;
+ buf_printf(b, "%s%s", first ? "" : ",",
+ strrchr(s->module->name, '/') + 1);
+ first = 0;
+ }
+ buf_printf(b, "\";\n");
+}
+
+void
+write_if_changed(struct buffer *b, const char *fname)
+{
+ char *tmp;
+ FILE *file;
+ struct stat st;
+
+ file = fopen(fname, "r");
+ if (!file)
+ goto write;
+
+ if (fstat(fileno(file), &st) < 0)
+ goto close_write;
+
+ if (st.st_size != b->pos)
+ goto close_write;
+
+ tmp = NOFAIL(malloc(b->pos));
+ if (fread(tmp, 1, b->pos, file) != b->pos)
+ goto free_write;
+
+ if (memcmp(tmp, b->p, b->pos) != 0)
+ goto free_write;
+
+ free(tmp);
+ fclose(file);
+ return;
+
+ free_write:
+ free(tmp);
+ close_write:
+ fclose(file);
+ write:
+ file = fopen(fname, "w");
+ if (!file) {
+ perror(fname);
+ exit(1);
+ }
+ if (fwrite(b->p, 1, b->pos, file) != b->pos) {
+ perror(fname);
+ exit(1);
+ }
+ fclose(file);
+}
+
+void
+read_dump(const char *fname)
+{
+ unsigned long size, pos = 0;
+ void *file = grab_file(fname, &size);
+ char *line;
+
+ if (!file)
+ /* No symbol versions, silently ignore */
+ return;
+
+ while ((line = get_next_line(&pos, file, size))) {
+ char *symname, *modname, *d;
+ unsigned int crc;
+ struct module *mod;
+
+ if (!(symname = strchr(line, '\t')))
+ goto fail;
+ *symname++ = '\0';
+ if (!(modname = strchr(symname, '\t')))
+ goto fail;
+ *modname++ = '\0';
+ if (strchr(modname, '\t'))
+ goto fail;
+ crc = strtoul(line, &d, 16);
+ if (*symname == '\0' || *modname == '\0' || *d != '\0')
+ goto fail;
+
+ if (!(mod = find_module(modname))) {
+ if (is_vmlinux(modname)) {
+ modversions = 1;
+ have_vmlinux = 1;
+ }
+ mod = new_module(NOFAIL(strdup(modname)));
+ mod->skip = 1;
+ }
+ add_exported_symbol(symname, mod, &crc);
+ }
+ return;
+fail:
+ fatal("parse error in symbol dump file\n");
+}
+
+void
+write_dump(const char *fname)
+{
+ struct buffer buf = { };
+ struct symbol *symbol;
+ int n;
+
+ for (n = 0; n < SYMBOL_HASH_SIZE ; n++) {
+ symbol = symbolhash[n];
+ while (symbol) {
+ buf_printf(&buf, "0x%08x\t%s\t%s\n", symbol->crc,
+ symbol->name, symbol->module->name);
+ symbol = symbol->next;
+ }
+ }
+ write_if_changed(&buf, fname);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct module *mod;
+ struct buffer buf = { };
+ char fname[SZ];
+ char *dump_read = NULL, *dump_write = NULL;
+ int opt;
+
+ while ((opt = getopt(argc, argv, "i:o:")) != -1) {
+ switch(opt) {
+ case 'i':
+ dump_read = optarg;
+ break;
+ case 'o':
+ dump_write = optarg;
+ break;
+ default:
+ exit(1);
+ }
+ }
+
+ if (dump_read)
+ read_dump(dump_read);
+
+ while (optind < argc) {
+ read_symbols(argv[optind++]);
+ }
+
+ for (mod = modules; mod; mod = mod->next) {
+ if (mod->skip)
+ continue;
+
+ buf.pos = 0;
+
+ add_header(&buf);
+ add_versions(&buf, mod);
+ add_depends(&buf, mod, modules);
+ add_moddevtable(&buf, mod);
+
+ sprintf(fname, "%s.mod.c", mod->name);
+ write_if_changed(&buf, fname);
+ }
+
+ if (dump_write)
+ write_dump(dump_write);
+
+ return 0;
+}
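+
+/* Illustrative invocation (hypothetical file names), matching the
+ * usage line at the top of this file:
+ *
+ * modpost -o Module.symvers vmlinux drivers/net/dummy.o
+ *
+ * emits drivers/net/dummy.mod.c and writes the exported-symbol dump
+ * via -o; a later run can read that dump back with -i.
+ */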
+
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <elf.h>
+
+#include "elfconfig.h"
+
+#if KERNEL_ELFCLASS == ELFCLASS32
+
+#define Elf_Ehdr Elf32_Ehdr
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define ELF_ST_BIND ELF32_ST_BIND
+#define ELF_ST_TYPE ELF32_ST_TYPE
+
+#else
+
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define ELF_ST_BIND ELF64_ST_BIND
+#define ELF_ST_TYPE ELF64_ST_TYPE
+
+#endif
+
+#if KERNEL_ELFDATA != HOST_ELFDATA
+
+static inline void __endian(const void *src, void *dest, unsigned int size)
+{
+ unsigned int i;
+ for (i = 0; i < size; i++)
+ ((unsigned char*)dest)[i] = ((unsigned char*)src)[size - i-1];
+}
+
+
+#define TO_NATIVE(x) \
+({ \
+ typeof(x) __x; \
+ __endian(&(x), &(__x), sizeof(__x)); \
+ __x; \
+})
+
+#else /* endianness matches */
+
+#define TO_NATIVE(x) (x)
+
+#endif
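+
+/* For reference: TO_NATIVE() wraps every multi-byte ELF field read
+ * from a module, e.g.
+ *
+ * hdr->e_shnum = TO_NATIVE(hdr->e_shnum);
+ *
+ * and compiles away entirely when host and kernel endianness match.
+ */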
+
+#define NOFAIL(ptr) do_nofail((ptr), __FILE__, __LINE__, #ptr)
+void *do_nofail(void *ptr, const char *file, int line, const char *expr);
+
+struct buffer {
+ char *p;
+ int pos;
+ int size;
+};
+
+void __attribute__((format(printf, 2, 3)))
+buf_printf(struct buffer *buf, const char *fmt, ...);
+
+void
+buf_write(struct buffer *buf, const char *s, int len);
+
+struct module {
+ struct module *next;
+ const char *name;
+ struct symbol *unres;
+ int seen;
+ int skip;
+ struct buffer dev_table_buf;
+};
+
+struct elf_info {
+ unsigned long size;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *symtab_start;
+ Elf_Sym *symtab_stop;
+ const char *strtab;
+ char *modinfo;
+ unsigned int modinfo_len;
+};
+
+void handle_moddevtable(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname);
+
+void add_moddevtable(struct buffer *buf, struct module *mod);
+
+void maybe_frob_version(const char *modfilename,
+ void *modinfo,
+ unsigned long modinfo_len,
+ unsigned long modinfo_offset);
+
+void *grab_file(const char *filename, unsigned long *size);
+char* get_next_line(unsigned long *pos, void *file, unsigned long size);
+void release_file(void *file, unsigned long size);
--- /dev/null
+#include <netinet/in.h>
+#include <stdint.h>
+#include <ctype.h>
+#include <errno.h>
+#include <string.h>
+#include "modpost.h"
+
+/* Parse tag=value strings from .modinfo section */
+static char *next_string(char *string, unsigned long *secsize)
+{
+ /* Skip non-zero chars */
+ while (string[0]) {
+ string++;
+ if ((*secsize)-- <= 1)
+ return NULL;
+ }
+
+ /* Skip any zero padding. */
+ while (!string[0]) {
+ string++;
+ if ((*secsize)-- <= 1)
+ return NULL;
+ }
+ return string;
+}
+
+static char *get_modinfo(void *modinfo, unsigned long modinfo_len,
+ const char *tag)
+{
+ char *p;
+ unsigned int taglen = strlen(tag);
+ unsigned long size = modinfo_len;
+
+ for (p = modinfo; p; p = next_string(p, &size)) {
+ if (strncmp(p, tag, taglen) == 0 && p[taglen] == '=')
+ return p + taglen + 1;
+ }
+ return NULL;
+}
+
+/*
+ * Stolen from the Cryptographic API.
+ *
+ * MD4 Message Digest Algorithm (RFC1320).
+ *
+ * Implementation derived from Andrew Tridgell and Steve French's
+ * CIFS MD4 implementation, and the cryptoapi implementation
+ * originally based on the public domain implementation written
+ * by Colin Plumb in 1993.
+ *
+ * Copyright (c) Andrew Tridgell 1997-1998.
+ * Modified by Steve French (sfrench@us.ibm.com) 2002
+ * Copyright (c) Cryptoapi developers.
+ * Copyright (c) 2002 David S. Miller (davem@redhat.com)
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#define MD4_DIGEST_SIZE 16
+#define MD4_HMAC_BLOCK_SIZE 64
+#define MD4_BLOCK_WORDS 16
+#define MD4_HASH_WORDS 4
+
+struct md4_ctx {
+ uint32_t hash[MD4_HASH_WORDS];
+ uint32_t block[MD4_BLOCK_WORDS];
+ uint64_t byte_count;
+};
+
+static inline uint32_t lshift(uint32_t x, unsigned int s)
+{
+ x &= 0xFFFFFFFF;
+ return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s));
+}
+
+static inline uint32_t F(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | ((~x) & z);
+}
+
+static inline uint32_t G(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (x & z) | (y & z);
+}
+
+static inline uint32_t H(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+#define ROUND1(a,b,c,d,k,s) (a = lshift(a + F(b,c,d) + k, s))
+#define ROUND2(a,b,c,d,k,s) (a = lshift(a + G(b,c,d) + k + (uint32_t)0x5A827999,s))
+#define ROUND3(a,b,c,d,k,s) (a = lshift(a + H(b,c,d) + k + (uint32_t)0x6ED9EBA1,s))
+
+/* XXX: this stuff can be optimized */
+static inline void le32_to_cpu_array(uint32_t *buf, unsigned int words)
+{
+ while (words--) {
+ *buf = ntohl(*buf);
+ buf++;
+ }
+}
+
+static inline void cpu_to_le32_array(uint32_t *buf, unsigned int words)
+{
+ while (words--) {
+ *buf = htonl(*buf);
+ buf++;
+ }
+}
+
+static void md4_transform(uint32_t *hash, uint32_t const *in)
+{
+ uint32_t a, b, c, d;
+
+ a = hash[0];
+ b = hash[1];
+ c = hash[2];
+ d = hash[3];
+
+ ROUND1(a, b, c, d, in[0], 3);
+ ROUND1(d, a, b, c, in[1], 7);
+ ROUND1(c, d, a, b, in[2], 11);
+ ROUND1(b, c, d, a, in[3], 19);
+ ROUND1(a, b, c, d, in[4], 3);
+ ROUND1(d, a, b, c, in[5], 7);
+ ROUND1(c, d, a, b, in[6], 11);
+ ROUND1(b, c, d, a, in[7], 19);
+ ROUND1(a, b, c, d, in[8], 3);
+ ROUND1(d, a, b, c, in[9], 7);
+ ROUND1(c, d, a, b, in[10], 11);
+ ROUND1(b, c, d, a, in[11], 19);
+ ROUND1(a, b, c, d, in[12], 3);
+ ROUND1(d, a, b, c, in[13], 7);
+ ROUND1(c, d, a, b, in[14], 11);
+ ROUND1(b, c, d, a, in[15], 19);
+
+ ROUND2(a, b, c, d, in[0], 3);
+ ROUND2(d, a, b, c, in[4], 5);
+ ROUND2(c, d, a, b, in[8], 9);
+ ROUND2(b, c, d, a, in[12], 13);
+ ROUND2(a, b, c, d, in[1], 3);
+ ROUND2(d, a, b, c, in[5], 5);
+ ROUND2(c, d, a, b, in[9], 9);
+ ROUND2(b, c, d, a, in[13], 13);
+ ROUND2(a, b, c, d, in[2], 3);
+ ROUND2(d, a, b, c, in[6], 5);
+ ROUND2(c, d, a, b, in[10], 9);
+ ROUND2(b, c, d, a, in[14], 13);
+ ROUND2(a, b, c, d, in[3], 3);
+ ROUND2(d, a, b, c, in[7], 5);
+ ROUND2(c, d, a, b, in[11], 9);
+ ROUND2(b, c, d, a, in[15], 13);
+
+ ROUND3(a, b, c, d, in[0], 3);
+ ROUND3(d, a, b, c, in[8], 9);
+ ROUND3(c, d, a, b, in[4], 11);
+ ROUND3(b, c, d, a, in[12], 15);
+ ROUND3(a, b, c, d, in[2], 3);
+ ROUND3(d, a, b, c, in[10], 9);
+ ROUND3(c, d, a, b, in[6], 11);
+ ROUND3(b, c, d, a, in[14], 15);
+ ROUND3(a, b, c, d, in[1], 3);
+ ROUND3(d, a, b, c, in[9], 9);
+ ROUND3(c, d, a, b, in[5], 11);
+ ROUND3(b, c, d, a, in[13], 15);
+ ROUND3(a, b, c, d, in[3], 3);
+ ROUND3(d, a, b, c, in[11], 9);
+ ROUND3(c, d, a, b, in[7], 11);
+ ROUND3(b, c, d, a, in[15], 15);
+
+ hash[0] += a;
+ hash[1] += b;
+ hash[2] += c;
+ hash[3] += d;
+}
+
+static inline void md4_transform_helper(struct md4_ctx *ctx)
+{
+ le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(uint32_t));
+ md4_transform(ctx->hash, ctx->block);
+}
+
+static void md4_init(struct md4_ctx *mctx)
+{
+ mctx->hash[0] = 0x67452301;
+ mctx->hash[1] = 0xefcdab89;
+ mctx->hash[2] = 0x98badcfe;
+ mctx->hash[3] = 0x10325476;
+ mctx->byte_count = 0;
+}
+
+static void md4_update(struct md4_ctx *mctx,
+ const unsigned char *data, unsigned int len)
+{
+ const uint32_t avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
+
+ mctx->byte_count += len;
+
+ if (avail > len) {
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, len);
+ return;
+ }
+
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, avail);
+
+ md4_transform_helper(mctx);
+ data += avail;
+ len -= avail;
+
+ while (len >= sizeof(mctx->block)) {
+ memcpy(mctx->block, data, sizeof(mctx->block));
+ md4_transform_helper(mctx);
+ data += sizeof(mctx->block);
+ len -= sizeof(mctx->block);
+ }
+
+ memcpy(mctx->block, data, len);
+}
+
+static void md4_final_ascii(struct md4_ctx *mctx, char *out, unsigned int len)
+{
+ const unsigned int offset = mctx->byte_count & 0x3f;
+ char *p = (char *)mctx->block + offset;
+ int padding = 56 - (offset + 1);
+
+ *p++ = 0x80;
+ if (padding < 0) {
+ memset(p, 0x00, padding + sizeof (uint64_t));
+ md4_transform_helper(mctx);
+ p = (char *)mctx->block;
+ padding = 56;
+ }
+
+ memset(p, 0, padding);
+ mctx->block[14] = mctx->byte_count << 3;
+ mctx->block[15] = mctx->byte_count >> 29;
+ le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
+ sizeof(uint64_t)) / sizeof(uint32_t));
+ md4_transform(mctx->hash, mctx->block);
+ cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(uint32_t));
+
+ snprintf(out, len, "%08X%08X%08X%08X",
+ mctx->hash[0], mctx->hash[1], mctx->hash[2], mctx->hash[3]);
+}
+
+static inline void add_char(unsigned char c, struct md4_ctx *md)
+{
+ md4_update(md, &c, 1);
+}
+
+static int parse_string(const char *file, unsigned long len,
+ struct md4_ctx *md)
+{
+ unsigned long i;
+
+ add_char(file[0], md);
+ for (i = 1; i < len; i++) {
+ add_char(file[i], md);
+ if (file[i] == '"' && file[i-1] != '\\')
+ break;
+ }
+ return i;
+}
+
+static int parse_comment(const char *file, unsigned long len)
+{
+ unsigned long i;
+
+ for (i = 2; i < len; i++) {
+ if (file[i-1] == '*' && file[i] == '/')
+ break;
+ }
+ return i;
+}
+
+/* FIXME: Handle .s files differently (eg. # starts comments) --RR */
+static int parse_file(const char *fname, struct md4_ctx *md)
+{
+ char *file;
+ unsigned long i, len;
+
+ file = grab_file(fname, &len);
+ if (!file)
+ return 0;
+
+ for (i = 0; i < len; i++) {
+ /* Collapse and ignore \ and CR. */
+ if (file[i] == '\\' && (i+1 < len) && file[i+1] == '\n') {
+ i++;
+ continue;
+ }
+
+ /* Ignore whitespace */
+ if (isspace(file[i]))
+ continue;
+
+ /* Handle strings as whole units */
+ if (file[i] == '"') {
+ i += parse_string(file+i, len - i, md);
+ continue;
+ }
+
+ /* Comments: ignore */
+ if (file[i] == '/' && (i+1 < len) && file[i+1] == '*') {
+ i += parse_comment(file+i, len - i);
+ continue;
+ }
+
+ add_char(file[i], md);
+ }
+ release_file(file, len);
+ return 1;
+}
+
+/* We have dir/file.o. Open dir/.file.o.cmd, look for deps_ line to
+ * figure out source file. */
+static int parse_source_files(const char *objfile, struct md4_ctx *md)
+{
+ char *cmd, *file, *line, *dir;
+ const char *base;
+ unsigned long flen, pos = 0;
+ int dirlen, ret = 0, check_files = 0;
+
+ cmd = NOFAIL(malloc(strlen(objfile) + sizeof("..cmd")));
+
+ base = strrchr(objfile, '/');
+ if (base) {
+ base++;
+ dirlen = base - objfile;
+ sprintf(cmd, "%.*s.%s.cmd", dirlen, objfile, base);
+ } else {
+ dirlen = 0;
+ sprintf(cmd, ".%s.cmd", objfile);
+ }
+ dir = NOFAIL(malloc(dirlen + 1));
+ strncpy(dir, objfile, dirlen);
+ dir[dirlen] = '\0';
+
+ file = grab_file(cmd, &flen);
+ if (!file) {
+ fprintf(stderr, "Warning: could not find %s for %s\n",
+ cmd, objfile);
+ goto out;
+ }
+
+ /* There will be a line like so:
+ deps_drivers/net/dummy.o := \
+ drivers/net/dummy.c \
+ $(wildcard include/config/net/fastroute.h) \
+ include/linux/config.h \
+ $(wildcard include/config/h.h) \
+ include/linux/module.h \
+
+ Sum all files in the same dir or subdirs.
+ */
+ while ((line = get_next_line(&pos, file, flen)) != NULL) {
+ char* p = line;
+ if (strncmp(line, "deps_", sizeof("deps_")-1) == 0) {
+ check_files = 1;
+ continue;
+ }
+ if (!check_files)
+ continue;
+
+ /* Continue until line does not end with '\' */
+ if ( *(p + strlen(p)-1) != '\\')
+ break;
+ /* Terminate line at first space, to get rid of final ' \' */
+ while (*p) {
+ if (isspace(*p)) {
+ *p = '\0';
+ break;
+ }
+ p++;
+ }
+
+ /* Check if this file is in same dir as objfile */
+ if ((strstr(line, dir)+strlen(dir)-1) == strrchr(line, '/')) {
+ if (!parse_file(line, md)) {
+ fprintf(stderr,
+ "Warning: could not open %s: %s\n",
+ line, strerror(errno));
+ goto out_file;
+ }
+
+ }
+
+ }
+
+ /* Everyone parsed OK */
+ ret = 1;
+out_file:
+ release_file(file, flen);
+out:
+ free(dir);
+ free(cmd);
+ return ret;
+}
+
+static int get_version(const char *modname, char sum[])
+{
+ void *file;
+ unsigned long len;
+ int ret = 0;
+ struct md4_ctx md;
+ char *sources, *end, *fname;
+ const char *basename;
+ char filelist[sizeof(".tmp_versions/%s.mod") + strlen(modname)];
+
+ /* Source files for module are in .tmp_versions/modname.mod,
+ after the first line. */
+ if (strrchr(modname, '/'))
+ basename = strrchr(modname, '/') + 1;
+ else
+ basename = modname;
+ sprintf(filelist, ".tmp_versions/%s", basename);
+ /* Truncate .o, add .mod */
+ strcpy(filelist + strlen(filelist)-2, ".mod");
+
+ file = grab_file(filelist, &len);
+ if (!file) {
+ fprintf(stderr, "Warning: could not find versions for %s\n",
+ filelist);
+ return 0;
+ }
+
+ sources = strchr(file, '\n');
+ if (!sources) {
+ fprintf(stderr, "Warning: malformed versions file for %s\n",
+ modname);
+ goto release;
+ }
+
+ sources++;
+ end = strchr(sources, '\n');
+ if (!end) {
+ fprintf(stderr, "Warning: bad ending versions file for %s\n",
+ modname);
+ goto release;
+ }
+ *end = '\0';
+
+ md4_init(&md);
+ for (fname = strtok(sources, " "); fname; fname = strtok(NULL, " ")) {
+ if (!parse_source_files(fname, &md))
+ goto release;
+ }
+
+ /* sum is of form \0<padding>. */
+ md4_final_ascii(&md, sum, 1 + strlen(sum+1));
+ ret = 1;
+release:
+ release_file(file, len);
+ return ret;
+}
+
+static void write_version(const char *filename, const char *sum,
+ unsigned long offset)
+{
+ int fd;
+
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
+ fprintf(stderr, "Warning: changing sum in %s failed: %s\n",
+ filename, strerror(errno));
+ return;
+ }
+
+ if (lseek(fd, offset, SEEK_SET) == (off_t)-1) {
+ fprintf(stderr, "Warning: changing sum in %s:%lu failed: %s\n",
+ filename, offset, strerror(errno));
+ goto out;
+ }
+
+ if (write(fd, sum, strlen(sum)+1) != strlen(sum)+1) {
+ fprintf(stderr, "Warning: writing sum in %s failed: %s\n",
+ filename, strerror(errno));
+ goto out;
+ }
+out:
+ close(fd);
+}
+
+void strip_rcs_crap(char *version)
+{
+ unsigned int len, full_len;
+
+ if (strncmp(version, "$Revision", strlen("$Revision")) != 0)
+ return;
+
+ /* Space for version string follows. */
+ full_len = strlen(version) + strlen(version + strlen(version) + 1) + 2;
+
+ /* Move string to start with version number: prefix will be
+ * $Revision$ or $Revision: */
+ len = strlen("$Revision");
+ if (version[len] == ':' || version[len] == '$')
+ len++;
+ while (isspace(version[len]))
+ len++;
+ memmove(version, version+len, full_len-len);
+ full_len -= len;
+
+ /* Preserve up to next whitespace. */
+ len = 0;
+ while (version[len] && !isspace(version[len]))
+ len++;
+ memmove(version + len, version + strlen(version),
+ full_len - strlen(version));
+}
+
+/* If the modinfo contains a "version" value, then set this. */
+void maybe_frob_version(const char *modfilename,
+ void *modinfo,
+ unsigned long modinfo_len,
+ unsigned long modinfo_offset)
+{
+ char *version, *csum;
+
+ version = get_modinfo(modinfo, modinfo_len, "version");
+ if (!version)
+ return;
+
+ /* RCS $Revision gets stripped out. */
+ strip_rcs_crap(version);
+
+ /* Check against double sumversion */
+ if (strchr(version, ' '))
+ return;
+
+ /* Version contains embedded NUL: second half has space for checksum */
+ csum = version + strlen(version);
+ *(csum++) = ' ';
+ if (get_version(modfilename, csum))
+ write_version(modfilename, version,
+ modinfo_offset + (version - (char *)modinfo));
+}
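+
+/* Worked example (hypothetical checksum): a module declaring
+ * MODULE_VERSION("1.0") ends up with a .modinfo entry of the form
+ *
+ * version=1.0 4E0CEF12A3B97E4C8A21D3F401B6C58D
+ *
+ * i.e. the original version, a space, then the ascii MD4 sum of the
+ * module's source files as computed by get_version() above.
+ */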
--- /dev/null
+# Makefile for the different targets used to generate full packages of a kernel
+# It uses the generic clean infrastructure of kbuild
+
+# Ignore the following files/directories during tar operation
+TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS
+
+
+# RPM target
+# ---------------------------------------------------------------------------
+# The rpm target generates two rpm files:
+# /usr/src/packages/SRPMS/kernel-2.6.7rc2-1.src.rpm
+# /usr/src/packages/RPMS/i386/kernel-2.6.7rc2-1.<arch>.rpm
+# The src.rpm file includes all source for the kernel being built
+# The <arch>.rpm includes the kernel configuration, modules, etc.
+#
+# Process to create the rpm files:
+# a) Clean the kernel
+# b) Generate the .spec file
+# c) Build a tarball, using a symlink to make the kernel version
+# the first entry in the path
+# d) Pack the result into a tar.gz file
+# e) Generate the rpm files, based on kernel.spec
+# - Use /. to avoid tar packing just the symlink
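+#
+# Example invocation (illustrative):
+# make rpm-pkg
+# which leaves the packages under /usr/src/packages/ as listed above.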
+
+# Use rpmbuild if it is available, otherwise fall back to the older rpm
+RPM := $(shell if [ -x "/usr/bin/rpmbuild" ]; then echo rpmbuild; \
+ else echo rpm; fi)
+
+# Remove hyphens since they have special meaning in RPM version strings
+KERNELPATH := kernel-$(subst -,,$(KERNELRELEASE))
+MKSPEC := $(srctree)/scripts/package/mkspec
+PREV := set -e; cd ..;
+
+.PHONY: rpm-pkg rpm
+
+$(objtree)/kernel.spec: $(MKSPEC)
+ $(CONFIG_SHELL) $(MKSPEC) > $@
+
+rpm-pkg rpm: $(objtree)/kernel.spec
+ $(MAKE) clean
+ $(PREV) ln -sf $(srctree) $(KERNELPATH)
+ $(PREV) tar -cz $(TAR_IGNORE) -f $(KERNELPATH).tar.gz $(KERNELPATH)/.
+ $(PREV) rm $(KERNELPATH)
+
+ set -e; \
+ $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version
+ set -e; \
+ mv -f $(objtree)/.tmp_version $(objtree)/.version
+
+ $(RPM) --target $(UTS_MACHINE) -ta ../$(KERNELPATH).tar.gz
+ rm ../$(KERNELPATH).tar.gz
+
+clean-rule += rm -f $(objtree)/kernel.spec
+
+# Deb target
+# ---------------------------------------------------------------------------
+#
+.PHONY: deb-pkg
+deb-pkg:
+ $(MAKE)
+ $(CONFIG_SHELL) $(srctree)/scripts/package/builddeb
+
+clean-rule += && rm -rf $(objtree)/debian/
+
+
+# Help text displayed when executing 'make help'
+# ---------------------------------------------------------------------------
+help:
+ @echo ' rpm-pkg - Build the kernel as an RPM package'
+ @echo ' deb-pkg - Build the kernel as a deb package'
+
--- /dev/null
+#!/bin/sh
+#
+# builddeb 1.2
+# Copyright 2003 Wichert Akkerman <wichert@wiggy.net>
+#
+# Simple script to generate a deb package for a Linux kernel. All the
+# complexity of what to do with a kernel after it is installed or removed
+# is left to other scripts and packages: they can install scripts in the
+# /etc/kernel/{pre,post}{inst,rm}.d/ directories that will be called on
+# package install and removal.
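+#
+# A hypothetical hook, e.g. /etc/kernel/postinst.d/50-note (sketch only;
+# this script does not create it), which run-parts calls with the kernel
+# version as its argument:
+#
+# #!/bin/sh
+# version="$1"
+# echo "kernel $version installed" >&2
+# exit 0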
+
+set -e
+
+# Some variables and settings used throughout the script
+version="$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+tmpdir="$objtree/debian/tmp"
+
+# Set up the directory structure
+rm -rf "$tmpdir"
+mkdir -p "$tmpdir/DEBIAN" "$tmpdir/lib" "$tmpdir/boot"
+
+# Install the kernel image, System.map and config
+cp System.map "$tmpdir/boot/System.map-$version"
+cp .config "$tmpdir/boot/config-$version"
+cp $KBUILD_IMAGE "$tmpdir/boot/vmlinuz-$version"
+
+if grep -q '^CONFIG_MODULES=y' .config ; then
+ INSTALL_MOD_PATH="$tmpdir" make modules_install
+fi
+
+# Install the maintainer scripts
+for script in postinst postrm preinst prerm ; do
+ mkdir -p "$tmpdir/etc/kernel/$script.d"
+ cat <<EOF > "$tmpdir/DEBIAN/$script"
+#!/bin/sh
+
+set -e
+
+test -d /etc/kernel/$script.d && run-parts --arg="$version" /etc/kernel/$script.d
+exit 0
+EOF
+ chmod 755 "$tmpdir/DEBIAN/$script"
+done
+
+name="Kernel Compiler <$(id -nu)@$(hostname -f)>"
+# Generate a simple changelog template
+cat <<EOF > debian/changelog
+linux ($version) unstable; urgency=low
+
+ * A standard release
+
+ -- $name  $(date -R)
+EOF
+
+# Generate a control file
+cat <<EOF > debian/control
+Source: linux
+Section: base
+Priority: optional
+Maintainer: $name
+Standards-Version: 3.6.1
+
+Package: linux-$version
+Architecture: any
+Description: Linux kernel, version $version
+ This package contains the Linux kernel, modules and corresponding other
+ files for version $version.
+EOF
+
+# Fix some ownership and permissions
+chown -R root:root "$tmpdir"
+chmod -R go-w "$tmpdir"
+
+# Perform the final magic
+dpkg-gencontrol -isp
+dpkg --build "$tmpdir" ..
+
+exit 0
+
--- /dev/null
+#!/bin/sh
+#
+# Output a simple RPM spec file that uses no fancy features requiring
+# RPM v4. This is intended to work with any RPM distro.
+#
+# The only gothic bit here is redefining install_post to avoid
+# stripping the symbols from files in the kernel which we want to keep
+#
+# Patched for non-x86 by Opencon (L) 2002 <opencon@rio.skydome.net>
+#
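+# Typical use (illustrative; kbuild's rpm target invokes it this way):
+# scripts/package/mkspec > kernel.spec
+#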
+
+# starting to output the spec
+if [ "`grep CONFIG_DRM=y .config | cut -f2 -d\=`" = "y" ]; then
+ PROVIDES=kernel-drm
+fi
+
+PROVIDES="$PROVIDES kernel-$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+
+echo "Name: kernel"
+echo "Summary: The Linux Kernel"
+echo "Version: "$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION | sed -e "s/-//g"
+# we need to determine the NEXT version number so that uname and
+# rpm -q will agree
+echo "Release: `. $srctree/scripts/mkversion`"
+echo "License: GPL"
+echo "Group: System Environment/Kernel"
+echo "Vendor: The Linux Community"
+echo "URL: http://www.kernel.org"
+echo -n "Source: kernel-$VERSION.$PATCHLEVEL.$SUBLEVEL"
+echo "$EXTRAVERSION.tar.gz" | sed -e "s/-//g"
+echo "BuildRoot: /var/tmp/%{name}-%{PACKAGE_VERSION}-root"
+echo "Provides: $PROVIDES"
+echo "%define __spec_install_post /usr/lib/rpm/brp-compress || :"
+echo "%define debug_package %{nil}"
+echo ""
+echo "%description"
+echo "The Linux Kernel, the operating system core itself"
+echo ""
+echo "%prep"
+echo "%setup -q"
+echo ""
+echo "%build"
+echo "make clean && make"
+echo ""
+echo "%install"
+echo 'mkdir -p $RPM_BUILD_ROOT/boot $RPM_BUILD_ROOT/lib $RPM_BUILD_ROOT/lib/modules'
+
+echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make modules_install'
+echo 'cp $KBUILD_IMAGE $RPM_BUILD_ROOT'"/boot/vmlinuz-$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+
+echo 'cp System.map $RPM_BUILD_ROOT'"/boot/System.map-$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+
+echo 'cp .config $RPM_BUILD_ROOT'"/boot/config-$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+echo ""
+echo "%clean"
+echo '#rm -rf $RPM_BUILD_ROOT'
+echo ""
+echo "%files"
+echo '%defattr (-, root, root)'
+echo "%dir /lib/modules"
+echo "/lib/modules/$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
+echo "/boot/*"
+echo ""
--- /dev/null
+#!/usr/bin/perl -w
+#
+# reference_init.pl (C) Keith Owens 2002 <kaos@ocs.com.au>
+#
+# List references to vmlinux init sections from non-init sections.
+
+# Unfortunately I had to exclude references from read-only data to .init
+# sections; almost all of these are false positives created by gcc. The
+# downside of excluding rodata is that there really are some genuine
+# references from rodata to init code, e.g. drivers/video/vgacon.c
+#
+# const struct consw vga_con = {
+# con_startup: vgacon_startup,
+#
+# where vgacon_startup is __init. If you want to wade through the false
+# positives, take out the check for rodata.
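+#
+# Illustrative use: run it with no arguments from the top of a built
+# kernel tree (it locates the *.o files itself):
+#
+# perl reference_init.pl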
+
+use strict;
+die($0 . " takes no arguments\n") if($#ARGV >= 0);
+
+my %object;
+my $object;
+my $line;
+my $ignore;
+
+$| = 1;
+
+printf("Finding objects, ");
+open(OBJDUMP_LIST, "find . -name '*.o' | xargs objdump -h |") || die "getting objdump list failed";
+while (defined($line = <OBJDUMP_LIST>)) {
+ chomp($line);
+ if ($line =~ /:\s+file format/) {
+ ($object = $line) =~ s/:.*//;
+ $object{$object}->{'module'} = 0;
+ $object{$object}->{'size'} = 0;
+ $object{$object}->{'off'} = 0;
+ }
+ if ($line =~ /^\s*\d+\s+\.modinfo\s+/) {
+ $object{$object}->{'module'} = 1;
+ }
+ if ($line =~ /^\s*\d+\s+\.comment\s+/) {
+ ($object{$object}->{'size'}, $object{$object}->{'off'}) = (split(' ', $line))[2,5];
+ }
+}
+close(OBJDUMP_LIST);
+printf("%d objects, ", scalar keys(%object));
+$ignore = 0;
+foreach $object (keys(%object)) {
+ if ($object{$object}->{'module'}) {
+ ++$ignore;
+ delete($object{$object});
+ }
+}
+printf("ignoring %d module(s)\n", $ignore);
+
+# Ignore conglomerate objects; they have been built from multiple objects
+# and we only care about the individual objects. If an object has more
+# than one "GCC:" string in its .comment section then it is a conglomerate.
+# This does not filter out conglomerates that consist of exactly one
+# object; that cannot be helped.
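+#
+# For example (illustrative strings), a .comment section containing
+# "GCC: (GNU) 3.3.3GCC: (GNU) 3.3.3" marks a conglomerate, while a single
+# "GCC: (GNU) 3.3.3" does not.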
+
+printf("Finding conglomerates, ");
+$ignore = 0;
+foreach $object (keys(%object)) {
+ if (exists($object{$object}->{'off'})) {
+ my ($off, $size, $comment, $l);
+ $off = hex($object{$object}->{'off'});
+ $size = hex($object{$object}->{'size'});
+ open(OBJECT, "<$object") || die "cannot read $object";
+ seek(OBJECT, $off, 0) || die "seek to $off in $object failed";
+ $l = read(OBJECT, $comment, $size);
+ die "read $size bytes from $object .comment failed" if ($l != $size);
+ close(OBJECT);
+ if ($comment =~ /GCC\:.*GCC\:/m) {
+ ++$ignore;
+ delete($object{$object});
+ }
+ }
+}
+printf("ignoring %d conglomerate(s)\n", $ignore);
+
+printf("Scanning objects\n");
+foreach $object (sort(keys(%object))) {
+ my $from;
+ open(OBJDUMP, "objdump -r $object|") || die "cannot objdump -r $object";
+ while (defined($line = <OBJDUMP>)) {
+ chomp($line);
+ if ($line =~ /RELOCATION RECORDS FOR /) {
+ ($from = $line) =~ s/.*\[([^]]*).*/$1/;
+ }
+ if (($line =~ /\.init$/ || $line =~ /\.init\./) &&
+ ($from !~ /\.init$/ &&
+ $from !~ /\.init\./ &&
+ $from !~ /\.stab$/ &&
+ $from !~ /\.rodata$/ &&
+ $from !~ /\.text\.lock$/ &&
+ $from !~ /\.debug_/)) {
+ printf("Error: %s %s refers to %s\n", $object, $from, $line);
+ }
+ }
+ close(OBJDUMP);
+}
+printf("Done\n");
--- /dev/null
+/*
+ * Netlink message type permission tables, for user generated messages.
+ *
+ * Author: James Morris <jmorris@redhat.com>
+ *
+ * Copyright (C) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2,
+ * as published by the Free Software Foundation.
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+#include <linux/if.h>
+#include <linux/netfilter_ipv4/ip_queue.h>
+#include <linux/tcp_diag.h>
+#include <linux/xfrm.h>
+#include <linux/audit.h>
+
+#include "flask.h"
+#include "av_permissions.h"
+
+struct nlmsg_perm
+{
+ u16 nlmsg_type;
+ u32 perm;
+};
+
+static struct nlmsg_perm nlmsg_route_perms[] =
+{
+ { RTM_NEWLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETLINK, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_SETLINK, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_NEWADDR, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELADDR, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETADDR, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWROUTE, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELROUTE, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETROUTE, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWNEIGH, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELNEIGH, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETNEIGH, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWRULE, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELRULE, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETRULE, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWQDISC, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELQDISC, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETQDISC, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWTCLASS, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELTCLASS, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETTCLASS, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_DELTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETTFILTER, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_NEWPREFIX, NETLINK_ROUTE_SOCKET__NLMSG_WRITE },
+ { RTM_GETPREFIX, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_GETMULTICAST, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+ { RTM_GETANYCAST, NETLINK_ROUTE_SOCKET__NLMSG_READ },
+};
+
+static struct nlmsg_perm nlmsg_firewall_perms[] =
+{
+ { IPQM_MODE, NETLINK_FIREWALL_SOCKET__NLMSG_WRITE },
+ { IPQM_VERDICT, NETLINK_FIREWALL_SOCKET__NLMSG_WRITE },
+};
+
+static struct nlmsg_perm nlmsg_tcpdiag_perms[] =
+{
+ { TCPDIAG_GETSOCK, NETLINK_TCPDIAG_SOCKET__NLMSG_READ },
+};
+
+static struct nlmsg_perm nlmsg_xfrm_perms[] =
+{
+ { XFRM_MSG_NEWSA, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_DELSA, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_GETSA, NETLINK_XFRM_SOCKET__NLMSG_READ },
+ { XFRM_MSG_NEWPOLICY, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_DELPOLICY, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_GETPOLICY, NETLINK_XFRM_SOCKET__NLMSG_READ },
+ { XFRM_MSG_ALLOCSPI, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_UPDPOLICY, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+ { XFRM_MSG_UPDSA, NETLINK_XFRM_SOCKET__NLMSG_WRITE },
+};
+
+static struct nlmsg_perm nlmsg_audit_perms[] =
+{
+ { AUDIT_GET, NETLINK_AUDIT_SOCKET__NLMSG_READ },
+ { AUDIT_SET, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
+ { AUDIT_LIST, NETLINK_AUDIT_SOCKET__NLMSG_READ },
+ { AUDIT_ADD, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
+ { AUDIT_DEL, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
+ { AUDIT_USER, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
+ { AUDIT_LOGIN, NETLINK_AUDIT_SOCKET__NLMSG_WRITE },
+};
+
+
+static int nlmsg_perm(u16 nlmsg_type, u32 *perm, struct nlmsg_perm *tab, size_t tabsize)
+{
+ int i, err = -EINVAL;
+
+ for (i = 0; i < tabsize/sizeof(struct nlmsg_perm); i++)
+ if (nlmsg_type == tab[i].nlmsg_type) {
+ *perm = tab[i].perm;
+ err = 0;
+ break;
+ }
+
+ return err;
+}
+
+int selinux_nlmsg_lookup(u16 sclass, u16 nlmsg_type, u32 *perm)
+{
+ int err = 0;
+
+ switch (sclass) {
+ case SECCLASS_NETLINK_ROUTE_SOCKET:
+ err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms,
+ sizeof(nlmsg_route_perms));
+ break;
+
+ case SECCLASS_NETLINK_FIREWALL_SOCKET:
+ case SECCLASS_NETLINK_IP6FW_SOCKET:
+ err = nlmsg_perm(nlmsg_type, perm, nlmsg_firewall_perms,
+ sizeof(nlmsg_firewall_perms));
+ break;
+
+ case SECCLASS_NETLINK_TCPDIAG_SOCKET:
+ err = nlmsg_perm(nlmsg_type, perm, nlmsg_tcpdiag_perms,
+ sizeof(nlmsg_tcpdiag_perms));
+ break;
+
+ case SECCLASS_NETLINK_XFRM_SOCKET:
+ err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms,
+ sizeof(nlmsg_xfrm_perms));
+ break;
+
+ case SECCLASS_NETLINK_AUDIT_SOCKET:
+ err = nlmsg_perm(nlmsg_type, perm, nlmsg_audit_perms,
+ sizeof(nlmsg_audit_perms));
+ break;
+
+ /* No messaging from userspace, or class unknown/unhandled */
+ default:
+ err = -ENOENT;
+ break;
+ }
+
+ return err;
+}
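+
+/*
+ * Illustrative (hypothetical) caller: look up the permission to check
+ * for an RTM_GETLINK message arriving on a netlink route socket.
+ *
+ *	u32 perm;
+ *	int err;
+ *
+ *	err = selinux_nlmsg_lookup(SECCLASS_NETLINK_ROUTE_SOCKET,
+ *	                           RTM_GETLINK, &perm);
+ *	// on success, perm == NETLINK_ROUTE_SOCKET__NLMSG_READ
+ */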